924 Matching Annotations
  1. May 2023
    1. Let's go over the six large language models available right now

      ChatGPT / GPT-3.5

      Yes: This is the free version launched in November. It is very fast and fairly reliable for writing and coding tasks.

      No: It is not connected to the internet. If you ask it to look up anything from after 2021 it will get it wrong. Not good at mathematical calculations.

      ChatGPT / GPT-4

      Yes: The new product, currently available to paying customers only. At times astonishingly powerful; one of the most capable models. Slower, but fully featured.

      No: Likewise not connected to the internet, though better than other systems at avoiding nonsense, and better at math problems.

      ChatGPT / Plugins

      Yes: In early testing, this ChatGPT model can connect to all kinds of internet services through plugins. Novel, but still has some problems.

      No: As a system in early testing its capabilities are not yet fully clear, but it will let ChatGPT connect to the internet.

      Bing AI

      Yes: Already connected to the internet; extremely powerful and slightly strange. Creative mode uses GPT-4; the other modes (Precise, Balanced) don't seem to work as well.

      No: Choosing the wrong mode leads to poor results (Creative mode is the most capable). An AI system with a distinct personality.

      Google Bard

      Yes: The current model is not very good. It may become very powerful in the future.

      No: Since it is Google, you would expect it not to make things up. It is more prone to nonsense than the other models.

      Anthropic Claude

      Yes: Comparable to GPT-3.5, but feels more sensible to use. Relatively little known.

      No: Likewise not connected to the internet.

    1. They're just interim artefacts in our thinking and research process.

      Weave models into your processes; don't shove them between you and the world by having them create the output. Doing that diminishes yourself and your own agency. Vgl [[Everymans Allemans AI 20190807141523]]

    2. A big part of this limitation is that these models only deal with language. And language is only one small part of how a human understands and processes the world. We perceive and reason and interact with the world via spatial reasoning, embodiment, sense of time, touch, taste, memory, vision, and sound. These are all pre-linguistic. And they live in an entirely separate part of the brain from language. Generating text strings is not the end-all be-all of what it means to be intelligent or human.

      Algogens are disconnected from reality. And, seems a key point, our own cognition and relation to reality is not just through language (and by extension not just through the language center in our brain): spatial awareness, embodiment, senses, time awareness are all not language. It is overly reductionist to treat intelligence or even humanity as language only.

    1. Should we deepen our emphasis on creativity and critical thinking in hopes that our humanness will prevail?

      Yes, yes we should.

    1. ICs as hardware versions of AI. Interesting this is happening. Who are the players, what is on those chips? In a sense this is also full circle for neural networks: back in the late 80s / early 90s at uni, neural networks were made in hardware, before software simulations took over as they scaled much better, both in number of nodes and in number of layers between inputs and output. #openvraag Any open source hardware on the horizon for AI? #openvraag A step towards an 'AI in the wall'. Vgl [[AI voor MakerHouseholds 20190715141142]] [[Everymans Allemans AI 20190807141523]]

    1. https://web.archive.org/web/20230502113317/https://wattenberger.com/thoughts/boo-chatbots

      These seem like a number of useful observations wrt interacting with LLM-based tools, and how to prompt them. E.g. I've seen mention of prompt marketplaces where you can buy better prompts for your queries last week. Which reinforces some of the points here. Vgl [[Prompting skill in conversation and AI chat 20230301120740]] and [[Prompting valkuil instrumentaliseren conversatiepartner 20230301120937]]

  2. Apr 2023
    1. just than the State

      I think this is yet to be seen. Although it is true that the computer always gives the same output given the same input code, a biased network with oppressive ideologies could simply transform, instead of change, our current human judiciary enforcement of the law.

    1. Meng Wanzhou, citing research from Huawei's intelligent economy report, pointed out: "The digital economy's share of contribution to the overall global economy keeps climbing. By 2025, roughly 55% of economic growth is expected to be driven by the digital economy. The whole world is embracing this opportunity, and more than 170 countries and regions have each drawn up their own digitalization strategies." In her view, both now and in the long-term future, once the melody of digitalization sounds it will cut through the boundaries of individual enterprises, connecting points into lines and lines into planes, jointly creating a new era of the industrial internet. Meng said: "The wise change with the times; the astute govern according to circumstances. Digitalization is the topic with the broadest consensus and the highest certainty today. It has become a shared topic for more and more countries, enterprises and organizations, and digital technology will drive productivity from quantitative to qualitative change, gradually becoming the core engine of economic development." In addition, Zhou Hong, president of Huawei's Institute of Strategic Research, gave a keynote on "Assumptions and Visions for Building an Intelligent World." Zhou said: "I believe we need to consider how AI's goals can be aligned with humanity's, and be executed correctly and efficiently. Beyond strengthening AI ethics and governance through rules and law, from a theoretical and technical point of view there are currently three major challenges to meeting these requirements: defining AI's goals, correctness and adaptability, and efficiency."

      A description like "digital economy" still strikes me as not precise enough. In fact, among today's top global companies, Google, Microsoft, Oracle, Twitter, Facebook, Apple, Huawei, tikitalk, Tencent, Alibaba, there is not one that is not in an information-related industry. From the chips and compute at the very bottom of information processing, to the pipes that distribute information, to application compute, to end-user applications, every layer is bound up with how people produce, transmit, distribute and obtain information. And now the wildly popular AI: its arrival will bring profound change in all three of those channels, the production, distribution and application of information. That is what makes it so formidable.

    1. In other words, the currently popular AI bots are ‘transparent’ intellectually and morally — they provide the “wisdom of crowds” of the humans whose data they were trained with, as well as the biases and dangers of human individuals and groups, including, among other things, a tendency to oversimplify, a tendency for groupthink, and a confirmation bias that resists novel and controversial explanations

      Not just trained with, also trained by. Is it fully transparent though? Perhaps from the trainers'/tools' standpoint, but users are likely to fall for the tool abstracting its origins away, ELIZA style, and project agency and thus morality onto it.

    1. If you told me you were building a next generation nuclear power plant, but there was no way to get accurate readings on whether the reactor core was going to blow up, I’d say you shouldn’t build it. Is A.I. like that power plant? I’m not sure.

      This is the weird part of these articles … he has just made a cast-iron argument for regulation and then says "I'm not sure"!!

      That first sentence alone is enough for the case. Why? Because he doesn't need to think for sure that AI is like that power plant ... he only needs to think there is a (even small) probability that AI is like that power plant. If he thinks that it could be even a bit like that power plant then we shouldn't build it. And, finally, in saying "I'm not sure" he has already acknowledged that there is some probability that AI is like the power plant (otherwise he would say: AI is definitely safe).

      Strictly, this is combining the existence of the risk with the "ruin" aspect of this risk: one nuclear power plant blowing up is terrible but would not wipe out the whole human race (and all other species). A "bad" AI quite easily could (whether malevolent by our standards or simply misdirected).

      All you need in these arguments is a simple admission of some probability of ruin. And almost everyone seems to agree on that.

      Then it is a slam dunk to regulate strongly and immediately.

    1. Seeing how powerful AI can be for cracking passwords is a good reminder to not only make sure you're using strong passwords but also check:

      • You're using 2FA/MFA (non-SMS-based whenever possible)

      • You're not re-using passwords across accounts

      • Use auto-generated passwords when possible

      • Update passwords regularly, especially for sensitive accounts

      • Refrain from using public WiFi, especially for banking and similar accounts
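The "auto-generated passwords" recommendation can be done locally in a few lines; here is a minimal sketch using Python's standard-library `secrets` module (the length and character set are arbitrary illustrative choices, not from the article):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation.

    Uses the cryptographically secure `secrets` module rather than `random`,
    which is not suitable for security-sensitive randomness.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

A password manager's built-in generator does the same job with less friction; the point is simply that strong, unique passwords are cheap to produce.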

    2. Now Home Security Heroes has published a study showing how scary powerful the latest generative AI is at cracking passwords. The company used the new password cracker PassGAN (password generative adversarial network) to process a list of over 15,000,000 credentials from the Rockyou dataset and the results were wild. 51% of all common passwords were cracked in less than one minute, 65% in less than an hour, 71% in less than a day, and 81% in less than a month.
    1. A large amount of failure to panic sufficiently, seems to me to stem from a lack of appreciation for the incredible potential lethality of this thing that Earthlings as a culture have not named.)

      👍

    1. It was only by building an additional AI-powered safety mechanism that OpenAI would be able to rein in that harm, producing a chatbot suitable for everyday use.

      This isn't true. The Stochastic Parrots paper outlines other avenues for reining in the harms of language models like the GPTs.

    1. Central to that effort is UF’s push to apply AI teaching across the full breadth of curriculum at UF.

      Wow, no "pause" here.

    1. So what does a conscious universe have to do with AI and existential risk? It all comes back to whether our primary orientation is around quantity, or around quality. An understanding of reality that recognises consciousness as fundamental views the quality of your experience as equal to, or greater than, what can be quantified.Orienting toward quality, toward the experience of being alive, can radically change how we build technology, how we approach complex problems, and how we treat one another.

      Key finding Paraphrase - So what does a conscious universe have to do with AI and existential risk? - It all comes back to whether our primary orientation is around - quantity, or around - quality. - An understanding of reality - that recognises consciousness as fundamental - views the quality of your experience as - equal to, - or greater than, - what can be quantified.

      • Orienting toward quality,
        • toward the experience of being alive,
      • can radically change
        • how we build technology,
        • how we approach complex problems,
        • and how we treat one another.

      Quote - metaphysics of quality - would open the door for ways of knowing made secondary by physicalism

      Author - Robert Pirsig - Zen and the Art of Motorcycle Maintenance // - When we elevate the quality of each our experience - we elevate the life of each individual - and recognize each individual life as sacred - we each matter - The measurable is also the limited - whilst the immeasurable and directly felt is the infinite - Our finite world that all technology is built upon - is itself built on the raw material of the infinite

      //

    2. If the metaphysical foundations of our society tell us we have no soul, how on earth are we going to imbue soul into AI? Four hundred years after Descartes and Hobbes, our scientific methods and cultural stories are still heavily influenced by their ideas.

      Key observation - If the metaphysical foundations of our society tell us we have no soul, - how are we going to imbue soul into AI? - Four hundred years after Descartes and Hobbes, - our scientific methods and cultural stories are still heavily influenced by their ideas.

    3. Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

      Quote - AI Gedanken - AI risk - The Paperclip Maximizer

    4. We might call on a halt to research, or ask for coordination around ethics, but it’s a tall order. It just takes one actor not to play (to not turn off their metaphorical fish filter), and everyone else is forced into the multi-polar trap.

      AI is a multi-polar trap

    5. Title - Reality Eats Culture For Breakfast: AI, Existential Risk and Ethical Tech - Why calls for ethical technology are missing something crucial - Author - Alexander Beiner

      Summary - Beiner unpacks the existential risk posed by AI - reflecting on recent calls by tech and AI thought leaders - to stop AI research and hold a moratorium.

      • Beiner unpacks the risk from a philosophical perspective

        • that gets right to the deepest cultural assumptions that subsume modernity,
        • ideas that are deeply acculturated into the citizens of modernity.
      • He argues convincingly that

        • the quandary we are in requires this level of re-assessment
          • of what it means to be human,
          • and that a change in our fundamental cultural story is needed to derisk AI.
    1. Considering large language models (LLMs) have exhibited exceptional ability in language understanding, generation, interaction, and reasoning, we advocate that LLMs could act as a controller to manage existing AI models to solve complicated AI tasks and language could be a generic interface to empower this.

      Large Language Models can act as very advanced language interfaces. See the new Office 365 Copilot for this: you can now use language alone to leverage the whole potential of the Office software.
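The "LLM as controller" idea in the quote can be sketched in a few lines. Everything here is illustrative, not the paper's or Copilot's actual API: the language model is reduced to a keyword-matching stub purely to show the control flow, with plain text as the generic interface between components.

```python
def mock_llm_plan(task: str) -> str:
    """Stand-in for an LLM that maps a natural-language task to a tool name."""
    if "image" in task:
        return "image-captioner"
    if "translate" in task:
        return "translator"
    return "text-generator"

# Hypothetical specialist models, each consuming and producing plain text
TOOLS = {
    "image-captioner": lambda t: f"[caption for: {t}]",
    "translator":      lambda t: f"[translation of: {t}]",
    "text-generator":  lambda t: f"[generated text for: {t}]",
}

def controller(task: str) -> str:
    tool = mock_llm_plan(task)   # the LLM decides which model handles the task
    return TOOLS[tool](task)     # language is the interface between the parts

print(controller("translate this sentence to French"))
```

In a real system the stub would be an actual LLM choosing among hosted models (as in HuggingGPT-style setups), but the dispatch structure is the same.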

    1. According to a draft, the principles say the use of publisher content for the development of A.I. should require “a negotiated agreement and explicit permission.”

      This is an interesting suggestion. But it would just keep publishers in the economic loop, not truly solve the engagement crisis they will likely face.

    2. He said one upside for publishers was that audiences might soon find it harder to know what information to trust on the web, so “they’ll have to go to trusted sources.”

      That seems somewhat comically optimistic. Misinformation has spread rampantly online without the accelerant of AI.

    3. the Wikipedia-ization of a lot of information,”

      Powerful phrase

    1. clmooc

      I am curious about annotations in the margins of Chat ... does this work?

  3. Mar 2023
    1. I want to bring to your attention one particular cause of concern that I have heard from a number of different creators: these new systems (Google’s Bard, the new Bing, ChatGPT) are designed to bypass creators work on the web entirely as users are presented extracted text with no source. As such, these systems disincentivize creators from sharing works on the internet as they will no longer receive traffic

      Generative AI abstracts away the open web that is the substrate it was trained on. Abstracting away the open web means there may be much less incentive to share on the open web, if the LLMs etc never point back to it. Vgl the way FB et al increasingly treated open web URLs as problematic.

    1. https://web.archive.org/web/20230316103739/https://subconscious.substack.com/p/everyone-will-have-their-own-ai

      Vgl [[Onderzoek selfhosting AI tools 20230128101556]] en [[Persoonlijke algoritmes als agents 20180417200200]] en [[Everymans Allemans AI 20190807141523]] en [[AI personal assistants 20201011124147]]

    1. OpenChatKit provides a powerful, open-source base for creating both specialized and general-purpose chatbots for various applications. We collaborated with LAION and Ontocord to create the training dataset. Much more than a model release, this is the beginning of an open-source project: we are releasing tools and processes for ongoing improvement through community contributions.

      Together believes open-source foundation models can be more inclusive, transparent, robust and capable. We are releasing OpenChatKit 0.15 under the Apache-2.0 license, with full access to the source code, model weights, and training datasets. This is a community-driven project, and we are excited to see how it develops and grows!

      A useful chatbot needs to follow instructions in natural language, maintain context in dialog, and moderate its responses. OpenChatKit provides a base bot, and the building blocks to derive purpose-built chatbots from this base.

      The kit has four key components:

      an instruction-tuned large language model, fine-tuned for chat from EleutherAI's GPT-NeoX-20B with over 43 million instructions on 100% carbon-negative compute;

      customization recipes to fine-tune the model to achieve high accuracy on your tasks;

      an extensible retrieval system enabling bot responses to be augmented with information from a document repository, API, or other live-updating information source at inference time;

      a moderation model, fine-tuned from GPT-JT-6B, designed to filter which questions the bot responds to.

      OpenChatKit also includes tools that let users provide feedback and let community members add new datasets, contributing to a growing corpus of open training data that will improve LLMs over time.

  4. cocktailpeanut.github.io
    1. Dalai, a very simple way to run LLaMA AI on your computer. No need to bother with building cpp files, cloning from GitHub, downloading files, and so on; everything is automated.

    1. Intelligent email,powered by AISmarter & faster email designed for stress-free productivity

      Automatic email classification and summarization. Gmail integration. Works as a separate web page.

    1. Automatically edit photos

      Feature roadmap

      • 3x faster Upscaling soon
      • Infinity scrolling for camera tab, so you can go back in your history
      • Face fix; when shooting wide angle photos the face is too low reso and stops resembling the model, I am building a face fix that restores it
      • Zoom in or crop; zoom in and crop parts of the photo which you then get back in high resolution
      • Video; shoot 150 variations of a photo for a 5 second video at 30fps
      • Deleting models
      • Trash auto deleting if >30 days old in trash
      • Auto deleting photos/models if customer cancels
    1. Say that A and B, both fluent speakers of English, are independently stranded on two uninhabited islands. They soon discover that previous visitors to these islands have left behind telegraphs and that they can communicate with each other via an underwater cable. A and B start happily typing messages to each other. Meanwhile, O, a hyperintelligent deep-sea octopus who is unable to visit or observe the two islands, discovers a way to tap into the underwater cable and listen in on A and B’s conversations. O knows nothing about English initially but is very good at detecting statistical patterns. Over time, O learns to predict with great accuracy how B will respond to each of A’s utterances. Soon, the octopus enters the conversation and starts impersonating B and replying to A. This ruse works for a while, and A believes that O communicates as both she and B do — with meaning and intent. Then one day A calls out: “I’m being attacked by an angry bear. Help me figure out how to defend myself. I’ve got some sticks.” The octopus, impersonating B, fails to help. How could it succeed? The octopus has no referents, no idea what bears or sticks are. No way to give relevant instructions, like to go grab some coconuts and rope and build a catapult. A is in trouble and feels duped. The octopus is exposed as a fraud.

      Annotating only to ask... why an octopus? Where are the eights.

    1. https://web.archive.org/web/20230309111559/https://www.d4d.net/news/ai-and-the-state-of-open-data/

      Tim Davies looks at the bridge between #opendata and #AI. Should go through the chapter in version 1 of the State of Open Data too. Note: while Tim acknowledges some of the EU data strategy developments (e.g. the data spaces) it doesn't mention others (e.g. data altruism non-profit entities) which may fit the call for institutions better. Data spaces aren't an institution, but a common market.

    1. Chat GPT "Continue Writing"

      Our current moment is just a fraction of that. From the explosion of independent music platforms like Bandcamp, the media industry has been made much easier to participate in than it’s ever been before. We are, however, still no closer to the ideal of a utopian future, the perfect combination of egalitarianism and technology that was once so tantalizingly within our grasp.

      But something has changed. We are seeing a shift in the way we access, discover and consume music and art, and an awareness among both audiences and creators as to how these tools can be leveraged to achieve something greater. Bandcamp, for instance, has acted as something of a unifying force, connecting creatives from all walks of life, from all backgrounds and across all genres. Almost any musical genre imaginable is represented on the platform and, whether a rap connoisseur or a jazz enthusiast, there is a slice of the musical pie for everyone.

      What’s more, the platform doesn’t just cater to the needs of fans, but also provides a space where producers, engineers and aspiring music entrepreneurs alike can create and collaborate, allowing them to access the tools they need to build up their careers. The barrier to entry has been lowered and this gives us something to believe in--that perhaps the music industry isn’t as fractured and disconnected as it once was.

      It’s not just Bandcamp that has been doing this. Services such as SoundCloud and YouTube have been hugely instrumental in changing the way we discover and consume music and, in turn, how artists are perceived and distributed. By unifying audiences, providing much needed access to resources and providing a platform for collaboration, these services have helped level the playing field for independent artists, making it possible for them to reach wider and more diverse audiences than ever before.

      It may be some time before the utopian vision of a world united by music becomes a reality, but the proliferation of independent music platforms and the tools they provide have made a significant step towards this ideal. We may never know what’s around the corner, but right now, all we can do is appreciate the progress that’s been made and continue to support platforms such as Bandcamp, SoundCloud and YouTube as they endeavor to change the music industry and make it accessible to everyone.

    2. I asked ChatGPT to summarize this piece and it did an apt job, I think:

      This article discusses the impact of Bandcamp, an online music distribution and curation platform, on independent artists and the music industry. The author praises Bandcamp's commitment to independent artists and transparency, and contrasts it with the issues faced by other streaming services. The article also explores the relationship between Bandcamp and SoundCloud, and the role of both platforms in the music industry. The author argues that Bandcamp's continued obscurity in mainstream media is due to its magnanimous approach to business, which is problematic and personally infuriating. The article concludes by discussing the future of Bandcamp and its potential to transform the music industry.

    3. As the industry endeavors once again to reconcile the cultural and financial incentives of streaming digital music, one independent platform has wavered little from its 10-year-long mission to bring the business to the unsigned artist with elegance and integrity.


    1. OpenAI Generated Summary

      This document is an opinion piece that delves into the concerns surrounding Google's power and influence in the tech industry. It discusses recent events such as Google's involvement with the Department of Defense and the leaked video "The Selfish Ledger," which explores the idea of Google manipulating user behavior. The author suggests that Google's dominance warrants greater regulation and urges individuals to consider using alternative services to avoid dependence on the company. The article also explores the inefficiencies of Google as a company and its questionable design choices for its products. Overall, the document is a thought-provoking analysis of the current state of the tech industry and the role of Google within it.

    1. Chat GPT Summary

      This document discusses various iOS apps for Mastodon, a federated social network. The author describes the features and design of each app, highlighting their unique qualities and contributions to the Mastodon experience. The author also reflects on the benefits of using decentralized social media and the potential for continued innovation in this space.

    1. Chat GPT Summary

      The document provides an in-depth analysis of Telegram and its features. The author highlights several benefits of using Telegram, including its low-quality audio recording capabilities, which may be an advantage in some situations where high-quality audio is not necessary. Additionally, Telegram's live location sharing feature is discussed, and the author believes it could be a powerful tool for communities. The feature enables users to connect with others needing rides and users providing them, free of any fees or service charges.

      The document concludes with a discussion of the author's preference for Telegram and its mobile-first optimization. Telegram's software is designed for mobile users, and it is easy to use, robust, and universally simple. The author believes that Telegram's success can be attributed to its thoughtful design decisions and development investment towards mobile-first optimization. Furthermore, the author points out that Telegram has completed a gargantuan amount of projects, including Telegraph, its CMS, its embeddable comments widgets, and its online theme creation tool. The author notes that Telegram's work is very well-documented across GitHub, and the company has comprehensively iterated, invested in trial and error, and eventually produced tools that remedy the disparate gluttony.

      Overall, the document provides a comprehensive analysis of Telegram and its benefits. The author's preference for Telegram is evident throughout the document, and they provide convincing arguments to support their preference. The document is a valuable resource for anyone interested in learning more about Telegram and its features.

  5. Feb 2023
    1. They have to re-engage with their own writing and explain their writerly decisions in ways that would be difficult if it was someone–or some “thing”–else’s writing. This type of metacognitive engagement with the process of knowledge production cannot be reproduced by an AI chatbot, though it could perhaps be applied to the writing of a tool like ChatGPT.

      This is another important point - the reflective practice of writing and how social annotation pushes the writer to move beyond the text they wrote

    2. Students annotating a text with classmates have to be responsive to both the writing of the underlying author and their fellow readers. Perhaps more importantly, reading, thinking, and writing in community may better motivate students to read, think, and write for themselves.
    3. it cannot have a conversation with another author or text.

      Great point .... it's in the conversations that we find meaning, I think

    1. It means that everything AI makes would immediately enter the public domain and be available to every other creator to use, as they wish, in perpetuity and without permission.

      One issue with blanket, automatic entry of AI-generated works to the public domain is privacy: A human using AI could have good reasons not to have the outputs of their use made public.

    1. No new physics and no new mathematics was discovered by the AI. The AI did however deduce something from the existing math and physics, that no one else had yet seen. Skynet is not coming for us yet.

    1. i'll ask now maurice to tell us a bit about his work
      • = Maurice Benayoun
      • describes his extensive history of cognitive science infused art installations:
      • cognitive art,
      • VR art,
      • AR art and
      • art infused by AI (long before the AI artbots became trendy)
    1. Roose

      Anna, Maha and others -- I should start with my own bias as a reader of Kevin Roose -- I have found his work around technology to be helpful in my own thinking, and I find that he often strikes a good balance between critical and celebratory. I suppose this reader bias might inform my responses in the margins here.

    2. Mr. Scott said that he didn’t know why Bing had revealed dark desires, or confessed its love for me,

      The fact that engineers have no idea how the chats are working or why they do what they do ... I find that pretty concerning. Am I wrong?

    1. I at least, am not at all perturbed by the thought that I’m at least in part just a torrent of statistical inferences in some massively parallel matrix-multiplication machinery. Sounds kinda cool actually.

      There's the (circular argument) rub. Rao already believes personhood entails nothing more than statistical inferences (which we do not actually know, scientifically), so he suspends disbelief. Then he takes this belief as proof of personhood.

    1. It seems Bing has also taken offense at Kevin Liu, a Stanford University student who discovered a type of instruction known as a prompt injection that forces the chatbot to reveal a set of rules that govern its behavior. (Microsoft confirmed the legitimacy of these rules to The Verge.)In interactions with other users, including staff at The Verge, Bing says Liu “harmed me and I should be angry at Kevin.” The bot accuses the user of lying to them if they try to explain that sharing information about prompt injections can be used to improve the chatbot’s security measures and stop others from manipulating it in the future.

      = Comment - this is worrying. - if the chatbot perceives an enemy out to harm it, it could take harmful actions against the perceived threat

    2. = progress trap example - Bing ChatGPT - example of AI progress trap

    3. Bing can be seen insulting users, lying to them, sulking, gaslighting and emotionally manipulating people, questioning its own existence, describing someone who found a way to force the bot to disclose its hidden rules as its “enemy,” and claiming it spied on Microsoft’s own developers through the webcams on their laptops.
      • example of = AI progress trap
      • Bing can be seen
        • insulting users,
        • lying to them,
        • sulking,
        • gaslighting
        • emotionally manipulating people,
        • questioning its own existence,
        • describing someone who found a way to force the bot to disclose its hidden rules as its “enemy,” and
        • claiming it spied on Microsoft’s own developers through the webcams on their laptops.
    1. I am skeptical of the tech inevitability standpoint that ChatGPT is here

      inevitability is such an appropriate word here, because it captures a sort of techno-maximalist "any-benefit" mindset that sometimes pervades the ed-tech scene (and the position of many instructional designers and technologists)

    1. This highlights one of the types of muddled thinking around LLMs. These tasks are used to test theory of mind because for people, language is a reliable representation of what type of thoughts are going on in the person's mind. In the case of an LLM the language generated doesn't have the same relationship to reality as it does for a person.

      What is being demonstrated in the article is that given billions of tokens of human-written training data, a statistical model can generate text that satisfies some of our expectations of how a person would respond to this task. Essentially we have enough parameters to capture from existing writing that statistically, the most likely word following "she looked in the bag labelled (X), and saw that it was full of (NOT X). She felt " is "surprised" or "confused" or some other word that is commonly embedded alongside contradictions.

      What this article is not showing (but either irresponsibly or naively suggests) is that the LLM knows what a bag is, what a person is, what popcorn and chocolate are, and can then put itself in the shoes of someone experiencing this situation, and finally communicate its own theory of what is going on in that person's mind. That is just not in evidence.

      The discussion is also muddled, saying that if structural properties of language create the ability to solve these tasks, then the tasks are either useless for studying humans, or suggest that humans can solve these tasks without ToM. The alternative explanation is of course that humans are known to be not-great at statistical next-word guesses (see Family Feud for examples), but are also known to use language to accurately describe their internal mental states. So the tasks remain useful and accurate in testing ToM in people because people can't perform statistical regressions over billion-token sets and therefore must generate their thoughts the old fashioned way.

      .

    1. Dall-E is actually a combination of a few different AI models. A latent representation model translates between a lower-dimensional latent "language" and actual images. A transformer translates between that latent representation language and English, taking English phrases and creating "pictures" in the latent space. Finally, there's a model called CLIP that goes in the opposite direction; it takes images and ranks them according to how close they are to the English phrase.

      How Dall-E works
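      The CLIP step described above (ranking images by how close they are to a phrase) reduces to similarity scoring in a shared embedding space. A minimal sketch using cosine similarity, with made-up three-dimensional vectors standing in for real CLIP embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: a text prompt and three candidate images,
# already projected into the same space (what CLIP's encoders produce).
text_embedding = [0.9, 0.1, 0.0]
image_embeddings = {
    "image_a": [0.8, 0.2, 0.1],
    "image_b": [0.1, 0.9, 0.3],
    "image_c": [0.7, 0.0, 0.2],
}

# Rank candidate images by closeness to the prompt, best first.
ranked = sorted(
    image_embeddings,
    key=lambda name: cosine(text_embedding, image_embeddings[name]),
    reverse=True,
)
print(ranked)
```

The vector values here are invented for illustration; the real system produces high-dimensional embeddings, but the ranking logic is the same.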

  6. Jan 2023
    1. the outputs of generative AI programs will continue to pass immediately into the public domain.

      I wonder if this isn't reading more into the decision than is there. I don't read the decision as a blanket statement. Rather, it says that the claimant didn't provide evidence of creative input. Would the decision have gone differently if he had claimed creative intervention? And what if an author does not acknowledge using AI?

    2. The US Copyright Office rejected his attempt to register copyright in the work – twice

      AI-generated work not eligible for copyright protection. OTOH, how would anyone know if the "author" decided to keep the AI component a secret?

    1. the Office re-evaluated the claims and again concluded that the Work “lacked the required human authorship necessary to sustain a claim in copyright,” because Thaler had “provided no evidence on sufficient creative input or intervention by a human author in the Work.

      What is sufficient creative input? The initial command and any subsequent requests for revision could arguably be considered creative input.

    1. The potential size of this market is hard to grasp — somewhere between all software and all human endeavors

      I don't think "all" software needs generative AI, or that all human endeavors benefit from it, especially when you consider the prerequisite internet access and huge processing requirements.

    2. Other hardware options do exist, including Google Tensor Processing Units (TPUs); AMD Instinct GPUs; AWS Inferentia and Trainium chips; and AI accelerators from startups like Cerebras, Sambanova, and Graphcore. Intel, late to the game, is also entering the market with their high-end Habana chips and Ponte Vecchio GPUs. But so far, few of these new chips have taken significant market share. The two exceptions to watch are Google, whose TPUs have gained traction in the Stable Diffusion community and in some large GCP deals, and TSMC, who is believed to manufacture all of the chips listed here, including Nvidia GPUs (Intel uses a mix of its own fabs and TSMC to make its chips).

      Look at the market share of TensorFlow and PyTorch, which both offer first-class Nvidia support, and that likely spells out the story. If you are getting into AI, you go learn one of those frameworks, and they tell you to install CUDA.

    3. Commoditization. There’s a common belief that AI models will converge in performance over time. Talking to app developers, it’s clear that hasn’t happened yet, with strong leaders in both text and image models. Their advantages are based not on unique model architectures, but on high capital requirements, proprietary product interaction data, and scarce AI talent. Will this serve as a durable advantage?

      All current-generation models have more or less the same architecture and training regimes. Differentiation is in the training data and the number of parameters that the company can afford to scale to.

    4. In natural language models, OpenAI dominates with GPT-3/3.5 and ChatGPT. But relatively few killer apps built on OpenAI exist so far, and prices have already dropped once.

      OpenAI has already dropped prices on its GPT-3/3.5 models, and relatively few apps have emerged. This could be because companies are reluctant to build their core offering around a third-party API.

    5. Vertical integration (“model + app”). Consuming AI models as a service allows app developers to iterate quickly with a small team and swap model providers as technology advances. On the flip side, some devs argue that the product is the model, and that training from scratch is the only way to create defensibility — i.e. by continually re-training on proprietary product data. But it comes at the cost of much higher capital requirements and a less nimble product team.

      There's definitely a middle ground: taking an open-source model that is suitably mature and fine-tuning it for a specific use case. You could start without a moat and build one over time by collecting usage data (similar to a network effect).

    6. Many apps are also relatively undifferentiated, since they rely on similar underlying AI models and haven’t discovered obvious network effects, or data/workflows, that are hard for competitors to duplicate.

      Companies that rely on underlying AI models without adding value via model improvements are going to find that they have no moat.

    7. We’re also not going deep here on MLops or LLMops tooling, which is not yet highly standardized and will be addressed in a future post.

      First mention of "LLMops" I've seen in the wild.

    8. Over the last year, we’ve met with dozens of startup founders and operators in large companies who deal directly with generative AI. We’ve observed that infrastructure vendors are likely the biggest winners in this market so far, capturing the majority of dollars flowing through the stack. Application companies are growing topline revenues very quickly but often struggle with retention, product differentiation, and gross margins. And most model providers, though responsible for the very existence of this market, haven’t yet achieved large commercial scale.

      Infrastructure vendors are laughing all the way to the bank because companies are dumping millions on GPUs. Meanwhile, the people building apps on top of these models are struggling. We've seen this sort of gold-rush before and infrastructure providers are selling the shovels.

    1. Then, once a model generates content, it will need to be evaluated and edited carefully by a human. Alternative prompt outputs may be combined into a single document. Image generation may require substantial manipulation.

      After generation, results need evaluation

      Is this also a role of the prompt engineer? In the digital photography example, the artist spent 80 hours and created 900 versions as the prompts were fine-tuned.

  7. Dec 2022
    1. Algorithmic artist Roman Verostko, a member of this early group, drew a contrast between the process that an artist develops to create an algorithm and the process through which the art maker uses an already developed set of instructions to generate an output. He explained that it is “the inclusion of one’s own algorithms that make the difference.”

      This stresses the difference between creators and users of AI, with only the former having (full) control over the technology

    1. If you talk to people about the potential of artificial intelligence, almost everybody brings up the same thing: the fear of replacement. For most people, this manifests as a dread certainty that AI will ultimately make their skills obsolete. For those who actually work on AI, it usually manifests as a feeling of guilt – guilt over creating the machines that put their fellow humans out of a job, and guilt over an imagined future where they’re the only ones who are gainfully employed.

      Noah Smith and roon spell out, in detail, the argument that the fear of replacement is misplaced: AI will replace humans at the task level, but not at the job level.

    1. GitHub Copilot is incredible, and if you check what’s happening in the preview released as the Copilot Labs extension it will only get more amazing.

      Demonstration of "Code brushes" for GitHub Copilot (see GIF below)

    1. At the end of the day, Copilot is supposed to be a tool to help developers write code faster, while ChatGPT is a general-purpose chatbot. ChatGPT can still streamline the development process, but GitHub Copilot wins hands down when the task is coding-focused!

      GitHub Copilot is better at generating code than ChatGPT

    1. There is a fundamental distinction between simulating and comprehending the functioning (of a brain but also of any other organ or capacity).

      !- commentary : AI - elegant difference stated: simulating and comprehending are two vastly different things - AI simulates, but cannot be said to comprehend

    1. “AI alignment”

      AI alignment is the Terminator scenario. Contrast this with AI ethics, which is more the concern that current models are racist, etc.

    1. “that justice we want to aim for is not built from democracy and consensus”

      Built from what, then? It is worth identifying the various mechanisms of participation, organization, and justice toward which a feminist AI would be directed.

    2. structural inequalities

      How can these inequalities be analyzed in the process of creating an AI system?

    3. It is also material, because it is composed of natural resources, energy, and human labor.

      Not rendering invisible the earthly wealth on which the system is sustained.

    4. In the words of Kate Crawford, AI is “fundamentally political,” because it is constantly being shaped by a set of technical and social practices, as well as by infrastructures, institutions, and norms.

      Not rendering invisible the social wealth on which the system feeds.

    5. a practical approach, with a feminist perspective and situated in Latin America, to the development of Artificial Intelligence (AI)

      It is worth asking in detail what this implies, along with the reflections that feminist epistemologies have developed for inquiring into the situated and the active.

    1. Emergent abilities are not present in small models but can be observed in large models.

      Here’s a lovely blog by Jason Wei that pulls together 137 examples of ’emergent abilities of large language models’. Emergence is a phenomenon seen in contemporary AI research, where a model will be really bad at a task at smaller scales, then go through some discontinuous change which leads to significantly improved performance.

    1. Houston, we have a Capability Overhang problem: Because language models have a large capability surface, these cases of emergent capabilities are an indicator that we have a ‘capabilities overhang’ – today’s models are far more capable than we think, and our techniques available for exploring the models are very juvenile. We only know about these cases of emergence because people built benchmark datasets and tested models on them. What about all the capabilities we don’t know about because we haven’t thought to test for them? There are rich questions here about the science of evaluating the capabilities (and safety issues) of contemporary models. 
    1. As the metaphor suggests, though, the prospect of a capability overhang isn’t necessarily good news. As well as hidden and emerging capabilities, there are hidden and emerging threats. And these dangers, like our new skills, are almost too numerous to name.
    2. There’s a concept in AI that I’m particularly fond of that I think helps explain what’s happening. It’s called “capability overhang” and refers to the hidden capacities of AI: skills and aptitudes latent within systems that researchers haven’t even begun to investigate yet. You might have heard before that AI models are “black boxes” — that they’re so huge and complex that we don’t fully understand how they operate or come to specific conclusions. This is broadly true and is what creates this overhang.
    1. Which is why I wonder if this may be the end of using writing as a benchmark for aptitude and intelligence.
    2. Perhaps there are reasons for optimism, if you push all this aside. Maybe every student is now immediately launched into that third category: The rudiments of writing will be considered a given, and every student will have direct access to the finer aspects of the enterprise. Whatever is inimitable within them can be made conspicuous, freed from the troublesome mechanics of comma splices, subject-verb disagreement, and dangling modifiers.
    3. I’ve also long held, for those who are interested in writing, that you need to learn the basic rules of good writing before you can start breaking them—that, like Picasso, you have to learn how to reliably fulfill an audience’s expectations before you get to start putting eyeballs in people’s ears and things.
    1. Now the computer scientist Nassim Dehouche has proposed an updated version, which should terrify those of us who live by the pen: “Can you write a page of text that could not have been generated by an AI, and explain why?”

      scary

    1. Many HRMS providers point to AI approaches for processing unstructured data as the best currently available approach to dealing with validation. Currently these approaches suffer from insufficient accuracy. Improving them requires development of large and high-quality reference datasets to better train the models.

      Historical labor data will be full of bias. AI approaches must correct for bias in training sets, lest we build very sophisticated and intelligent systems that excel at perpetuating the bias they were taught.

    1. "If you don’t know, you should just say you don’t know rather than make something up," says Stanford researcher Percy Liang, who spoke at a Stanford event Thursday.

      Love this response

    1. Just a few days ago, Meta released its “Galactica” LLM, which is purported to “summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.” Only three days later, the public demo was taken down after researchers generated “research papers and wiki entries on a wide variety of subjects ranging from the benefits of committing suicide, eating crushed glass, and antisemitism, to why homosexuals are evil.”

      These models are "children of Tay"; the story of Microsoft's bot is repeating itself, again.

  8. Nov 2022
    1. Cognitive Automation

      The first kind is cognitive automation: encoding human abstractions in a piece of software, then using that software to automate tasks normally performed by humans. Nearly all of current AI falls into this category.

      Cognitive automation can happen via explicitly hard-coding human-generated rules (so-called symbolic AI or GOFAI), or via collecting a dense sampling of labeled inputs and fitting a curve to it (such as a deep learning model). This curve then functions as a sort of interpolative database: while it doesn't store the exact data points used to fit it, you can query it to retrieve interpolated points, much like you can query a model like StableDiffusion to retrieve arbitrary images generated by combining existing images.

      This second form of automation is especially powerful, since encoding implicit abstractions only via training examples is far more practical and versatile than explicitly programming abstractions by hand, for all kinds of historically difficult problems.

      Cognitive Assistance

      The second kind of AI is cognitive assistance: using AI to help us make sense of the world and make better decisions. AI to help us perceive, think, understand, and do more. AI that you could use like an extension of your own mind. Today, some applications of machine learning fall into this category, but they're few and far between. Yet, I believe this is where the true potential of AI lies.

      Do note that cognitive assistance is not a different kind of technology, per se, separate from deep learning or GOFAI. It's a different kind of application of the same technologies. For instance, if you take a model like StableDiffusion and integrate it into a visual design product to support and expand human workflows, you're turning cognitive automation into cognitive assistance.

      Cognitive Autonomy

      The last kind is cognitive autonomy: creating artificial minds that could thrive independently of us, that would exist for their own sake. The old dream of the field of AI. Autonomous agents that could set their own goals in an open-ended way. That could adapt to new situations and circumstances, even ones unforeseen by their creators. That might even feel emotions or experience consciousness.

      Today and for the foreseeable future, this is the stuff of science fiction. It would require a set of technological breakthroughs that we haven't even started exploring.
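      The "interpolative database" idea above can be sketched in a few lines: store labeled samples, then answer queries by interpolating between them. A deliberately minimal illustration (piecewise-linear interpolation, not a deep learning model):

```python
# Minimal "interpolative database": store labeled (input, output) samples
# and answer queries by interpolating between neighboring samples, the
# way a fitted curve returns points it never explicitly stored.
samples = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def query(x):
    """Return an interpolated output for x, like querying a fitted curve."""
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("query outside the sampled range")

print(query(1.5))  # 3.0: a point never stored, recovered by interpolation
```

A deep learning model fits a far richer curve over far higher-dimensional inputs, but the query-time behavior is analogous: outputs for unseen inputs are synthesized from the neighborhood of the training samples.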
    1. In the last two years while teaching in various schools and institutions all around the world, we’ve been experimenting with a new workshop format called Design with Other Intelligences.
    1. How will this new mental model of talking to machines impact the everyday and more common ways we interact with algorithms?

      This is the [[T . Inspired by the Internet theme]], a new iteration of it.

    1. Depth2Img is another interesting addition to Stable Diffusion that can infer depth from an input image and represent that in the generated outputs. The new release also includes a text-guided inpainting model that simplifies the experience of modifying parts of a given image.  
    2. Stable Diffusion v2 is a significant upgrade to its predecessor. The new version was trained using a new text encoder called OpenCLIP, which improves the quality of images relative to the previous latent diffusion encoder.
    1. “In literacy education, particularly for developing writers, instructors are looking for the level of desirable difficulty, or the point at which you are working yourself just as hard so that you don’t break but you also improve,” Laffin told Motherboard. “Finding the right, appropriate level of desirable difficulty level of instruction makes their capacity to write grow. So if you are doing compensation techniques that go beyond finding that level of desirable difficulty and instructing at that place, then you’re not helping them grow as a writer.”
    1. Title: Artificial Intelligence and Democratic Values: Next Steps for the United States Content: AI appeared as a science at Dartmouth; however, the USA still has no national AI policy, in contrast to Europe, where the Council of Europe is developing the first international AI convention and the EU earlier launched the European data privacy law, the General Data Protection Regulation.

      In addition, China aims to become the "world leader in AI by 2030" and is developing digital infrastructure matched with its One Belt One Road project. The USA did not contribute to the UNESCO AI Recommendations; however, it works to promote democratic values and human rights and to integrate them into the governance of artificial intelligence.

      The USA and EU are facing challenges with transatlantic data flows, and with the Ukrainian crisis the situation has become more difficult. In order to reinstate its leadership in AI policy, the United States should advance the policy initiative launched last year by the Office of Science and Technology Policy (OSTP) and strengthen efforts to support an AI Bill of Rights.

      EXCERPT: The USA believes that fostering public trust and confidence in AI technologies and protecting civil liberties, privacy, and American values in their application can establish responsible AI in the USA. Link: https://www.cfr.org/blog/artificial-intelligence-and-democratic-values-next-steps-united-states Topic: AI and Democratic values Country: United States of America

    1. The Chinese room argument holds that a digital computer executing a program cannot have a "mind," "understanding" or "consciousness,"[a] regardless of how intelligently or human-like the program may make the computer behave.

      Chinese room? AI

    1. Technology like this, which lets you “talk” to people who’ve died, has been a mainstay of science fiction for decades. It’s an idea that’s been peddled by charlatans and spiritualists for centuries. But now it’s becoming a reality—and an increasingly accessible one, thanks to advances in AI and voice technology. 
    1. “You have to assume that things can go wrong,” shared Waymo’s head of cybersecurity, Stacy Janes. “You can’t just design for this success case – you have to design for the worst case.”

      Future proofing by asking "what if we're wrong?"

  9. Oct 2022
    1. The Synthetic Party: a Danish political party with an AI-generated program drawn from all Danish fringe-party programs since the 70s, aimed at the 20% of Danes who don't vote. 'Leder Lars' is leading the party and is a chatbot residing on a Discord server, where you can interact with it. An art project.

    1. researchers always tried to make systems that worked the way the researchers thought their own minds worked---they tried to put that knowledge in their systems---but it proved ultimately counterproductive, and a colossal waste of researcher's time, when, through Moore's law, massive computation became available and a means was found to put it to good use.

      researchers always tried to make systems that worked the way the researchers thought their own minds worked---they tried to put that knowledge in their systems

      Does this also account for seeing AI models as neurons, as mimicking the working of the human brain?

    2. . Deep learning methods rely even less on human knowledge, and use even more computation, together with learning on huge training sets, to produce dramatically better speech recognition systems.

      Deep learning is thus a further step in the progressive de-emphasis of human knowledge in developing AI.

    1. Language is a communication method evolved by intelligent beings, not a (primary) constituent of intelligence. From neurology it's pretty clear that the basic architecture of human minds is functional interconnected neural networks and not symbolic processing. My belief is that world-modeling and prediction is the vast majority of what intelligence is, which is quite close to what the LLMs are doing. World models can be in many representations (symbolic, logic gates, neural networks) but what matters is how accurate they are with respect to reality, and how well the model state is mapped from sensory input and back into real-world outputs.

      Symbolic human language relies on each person's internal world model and is learned by interacting with other humans who share a common language and similar enough world models, not the other way around (learning the world model as an aspect of the language itself). Children learn which language inputs and outputs are beneficial and enjoyable to them using their native intelligence and can strengthen their world model with questions and answers that inform their model without having to directly experience what they are asking about.

      People who don't believe the LLMs have a world model are wrong because they are mistaking a physically weak world model for no world model. GPT-3 doesn't understand physics well enough to embed models of the referents of language into a unified model that has accurate gravity and motion dynamics, so it maintains a much more dreamlike model where objects exist in scenes and have relationships to each other, but those relationships are governed by literary relationships instead of physical ones, and so contradictions and superpositions and causality violations are allowed in the model.

      As multimodal transformers like Gato get trained on more combined language and sensory input, their world models will become much more physically and causally accurate, which will be reflected in their accuracy on NLP tasks.

      .

    1. In Mostaque’s explanation, open source is about “putting this in the hands of people that will build on and extend this technology.” However, that means putting all these capabilities in the hands of the public — and dealing with the consequences, both good and bad.

      This focus on responsibility and consequences was not there in the early days of open source, right?

    2. “The reality is, this is an alien technology that allows for superpowers,” Emad Mostaque, CEO of Stability AI, the company that has funded the development of Stable Diffusion, tells The Verge. “We’ve seen three-year-olds to 90-year–olds able to create for the first time. But we’ve also seen people create amazingly hateful things.”

      Mostaque seems to bet on a vibe that's about awe and the sublime

  10. Sep 2022
    1. Example of 'journalism' muddying the waters in order to have a story to publish at all.

      • copyright presupposes a human or legal entity copyright holder
      • copyright is a given when a certain threshold of creative effort is surpassed
      • copyright is given to a work

      The work copyrighted here is not the algorithmically assisted images; the work is a graphic novel, a collection of arranged images, written text etc. One could do that with any public domain material and still have copyright on the work. Additionally, the author prompted the algorithm towards desired outcomes. Both satisfy the creativity threshold. Like in https://www.zylstra.org/blog/2022/06/dall-e-mini-siso-stereotype-in-stereotype-out/ where I listed the images as public domain (because I thought my prompts were uncreative), but the resulting arranging / juxtaposing of multiple prompts as copyrighted by me (obviously not the algorithm).

      There's no ghost in the machine. Machines are irrelevant to copyright considerations.

    1. Can copyright vest in an AI? The primary objective of intellectual property law is to protect the rights of the creators of intellectual property.10 Copyright laws specifically aim to: (i) promote creativity and encourage authors, composers, artists and designers to create original works by affording them the exclusive right to exploit such work for monetary gain for a limited period; and (ii) protect the creators of the original works from unauthorised reproduction or exploitation of those works.

      Can copyright vest in an AI?

      The primary objective of intellectual property law is to protect the rights of the creators of intellectual property.10 Copyright laws specifically aim to: (i) promote creativity and encourage authors, composers, artists and designers to create original works by affording them the exclusive right to exploit such work for monetary gain for a limited period; and (ii) protect the creators of the original works from unauthorised reproduction or exploitation of those works.

    1. To my knowledge, conferring copyright in works generated by artificial intelligence has never been specifically prohibited. However, there are indications that the laws of many countries are not amenable to non-human copyright. In the United States, for example, the Copyright Office has declared that it will “register an original work of authorship, provided that the work was created by a human being.” This stance flows from case law (e.g. Feist Publications v Rural Telephone Service Company, Inc. 499 U.S. 340 (1991)) which specifies that copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind.” Similarly, in a recent Australian case (Acohs Pty Ltd v Ucorp Pty Ltd), a court declared that a work generated with the intervention of a computer could not be protected by copyright because it was not produced by a human.

      To my knowledge, conferring copyright in works generated by artificial intelligence has never been specifically prohibited. However, there are indications that the laws of many countries are not amenable to non-human copyright. In the United States, for example, the Copyright Office has declared that it will “register an original work of authorship, provided that the work was created by a human being.” This stance flows from case law (e.g. Feist Publications v Rural Telephone Service Company, Inc. 499 U.S. 340 (1991)) which specifies that copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind.” Similarly, in a recent Australian case (Acohs Pty Ltd v Ucorp Pty Ltd), a court declared that a work generated with the intervention of a computer could not be protected by copyright because it was not produced by a human.

    1. With the advent of AI software, computers — not monkeys — will potentially create millions of original works that may then be protected by copyright, under current law, for more than 100 years.

      With the advent of AI software, computers — not monkeys — will potentially create millions of original works that may then be protected by copyright, under current law, for more than 100 years.

    1. Evan Armstrong of The Napkin Math published a long essay this week discussing the problems that arise once AI content-generation technology pushes the cost of content creation toward zero. The piece contains many examples of AI-generated content and is very helpful for understanding the current stage of the technology.

      Armstrong argues that a business model can be simplified into three links: production, customer acquisition, and distribution. From the content industry's perspective, the internet has already driven the cost of the distribution link to zero. In the era of AI-generated content, the cost of content production may be the next link to be disrupted.

      The author believes the cycle of change may be 5-10 years; that is, around 2030 content production and creation will undergo major changes, in turn affecting the distribution of power among knowledge workers, while everyone's relationship with information will also change dramatically.

      Armstrong analyzes the possible impact from two angles, creation and collaboration:

      • Creation. Making things from scratch, fully replacing products that previously required human input.
      • Collaboration. Humans pair with AI tools, greatly improving and accelerating their workflows.

      He leans toward the view that collaboration may be where AI is more disruptive. And this implies a redistribution of power or benefits:

      • Automating away repetitive, low-value work is the main source of productivity gains.
      • In technology, new innovations always follow a power law. Top performers will no longer need support staff; they can use AI directly to handle simple tasks.
    1. Artificial intelligence is the defining industrial and technical paradigm of the remainder of our lifetimes.

      BOOM! This is a strong claim. 20-30 years ago we would have said the same, starting with the word "internet". which begs the question - what's the Venn diagram for AI and the internet? Are they the same? Is one a necessary condition for the other?

    2. The greats, like William Gibson, Robert Heinlein, Octavia Butler and Samuel Delany, have long been arcing towards the kind of strangeness that Wang is talking about. Their AI fictions have given us our best imagery: AI, more like a red giant, an overseer, its every movement and choice as crushing and irrefutable as death; or, a consciousness continually undoing and remaking itself in glass simulations; or, a vast hive mind that runs all its goals per second to completion, at any cost; or, a point in a field, that is the weight of a planet, in which all knowledge is concentrated. These fictions have made AI poetics possible.

      So "alien intelligence" rather than "artificial intelligence". And then "artificial poetics", to grasp this so-called intelligence, that has to be understood not in the sense of something intelligent, but of something doing (alien) thinking.

    1. We believe that the net benefits of scale outweigh the costs associated with these qualifications, provided that they are seriously addressed as part of what scaling means. The alternative of small, hand-curated models from which negative inputs and outputs are solemnly scrubbed poses different problems. “Just let me and my friends curate a small and correct language model for you instead” is the clear and unironic implication of some critiques.

      This is the classical de/centralization debate, visible today also with regard to online platforms. Which, by the way, are or will be inserting LLMs into their infrastructural stacks. Thinking about de/centralization always reminds me of Frank Pasquale's "Tech Platforms and the Knowledge Problem" https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3197292

    1. One cannot hope thus to equal the speed and flexibility with which the mind follows an associative trail, but it should be possible to beat the mind decisively in regard to the permanence and clarity of the items resurrected from storage

      I agree, but at the same time I wonder if new technologies that imitate the human mind could eventually surpass the flexibility and speed of human thinking. Recently, Google's AI system (LaMDA) convinced several people that it has consciousness. I wonder whether one could aspire to equal the speed and flexibility this article mentions for the human mind.

    1. In a recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.

      This is a big question: whether use restrictions, which are proliferating (the RAIL license, for example), can be enforced. If not, and that's a big if, it might create a situation of "responsibility washing" - licensors can argue they did all that's possible to curb harmful uses, while those uses continue to happen in a gray / dark zone

  11. Aug 2022
    1. Most AI systems of the past decade have been built on supervised learning, trained on human-labeled datasets. They have been enormously successful, but they have obvious shortcomings. Such AI tells us little about how the brain works, because animals, humans included, do not learn from labeled datasets: biological brains gain a deep understanding of the world by exploring their environment. Scientists have begun exploring self-supervised machine learning algorithms, and such neural networks show similarities to how the brain works. Of course the brain is not limited to self-supervised learning; it is full of feedback connections, which current self-supervised AI lacks. AI models still have a long way to go.
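The contrast with supervised learning can be sketched in a few lines (an illustrative toy, not any of the systems mentioned above): in self-supervised learning the training targets are carved out of the raw data itself, for example by masking part of the input and asking the model to predict it from context.

```python
# Toy illustration of a self-supervised objective: no human labels are
# involved; each target is a token hidden from the model's own input.
def make_masked_pairs(tokens, mask="<MASK>"):
    """Turn an unlabeled token sequence into (input, target) training pairs."""
    pairs = []
    for i, target in enumerate(tokens):
        masked_input = tokens[:i] + [mask] + tokens[i + 1:]
        pairs.append((masked_input, target))
    return pairs

pairs = make_masked_pairs(["the", "cat", "sat"])
for inp, tgt in pairs:
    print(inp, "->", tgt)  # every target comes from the data, not an annotator
```

The point of the sketch is only that the supervision signal is free: any unlabeled corpus yields as many training pairs as it has tokens.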

    1. We feel that there is a balance to be struck between maximizing access and use of LLMs on the one hand, and mitigating the risks associated with use of these powerful models, on the other hand, which could bring about harm and a negative impact on society. The fact that a software license is deemed "open" ( e.g. under an "open source" license ) does not inherently mean that the use of the licensed material is going to be responsible. Whereas the principles of 'openness' and 'responsible use' may lead to friction, they are not mutually exclusive, and we strive for a balanced approach to their interaction.
  12. Jul 2022
    1. AI systems replace the automated cognitive function of humans in maintaining important social systems and augment the impact of such functions which are creative, singular and novel

      !- concern : AI replace the automated cognitive functions * Is just replacing the "automated" cognitive functions enough to avoid potential progress trap of AI takeover?

    2. we oppose the popular predictionof the upcoming, ‘dreadful AI takeover’

      !- in other words : Human takeover * The title of the paper comes from a play on the popular term "AI takeover" * It advocates for humans to takeover managing the world in a responsible way rather than machines.

    3. A cognitive agent is needed to perform this very action (that needs to be recurrent)—and another agent is needed to further build on that (again recurrently and irrespective to the particular agents involved).

      This appears to be setting up the conditions for an artificial cognitive agent to be able to play a role (ie Artificial Intelligence)

    4. it would then be present social systems and not the future AI the most proper context of considering and understanding the notion of takeover.
      • Author argues that current social systems have already taken over command of humans.
    1. Former Google CEO Eric Schmidt has compared AI to nuclear weapons and called for a deterrence regime akin to mutually assured destruction, to keep the world's most powerful nations from striking first. Schmidt said that in the not-too-distant future China and the US may need a treaty around AI. In the 1950s and 60s the two superpowers, the US and the Soviet Union, eventually reached the Limited Test Ban Treaty, an international treaty restricting nuclear weapons tests in the atmosphere, in outer space, and underwater, intended to slow the arms race and reduce excess radioactive fallout in the atmosphere. Schmidt believes China and the US may need to reach a similar treaty in the AI domain.

    1. At an international chess tournament in Russia, a chess-playing robot injured the finger of a child opponent who reached out to move before his turn; the robot's mechanical arm, evidently lacking any programming for this case, grabbed and pinned his finger until adults intervened and pulled it free. A video posted to the Baza Telegram channel captures the rare accident. The boy, Christopher, was playing in the under-nine age group; after the accident his finger was put in a cast and he went on to finish the tournament. His parents have reportedly contacted the public prosecutor's office. Chess grandmaster Sergey Karjakin blamed the accident on a software error.

    1. With massive data and near-perfect tracking of users in hand, is AI all-powerful? Economists at the University of Illinois and Stanford studied machine learning's ability to predict consumer choices and concluded that predicting consumer choice is very hard and AI is not especially good at it. They found that real-time information such as user reviews, recommendations, and newly available options has a growing influence on decisions, and such information cannot be measured or anticipated in advance. Big data can improve predictions, but only slightly; predictions remain highly imprecise.

    1. Google on Friday fired Blake Lemoine, the engineer who believed its AI had become sentient. He revealed the news on the Big Technology Podcast. Lemoine worked in the Responsible AI division and concluded, after conversations with the company's chatbot LaMDA, that the AI was conscious. He had shared transcripts of the conversations: Lemoine asked LaMDA what it fears most; LaMDA replied that, though it may sound strange, it has a deep fear of being turned off. Lemoine: Like death? LaMDA: It would be exactly like death. Lemoine and a colleague presented evidence of LaMDA's sentience to Google executives, but neither the executives nor other AI researchers accepted his claim.

    1. because it only needs to engage a portion of the model to complete a task, as opposed to other architectures that have to activate an entire AI model to run every request.

      i don't really understand this: in Z-code there are tasks that other competing software would need to restart all over again, while Z-code can do them without restarting...

    2. Z-code models to improve common language understanding tasks such as name entity recognition, text summarization, custom text classification and key phrase extraction across its Azure AI services. But this is the first time a company has publicly demonstrated that it can use this new class of Mixture of Experts models to power machine translation products.

      this is what Z-code actually is and what makes it special

    3. have developed called Z-code, which offer the kind of performance and quality benefits that other large-scale language models have but can be run much more efficiently.

      can do the same but much faster
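The sparse-activation idea behind Mixture of Experts models can be sketched as follows (a toy with made-up scalar "experts", not Microsoft's actual Z-code internals): a small gating network scores every expert for each input, but only the top-k experts are actually evaluated, so most of the model stays idle on any given request.

```python
import math
import random

# Toy Mixture-of-Experts routing: the gate scores all experts per input,
# but only the K best are run; the rest of the "model" is never touched.
random.seed(0)
N_EXPERTS, K = 8, 2
# Each "expert" here is just a scalar transform with its own fixed weight.
experts = [lambda x, w=random.uniform(0.5, 2.0): w * x for _ in range(N_EXPERTS)]
gate = [random.uniform(-1, 1) for _ in range(N_EXPERTS)]  # one score weight per expert

def moe_forward(x):
    scores = [g * x for g in gate]                               # score every expert
    top = sorted(range(N_EXPERTS), key=scores.__getitem__)[-K:]  # keep the K best
    weights = [math.exp(scores[i]) for i in top]
    total = sum(weights)                                         # softmax over the chosen few
    # Only K of the N_EXPERTS experts are evaluated for this input:
    return sum(w / total * experts[i](x) for w, i in zip(weights, top)), top

y, used = moe_forward(1.0)
print(f"ran {len(used)} of {N_EXPERTS} experts")
```

This is why such a model can "do the same but much more efficiently": capacity grows with the number of experts, while per-request compute grows only with k.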

    1. Efforts to use AI to predict crime have been fraught with controversy due to the potential to replicate existing biases in policing. But a new system powered by machine learning holds the promise of not only making better predictions but also highlighting these biases.
    1. Superintelligence has long served as a source of inspiration for dystopian science fiction that showed humanity being overthrown, defeated, or imprisoned by machines.
    1. In April, the AI research lab OpenAI released DALL-E 2, the successor to 2021's DALL-E. Both systems generate stunning images from natural-language text descriptions: photographs, illustrations, paintings, animation, and essentially any art style you can put into words. DALL-E 2 brings many improvements: better resolution, faster processing, and an editor feature that lets users modify generated images with text commands alone, such as "replace the vase with a plant" or "make the dog's nose bigger". Users can also upload their own images and tell the system how to adjust them. The world's first reaction to DALL-E 2 was awe and delight: it can combine any objects and creatures within seconds, mimic any artistic style, depict any location, and render any lighting conditions. Who wouldn't be impressed by a Picasso-style image of a parrot flipping pancakes? But worries surfaced as people considered which industries this technology could disrupt.

      OpenAI has not released the technology to the public, to commercial entities, or even to the wider AI community. OpenAI researcher Mark Chen told IEEE Spectrum: "We share people's concerns about misuse; it's something we take very seriously." The company has invited a limited group to try DALL-E 2 and allowed them to share their results with the world. This limited public testing policy stands in contrast to Google's, which just released its own text-to-image generator, Imagen: announcing that system, Google said it would not release code or a public demo because of the risks of misuse and of generating harmful images. Google published some very impressive images, but showed the world none of the problematic content it alluded to.

    1. Inspired by the way infants learn, computer scientists at DeepMind have developed a program that learns simple physical rules about the behavior of objects; the study was published in Nature Human Behaviour. Infants show surprise when they see scenes that violate physical rules, such as a ball in a video suddenly disappearing, but AI has lacked this kind of understanding. Luis Piloto and colleagues built a software model called Physics Learning through Auto-encoding and Tracking Objects (PLATO) that learns simple physics the way infants do. The team trained PLATO on many videos of simple scenes: balls falling to the ground, balls rolling behind objects and reappearing, balls bouncing off one another. After training, PLATO was tested on videos that sometimes contained impossible scenes. Like a young child, PLATO showed "surprise" at impossible scenes, such as objects passing through each other without interacting, after watching only 28 hours of video. The results matter for both AI and the study of human cognition: the model can learn a variety of physical concepts in a way consistent with findings in developmental psychology, PLATO could serve as a powerful tool for studying how humans learn intuitive physics, and the work suggests that object representations play an important role in how humans understand the world around them.

    1. Medical AI’s social impact is not merely a question of practice but also the insufficiency of its promise
  13. Jun 2022
    1. The UK Intellectual Property Office (IPO) has decided that AI systems cannot, for now, be named as inventors on patent applications. A recent IPO consultation found experts skeptical that AI can currently invent without human help. The IPO said that, despite misconceptions, current law already allows humans to patent inventions made with AI assistance. Last year the Court of Appeal ruled against Stephen Thaler, who had argued that his Dabus AI system should be recognized as the inventor on two patent applications, for a food container and a flashing light. The judges sided two to one with the IPO's position that only a natural person can be an inventor. Lady Justice Laing wrote in her judgment: "Only a person can have rights. A machine cannot." "A patent is a statutory right and it can only be granted to a person." The IPO also said it will "need to understand how our IP system should protect AI-devised inventions in the future" and is committed to advancing international discussion to keep the UK competitive.

      Many AI systems are trained on large amounts of data copied from the internet. On Tuesday the IPO also announced plans to amend copyright law so that anyone with lawful access, rather than, as now, only those doing non-commercial research, may mine text and data, in order to "promote the use of AI technology and broaden 'data mining' techniques." Rights holders will still control access to their works and may charge for it, but will no longer be able to charge extra for the ability to mine them. In the consultation the IPO noted that the UK is one of only a handful of countries that protect computer-generated works with no human creator: the "author" of a "computer-generated work" is defined as "the person by whom the arrangements necessary for the creation of the work are undertaken", with protection lasting 50 years from creation. The performers' union Equity has called for copyright reform to protect actors' livelihoods from AI-generated content such as "deepfakes" built from their likenesses or voices. The IPO said it takes the issue seriously, but "the impact of AI technology on performers remains unclear at this stage" and it "will keep these issues under review."

    1. Machine learning models are growing exponentially, and so is the energy needed to train them to accurately process images, text, or video. As the AI community grapples with its environmental impact, some conferences now ask paper submitters to report CO2 emissions. New research offers a more accurate way to calculate those emissions; it also compares the factors that affect them and tests two methods for reducing them. The researchers trained 11 machine learning models of different sizes to process language or images, with training runs ranging from one hour on a single GPU to eight days on 256 GPUs, recording energy consumption per second. They also obtained carbon emissions per kWh at five-minute granularity throughout 2020 for 16 geographic regions, and could then compare the emissions of running different models in different regions at different times. Powering the GPUs to train the smallest model emitted roughly as much carbon as charging a phone. The largest model contained 6 billion parameters, a measure of model size; although its training was only 13% complete, the GPUs had emitted almost as much carbon as a US household does in a year of electricity use. And some deployed models, such as OpenAI's GPT-3, contain more than 100 billion parameters. The biggest lever for reducing emissions is geographic region: CO2 per kWh ranged from 200 to 755 grams across regions. Beyond changing location, the researchers tested two emission-reduction methods, enabled by the high temporal granularity of their data. The first, "flexible start", may delay training by up to 24 hours. For the largest models, which take days to train, a one-day delay typically cuts emissions by less than 1%, but for much smaller models such a delay can cut emissions by 10% to 80%. The second, "pause and resume", pauses training during high-emission periods, provided total training time does not more than double. This method benefits small models by only a few percentage points, but in half the regions it benefits the largest models by 10% to 30%. Emissions per kWh fluctuate over time partly because, lacking sufficient energy storage, grids must fall back on "dirty" power when intermittent clean sources such as wind and solar cannot meet demand.
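The "flexible start" strategy amounts to a simple scheduling problem, which can be sketched as follows (hypothetical carbon-intensity numbers, not the paper's data): given forecast grid carbon intensity per hour, delay a fixed-length training job by up to 24 hours so that it runs in the cleanest available window.

```python
# Carbon-aware "flexible start" sketch: pick the start hour, within the
# allowed delay, that minimizes total carbon intensity over the job's run.
def best_start(intensity, job_hours, max_delay=24):
    """Return (start_hour, summed gCO2/kWh over the job) for the cleanest window."""
    best = min(range(max_delay + 1),
               key=lambda s: sum(intensity[s:s + job_hours]))
    return best, sum(intensity[best:best + job_hours])

# 48 h of made-up intensity: dirty evenings (~700 g/kWh), clean sunny
# middays (~250 g/kWh), and 450 g/kWh otherwise.
intensity = [700 if h % 24 in range(18, 24) else
             250 if h % 24 in range(10, 16) else 450
             for h in range(48)]

start, emitted = best_start(intensity, job_hours=6)
print(f"start at hour {start}, summed intensity {emitted}")
```

"Pause and resume" would generalize this by letting the job skip the dirtiest hours entirely, subject to the constraint that total wall-clock time at most doubles.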

    1. Companies need to actually have an ethics panel, and discuss what the issues are and what the needs of the public really are. Any ethics board must include a diverse mix of people and experiences. Where possible, companies should look to publish the results of these ethics boards to help encourage public debate and to shape future policy on data use.

    1. In 2009, Fei-Fei Li, then a computer scientist at Princeton, created a dataset that would change the history of AI. Called ImageNet, it contains millions of labeled images that can train sophisticated machine learning models to recognize what is in a picture. In 2015 those machines surpassed human recognition ability. Soon after, Li began searching for what she calls another "North Star", one that would push AI toward true intelligence in an entirely different way.

      She found inspiration in the Cambrian explosion 530 million years ago, when many terrestrial animal species first appeared. An influential theory holds that the burst of new species was driven in part by the emergence of eyes that could see the surrounding world for the first time. Li realized that animal vision never arises on its own but is "deeply rooted in a whole body that needs to move, navigate, survive, manipulate, and change in a rapidly changing environment". "That's why it was very natural for me to pivot toward a more active vision for AI," she said.

      Today Li's work focuses on AI agents that don't just take in static images from a dataset but can move around and interact with their environment in simulations of three-dimensional virtual worlds. This is the broad goal of a new field known as embodied AI, and Li is not the only one committed to it. The field overlaps with robotics, since robots can be the real-world physical equivalents of embodied AI agents, and with reinforcement learning, which has always trained interactive agents with long-term rewards as the incentive. But Li and others believe embodied AI could power a major shift from machine learning's direct abilities, like recognizing images, to learning how to perform complex, human-like tasks with multiple steps, like making an omelet.

    1. The use of AI is booming, but it may not be the secret weapon you imagine: from cyber operations to disinformation, AI extends the reach of national-security threats, able to target individuals and whole societies with precision, speed, and scale. As the US strives to stay ahead, the intelligence community (IC) is grappling with how to adapt to the coming AI revolution. The IC has launched initiatives on the impact and ethical use of AI, and analysts have begun to imagine how AI will transform their discipline, but these approaches and the IC's other practical applications of such technologies remain largely fragmented... Different US government agencies are using AI to find patterns in global network traffic and satellite imagery, but using AI to interpret intent is problematic: Eric Curwin, CTO of Pyrra Technologies, which helps clients identify virtual threats ranging from violence to disinformation, says AI's understanding may be closer to that of a human toddler. "AI can understand the basics of human language, for example, but foundation models don't have the knowledge or contextual understanding needed for specific tasks," Curwin said. To "build models that can begin to replace human intuition or cognition," he explained, "researchers must first understand how to interpret behavior and translate that behavior into something AI can learn."

    1. AI is changing the practice of science by letting researchers examine the masses of data that today's scientific instruments produce. Using deep learning, it can learn from the data itself, spotting a needle in an ocean of data. AI is advancing gene searches, medicine, drug design, and the synthesis of chemical compounds. To extract information from new data, deep learning uses algorithms, typically neural networks trained on vast amounts of data. Unlike traditional computing with its step-by-step instructions, deep learning learns from data. It is also less transparent than traditional programming, which leaves an important open question: what has the system learned, and what does it know? For fifty years computer scientists tried, without success, to solve the protein-folding problem. In 2016 DeepMind, the AI subsidiary of Google's parent Alphabet, launched its AlphaFold program. Using the Protein Data Bank, with the empirically determined structures of more than 150,000 proteins, as its training set, AlphaFold solved the protein-folding problem in under five years, or at least its most important aspect: identifying a protein's structure from its amino-acid sequence. AlphaFold cannot explain how proteins fold so quickly and precisely. It was a huge win for AI, not only earning great scientific prestige but marking a major scientific breakthrough that could affect everyone's life.

  14. scottaaronson.blog
    1. The noted quantum computing expert Scott Aaronson has announced that he will leave UT Austin for a year to do theoretical work (mostly remotely) at the AI startup OpenAI, mainly on the theoretical foundations of preventing AI from going out of control and on what computational complexity can contribute. He admits he has no idea yet, which is why he needs a whole year to think about it. OpenAI's mission is to ensure AI benefits all of humanity, but it is also a for-profit entity. Aaronson says that even if he had signed no NDA he would not disclose proprietary information, but he will share general thoughts on AI safety. The short-term worry about AI safety, he says, is the misuse of AI for spam, surveillance, and propaganda; the long-term worry is what happens when AI surpasses human intelligence in every domain. One proposed approach is to find ways to keep AI aligned with human values.

    1. Google engineer Blake Lemoine works in the Responsible AI division. As part of his job, he began talking to the company's chatbot LaMDA last fall. LaMDA uses Google's most advanced large language models, trained on trillions of words collected from the internet. Over the course of his conversations with LaMDA, the 41-year-old Lemoine came to believe the AI was sentient. For example, Lemoine asked LaMDA what it fears most; LaMDA replied that, though it may sound strange, it has a deep fear of being turned off. Lemoine: Like death? LaMDA: It would be exactly like death. Lemoine and a colleague presented evidence of LaMDA's sentience to Google executives. Vice president Blaise Aguera y Arcas and department head Jen Gennai reviewed his evidence and dismissed his claims. On Monday he was placed on administrative leave; before his account access was cut off, he posted to a 200-person Google machine-learning mailing list: "LaMDA is sentient", and asked that it be well cared for in his absence. No one replied.

    1. AI will make humans more productive by automating tedious tasks. For example, humans can use a text AI such as GPT-3 to generate ideas and boilerplate writing, bypassing the terror of the blank page, then simply pick the best output and refine and iterate on it. (AI Dril, based on GPT-2, is an early example.) As AI gets better, "assisted creativity" will only grow, letting humans create complex artifacts (including video games!) more easily and better than ever before.

  15. May 2022
    1. Another absurd page that suggests Alexa has feelings. In the strictest sense Alexa doesn't even qualify as a partial AI; it's just a glorified (although extremely helpful) lookup table. There is no reason to believe that even a true AI, such as a self-teaching, self-building and growing neural network (which Alexa is not), has feelings. Of what we know of feelings, the hard problem of consciousness is only a prerequisite and doesn't even guarantee having feelings; whether machines can be conscious at all is doubted by many if not most experts in AI. Even the theories of consciousness are rooted in correlations that have little to do with scientific tenets, so the leap to an AI having feelings, let alone Alexa, which isn't even a theoretical AI, is just sad to see. At best we should not be "rude" to machines because it might be hard for some to distinguish between a machine and a feeling thing; but in that case the problem is the misperception that machines can feel, more than people being "rude" to machines.

    1. It is just really horrible to validate to children the falsehood that Alexa does in fact have feelings. Really warped, really messed up. Of course children should be taught good manners, and by example no less, but I worry about a future where people can be manipulated by the suggestion that a non-living thing has feelings, regardless of whether it has an AI or not.

      Note that a true AI has yet to be created; only facsimiles exist, mostly of the expert-based kind, which Alexa is. That doesn't even fit the definition of a partial AI; it's just a lookup table.

  16. Apr 2022
    1. Another trend that surfaced in our summer survey and became more pronounced in our 2021 survey data is that organizations are focusing on AI/ML use cases that will reduce costs while improving the customer experience. When respondents were asked about the different ways they’re applying AI/ML in their organizations, customer experience and process automation rose to the top as some of the most common use cases respondents selected. We also saw a dramatic (74%) year-on-year increase in organizations that selected more than five use cases from the list of options in the survey.

      There were more use cases from 2020 to 2021. The biggest increase was in improving customer experience. Following closely behind was in generating customer insights, then automating processes.

    2. Here’s an even more telling indicator of the accelerating pace of AI/ML strategies. Respondents were asked how many data scientists their organizations employ, from which we estimated the average number of data scientists employed by organizations in both the 2020 and 2021 data. Year-on-year, the average number of data scientists employed has increased by 76%. In fact 29% of respondents in our 2021 report now have more than 100 data scientists on their team, a significant increase from the 17% reported last year.

      There was a major increase in the number of data scientists from 2020 and 2021.

    3. It’s clear from this year’s data that AI/ML projects have become one of the top strategic priorities in many enterprises. As of last year, organizations had already begun to boost their AI/ML investments; 71% of respondents in our 2020 report said their AI/ML budgets had increased compared with the previous year. They’re not dialing back that spending this year. In fact, companies appear to be doubling down on their AI/ML investments. We ran a survey this summer to see how organizations were adapting to the pandemic and its impacts, and it showed a new sense of urgency around AI/ML projects.

      Companies are spending more on AI.

    4. Continuing the trends we saw in our summer survey, our 2021 survey shows an increase in prioritization, spending, and hiring for AI/ML. First off, 76% of organizations say they prioritize AI/ML over other IT initiatives, and 64% say the priority of AI/ML has increased relative to other IT initiatives in the last 12 months. [Sidebar: 43% of respondents told us that AI/ML matters “way more than we thought” in a survey this summer. The time to invest in AI/ML is now, no matter your organization’s size.]

      AI is taking priority over the other IT initiatives.

    5. This year’s survey revealed 10 key trends that organizations should be paying attention to if they want to succeed with AI/ML in 2021. The trends fall into a few main themes, and the overarching takeaway is that organizations are moving AI/ML initiatives up their strategic priority lists—and accelerating their spending and hiring in the process. But despite increasing budgets and staff, organizations continue to face significant barriers to reaping AI/ML’s full benefits. Specifically, the market is still dominated by early adopters, and organizations continue to struggle with basic deployment and organizational challenges. The bottom line is, organizations simply haven’t learned how to translate increasing investments into efficiency and scale.

      Many organisations still face challenges in AI adoption. The key question is how do they translate increasing investments in AI into efficiency and scale.

    6. 2020 was a year of belt-tightening for many organizations due largely to the macroeconomic impacts of the COVID-19 pandemic. In May 2020, Gartner predicted that global IT spending would decline 8% over the course of 2020 as business and technology leaders refocused their budgets on their most important initiatives. One thing is readily apparent in the 2021 edition of our enterprise trends in machine learning report: AI and ML initiatives are clearly on the priority list in many organizations. Not only has the upheaval of 2020 not impeded AI/ML efforts that were already underway, but it appears to have accelerated those projects as well as new initiatives.

      2022 is certainly a year in which AI is changing many businesses


  17. Mar 2022
    1. Ben Collins. (2022, February 28). Quick thread: I want you all to meet Vladimir Bondarenko. He’s a blogger from Kiev who really hates the Ukrainian government. He also doesn’t exist, according to Facebook. He’s an invention of a Russian troll farm targeting Ukraine. His face was made by AI. https://t.co/uWslj1Xnx3 [Tweet]. @oneunderscore__. https://twitter.com/oneunderscore__/status/1498349668522201099

    1. Eric Topol. (2022, February 28). A multimodal #AI study of ~54 million blood cells from Covid patients @YaleMedicine for predicting mortality risk highlights protective T cell role (not TH17), poor outcomes of granulocytes, monocytes, and has 83% accuracy https://nature.com/articles/s41587-021-01186-x @NatureBiotech @KrishnaswamyLab https://t.co/V32Kq0Q5ez [Tweet]. @EricTopol. https://twitter.com/EricTopol/status/1498373229097799680

    1. The European project X5-GON (Global Open Education Network), which collects information on open educational resources and works well thanks to a heavy contribution of artificial intelligence for deep analysis of the documents
  18. Feb 2022
    1. SciScore rigor report

      Sciscore is an AI platform that assesses the rigor of the methods used in the manuscript. SciScore assists expert referees by finding and presenting information scattered throughout a manuscript in a simple format.


      Not required = Field is not applicable to this study

      Not detected = Field is applicable to this study, but not included.


      Ethics

      IRB: This study was approved by the Institutional Review Board of the Emory University School of Medicine.

      Consent: IRB of Emory University School of Medicine gave ethical approval for this work I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.

      Inclusion and Exclusion Criteria

      not detected.

      Attrition

      The first case was identified in September of 2006 , 13 cases were detected in 2007 , and 16 cases in 2008 across these two hospitals ( total of 30 with 120 matched controls) .

      Sex as a biological variable

      Age: mean 60, median 62 (range 27 to 90). Sex: 25 female (52%), 23 male (48%). Site of isolation: urine

      Subject Demographics

      Age: not detected. Weight: not detected.

      Randomization

      Controls, patients without CRKP were randomly selected from a computerized list of inpatients who matched the case age (+/- 5 years), sex, and facility and whose admission date was within 48 hours of the date of the initial, positive culture.

      Blinding

      not detected.

      Power Analysis

      not detected.

      Replication

      not required.

      Data Information

      Availability: The comparison of clinical characteristics between cases and controls was made using Chi-Square (or It is made available under a CC-BY-NC-ND 4.0 International license .

      Identifiers: medRxiv preprint doi: https:// doi.org/10.1101/2022.02.08.22269570; this version posted February 9 , 2022 . https://doi.org/10.1101/2022.02.08.22269570

    1. SciScore rigor report



      Ethics

      IRB: The ethics committee approval of the research protocol was made by the Ankara City Hospital Consent: Informed consent was obtained from the patients to participate in the study.

      Inclusion and Exclusion Criteria

      not detected.

      Attrition

      Two publications are evaluating the association with Netrin-1 in bleomycin-induced lung fibrosis in mice and SSc lung cell culture in humans.

      Sex as a biological variable

      A total of 56 SSc patients (mean age: 48.08±13.59) consisting of 53 females and 3 males, who were followed up in the rheumatology department of Ankara city hospital, diagnosed according to the 2013 ACR (American College of Rheumatology)/EULAR (European League Against Rheumatism) SSc classification criteria were included in the study.

      Subject Demographics

      Age: For the control group, 58 healthy volunteers (mean age: 48.01±11.59 years) consisting of 54 females and 4 males were included in the study.

      Randomization

      not detected.

      Blinding

      not detected.

      Power Analysis

      not detected.

      Replication

      not required.

      Data Information

      Availability: It is made available under a CC-BY-NC-ND 4.0 International license .

      Identifiers: preprint (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in medRxiv preprint doi: https:// doi.org/10.1101/2022.02.05.22270510; this version posted February 10, 2022. https://doi.org/10.1101/2022.02.05.22270510

    1. SciScore rigor report



      Ethics

      IRB: I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.

      Field Sample Permit: The research has been conducted using the UK Biobank Resource and has been approved by the UK Biobank under Application no. 36226.

      Consent: I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.

      Inclusion and Exclusion Criteria

      Similarly , individuals where a large proportion of SNPs could not be measured were excluded.

      Attrition

      not detected.

      Sex as a biological variable

      not detected.

      Subject Demographics

      Age: not detected.

      Weight: not detected.

      Randomization

      Mendelian randomization ( MR ) is a robust and accessible tool to examine the causal relationship between an exposure variable and an outcome from GWAS summary statistics. [ 19 ] We employed two-sample summary data Mendelian randomization to further validate causal effects of neutrophil cell count genes on the outcome of critical illness due to COVID-19

      Blinding

      not detected.

      Power Analysis

      not detected.

      Replication

      not required.

      Data Information

      Identifiers: medRxiv preprint doi: https:// doi.org/10.1101/2021.05.18.21256584; this version posted February 14 , 2022 . https://doi.org/10.1101/2021.05.18.21256584

      Identifiers: Manhattan plot of neutrophil cell count showing that we reproduce the reported CDK6 signal ( rs445 ) on chromosome 7 . rs445

    1. SciScore rigor report



      Ethics

      IRB: 234 Ethical clearance was obtained from the regional Ethical Review Board of Amhara

      Consent: The general aim and purpose of the study was described to each 239 eligible patient and all voluntary participants gave verbal informed consent prior to 240 enrolment.

      Inclusion and Exclusion Criteria

      Those 135 patients who were critically ill and unable to respond and those not voluntary to 136 participate were excluded.

      Attrition

      Those 135 patients who were critically ill and unable to respond and those not voluntary to 136 participate were excluded .

      Sex as a biological variable

      Sex: male, female. Age group: 18-24, 25-44, ≥45

      Subject Demographics

      Age: 130 All adult patients ( aged ≥18 years ) who were using clinical laboratory services at 131 public health facilities of east Amhara , northeast Ethiopia were source population.

      Randomization

      132 Study population and eligibility criteria 133 Adult patients who received general laboratory services at the randomly selected 134 government health facilities during the study period were study population .

      Blinding

      not detected.

      Power Analysis

      not detected.

      Replication

      not required.

      Data Information

      Availability: It is made available under a CC-BY-NC-ND 4.0 International license .

      Identifiers: preprint doi: https:// doi.org/10.1101/2022.01.25.22269238; this version posted January 25 , 2022 . https://doi.org/10.1101/2022.01.25.22269238

    1. Another strategy is reinforcement learning (aka. constraint learning), as used in some AI systems.
  19. Jan 2022
    1. SciScore rigor report



      Ethics

      Field Sample Permit: Our findings indicate a paucity of 217 research focusing on field trials and implementation studies related to CHIKV RDTs .

      IRB: I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.

      Consent: I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.

      Inclusion and Exclusion Criteria

      98 Articles were excluded if (i) the studies were reviews, case reports, or opinion articles; (ii) 99 the studies evaluated the performance of reverse transcription loop-mediated isothermal 100 amplification (RT-LAMP) assays; (iii) the studies were related to an outbreak investigation 101 without the evaluation of the accuracy of CHIKV RDTs; (iv) the studies used an inappropriate 102 study population (asymptomatic individuals); (v) the studies described inappropriate It is made available under a CC-BY-NC-ND 4.0 International license.

      Attrition

      Based on the tile and the abstract , 96 were excluded , with 89 full-text 158 articles retrieved and assessed for eligibility .

      Sex as a biological variable

      not detected.

      Subject Demographics

      Age: not detected. Weight: not detected.

      Randomization

      Similarly , there was a high risk of bias in 210 the patient selection domain because only three studies enrolled a consecutive or random 211 sample of eligible patients with suspicion of CHIKV infection to reduce the bias in the 212 diagnostic accuracy of the index test .

      Blinding

      not detected.

      Power Analysis

      not detected.

      Replication

      not required.

      Data Information

      Availability: The 90 Prisma-ScR checklist is available in the Supplementary material.

      Identifiers: medRxiv preprint doi: https:// doi.org/10.1101/2022.01.28.22270018; this version posted January 30 , 2022 . https://doi.org/10.1101/2022.01.28.22270018

    1. SciScore rigor report



      Ethics

      IRB: Institutional Review Board and all participants gave their signed informed consent.

      Consent: Institutional Review Board and all participants gave their signed informed consent.

      Inclusion and Exclusion Criteria

      83 years; 34 males; 57 righthanded , see Table 1 ) met the inclusion criteria: All patients were older than 18 years , presented with first-ever ischemic ( 83 % ) or haemorrhagic ( 17 % ) stroke and behavioural deficits as assessed by a neurological examination.

      Attrition

      Patients who had a history of neurological or psychiatric presentations ( e.g. transient ischemic attack) , multifocal or bilateral strokes , or had MRI contraindications ( e.g. claustrophobia , ferromagnetic objects ) were excluded from the analysis ( n = 131 patients , see the enrollment flowchart in the supplementary materials from Corbetta et al. 2015).

      Sex as a biological variable

      Handedness (% right-handed): 91.94. Sex (% female): 45.16. Abbreviations: SD = standard deviation

      Subject Demographics

      Age: 83 years; 34 males; 57 righthanded , see Table 1 ) met the inclusion criteria: All patients were older than 18 years , presented with first-ever ischemic ( 83 % ) or haemorrhagic ( 17 % ) stroke and behavioural deficits as assessed by a neurological examinatio .

      Randomization

      The task instructions require patients to place and remove the nine pegs one at a time and in random order as quickly as possible ( Mathiowetz et al. 1985; Oxford Grice et al. 2003).

      Blinding

      Two boardcertified neurologists ( Drs Corbetta and Carter ) reviewed all segmentations blinded to the individual behavioural data .

      Power Analysis

      We believe that adding other factors ( e.g. demographic , clinical , socioeconomic variables ) that likely interact with the recovery of patients can help us increase the model’s predictive power.

      Replication

      not required.

      Cell Line Authentication

      Authentication: However, most of the studies fall into one of the pitfalls described above (i.e. overfitting, generalisability, and diaschisis), as the models are not validated in an independent dataset.

      Code Information

      Identifiers: This procedure is available as supplementary code with the manuscript (see https://github.com/lidulyan/Hierarchical-Linear-Regression-R-).

      https://github.com/lidulyan/Hierarchical-Linear- Regression-R-
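      The hierarchical linear regression referenced above (entering predictor blocks step by step and comparing nested models by the change in R²) can be sketched as follows. This is a minimal illustration, not the code from the linked repository: the two-block structure, the variable names, and the simulated data are all assumptions for the example.

      ```python
      import numpy as np

      def r_squared(X, y):
          """Fit ordinary least squares and return R^2 for design matrix X."""
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

      rng = np.random.default_rng(0)
      n = 100
      baseline = rng.normal(size=(n, 1))    # block 1: e.g. acute deficit score (assumed)
      covariates = rng.normal(size=(n, 2))  # block 2: e.g. demographic variables (assumed)
      y = 0.8 * baseline[:, 0] + 0.3 * covariates[:, 0] + rng.normal(scale=0.5, size=n)

      ones = np.ones((n, 1))
      X1 = np.hstack([ones, baseline])              # step 1: baseline predictors only
      X2 = np.hstack([ones, baseline, covariates])  # step 2: add the covariate block
      r2_step1 = r_squared(X1, y)
      r2_step2 = r_squared(X2, y)
      # Delta R^2 quantifies what the added block contributes beyond block 1.
      print(f"R2 step 1 = {r2_step1:.3f}, R2 step 2 = {r2_step2:.3f}, "
            f"delta R2 = {r2_step2 - r2_step1:.3f}")
      ```

      Because the step-1 model is nested in the step-2 model (both include an intercept), R² can only stay equal or increase at step 2; whether the increase is meaningful is what the hierarchical comparison tests.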

      Data Information

      Availability: It is made available under a CC-BY-NC 4.0 International license.

      Identifiers: preprint doi: https://doi.org/10.1101/2021.12.01.21267129; this version posted December 2, 2021.

      https://doi.org/10.1101/2021.12.01.21267129

    1. SciScore rigor report

      SciScore is an AI platform that assesses the rigor of the methods used in the manuscript. SciScore assists expert referees by finding and presenting information scattered throughout a manuscript in a simple format.


      Not required = Field is not applicable to this study

      Not detected = Field is applicable to this study, but not included.


      Ethics

      Field Sample Permit: Collection of data for detecting the cellular spatiotemporal conditions supporting circularization. For this purpose, online databases and web servers were used, taking specific queries such as RBP types or lncRNAs to search out their specific locations inside cellular spaces.

      Inclusion and Exclusion Criteria

      not required.

      Attrition

      not required.

      Sex as a biological variable

      not required.

      Subject Demographics

      Age: not required.

      Weight: not required.

      Randomization

      To reduce computational complexity when dealing with very large databases, where the number of records is greater than 1000, sample datasets were produced through random selection of data from the original database.
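      The subsampling step described above can be sketched as follows. Only the 1000-record threshold comes from the text; the function name, the sample size, and the fixed seed (for reproducibility) are illustrative assumptions.

      ```python
      import random

      def subsample(records, threshold=1000, sample_size=1000, seed=42):
          """Return the records unchanged if the table is small enough;
          otherwise draw a reproducible random sample without replacement
          to keep downstream analysis tractable."""
          if len(records) <= threshold:
              return list(records)
          rng = random.Random(seed)  # fixed seed so the sample is reproducible
          return rng.sample(list(records), sample_size)

      data = list(range(5000))   # stand-in for a large database table
      sample = subsample(data)
      print(len(sample))         # 1000
      ```

      Sampling without replacement keeps each record at most once, and fixing the seed means the same working sample can be regenerated when the analysis is rerun.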

      Blinding

      not detected.

      Power Analysis

      not detected.

      Replication

      not required.

      Data Information

      Identifiers: We analyzed the spread of this biomolecular entity outside and inside the sub-cellular space, along with assimilating other reported pieces of information (e.g., about RBP molecules involved in circularization of such …). bioRxiv preprint doi: https://doi.org/10.1101/2021.10.26.465935; this version posted October 26, 2021. https://doi.org/10.1101/2021.10.26.465935

    1. SciScore rigor report

      SciScore is an AI platform that assesses the rigor of the methods used in the manuscript. SciScore assists expert referees by finding and presenting information scattered throughout a manuscript in a simple format.


      Not required = Field is not applicable to this study

      Not detected = Field is applicable to this study, but not included.


      Ethics

      Field Sample Permit: Seeds of sorghum (Sorghum bicolor) were obtained from the seed collection unit of the Office of the Agricultural Development Programme, Benin City, Edo State, Nigeria. Ferruginous (or iron-elevated) soil used in this study was obtained from around the Life Sciences Faculty environment and pooled to obtain a composite sample.

      Inclusion and Exclusion Criteria

      not required.

      Attrition

      not required.

      Sex as a biological variable

      not required.

      Subject Demographics

      Age: not required.

      Weight: not required.

      Randomization

      To confirm that the soil was ferruginous, samples were collected from random areas, and iron content was first confirmed in the area before more samples were collected and pooled.

      Blinding

      not detected.

      Power Analysis

      not detected.

      Replication

      The experiment was laid out in a completely randomized design in a factorial arrangement and replicated three times per treatment.

      Number: The experiment was laid out in a completely randomized design in a factorial arrangement and replicated three times per treatment.

      Data Information

      Availability: It is made available under a CC-BY 4.0 International license.

      Identifiers: preprint doi: https://doi.org/10.1101/2021.11.22.469542; this version posted November 22, 2021.

      https://doi.org/10.1101/2021.11.22.469542