20 Matching Annotations
  1. Sep 2024
    1. when OpenAI developed GPT-4 and they wanted to test what this new AI can do, they gave it the task of solving CAPTCHA puzzles. These are the puzzles you encounter online when you try to access a website and the website needs to decide whether you're a human or a robot. Now, GPT-4 could not solve the CAPTCHA, but it accessed a website, TaskRabbit, where you can hire people online to do things for you, and it wanted to hire a human worker to solve the CAPTCHA puzzle

      for - AI - progress trap - example - no morality - Open AI - GPT4 - could not solve captcha - so hired human at Task Rabbit to solve - Yuval Noah Harari story

  2. Jun 2024
    1. this company's not good for safety

      for - AI - security - Open AI - examples of poor security - high risk for humanity

      AI - security - Open AI - examples of poor security - high risk for humanity - ex-employees report very inadequate security protocols - employees have had screenshots captured while at cafes outside of Open AI offices - people like Jimmy Apple report future releases on Twitter before Open AI does

    2. OpenAI literally yesterday published "Securing Research Infrastructure for Advanced AI"

      for - AI - Security - Open AI statement in response to this essay

    3. if you have the cognitive abilities of something that is, you know, 10 to 100 times smarter than you, trying to outsmart it is just not going to happen whatsoever, so you've effectively lost at that point, which means that you're going to be able to overthrow the US government

      for - AI evolution - nightmare scenario - US govt may seize Open AI assets if it arrives at superintelligence

      AI evolution - projection - US govt may seize Open AI assets if it arrives at superintelligence - He makes a good point here - If Open AI or Google achieves superintelligence that is many times more intelligent than any human, the US government would fear that it could be overthrown or that the technology could be stolen and fall into the wrong hands

  3. Jan 2024
    1. the canonical unit, the NCU supports natural capital accounting, currency source, calculating and accounting for ecosystem services, and influences how a variety of governance issues are resolved
      • for: canonical unit, collaborative commons - missing part - open learning commons, question - process trap - natural capital

      • comment

        • in this context, Indyweb and Indranet are not the canonical unit; but then, the model seems to be fundamentally missing the functionality provided by the Indyweb and Indranet, which is an open learning system.
        • without such an open learning system that captures the essence of how humans learn, the activity of problem-solving cannot be properly contextualised, along with all of its limitations that lead to progress traps.
        • The entire approach of posing a problem, then solving it is inherently limited due to the fractal intertwingularity of reality.
      • question: progress trap - natural capital

        • It is important to be aware that there is a real potential for a progress trap to emerge here, as any metric is liable to be abused
  4. Aug 2023
  5. Mar 2023
    1. OpenChatKit provides a powerful, open-source base for creating both specialized and general-purpose chatbots for a variety of applications. We collaborated with LAION and Ontocord to create the training dataset. Much more than a model release, this is the start of an open-source project: we are releasing tools and processes for ongoing improvement through community contributions.

      Together believes open-source foundation models can be more inclusive, transparent, robust, and capable. We are releasing OpenChatKit 0.15 under the Apache-2.0 license, with full access to the source code, model weights, and training datasets. This is a community-driven project, and we are excited to see how it develops and grows!

      A useful chatbot needs to follow instructions in natural language, maintain context in dialog, and moderate its responses. OpenChatKit provides a base bot, and the building blocks from which to derive purpose-built chatbots.

      The kit has 4 key components:

      An instruction-tuned large language model, fine-tuned for chat from EleutherAI's GPT-NeoX-20B on over 43 million instructions using 100% carbon-negative compute;

      Customization recipes for fine-tuning the model to perform tasks accurately;

      An extensible retrieval system that can augment bot responses with information from a document repository, API, or other live-updating information source at inference time;

      A moderation model, fine-tuned from GPT-JT-6B, designed to filter which questions the bot responds to.

      OpenChatKit also includes tools that let users provide feedback and let community members add new datasets, contributing to a growing collection of open training data that will improve LLMs over time.
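      The last two components describe a familiar inference-time pattern: retrieve relevant context, prepend it to the prompt, and let a moderation model gate which questions get answered. A minimal sketch of that flow in Python, assuming hypothetical embed, generate, and moderate callables (stand-ins for illustration, not OpenChatKit's actual API):

```python
# Sketch of the retrieval-augmented, moderated chat flow described above.
# embed, generate, and moderate are hypothetical stand-ins, not OpenChatKit's API.
from typing import Callable, List

def retrieve(query: str, docs: List[str],
             embed: Callable[[str], List[float]], k: int = 3) -> List[str]:
    """Return the k documents whose embeddings score highest against the query."""
    def dot(a: List[float], b: List[float]) -> float:
        return sum(x * y for x, y in zip(a, b))
    q = embed(query)
    return sorted(docs, key=lambda d: dot(embed(d), q), reverse=True)[:k]

def answer(question: str, docs: List[str],
           embed: Callable[[str], List[float]],
           generate: Callable[[str], str],
           moderate: Callable[[str], bool]) -> str:
    # The moderation model filters which questions the bot responds to.
    if not moderate(question):
        return "I can't help with that."
    # Augment the prompt with retrieved context before generation.
    context = "\n".join(retrieve(question, docs, embed))
    return generate(f"Context:\n{context}\n\nUser: {question}\nBot:")
```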

  6. Jan 2023
    1. the outputs of generative AI programs will continue to pass immediately into the public domain.

      I wonder if this isn't reading more into the decision than is there. I don't read the decision as a blanket statement. Rather, it says that the claimant didn't provide evidence of creative input. Would the decision have gone differently if he had claimed creative intervention? And what if an author does not acknowledge using AI?

    2. The US Copyright Office rejected his attempt to register copyright in the work – twice

      AI-generated work not eligible for copyright protection. OTOH, how would anyone know if the "author" decided to keep the AI component a secret?

  7. Oct 2022
    1. In Mostaque’s explanation, open source is about “putting this in the hands of people that will build on and extend this technology.” However, that means putting all these capabilities in the hands of the public — and dealing with the consequences, both good and bad.

      This focus on responsibility and consequences was not there in the early days of open source, right?

  8. Sep 2022
    1. In a recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.

      This is a big question: whether use restrictions, which are becoming prolific (the RAIL license, for example), can be enforced. If not, and that's a big if, it might create a situation of "responsibility washing" - licensors can argue they did all that's possible to curb harmful uses, while those uses continue to happen in a gray / dark zone

  9. Dec 2021
    1. Standard algorithms as a reliable engine in SaaS https://en.itpedia.nl/2021/12/06/standaard-algoritmen-als-betrouwbaar-motorblok-in-saas/ The term "Algorithm" has gotten a bad rap in recent years. This is because large tech companies such as Facebook and Google are often accused of threatening our privacy. However, algorithms are an integral part of every application. As is known, SaaS is standard software, which makes use of algorithms just like other software.

      • But what are algorithms anyway?
      • How can we use standard algorithms? (see the sketch after this list)
      • How do standard algorithms end up in our software?
      • When is software not an algorithm?
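      To make the second question concrete: a "standard algorithm" is typically one you reuse from a well-tested library rather than write yourself. A minimal illustration in Python, using only the standard library:

```python
# Reusing standard-library algorithms instead of hand-rolled ones:
# bisect implements binary search, heapq a binary heap.
import bisect
import heapq

prices = [3, 18, 29, 44, 67]  # bisect requires sorted input

# Binary search: index where 30 would be inserted to keep the list sorted.
print(bisect.bisect_left(prices, 30))  # 3

# Heap selection: the three smallest values without fully sorting the input.
print(heapq.nsmallest(3, [44, 3, 67, 18, 29]))  # [3, 18, 29]
```

      SaaS applications lean on such building blocks the same way any other software does; the reliability comes from thousands of projects exercising the same implementation.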
  10. Oct 2021
  11. Jun 2020
  12. Dec 2019
    1. Four databases of citizen science and crowdsourcing projects — SciStarter, the Citizen Science Association (CSA), CitSci.org, and the Woodrow Wilson International Center for Scholars (the Wilson Center Commons Lab) — are working on a common project metadata schema to support data sharing, with the goal of maintaining accurate and up-to-date information about citizen science projects. The federal government is joining this conversation with a cross-agency effort to promote citizen science and crowdsourcing as a tool to advance agency missions. Specifically, the White House Office of Science and Technology Policy (OSTP), in collaboration with the U.S. Federal Community of Practice for Citizen Science and Crowdsourcing (FCPCCS), is compiling an Open Innovation Toolkit containing resources for federal employees hoping to implement citizen science and crowdsourcing projects. Navigation through this toolkit will be facilitated in part through a system of metadata tags. In addition, the Open Innovation Toolkit will link to the Wilson Center's database of federal citizen science and crowdsourcing projects. These groups became aware of their complementary efforts and the shared challenge of developing project metadata tags, which gave rise to the need for a workshop.

      Sense Collective's Climate Tagger API and Pool Party Semantic Web plug-in are perfectly suited to support the Wilson Center's metadata schema project. Creating a common metadata schema that is used across multiple organizations working within the same domain, with similar (and overlapping) data and data types, is an essential step towards realizing collective intelligence. There is significant redundancy that consumes limited resources, as organizations often perform the same type of data structuring. Interoperability issues between organizations, their metadata semantics, and serialization methods prevent cumulative progress as a community. Sense Collective's MetaGrant program is working to provide a shared infrastructure for NGOs, social impact investment funds, and social impact bond programs to help rapidly improve on the problems being solved by this awesome project of the Wilson Center. Now let's extend the coordinated metadata semantics to 1000 more organizations and incentivize the citizen science volunteers who make this possible, with a closer connection to the local benefits they produce through their efforts. With integration into social impact bond programs and public/private partnerships, we are able to incentivize collective action in ways that match the scope and scale of the problems we face.
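      A common project metadata schema of the kind described above is, in practice, an agreed set of fields and controlled tag vocabularies serialized in one shared format. A minimal sketch of what a single shared record might look like; the field names here are illustrative assumptions, not the actual schema these organizations agreed on:

```python
# Illustrative project-metadata record for a shared schema.
# Field names and vocabularies are hypothetical, not the schema
# agreed by SciStarter, CSA, CitSci.org, and the Wilson Center.
import json

project = {
    "id": "example-project-001",
    "title": "Backyard Bird Count",
    "description": "Volunteers report winter bird sightings.",
    "topics": ["ecology", "ornithology"],        # controlled vocabulary
    "methods": ["crowdsourcing", "observation"], # shared tag set
    "agency": None,                              # set only for federal projects
    "url": "https://example.org/projects/bird-count",
}

# Serializing to one agreed format is what lets multiple databases
# exchange records without per-pair conversion code.
print(json.dumps(project, indent=2))
```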

  13. May 2018
    1. “In short, they have no history of supporting the machine learning research community and instead they are viewed as part of the disreputable ecosystem of people hoping to hype machine learning to make money.”

      Whew. Hot.