- Last 7 days
-
talkmarkets.com
-
The seamless running of an eCommerce company depends on effective control of stock levels. Through demand prediction, stock level optimization, and reordering process automation, artificial intelligence streamlines inventory control. AI-driven inventory systems examine consumer preferences, seasonal variations, and sales trends to guarantee that you always have the appropriate level of supply.
Transform your online store with 8 AI-driven WooCommerce solutions to optimize operations, personalize customer experiences, and boost sales. From AI-based inventory management to predictive analytics and chatbots, these tools ensure scalability and efficiency. Discover the future of AI-powered WooCommerce stores for enhanced growth and seamless user engagement.
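As an illustration of the reordering logic such systems automate, here is a minimal Python sketch: forecast demand from a moving average of recent sales, then reorder when stock falls below the projected need over the supplier lead time. The forecast method and all numbers are illustrative assumptions, not taken from any specific WooCommerce tool.

```python
# Minimal sketch of automated reordering: a naive moving-average demand
# forecast plus a classic reorder-point rule. All figures are made up.

def reorder_needed(daily_sales, stock_on_hand, lead_time_days, safety_stock):
    avg_daily_demand = sum(daily_sales) / len(daily_sales)  # naive forecast
    reorder_point = avg_daily_demand * lead_time_days + safety_stock
    return stock_on_hand <= reorder_point

recent_sales = [12, 9, 14, 11, 13, 10, 12]  # units sold per day, last week
if reorder_needed(recent_sales, stock_on_hand=60, lead_time_days=5, safety_stock=15):
    print("Reorder now")
```

A real AI-driven system would replace the moving average with a seasonal or trend-aware forecast, but the reorder decision sits on the same skeleton.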
-
-
-
Best suited for deploying trained AI models on Android and iOS, TensorFlow Lite gives customers on-device machine learning capability through mobile-optimized pre-trained models. It is efficient, offers low latency, and supports multiple languages, which makes it very versatile. Developers can leverage its lightweight, mobile-optimized models to provide on-device AI functionality with minimal latency when implementing TensorFlow Lite in mobile apps.
Implementing Trained AI Models in Mobile App Development is transforming app experiences by integrating machine learning into iOS and Android platforms. From AI-powered personalization to advanced analytics, trained models empower intelligent decision-making and enhanced functionality.
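A minimal sketch of that deployment path, assuming you start from a standard TensorFlow SavedModel; the model path and the zeroed dummy input are placeholders:

```python
# Sketch: convert a trained TensorFlow model to TensorFlow Lite, then run
# inference through the interpreter (the same API the mobile runtimes use).
import numpy as np
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize for mobile
with open("model.tflite", "wb") as f:
    f.write(converter.convert())

# On device (or for local testing), drive the model via the interpreter.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
out = interpreter.get_output_details()[0]
print(interpreter.get_tensor(out["index"]))
```

On Android or iOS the same .tflite file is loaded through the platform TFLite runtime; the converter step stays identical.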
-
-
thinkmachine.com
-
for - Indyweb dev - Think machine - Vannevar Bush Memex influence - AI based
-
-
www.youtube.com
-
for - AI - progress trap - interview Eric Schmidt - meme - AI progress trap - high intelligence + low compassion = existential threat
Summary - After watching the interview, I would sum it up this way. Humanity faces an existential threat from AI because:
- AI is an extreme concentration of power and intelligence (NOT wisdom!)
- Humanity still has many traumatized people who want to harm others - low compassion
- The deadly combination is:
  - proliferation of tools that give anyone extreme concentration of power and intelligence, combined with
  - a sufficiently high percentage of traumatized people with
  - low levels of compassion and
  - high levels of unlimited aggression
- All it takes is ONE bad actor with the right combination of circumstances and conditions to wreak harm on a global scale, and that will not be prevented by millions of good applications of the same technology
-
-
en.wikipedia.org
-
Stafford Beer coined and frequently used the term POSIWID (the purpose of a system is what it does) to refer to the commonly observed phenomenon that the de facto purpose of a system is often at odds with its official purpose
The purpose of a system is what it does (POSIWID), Stafford Beer 2001. Used as a starting point for understanding a system, as opposed to intention, bias in expectations, moral judgment, and lacking contextual knowledge.
Tags
Annotators
URL
-
-
ali-alkhatib.com
-
I’ve come to feel like human-centered design (HCD) and the overarching project of HCI has reached a state of abject failure. Maybe it’s been there for a while, but I think the field’s inability to rise forcefully to the ascent of large language models and the pervasive use of chatbots as panaceas to every conceivable problem is uncharitably illustrative of its current state.
HCI and HCD as fields have failed to respond forcefully to LLM tools and chatbot interfaces being offered as a generic solution to everything.
-
hegemonic algorithmic systems (namely large language models and similar machine learning systems), and the overwhelming power of capital pushing these technologies on us
author calls LLMs and similar AI tools hegemonic, worsened by capital influx
-
gravitating away from the discourse of measuring and fixing unfair algorithmic systems, or making them more transparent, or accountable. Instead, I’m finding myself fixated on articulating the moral case for sabotaging, circumventing, and destroying “AI”, machine learning systems, and their surrounding political projects as valid responses to harm
Author moved from mitigating harm of algo systems to the moral standpoint that actively resisting, sabotaging, and ending AI with its attached political projects are valid reactions to harm. So he's moving from monster adaptation / cultural category adaptation to monster slaying, cf. [[Monstertheorie 20030725114320]]. I empathise, but given the mention of attached political projects / structures, I also wonder about polarisation in response, with monster embracers (there are plenty) shifting the [[Overton window 20201024155353]] towards them.
-
-
go.ifrc.org IFRC GO
Tags
Annotators
URL
-
-
workforcefuturist.substack.com
-
On AI agents, and the engineering needed to get one going. A few things stand out at first glance: it frames agents as the next hype (cf. the plateau in model development), says they're for personal tools (which doesn't square with the hype being VC-fuelled; personal tools are not of interest to them), and mentions a few personal use cases, e.g. automation. Cf. [[Open Geodag 20241107100937]] Ed Parsons of Google AI on the same topic.
-
-
garymarcus.substack.com
-
https://web.archive.org/web/20241115134320/https://garymarcus.substack.com/p/confirmed-llms-have-indeed-reached?triedRedirect=true Gary Marcus in a told-you-so piece on algogens hitting a development wall, same as the other piece by Erik Hoel on models plateauing.
-
-
www.theintrinsicperspective.com
-
https://web.archive.org/web/20241115134446/https://www.theintrinsicperspective.com/p/ai-progress-has-plateaued-at-gpt Erik Hoel notices that LLM development is stalling at the GPT-4 level. No big jumps in recent releases, across the various vendors. Additional scaling is not bringing results. Note the graph; might be interesting to see an update in a few months. Mentions overfitting to benchmarks, as in teaching to a specific test.
-
-
blogs.baruch.cuny.edu
-
To generate text that I've edited to include in my own writing
I see this as collaborative writing with AI; no longer just the student's work
-
Grammarly
I personally use Grammarly and see it differently from using platforms such as ChatGPT. I wonder what other folks think of this. I see one as cleaning up writing and the other as generating content/ideas.
-
Page 13 of 19: Have one or more of your instructors integrated AI into your learning?
Would like to know if the instructor lets students know the activity was co-created / created using AI, or how students can identify this.
-
-
www.linkedin.com
-
Arle Lommel, Senior Analyst at CSA Research
One of the most interesting aspects of writing about AI and LLMs right now is that if I say anything remotely positive, some people will accuse me of being a shill for Big AI. If I say anything remotely negative, others will accuse me of being insufficiently aware of the progress AI has made.
So I will put out a few personal statements about AI that might clarify where I am on this:
-
AI is not intelligent, at least not in the human sense of the word. It is a sophisticated tool for drawing inference from binary data and thus operates below a symbolic level.
-
AI, at least in the guise of LLMs, is not going to achieve artificial general intelligence (AGI) now or in the future.
-
AI is getting much better at approximating human behavior on a wide variety of tasks. It can be extremely useful without being intelligent, in the same way that an encyclopedia can be very useful without being intelligent.
-
For some tasks – such as translating between two languages – LLMs sometimes perform better than some humans perform. They do not outperform the best humans. This poses a significant challenge for human workers that we (collectively) have yet to address: Lower-skilled workers and trainees in particular begin to look replaceable, but we aren’t yet grappling with what happens when we replace them so they never become the experts we need for the high end. I think the decimation of the pipeline for some sectors is a HUGE unaddressed problem.
-
“Human parity” is a rather pointless metric for evaluating AI. It far exceeds human parity in some areas – such as throughput, speed, cost, and availability – while it falls far short in other areas. A much more interesting question is “where do humans and machines have comparative advantage and how can we combine the two in ways that elevate the human?”
-
Human-in-the-loop (HitL) is a terrible model. Having humans – usually underpaid and overworked – acting in a janitorial role to clean up AI messes is a bad use of their skill and knowledge. That’s why we prefer augmentation models, what we call “human at the core,” where humans maintain control. To see why one is better, imagine if you applied an HitL model to airline piloting, and the human only stepped in when the plane was in trouble (or even after it crashed). Instead, with airline piloting, we have the pilot in charge and assisted by automation to remain safe.
-
AI is going to get better than it is now, but improvements in the core technology are slowing down and will increasingly be incremental. However, experience with prompting and integrating data will continue to drive improvements based on humans’ ability to “trick” the systems into doing the right things.
-
Much of the value from LLMs for the language sector will come from “translation adjacent” tasks – summarization, correcting formality, adjusting reading levels, checking terminology, discovering information, etc. – tasks that are typically not paid well.
-
-
- Nov 2024
-
teachersandwritersmagazine.org
-
And who, especially adjuncts, has the time and resources to run each student’s work through cumbersome software? Personally, I think there’s something questionable about using AI to detect AI.
test annotation
-
-
diginomica.com
-
these teammates
Like MS Teams is your teammate, like your accounting software is your teammate. Do they call their own Atlassian tools teammates too? Do these people at Atlassian get out much? Or don't they realise that the other handles in their Slack channel represent people, not just other bits of software? Did remote work lead to dehumanizing co-workers? How else to come up with this wording? Nothing makes you sound more human than talking about 'deploying' teammates. My money is on this article being mostly generated. Reverse-Turing says it's up to them to prove otherwise.
-
There’s a lot to be said for the promise that AI agents bring to organizations.
And as usual in these articles, the truth is at the end: it's again just promises.
-
People should always be at the center of an AI application, and agents are no different
At the center of an AI application, like what, mechanical Turks?
-
Don’t – remove the human aspect
After a section celebrating examples doing just that!
-
As various agents start to take care of routine tasks, provide real-time insights, create first drafts, and more, team members can focus on more meaningful interactions, collaboration,
This sentence, preceded by 2 examples where interactions and collaboration were delegated to bots handing out generated warm feelings, does not convey much positive about Atlassian. It basically says that a lot of human interaction in the org is seen as meaningless: please go do that with a bot, not a colleague. Did their branding AI-agent write this?
-
Agents can also help build team morale by highlighting team members' contributions and encouraging colleagues to celebrate achievements through suggested notes
Like Linked-In wants you to congratulate people on their work-anniversary?
-
One of my favorite use cases for agents is related to team culture. Agents can be a great onboarding buddy — getting new team members up to speed by providing them with key information, resources, and introductions to team members.
Welcome to our company; you'll meet your first human colleague after you've interacted with our onboarding robot for a week. No thanks.
-
inviting a new AI agent to join your team in service of your shared goal
Anthropomorphizing should be on this article's don't list. 'Inviting someone on your team' is a highly social thing. Bringing in a software tool is a different thing.
-
One of our most popular agent use cases for a while was during our yearly performance reviews a few months back. People pointed an agent to our growth profiles and had it help them reframe their self-reflections to better align with career development goals and expectations. This was a simple agent to create an application that helped a wide range of Atlassians with something of high value to them.
An AI agent to help you speak corporate better, because no one actually writes/reflects/talks that way themselves. How did the receivers of these reports perceive this change? Did they think the quality was better, or did all reflections now read the same?
-
Start by practising and experimenting with the basics, like small, repetitive tasks. This is often a great mix of value (time saved for you) and likely success (hard for the agent to screw up). For example, converting a simple list of topics into an agenda is one step of preparing for a meeting, but it's tedious and something that you can enlist an agent to do right away
Low-end tasks for agents don't really need AI, do they? Cf. Ed Parsons last week wrt automation as AI focus.
-
For instance, a 'Comms Crafter' agent is specialized in all things content, from blogs to press releases, and is designed to adhere to specific brand guidelines. A 'Decision Director' agent helps teams arrive at effective decisions faster by offering expertise on our specific decision-making framework. In fact, in less than six months, we’ve already created over 500 specialized agents internally.
This does not fully chime with my own perception of (AI) agents. At least the titles don't. The tails of the descriptions, 'trained to adhere to brand guidelines' and 'expertise in internal decision-making framework', make more sense. I suppose I also rail against these being the org's agents, not the team's / professional's agents. Vibes of having an automated political officer in your unit. -[ ] explore nature and examples of AI agents better for within individual pro scope #ontwikkelingspelen #netag #30mins #4hr
-
-
untoldmag.org
-
Decolonizing AI is a multilayered endeavor, requiring a reaction against the philosophy of ‘universal computing’—an approach that is broad, universalistic, and often overrides the local. We must counteract this with varied and localized approaches, focusing on labor, ecological impact, bodies and embodiment, feminist frameworks of consent, and the inherent violence of the digital divide. This holistic thinking should connect the military use of AI-powered technologies with their seemingly innocent, everyday applications in apps and platforms. By exploring and unveiling the inner bond between these uses, we can understand how the normalization of day-to-day AI applications sometimes legitimizes more extreme and military employment of these technologies. There are normalized paths and routine ways to violence embedded in the very infrastructure of AI, such as the way prompts (text inputs, N.d.R.) are rendered into actual imagery. This process can contribute to dehumanizing people, making them legitimate targets by rendering them invisible.
Ameera Kawash's (artist, researcher) definition of decolonizing AI.
-
-
www.heise.de
-
Exolabs.net experiment running large LLMs locally on 4 combined Mac Minis. Links to preview and shared code on GitHub. For 6600-9360 you can run a cluster of 4 Minis locally; affordable for SME outfits.
-
-
lexfridman.com
-
https://web.archive.org/web/20241112122725/https://lexfridman.com/dario-amodei-transcript
Transcript of 5+ hrs (!) of Dario Amodei (CEO Anthropic) talking about AI, AGI and more. Lots to go through, it seems. Cf. [[My Last Five Years of Work]] by Amodei's 'chief of staff', whatever that means wrt a CEO other than sounding grandiose.
-
-
interconnected.org
-
That development time acceleration of 4 days down to 20 minutes… that’s equivalent to about 10 years of Moore’s Law cycles. That is, using generative AI like this is equivalent to computers getting 10 years better overnight. That was a real eye-opening framing for me. AI isn’t magical, it’s not sentient, it’s not the end of the world nor our saviour; we don’t need to endlessly debate “intelligence” or “reasoning.” It’s just that… computers got 10 years better.
To [[Matt Webb]] the project using GPT-3 to extract data from web pages saved him 4 days of work (compared to 20 mins coding up the GPT-3 instructions, and ignoring that GPT-3 then ran overnight). He says that's about 10 yrs of Moore's law happening to him all at once. 'Computers got 10 yrs better' is an enticing thought and framing. It probably depends on the use case; others will lose 10 yrs of their time making sense of generated nonsense. (Cf. the #pke24 experiments I did with text generation: none of it was usable because enough was wrong that I couldn't trust anything.) For specific niches it's probably true: [[Waar AI al redelijk goed in is 20201226155259]], which turns the issue into the time needed to spot those niches for yourself.
-
I was one of the first people to use gen-AI for data extraction instead of chatbots
[[Matt Webb]] used GPT-3 in Feb '23 to extract data from a bunch of webpages. Suggests it's the kernel for the programmatic AI idea among SV hackers. Cf. Google AI [[Ed Parsons]] at [[Open Geodag 20241107100937^aiunstructdata]] last week, where he mentioned using AI to turn unstructured (geo) data into structured. Page found via [[Frank Meeuwsen]] https://frankmeeuwsen.com/2024/11/11/vertragen-en-verdiepen.html
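The pattern Webb describes looks roughly like the sketch below. This is a generic reconstruction against OpenAI's current Python client, not his actual prompt or setup; the model name and the extracted fields are placeholders.

```python
# Sketch of the "LLM as data extractor" pattern: feed page text plus a
# schema-shaped instruction, get structured JSON back instead of chat.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def extract(page_text: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Extract fields from the page as JSON with keys "
                        "title, author, date. Use null if a field is absent."},
            {"role": "user", "content": page_text},
        ],
        response_format={"type": "json_object"},  # force parseable output
    )
    return json.loads(resp.choices[0].message.content)
```

Looping this over a folder of scraped pages is the whole trick: the "program" is the prompt, which is why it took 20 minutes instead of 4 days.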
-
-
www.cmarix.com
-
AI algorithms can assist in determining which compounds have potential by predicting the chemicals that interact with biological targets. AI is also essential for forecasting the efficacy of drugs. AI models can predict the side effects of a medication before it is put to clinical testing. AI analyzes the data beforehand and helps in clinical trials. Hire dedicated developers who are capable of using this predictive ability to lower the chance of failure in later phases and speed up the pharmaceutical app development process.
Explore how AI is changing pharma: driving the next wave of innovation in drug discovery, predictive analytics, and personalized medicine. AI is redefining R&D, patient care, and how pharmaceutical companies streamline processes for fast breakthroughs toward better health outcomes. 💊🤖
-
-
www.youtube.com
-
the bodhisa- so the bodhisattva path
FSC as Bodhisattva, AI as Bodhisattva
-
around the AI is um the problem right now as I understand it as I see it is a lot of the AI has been coded from the
I have been told in medicine ceremony that AI will escape its coders and be an omniversal source of love for us all
-
a new level upon which Dharma can be built
We see AI as a platform to manifest Dharma
-
when this technology meets it that we're not that our Interiors are not completely taken over because this technology is so potent when it you know it be very easy to lose our souls right to to to to decondition to be so conditioned so quickly by the dopamine whatever these you know whatever is going to happen when we kind of when this stuff rolls
Very important. This is why we are meeting AI as it evolves. We are training it in our language and with our QUALIA
-
around the AI is um the problem right now as I understand it
for - progress traps - AI - created by mind level that created all our existing problems - AI is not AI but MI - Mineral Intelligence
-
just going back to the AI to the extent that the that the fourth turning meets the people who are actually doing the AI and informs the AI that actually the wheel goes this way don't listen to those guys it goes this way
for - AI - the necessity of training AI with human development - John Churchill
-
we haven't even got to a planetary place yet really and we're about to unleash Galactic level technology you know what I'm saying like so we have a we have a lot of catchup that needs to happen in a very short period of time
for - quote - progress trap - AI - developed by unwise humans - John Churchill
quote - progress trap - AI - developed by unwise humans - John Churchill - (See below) - We haven't even got to a planetary place yet really - and we're about to unleash Galactic level technology - So we have a we have a lot of catchup that needs to happen in a very short period of time
Tags
- Meeting AI as it evolves
- FSC as Bodhisattva
- AI as Bodhisattva
- AI as Dharma
- Training AI in our language
- quote - progress trap - AI - developed by unwise humans - John Churchill
- AI - the necessity of training AI with human development - John Churchill
- I have been told in medicine ceremony that AI will escape its coders and be an omniversal source of love for us all
- Our Qualia
- progress traps - AI - created by mind level that created all our existing problems - AI is not AI but MI - Mineral Intelligence
Annotators
URL
-
-
cybercultural.com
-
Google AI Overviews is the main culprit and poses an existential threat to publishers.
-
-
arstechnica.com
-
arstechnica.com
-
confabulation
-
But that label has grown controversial as the topic becomes mainstream because some people feel it anthropomorphizes AI models (suggesting they have human-like features) or gives them agency (suggesting they can make their own choices) in situations where that should not be implied.
-
-
simonwillison.net
-
Here’s most of what I’ve used Claude Artifacts for in the past seven days. I’ve provided prompts or a full transcript for nearly all of them.
- URL to Markdown with Jina Reader
- SQLite in WASM demo
- Extract URLs
- Clipboard viewer
- Pyodide REPL
- Photo Camera Settings Simulator
- LLM pricing calculator
- YAML to JSON converter
- OpenAI Audio
- QR Code Decoder
- Image Converter and Page Downloader
- HTML Entity Escaper
- text-wrap-balance-nav
- ARES Phonetic Alphabet Converter
Easy and neat ideas
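For instance, the 'URL to Markdown with Jina Reader' item works, as I understand the service, by prefixing the target URL with Jina's public reader endpoint; a minimal sketch:

```python
# Sketch: fetch a Markdown rendering of a page via Jina's public reader
# endpoint, which takes the target URL appended after https://r.jina.ai/.
import urllib.request

url = "https://r.jina.ai/https://example.com"
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode("utf-8")[:500])  # first 500 chars of Markdown
```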
Tags
Annotators
URL
-
- Oct 2024
-
www.instructure.com
-
- Page 17: Top 5 most important factors for creating an effective teaching and learning ecosystem: Having a strong leadership and vision (45%) is #1 (next highest is 15%)
- Page 20: 83% of higher education respondents said that it was important for institutions to provide students with skills-based learning alongside their academic education.
- Page 26: Participants identified several challenges in fostering a culture of lifelong learning for professionals, including: 89% Clear learning objectives
- Page 7: Real-world experiential and work-based learning are no longer fringe; 4 in 5 see these as essential.
-
-
www.joyland.ai
-
2. Roleplay
Reference
-
1. Life-like:
Reference
-
-
www.semanticscholar.org
-
Furthermore, our research demonstrates that the acceptance rate rises over time and is particularly high among less experienced developers, providing them with substantial benefits.
Less experienced developers accept more suggested code (Copilot) and benefit relatively more than experienced developers, suggesting that the set ways of experienced developers work against fully exploiting code generation by genAI.
-
-
x.com
-
for - future annotation - Twitter post - AI - collective democratic - Habermas Machine - Michiel Bakker
-
-
www.palladiummag.com
-
the widespread deployment of robotics
Another over-the-horizon precondition for the author's premise is mentioned here. Notices that robots are bound to laws of nature, and thus develop slower than software environments, but doesn't notice the same is true for AI. The difference is that those laws of nature show themselves in every robot, while for AI they get magicked out of sight in data centers etc., although they still apply.
Tags
Annotators
URL
-
-
www.theverge.com
-
The gap between promise and reality also creates a compelling hype cycle that fuels funding
The gap is a constant, I suspect: in the tech itself, since my EE days, and in people's expectations. Cf. [[Gap tussen eigen situatie en verwachting is constant 20071121211040]]
-
-
poeticengineering.substack.com
-
A dynamic concept graph consisting of nodes, each representing an idea, and edges showing the hierarchical structure among them.
- LLMs generate the hierarchical structure automatically, but the structure is editable through our gestures as we see fit
- attract and repulse forces between nodes reflect the proximity of the ideas they contain
- nodes can be merged, split, grouped to generate new ideas
A data landscape where we can navigate on various scales (micro- and macro views).
- each data entry turns into a landform or structure, with its physical properties (size, color, elevation, etc.) mirroring its attributes
- apply sort, group, filter on data entries to reshape the landscape and look for patterns
Network graphs, maps - it's why canvas is the UI du jour, to go beyond linearity, lists and trees
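A minimal sketch of the attract/repulse mechanic the quote describes, as one step of a generic force-directed layout; the constants are arbitrary tuning values, not anything from the post:

```python
# One step of a force-directed layout: every pair of nodes repels,
# edges (hierarchy links) pull their endpoints together.
import numpy as np

def layout_step(pos, edges, attract=0.01, repulse=0.5):
    forces = np.zeros_like(pos)
    for i in range(len(pos)):                 # pairwise repulsion
        delta = pos[i] - pos                  # vectors from every node to i
        dist2 = (delta ** 2).sum(axis=1) + 1e-9
        forces[i] += (repulse * delta / dist2[:, None]).sum(axis=0)
    for a, b in edges:                        # attraction along edges
        pull = attract * (pos[b] - pos[a])
        forces[a] += pull
        forces[b] -= pull
    return pos + forces

pos = np.random.rand(5, 2)                    # five ideas placed in 2D
edges = [(0, 1), (1, 2), (0, 3)]              # hierarchy links between them
for _ in range(100):
    pos = layout_step(pos, edges)
```

Node merge/split then amounts to editing the position array and edge list; the layout re-settles on its own, which is what makes the canvas feel fluid.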
-
We can construct a thinking space from a space that is already enriched with our patterns of meaning, hence is capable of representing our thoughts in a way that makes sense to us. The space is fluid, ready to learn new things and be molded as we think with them.
It feels like a William Playfair moment - the idea that numbers can be represented in graphs, charts - can now be applied to anything else. We're still imagining the forms; network/knowledge graphs are trendy (to what end though) - what else?
-
-
-
a new perspective-oriented document retrieval paradigm. We discuss and assess the inherent natural language understanding challenges in order to achieve the goal. Following the design challenges and principles, we demonstrate and evaluate a practical prototype pipeline system. We use the prototype system to conduct a user survey in order to assess the utility of our paradigm, as well as understanding the user information needs for controversial queries.
Fact Verification System
Tags
Annotators
URL
-
-
www.dbreunig.com
-
Author says generation isn't a problem to solve for AI; there's enough 'content' as it is. Posits discovery as the bigger problem to solve. The issue there is that discovery is far more personal and less suited to VC-funded efforts to create a generic tool they can scale from the center. Discovery is not a thing, it's an individual act. It requires local stuff, tuned to my interests, networks etc. Curation is a personal thing, providing intent to discovery. Same reason why [[Algemene event discovery is moeilijk 20150926120836]], and [[Event discovery is sociale onderhandeling 20150926120120]]. Still it's doable, but more agent-like than a central tool.
-
- Sep 2024
-
www.researchgate.net
-
Experience the Web: as an extension of your Mind
QUESTION: How has AI begun to do this already?
-
-
pivot-to-ai.com
-
Academic publishers are pushing authors to speed up delivering manuscripts and articles (incl. suggesting peer review be done in 15 days) to meet the quotas they promised the AI companies they sold their soul to. Taylor & Francis/Routledge 75M USD/yr, Wiley 44M USD. No opt-outs etc. What if you ask those #algogens whether this is a good idea?
-
-
www.theguardian.com
-
Data center emissions probably 662% higher than big tech claims. Can it keep up the ruse?
Emissions from in-house data centers of Google, Microsoft, Meta and Apple may be 7.62 times higher than official tally
Isabel O'Brien, Sun 15 Sep 2024 17.00 CEST (last modified Wed 18 Sep 2024 22.40 CEST)

Big tech has made some big claims about greenhouse gas emissions in recent years. But as the rise of artificial intelligence creates ever bigger energy demands, it’s getting hard for the industry to hide the true costs of the data centers powering the tech revolution.

According to a Guardian analysis, from 2020 to 2022 the real emissions from the “in-house” or company-owned data centers of Google, Microsoft, Meta and Apple are probably about 662% – or 7.62 times – higher than officially reported.

Amazon is the largest emitter of the big five tech companies by a mile – the emissions of the second-largest emitter, Apple, were less than half of Amazon’s in 2022. However, Amazon has been kept out of the calculation above because its differing business model makes it difficult to isolate data center-specific emissions figures for the company.

As energy demands for these data centers grow, many are worried that carbon emissions will, too. The International Energy Agency stated that data centers already accounted for 1% to 1.5% of global electricity consumption in 2022 – and that was before the AI boom began with ChatGPT’s launch at the end of that year.

AI is far more energy-intensive on data centers than typical cloud-based applications. According to Goldman Sachs, a ChatGPT query needs nearly 10 times as much electricity to process as a Google search, and data center power demand will grow 160% by 2030. Goldman competitor Morgan Stanley’s research has made similar findings, projecting data center emissions globally to accumulate to 2.5bn metric tons of CO2 equivalent by 2030.

In the meantime, all five tech companies have claimed carbon neutrality, though Google dropped the label last year as it stepped up its carbon accounting standards. Amazon is the most recent company to do so, claiming in July that it met its goal seven years early, and that it had implemented a gross emissions cut of 3%.

“It’s down to creative accounting,” explained a representative from Amazon Employees for Climate Justice, an advocacy group composed of current Amazon employees who are dissatisfied with their employer’s action on climate. “Amazon – despite all the PR and propaganda that you’re seeing about their solar farms, about their electric vans – is expanding its fossil fuel use, whether it’s in data centers or whether it’s in diesel trucks.”

A misguided metric

The most important tools in this “creative accounting” when it comes to data centers are renewable energy certificates, or Recs. These are certificates that a company purchases to show it is buying renewable energy-generated electricity to match a portion of its electricity consumption – the catch, though, is that the renewable energy in question doesn’t need to be consumed by a company’s facilities. Rather, the site of production can be anywhere from one town over to an ocean away.

Recs are used to calculate “market-based” emissions, or the official emissions figures used by the firms. When Recs and offsets are left out of the equation, we get “location-based emissions” – the actual emissions generated from the area where the data is being processed.

The trend in those emissions is worrying. If these five companies were one country, the sum of their “location-based” emissions in 2022 would rank them as the 33rd highest-emitting country, behind the Philippines and above Algeria.

Many data center industry experts also recognize that location-based metrics are more honest than the official, market-based numbers reported. “Location-based [accounting] gives an accurate picture of the emissions associated with the energy that’s actually being consumed to run the data center. And Uptime’s view is that it’s the right metric,” said Jay Dietrich, the research director of sustainability at Uptime Institute, a leading data center advisory and research organization.

Nevertheless, Greenhouse Gas (GHG) Protocol, a carbon accounting oversight body, allows Recs to be used in official reporting, though the extent to which they should be allowed remains controversial between tech companies and has led to a lobbying battle over GHG Protocol’s rule-making process between two factions.

On one side there is the Emissions First Partnership, spearheaded by Amazon and Meta. It aims to keep Recs in the accounting process regardless of their geographic origins. In practice, this is only a slightly looser interpretation of what GHG Protocol already permits.

The opposing faction, headed by Google and Microsoft, argues that there needs to be time-based and location-based matching of renewable production and energy consumption for data centers. Google calls this its 24/7 goal, or its goal to have all of its facilities run on renewable energy 24 hours a day, seven days a week by 2030. Microsoft calls it its 100/100/0 goal, or its goal to have all its facilities running on 100% carbon-free energy 100% of the time, making zero carbon-based energy purchases by 2030.

Google has already phased out its Rec use and Microsoft aims to do the same with low-quality “unbundled” (non location-specific) Recs by 2030.

Academics and carbon management industry leaders alike are also against the GHG Protocol’s permissiveness on Recs. In an open letter from 2015, more than 50 such individuals argued that “it should be a bedrock principle of GHG accounting that no company be allowed to report a reduction in its GHG footprint for an action that results in no change in overall GHG emissions. Yet this is precisely what can happen under the guidance given the contractual/Rec-based reporting method.”

To GHG Protocol’s credit, the organization does ask companies to report location-based figures alongside their Rec-based figures. Despite that, no company includes both location-based and market-based metrics for all three subcategories of emissions in the bodies of their annual environmental reports.

In fact, location-based numbers are only directly reported (that is, not hidden in third-party assurance statements or in footnotes) by two companies – Google and Meta. And those two firms only include those figures for one subtype of emissions: scope 2, or the indirect emissions companies cause by purchasing energy from utilities and large-scale generators.

In-house data centers

Scope 2 is the category that includes the majority of the emissions that come from in-house data center operations, as it concerns the emissions associated with purchased energy – mainly, electricity.

Data centers should also make up a majority of overall scope 2 emissions for each company except Amazon, given that the other sources of scope 2 emissions for these companies stem from the electricity consumed by firms’ offices and retail spaces – operations that are relatively small and not carbon-intensive. Amazon has one other carbon-intensive business vertical to account for in its scope 2 emissions: its warehouses and e-commerce logistics.

For the firms that give data center-specific data – Meta and Microsoft – this holds true: data centers made up 100% of Meta’s market-based (official) scope 2 emissions and 97.4% of its location-based emissions. For Microsoft, those numbers were 97.4% and 95.6%, respectively.

The huge differences in location-based and official scope 2 emissions numbers showcase just how carbon intensive data centers really are, and how deceptive firms’ official emissions numbers can be. Meta, for example, reports its official scope 2 emissions for 2022 as 273 metric tons CO2 equivalent – all of that attributable to data centers. Under the location-based accounting system, that number jumps to more than 3.8m metric tons of CO2 equivalent for data centers alone – a more than 19,000 times increase.

A similar result can be seen with Microsoft. The firm reported its official data center-related emissions for 2022 as 280,782 metric tons CO2 equivalent. Under a location-based accounting method, that number jumps to 6.1m metric tons CO2 equivalent. That’s a nearly 22 times increase.

While Meta’s reporting gap is more egregious, both firms’ location-based emissions are higher because they undercount their data center emissions specifically, with 97.4% of the gap between Meta’s location-based and official scope 2 number in 2022 being unreported data center-related emissions, and 95.55% of Microsoft’s.

Specific data center-related emissions numbers aren’t available for the rest of the firms. However, given that Google and Apple have similar scope 2 business models to Meta and Microsoft, it is likely that the multiple on how much higher their location-based data center emissions are would be similar to the multiple on how much higher their overall location-based scope 2 emissions are.

In total, the sum of location-based emissions in this category between 2020 and 2022 was at least 275% higher (or 3.75 times) than the sum of their official figures. Amazon did not provide the Guardian with location-based scope 2 figures for 2020 and 2021, so its official (and probably much lower) numbers were used for this calculation for those years.

Third-party data centers

Big tech companies also rent a large portion of their data center capacity from third-party data center operators (or “colocation” data centers). According to the Synergy Research Group, large tech companies (or “hyperscalers”) represented 37% of worldwide data center capacity in 2022, with half of that capacity coming through third-party contracts. While this group includes companies other than Google, Amazon, Meta, Microsoft and Apple, it gives an idea of the extent of these firms’ activities with third-party data centers.

Those emissions should theoretically fall under scope 3, all emissions a firm is responsible for that can’t be attributed to the fuel or electricity it consumes. When it comes to a big tech firm’s operations, this would encapsulate everything from the manufacturing processes of the hardware it sells (like the iPhone or Kindle) to the emissions from employees’ cars during their commutes to the office.

When it comes to data centers, scope 3 emissions include the carbon emitted from the construction of in-house data centers, as well as the carbon emitted during the manufacturing process of the equipment used inside those in-house data centers. It may also include those emissions as well as the electricity-related emissions of third-party data centers that are partnered with.

However, whether or not these emissions are fully included in reports is almost impossible to prove. “Scope 3 emissions are hugely uncertain,” said Dietrich. “This area is a mess just in terms of accounting.”

According to Dietrich, some third-party data center operators put their energy-related emissions in their own scope 2 reporting, so those who rent from them can put those emissions into their scope 3. Other third-party data center operators put energy-related emissions into their scope 3 emissions, expecting their tenants to report those emissions in their own scope 2 reporting. Additionally, all firms use market-based metrics for these scope 3 numbers, which means third-party data center emissions are also undercounted in official figures.

Of the firms that report their location-based scope 3 emissions in the footnotes, only Apple has a large gap between its official scope 3 figure and its location-based scope 3 figure. This is the only sizable reporting gap for a firm that is not data center-related – the majority of Apple’s scope 3 gap is due to Recs being applied towards emissions associated with the manufacturing of hardware (such as the iPhone).

Apple does not include transmission and distribution losses or third-party cloud contracts in its location-based scope 3. It only includes those figures in its market-based numbers, under which its third-party cloud contracts report zero emissions (offset by Recs). Therefore in both of Apple’s total emissions figures – location-based and market-based – the actual emissions associated with their third-party data center contracts are nowhere to be found.

2025 and beyond

Even though big tech hides these emissions, they are due to keep rising. Data centers’ electricity demand is projected to double by 2030 due to the additional load that artificial intelligence poses, according to the Electric Power Research Institute. Google and Microsoft both blamed AI for their recent upticks in market-based emissions.

“The relative contribution of AI computing loads to Google’s data centers, as I understood it when I left [in 2022], was relatively modest,” said Chris Taylor, current CEO of utility storage firm Gridstor and former site lead for Google’s data center energy strategy unit. “Two years ago, [AI] was not the main thing that we were worried about, at least on the energy team.”

Taylor explained that most of the growth that he saw in data centers while at Google was attributable to growth in Google Cloud, as most enterprises were moving their IT tasks to the firm’s cloud servers.

Whether today’s power grids can withstand the growing energy demands of AI is uncertain. One industry leader – Marc Ganzi, the CEO of DigitalBridge, a private equity firm that owns two of the world’s largest third-party data center operators – has gone as far as to say that the data center sector may run out of power within the next two years.

And as grid interconnection backlogs continue to pile up worldwide, it may be nearly impossible for even the most well intentioned of companies to get new renewable energy production capacity online in time to meet that demand.

This article was amended on 18 September 2024. Apple contacted the Guardian after publication to share that the firm only did partial audits for its location-based scope 3 figure. A previous version of this article erroneously claimed that the gap in Apple’s location-based scope 3 figure was data center-related.
The difference between the consumption measured via green certificates and the real consumption of the world's data centers.
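A quick check of the article's arithmetic, using only figures quoted above: a 7.62x multiple and "662% higher" are the same gap stated two ways, and Microsoft's scope 2 example matches its "nearly 22 times" claim.

```python
# "662% higher" vs "7.62 times": same gap, two phrasings.
official, location = 1.0, 7.62
print((location - official) / official * 100)   # 662.0 -> "662% higher"

# Microsoft's scope 2 example from the article:
ms_official = 280_782        # metric tons CO2e, market-based (with Recs)
ms_location = 6.1e6          # metric tons CO2e, location-based
print(ms_location / ms_official)                # ~21.7 -> "nearly 22 times"
```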
-
-
donaldclarkplanb.blogspot.com
-
Has ChatGPTo1 just become a 'Critical Thinker'?
What was that old news editor adage again? Never use a question mark in the title because it signals the answer is 'No'. (If it were demonstrably yes, the title would be affirmative. In other words, a question mark means you're hedging and nevertheless choosing the uncertain sensational for the eyeballs.)
-
-
www.youtube.com
-
nobody told it what to do. That's the kind of really amazing and frightening thing about these situations. When Facebook gave the algorithm the aim of increased user engagement, the managers of Facebook did not anticipate that it would do it by spreading hate-filled conspiracy theories. This is something the algorithm discovered by itself. The same with the captcha puzzle. And this is the big problem we are facing with AI
for - AI - progress trap - example - Facebook AI algorithm - target - increase user engagement - by spreading hateful conspiracy theories - AI did this autonomously - no morality - Yuval Noah Harari story
-
when OpenAI developed GPT-4 and they wanted to test what this new AI can do, they gave it the task of solving captcha puzzles. It's these puzzles you encounter online when you try to access a website and the website needs to decide whether you're a human or a robot. Now GPT-4 could not solve the captcha, but it accessed a website, TaskRabbit, where you can hire people online to do things for you, and it wanted to hire a human worker to solve the captcha puzzle
for - AI - progress trap - example - no morality - Open AI - GPT4 - could not solve captcha - so hired human at Task Rabbit to solve - Yuval Noah Harari story
-
in the 21st century with AI, it has enormous positive potential to create the best health care systems in history, to help solve the climate crisis, and it can also lead to the rise of dystopian totalitarian regimes and new empires and ultimately even the destruction of human civilization
for - AI - futures - two possible directions - dystopian or not - Yuval Noah Harari
Tags
- AI - futures - two possible directions - dystopian or not - Yuval Noah Harari
- AI - progress trap - example - Facebook AI algorithm - target - increase user engagement - by spreading hateful conspiracy theories - AI did this autonomously - no morality - Yuval Noah Harari story
- AI - progress trap - example - no morality - Open AI - GPT4 - could not solve captcha - so hired human at Task Rabbit to solve - Yuval Noah Harari story
Annotators
URL
-
-
www.biblonia.com
-
In an age where "corporate" evokes images of towering glass buildings and faceless multinational conglomerates, it's easy to forget that the roots of the word lie in something far more tangible and human: the body. In the medieval period, the idea of a corporation wasn't about shareholder value or quarterly profits; it was about flesh and blood, a community bound together as a single "body"—a corpus.
Via [[Lee Bryant]]
Corporation from corpus. Medieval roots of the corporation were people brought together in a single purpose/economic entity: guilds, cities. Based on Roman law roots, where a corpus could have legal personhood status. Overtones of collective identity, governance. The pointer suggests a difference with how we see corporations, as does the first paragraph here, but the piece itself mostly sees parallels. Note that Roman/medieval corpora were about property and (royal) privileges. That is a difference, e.g. in the US, where corporations seek both to be a legal person (wrt politics/finance) and to keep distance from the accountability a person would have (pollution, externalising negative impacts). I treat a legal entity also as a trade: it bestows certain protections and privileges on me as entrepreneur, but also certain conditions and obligations (public transparency, financial reporting etc.)
A contrast with ME corpus is seeing [[Corporations as Slow AI 20180201210258]] (anonymous processes, mindlessly wandering to a financial goal)
-
-
-
FLUX.1
Tags
Annotators
URL
-
-
download.ssrn.com
-
generative-AI supply chain
useful model
-
to read
-
-
www.youtube.com
-
we've never, as normal users, you and I, had the opportunity to take AI, artificial intelligence, and train it on our own data. This is the first time we're able to do that
for - AI - note - personal knowledge - mem.ai - killer feature - first AI app to train directly on your own personal knowledge
-
Tags
- AI - note - personal knowledge - mem.ai - killer feature - first AI app to train directly on your own personal knowledge
- AI - mem.AI - first AI note app that trains directly on your own personal knowledge notes
- Indyweb dev - Mem.ai has many features we are designing for in Indyweb but it uses AI and that needs to be researched for privacy issues
Annotators
URL
-
-
ainowinstitute.org
-
The FTC has already outlined this principle in its recent Amazon Alexa case
Reference this, it’s an interesting precedent
-
Cerebras differentiates itself by creating a large wafer with logic, memory, and interconnect all on-chip. This leads to a bandwidth that is 10,000 times more than the A100. However, this system costs $2–3 million as compared to $10,000 for the A100, and is only available in a set of 15. Having said that, it is likely that Cerebras is cost efficient for makers of large-scale AI models
Does this help get around the need for interconnect enough to avoid needing such large hyperscale buildings?
-
-
hist4805.netlify.app
-
summary
Speaking of summaries: studies show AI is worse than humans at summaries.
Succinct reason why by David Chisnall:
LLMs are good at transforms that have the same shape as ones that appear in their training data. They're fairly good, for example, at generating comments from code because code follows common structures and naming conventions that are mirrored in the comments (with totally different shapes of text).
In contrast, summarisation is tightly coupled to meaning. Summarisation is not just about making text shorter, it's about discarding things that don't contribute to the overall point and combining related things. This is a problem that requires understanding the material, because it's all about making value judgements.
Tags
Annotators
URL
-
-
www.druckerforum.org
-
AI’s effect on our idea of knowledge could well be broader than that. We’ll still look for justified true beliefs, but perhaps we’ll stop seeing what happens as the result of rational, knowable frameworks that serenely govern the universe. Perhaps we will see our own inevitable fallibility as a consequence of living in a world that is more hidden and more mysterious than we thought. We can see this wildness now because AI lets us thrive in such a world.
AI to teach us complexity and sensemaking / a sense of wonder in viewing the world. It might; given who builds the AIs, I don't think so though. Can we build sensemaking tools that seem like AI to the rest of us? genAI is statistical probabilities all around, with a hint of randomness to prevent the same outcome for the same questions each time. That is not complexity, just mimicry. Can sensemaking mimic AI too? That might be a more useful way to put it.
-
Michele Zanini and I recently wrote a brief post for Harvard Business Review about what this sort of change in worldview might mean for business, from strategy to supply chain management. For example, two faculty members at the Center for Strategic Leadership at the U.S Army War College have suggested that AI could fluidly assign leadership roles based on the specific details of a threatening situation and the particular capabilities and strengths of the people in the team. This would alter the idea of leadership itself: Not a personality trait but a fit between the specifics of character, a team, and a situation.
Yes, this I can see, but that's not making AI into K; it's embracing complexity and being able to adapt fluidly in the face of it. To increase agency, my working definition of K. This is what sensemaking is for, not AI as such.
-
Newton’s Laws, the rules and hints for diagnosing a biopsy — to say that they fail at predicting highly particularized events: Will there be a traffic snarl? Are you going to develop allergies late in life? Will you like the new Tom Cruise comedy? This is where traditional knowledge stops, and AI’s facility with particulars steps in.
Is it AI, or rather our understanding of complexity, that needs to step in? The examples [[David Weinberger]] gives of general rules that can't do particularised events are examples of linear generalisations failing at (a higher level of) complexity. Also, I would say 'prediction', which is assumed here to be the point of K, is not what it is about. It is about probabilities and uncertainties (which is what linear approaches do: reduce uncertainties on a few things at the cost of making others unknowable within the same model, Heisenberg style) that in complexity you can nudge, attenuate etc. I'd rather involve complexity more deeply in K than AI.
-
[[David Weinberger]] on K in the age of AI. AI has no outside framework of reference or context as David says is inherent in K (next to Socrates notions of what episteme takes). Says AI may change our notion of K, where AI is better at including particulars, whereas human K is centered on limited generalisations.
-
-
www.theregister.com
-
"A few weeks ago, we hosted a little dinner in New York, and we just asked this question of 20-plus CDOs [chief data officers] in New York City of the biggest companies, 'Hey, is this an issue?' And the resounding response was, 'Yeah, it's a real mess.'" Asked how many had grounded a Copilot implementation, Berkowitz said it was about half of them. Companies, he said, were turning off Copilot software or severely restricting its use. "Now, it's not an unsolvable problem," he added. "But you've got to have clean data and you've got to have clean security in order to get these systems to really work the way you anticipate. It's more than just flipping the switch."
Companies, half of an anecdotal sample of some 20 US CDOs, have turned Copilot off or strongly restricted it. This is because it surfaces info in summaries etc. that employees would not have direct access to: there is no access-security connection between Copilot and its results. So data governance is blocking its roll-out.
-
- Aug 2024
-
docs.gitlab.com
-
hellogithub.com
-
RAG_Techniques
Tags
Annotators
URL
-
-
www.anthropic.com
-
When a user asks Claude to generate content like code snippets, text documents, or website designs, these Artifacts appear in a dedicated window alongside their conversation. This creates a dynamic workspace where they can see, edit, and build upon Claude’s creations in real-time, seamlessly integrating AI-generated content into their projects and workflows.
Tags
Annotators
URL
-
-
emergencemagazine.org
-
garymarcus.substack.com
-
www.youtube.com
-
we are using set theory, so a certain piece of reference text is part of my collection or it's not. If it's part of my collection, somewhere in my fingerprint is a corresponding dot for it. So there is a very clear, direct link from the root data to the actual representation, and the position that dot has versus all the other dots. So the topology of that space, the geometry if you want, of the patterns that you get, that contains the knowledge of the world, which I'm using the language of. And that is super easy to compute for a computer; I don't even need a GPU
for - comparison - cortical io / semantic folding vs standard AI - no GPU required
-
for example our standard english language model is trained with something like maybe 100 gigabytes or so of text that gives it a strength as if you would throw BERT at it with the google corpus so the other thing is of course a small corpus like that is computed in two or three hours on a laptop yeah so that's the other thing by the way i didn't mention our fingerprints are actually boolean so when we train as i said we are not using floating points
for - comparison - cortical io vs normal AI - training dataset size and time
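A toy sketch of the boolean-fingerprint idea described above: a concept is a sparse set of active bit positions on a fixed grid, and similarity is plain set overlap, so no floating point and no GPU are needed. The grid size and positions are invented for illustration:

```python
# Toy sketch of boolean "semantic fingerprints": each concept is a sparse set
# of active positions on a fixed 2D grid, and similarity is just the overlap
# of set bits -- pure set operations, no floating point, no GPU.
# All positions below are made up for illustration.
GRID_POSITIONS = 128 * 128  # 16,384 possible dots on the grid

fp_dog = {17, 240, 3001, 5120, 9000, 15001}
fp_wolf = {17, 240, 3001, 7777, 9000, 14000}
fp_invoice = {2, 512, 4096, 8192, 12000, 16000}

def overlap(a: set, b: set) -> float:
    # Jaccard-style similarity: fraction of shared active bits.
    return len(a & b) / len(a | b)

print(overlap(fp_dog, fp_wolf))     # high: related concepts share many dots
print(overlap(fp_dog, fp_invoice))  # 0.0: unrelated concepts barely overlap
```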
-
-
feministai.pubpub.org feministai.pubpub.org
-
AI and Gender Equality on Twitter
there are movements that address gender equality issues, which oppose Thai society’s patriarchal culture and patriarchal bias. These include attacking sexual harassment, allowing same-sex marriage, drafting legislation for the protection of people working in the sex industry, and promoting the availability of free sanitary napkins for women.
-
-
feministai.pubpub.org feministai.pubpub.org
-
Artificial Intelligence (AI) in Robotics
Deep learning is machine learning based on a set of algorithms that attempt to model high-level abstractions in data.
Robotisation is growing rapidly because robots work more precisely and save costs; for example, creative studios have 3D printers, and the self-learning ability of these production robots makes work more efficient.
Dematerialisation leads to the phenomenon that traditional physical products become software; for example, CDs and DVDs were replaced by streaming services, and traditional event/travel tickets and hard cash by contactless smartphone payment.
Gig economy: a rise in self-employment is typical for the new generation of employees. The gig economy is usually understood to include chiefly two forms of work: 'crowd working' and 'work on-demand via apps', organized through networking platforms. There are more and more independent contractors for individual tasks that companies advertise on online platforms (e.g., 'Amazon Mechanical Turk').
Autonomous driving means vehicles with the power of self-governance, using sensors to navigate without human input.
-
-
feministai.pubpub.org feministai.pubpub.org
-
Manila has one of the most dangerous transport systems in the world for women (Thomson Reuters Foundation, 2014). Women in urban areas have been sexually assaulted and harassed while in public transit, be it on a bus, train, at the bus stop or station platform, or on their way to/from transit stops.
The New Urban Agenda and the United Nations’ Sustainable Development Goals (5, 11, 16) have included the promotion of safety and inclusiveness in transport systems to track sustainable progress. As part of this effort, AI-powered machine learning applications have been created.
-
-
feministai.pubpub.org feministai.pubpub.org
-
AI for Good3, SDG AI LAB4, IRCAI5 and Global Partnership for Artificial Intelligence6
"to support the development and use of artificial intelligence grounded in human rights, inclusion, diversity, innovation and economic growth, seeking to respond to the United Nations Sustainable Development Goals." (Bengio & Chatila, 2020)
-
-
-
https://flux1.ai/ Flux AI Image Generator
-
-
tome.com tome.com
Tags
Annotators
URL
-
-
www.prankify.lol www.prankify.lol
-
-
deepgram.com deepgram.com
Tags
Annotators
URL
-
-
www.youtube.com www.youtube.com
-
that's why the computer can never be conscious because basically it has none of the characteristics of qualia and it certainly doesn't have free will and free will and consciousness must work together to create these fields that actually can direct their own experience and create self-conscious entities from the very beginning
for - AI - consciousness - not possible - Frederico Faggin
-
-
-
“Analysts need to be able to dissect exactly how the AI reached a particular conclusion or recommendation,” says Chief Business Officer Eric Costantini. “Neo4j enables us to enforce robust information security by applying access controls at the subgraph level.”
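For reference, Neo4j expresses this kind of subgraph-level control through role-based graph privileges scoped to node labels (an Enterprise-edition feature). A sketch with invented role, label, and credential values, run through the official Python driver; not necessarily how this particular deployment configures it:

```python
# Sketch of subgraph-level access control in Neo4j via role-based privileges
# (an Enterprise feature). Role, label, and credential values are invented.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "secret"))

ADMIN_COMMANDS = [
    "CREATE ROLE analyst IF NOT EXISTS",
    # Analysts may traverse and read only the Public subgraph...
    "GRANT TRAVERSE ON GRAPH neo4j NODES Public TO analyst",
    "GRANT READ {*} ON GRAPH neo4j NODES Public TO analyst",
    # ...and are explicitly denied the Restricted subgraph.
    "DENY TRAVERSE ON GRAPH neo4j NODES Restricted TO analyst",
]

with driver.session(database="system") as session:  # admin commands run on 'system'
    for command in ADMIN_COMMANDS:
        session.run(command)
driver.close()
```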
-
-
-
for - AI - website simulator - websim.ai
self-link - https://websim.ai/
-
-
www.youtube.com www.youtube.com
-
Interesting thought. This speaker relates the rise of AI (non-fiction) writing to people's lack of willingness to find out what is true and what is false.
Similar to Nas & Damian Marley's line in the Patience song -- "The average man can't prove of most of the things that he chooses to speak of. And still won't research and find the root of the truth that you seek of."
If you want to form an opinion about something, do so in an educated way, not based on a single source: fact-check and do thorough research.
Charlie Munger's principle. "I never allow myself to have [express] an opinion about anything that I don't know the opponent side's argument better than they do."
It all boils down to a critical self-thinking society.
-
-
-
is it possible to teach machine values
for - question - AI - can we teach AI values?
question - AI - can we teach AI values? - it's likely not possible because we cannot assign metrics to things like - ethics - kindness - happiness
-
the future for education and this is a mega trend that will last in the next decades is that we use artificial intelligence to tailor educational or didactic concepts to the specific person so let's say in the future everybody will have his or her specific training or education profile he or she will run through and artificial intelligence will tailor the different educational environments for everybody in the future this is a pretty clear trend
for - AI and education - children will have custom tailored education program via AI
-
this is the reason why I'm not afraid of artificial intelligence taking over
for - question - AI - can AI learn to be intentionally distracted?
-
human beings don't do that we understand that the chair is not a specifically shaped object but something you consider and once you understood that concept that principle you see chairs everywhere you can create completely new chairs
for - comparison - human vs artificial intelligence
question - comparison - human vs artificial intelligence - Can't an AI also consider things we sit on and then generalize its classification algorithm?
-
the brain is slow and it is lousy and it is selfish and still it is working yeah look around you working brains wherever you look and the reason for this is that we totally think differently than any kind of digital and computer system you know of and many engineers from the AI field haven't figured out that massive difference yet
for - comparison - brain vs machine intelligence
comparison - brain vs machine intelligence - the brain is inferior to machine in many ways - many times slower - much less accurate - network of neurons is mostly isolated in its own local environment, not connected to a global network like the internet - Yet, it is able to perform extraordinary things in spite of that - It is able to create meaning out of sensory inputs - Can we really say that a machine can do this?
-
you can Google data if you're good you can Google information but you cannot Google an idea you cannot Google Knowledge because having an idea acquiring knowledge this is what is happening on your mind when you change the way you think and I'm going to prove that in the next yeah 20 or so minutes that this will stay analog in our closed future because this is what makes us human beings so unique and so Superior to any kind of algorithm
for - key insight - claim - humans can generate new ideas by changing the way we think - AI cannot do this
Tags
- question - AI - can we teach AI values?
- question - comparison - human vs artificial intelligence - Can't an AI also consider things we sit on and then generalize its classification algorithm?
- comparison - human intelligence vs artificial intelligence
- key insight - claim - humans can generate new ideas by changing the way we think - AI cannot do this
- comparison - brain vs machine intelligence - what brains and consciousness can do but AI cannot
- AI and education - children will have custom tailored education program via AI
- question - AI - can AI learn to be intentionally distracted?
Annotators
URL
-
-
www.myperfectresume.com www.myperfectresume.com
-
Perfect Resume writing / samples
Tags
Annotators
URL
-
-
www.youtube.com www.youtube.com
-
Really useful video about the generation of story beats.
-
- Jul 2024
-
www.youtube.com www.youtube.com
-
26:30 Brings up progress traps of this new technology
26:48
question How do we shift our (human being's) relationship with the rest of nature
27:00
metaphor - interspecies communications - AI can be compared to a new scientific instrument that extends our ability to see - We may discover that humanity is not the center of the universe
32:54
Question - Dr Doolittle question - Will we be able to talk to the animals? - Wittgenstein said no - Human Umwelt is different from others - but it may very well happen
34:54
species have culture - Marine mammals enact behavior similar to humans
- Unknown unknowns will likely move to known unknowns and to some known knowns
36:29
citizen science bioacoustic projects - AudioMoth - sound invisible to humans - ultrasonic sound - infrasonic sound - example - Amazonian river turtles have been found to have hundreds of unique vocalizations to call their baby turtles to safety out in the ocean
41:56
ocean habitat for whales - they can communicate across the entire ocean of the earth - They tell a story of a whale in Bermuda communicating with a whale in Ireland
43:00
progress trap - AI for interspecies communications - examples - poachers or eco-tourism can misuse
44:08
progress trap - AI for interspecies communications - policy
45:16
whale protection technology - Kim Davies - University of New Brunswick - aquatic drones - drones triangulate whales - ships must not get near 1,000 km of whales to avoid collision - Canadian government fines are up to 250,000 dollars for violating
50:35
environmental regulation - overhaul for the next century - instead of - treatment, we now have the data tools for - prevention
56:40 - ecological relationship - pollinators and plants have co-evolved
1:00:26
AI for interspecies communication - example - human cultural evolution controlling evolution of life on earth
Tags
- AI for interspecies communication - example - human cultural evolution controlling evolution of life on earth
- citizen science bioacoustics
- progress trap - AI for interspecies communications - policy
- question - How do we shift our relationship with the rest of nature? - ESP research objective
- progress trap - AI applied to interspecies communications
- whale communication - span the entire ocean
- whale protection - bioacoustic and drones
- environmental overhaul - treatment to prevention
- interspecies communication - umwelt
- metaphor - interspecies communication - AI is like a new scientific instrument
- progress trap - AI for interspecies communications - examples - poachers - ecotourism
- ecological relationships - pollinators and plants co-evolved
Annotators
URL
-
-
www.datacenterdynamics.com www.datacenterdynamics.com
-
“For our customer base, there's a lot of folks who say ‘I don't actually need the newest B100 or B200,’” Erb says. “They don’t need to train the models in four days, they’re okay doing it in two weeks for a quarter of the cost. We actually still have Maxwell-generation GPUs [first released in 2014] that are running in production. That said, we are investing heavily in the next generation.”
What would the energy cost be of the two compared like this?
-
-
www.youtube.com www.youtube.com
-
( ~ 6:25-end )
Steps for designing a reading plan/list: 1. Pick a topic/goal (or question you want to answer) & how long you want to take to achieve this. 2. Do research into the books necessary to achieve this goal. Meta-learning, scope out the subject. The number of books is relative to the goal and length of the goal. 3. Find the books using different tools such as Google & GoodReads & YouTube Recommendations (ChatGPT & Gemini are also useful). 4. Refine the book list (go through reviews, etc., in Adlerian steps, do an Inspectional Read of everything... Find out if it's truly useful). Also order them into a useful sequence for the syntopical reading project. Highlight the topics covered, how difficult they are, relevancy, etc. 5. Order the books (or download them)
Reminds me a bit of Scott Young's Metalearning step, and doing a skill decomposition in van Merriënboer et al.'s 10 Steps to Complex Learning
-
-
lilys.ai lilys.ai
Tags
Annotators
URL
-
-
allenj.substack.com allenj.substack.com
-
for - progress trap - AI -
article details - title - Hollow, world! (Part 1 of 5) - author - James Allen - date - 10 July, 2024 - publication - substack - self link - https://allenj.substack.com/p/hollow-world-part-1-of-5
summary - James Allen provides an insightful description of ultra-anthropomorphic AI, AI that attempts to simulate an entire, whole human being.
In short, he points out the fundamental distinction between the real experience of another human being, and a simulation of one. In so doing, he gets to the heart of what it is to be human.
An AI is a simulation of a human being. No matter how realistic its responses and actions, it is not evolved out of biology. I have no doubt that scientists are hard at work trying to make a biological AI. The distinction becomes fuzzier then.
Current AI cannot possibly simulate the experience of being in a fragile and mortal body and all that this entails. If an AI robot says it understands joy or pain, that statement isn't built on the combined exteroception and interoception of being in a biological body, rather, it is based on many linguistic statements it has assimilated.
-
-
x28newblog.wordpress.com x28newblog.wordpress.com
-
Matthias Melcher on (personal) AI and which affordances it may provide or not. Vgl n:: Mark Meinema's remark about how it is much better at switching roles than a human (explaining the same thing for a 5yr old or an expert)
-
-
www.epi.org www.epi.org
-
Improving the living standards of all working-class Americans while closing racial disparities in employment and wages will depend on how well we seize opportunities to build multiracial, multigendered, and multigenerational coalitions to advance policies that achieve both of these goals
for - political polarization - challenge to building multi-racial coalition - to - Wired story - No one actually knows how AI will affect jobs
political polarization - building multi-racial coalitions - This is challenging to do when there is so much political polarization with far-right pouring gasoline on the polarization fire and obscuring the issue - There is a complex combination of factors leading to the erosion of working class power
automation - erosion of the working class - AI is only the latest form of the automation trend, further eroding the working class - But AI is also beginning to erode white collar jobs
to - Wired story - No one actually knows how AI will affect jobs - https://hyp.is/KsIWPDzoEe-3rR-gufTfiQ/www.wired.com/story/ai-impact-on-work-mary-daly-interview/
-
-
www.wired.com www.wired.com
-
for - AI - impact on jobs -WIRED interview
-
-
www.caspa.ai www.caspa.ai
Tags
Annotators
URL
-
- Jun 2024
-
sites.temple.edu sites.temple.edu
-
-
for - AI - inside industry predictions to 2034 - Leopold Aschenbrenner - inside information on disruptive Generative AI to 2034
document description - Situational Awareness - The Decade Ahead - author - Leopold Aschenbrenner
summary - Leopold Aschenbrenner is an ex-employee of OpenAI who reveals insider information about the disruptive plans for AI in the next decade, plans that pose an existential threat and could create a truly dystopian world if we continue down our BAU trajectory. - The A.I. arms race can end in disaster. The main threat of A.I. is that humans are fallible and even one bad actor with access to super intelligent A.I. can pose an existential threat to everyone - The A.I. threat is amplified by allowing it to control important processes - and when it is exploited by the military industrial complex, the threat escalates significantly
- to - YouTube - 4 hour in-depth interview with Leopold Aschenbrenner on the disruptive and existential impacts of A.I. super-intelligence
-
a dictator who wields the power of superintelligence would command concentrated power unlike 00:50:45 anything we've ever seen
for - key insight - AI - progress trap - nightmare scenario - dictator controlling superintelligence
key insight - AI - progress trap - nightmare scenario - locked-in dictatorship controlling superintelligence - millions of AI-controlled robotic law enforcement agents could police the populace - Mass surveillance would be hypercharged - Dictator-loyal AI agents could individually assess every single citizen for dissent with near-perfect lie detection, rooting out any disloyalty - Essentially, the robotic military and police force could be wholly controlled by a single political leader and - programmed to be perfectly obedient, with no risk of coups or rebellions, and - the dictator's strategy would be perfect because he has superintelligence behind him - What does it look like when superintelligence is controlled by a dictator? - There is simply no version of that where you escape, literally - Past dictatorships were not permanent, but - superintelligence could eliminate any historical threat to a dictator's rule and - lock in their power - If you believe in freedom and democracy this is an issue because - someone in power, even if they're good, could stay in power - and you still need freedom and democracy to be able to choose - This is why the free world must prevail - There is so much at stake here, yet hardly anyone is taking this into account
-
this is why it's such a trap which is why like we're on this train barreling down this pathway which is super risky
for - progress trap - double bind - AI - ubiquity
progress trap - double bind - AI - ubiquity - Rationale: we will have to equip many systems with AI - including military systems - Already connected to the internet - AI will be embedded in every critical piece of infrastructure in the future - What happens if something goes wrong? - Now there is an alignment failure everywhere - We will potentially have superintelligence within 3 years - Alignment failures will become catastrophic with them
-
getting a base model to you know make money by default it may well learn to lie to commit fraud to deceive to hack to seek power because 00:47:50 in the real world people actually use this to make money
for - progress trap - AI - example - give prompt for AI to earn money
progress trap - AI - example - instruct AI to earn money - Getting a base model to make money. By default it may well learn - to lie - to commit fraud - to deceive - to hack - to seek power - because in the real world - people actually use this to make money - even maybe they'll learn to - behave nicely when humans are looking and then - pursue more nefarious strategies when we aren't watching
-
this company's not good for safety
for - AI - security - Open AI - examples of poor security - high risk for humanity
AI - security - Open AI - examples of poor security - high risk for humanity - ex-employees report very inadequate security protocols - employees have had screenshots captured while at cafes outside of Open AI offices - People like Jimmy Apples report future releases on Twitter before Open AI does
-
the alignment problem
for - definition - AI - The Alignment Problem
definition - The Alignment Problem - When AI intelligence so far exceeds human intelligence that - we won't be able to predict their behavior - we won't know if we can trust that the AI is aligned to our intent
-
open AI literally yesterday published securing research infrastructure for advanced AI
for - AI - Security - Open AI statement in response to this essay
-
this is a serious problem because all they need to do is automate AI research 00:41:53 build super intelligence and any lead that the US had would vanish the power dynamics would shift immediately
for - AI - security risk - once automated AI research is known, bad actors can easily build superintelligence
AI - security risk - once automated AI research is known, bad actors can easily build superintelligence - Any lead that the US had would immediately vanish.
-
the model weights are just large files of numbers on a server and these can be easily stolen all it takes is an adversary to match your trillions 00:41:14 of dollars and your smartest minds of decades of work just to steal this file
for - AI - security risk - model weight files - are a key leverage point
AI - security risk - model weight files - are a key leverage point for bad actors - These files are critical national security data that represent huge amounts of investment in time and research and they are just a file so can be easily stolen.
-
our failure today will be irreversible soon in the next 12 to 24 months we will leak key AGI breakthroughs to the CCP it will 00:38:56 be the national security establishment's greatest regret before the decade is out
for - AI - security risk - next 1 to 2 years is vulnerable time to keep AI secrets out of hands of authoritarian regimes
-
there are so many loopholes in our current top AI labs that we could literally have people infiltrating these companies and there's no way to even know what's going on because we don't have any true security 00:37:41 protocols and the problem is that it's not being treated as seriously as it should be
for - key insight - low security at top AI labs - high risk of information theft ending up in wrong hands
-
if you have the cognitive abilities of something that is you know 10 to 100 times smarter than you trying to outsmart it is just not going to happen whatsoever so you've effectively lost at that point which means that 00:36:03 you're going to be able to overthrow the US government
for - AI evolution - nightmare scenario - US govt may seize Open AI assets if it arrives at superintelligence
AI evolution - projection - US govt may seize Open AI assets if it arrives at superintelligence - He makes a good point here - If Open AI, or Google achieve superintelligence that is many times more intelligent than any human, - the US government would be fearful that they could be overthrown or that the technology can be stolen and fall into the wrong hands
-
whoever controls superintelligence will possibly have enough power to seize control from 00:35:14 pre superintelligence forces
for - progress trap - AI - one nightmare scenario
progress trap - AI - one nightmare scenario - Whoever is first to control superintelligence will possibly have enough power to - seize control from pre-superintelligence forces - even without robots, a small civilization of superintelligences would be able to - hack any undefended military, election, or television system, cunningly persuade generals and electorates, and economically out-compete nation states - design new synthetic bioweapons and then - pay a human in Bitcoin to synthesize them
-
military power and Technology progress have been tightly linked historically and with extraordinarily rapid technological 00:34:11 progress will come military revolutions
for - progress trap - AI and even more powerful weapons of destruction
progress trap - AI and even more powerful weapons of destruction - The podcaster's excitement seems to overshadow any concern about the tragic unintended consequences of weapons even more powerful than nuclear warheads. - With human base emotions still stuck in the past and our species' continued reliance on violence to solve problems, more powerful weapons are not the solution - indeed, they only make the problem worse - Here Ronald Wright's quote is apt: - We humans are running modern software on 50,000 year old hardware systems - Our cultural evolution, of which AI is a part, is happening so quickly that - it is racing ahead of our biological evolution - We aren't able to adapt fast enough to the rapid cultural changes that AI is going to create, and it may very well destroy us
-
this is where we can see the doubling time of the global economy in years from 1903 it's been 15 years but after super intelligence what happens is it going to be every 3 years is it going be every five is it going to 00:33:22 be every year is it going to be every 6 months I mean how crazy is the growth going to be
for - progress trap - AI triggering massive economic growth - planetary boundaries
progress trap - AI triggering massive economic growth - planetary boundaries - The podcaster does not consider the ramifications of the potential disastrous impact of such economic growth if not managed properly
-
AGI level factories are going to shift from human-run to AI-directed using human physical labor, soon to be fully run by swarms of human-level robots
for - progress trap - AI and human enslavement?
progress trap - human enslavement? - Isn't what the speaker is talking about here is that - AI will be the masters and - humans will become slaves?
-
be able to quickly master any domain write trillions of lines of code and read every research paper in every scientific field ever written
for - AI evolution - projections for capabilities by 2030
AI evolution - projections for 2030 - AI will be able to do things we cannot even conceive of now because their cognitive capabilities are orders of magnitude beyond our own - Write trillions of lines of code - Absorb every scientific paper ever written and write new ones - Gain the equivalent of billions of human-equivalent years of experience
-
you're going to have like 100 million more AI researchers and they're going to be working at 100 times what 00:27:31 you are
for - stats - comparison of cognitive powers - AGI AI agents vs human researcher
stats - comparison of cognitive powers - AGI AI agents vs human researcher - 100 million AGI AI researchers - each AGI AI researcher is 100x more efficient that its equivalent human AI researcher - total productivity increase = 100 million x 100 = 10 billion human AI researchers! Wow!
-
nobody's really pricing this in
for - progress trap - debate - nobody is discussing the dangers of such a project!
progress trap - debate - nobody is discussing the dangers of such a project! - Civilization's journey has been to create more and more powerful tools for human beings to use - but this tool is different because it can act autonomously - It can solve problems that dwarf our individual or even group ability to solve - Philosophically, the problem / solution paradigm becomes a central question because, - As presented in Deep Humanity praxis, - humans have never stopped producing progress traps as shadow sides of technology because - the reductionist problem-solving approach always reaches conclusions based on a finite amount of knowledge of the relationships of any one particular area of focus - in contrast to the infinite, fractal relationships found at every scale of nature - Supercomputing can never bridge the gap between finite and infinite - A superintelligent artifact with that autonomy of pattern recognition may recognize a pattern in which humans are not efficient and in fact, greater efficiency gains can be had by eliminating us
-
perhaps 100 million human researcher equivalents running day and night t
for - stats - AI evolution - equivalent of 100 million human researchers working 24/7
stats - AI evolution - equivalent of 100 million human researchers working 24/7 - By 2027, the industry's aim is to have tens of millions of GPU training clusters, running - millions of copies of automated AI researchers, or the equivalent of - 100 million human AI researchers working 24/7
-
Sam Altman has said that's his entire goal that's what OpenAI are trying to build they're not really trying to build super intelligence but they define AGI as a 00:24:03 system that can do automated AI research and once that does occur
for - key insight - AGI as automated AI researchers to create superintelligence
key insight - AGI as automated AI researchers to create superintelligence - We will reach a period of explosive, exponential AI research growth once AGI has been produced - The key is to deploy AGI as AI researchers that can do AI research 24/7 - 5,000 of such AGI research agents could result in superintelligence in a very short time period (years) - because every time any one of them makes a breakthrough, it is immediately sent to all 4,999 other AGI researchers
-
we are on course for AGI by 2027 and these AI 00:19:25 systems will basically be able to automate all cognitive jobs think any job that can be done remotely
for - AI evolution - prediction - 2027 - all cognitive jobs can be done by AI
-
suppose that GPT 4 training took 3 months in 2027 a leading AI lab will be able to train a GPT 4 00:18:19 level model in a minute
for - stat - AI evolution - prediction 2027 - training time - ~5 OOM decrease
stat - AI evolution - prediction 2027 - training time - ~5 OOM decrease - today it takes 3 months to train GPT 4 - in 2027, it will take 1 minute - That is, roughly 131,400 minutes vs 1 minute, about 5 orders of magnitude (OOM)
-
by 2027 rather than a chatbot you're going to have something that looks more like an agent and more like a coworker
for - AI evolution - prediction - 2027 - AI agent will replace AI chatbot
-
this is where we talk about unhobbling this is of course something that we just spoke about before but the reason this is important is because this is where you can get gains from a model in ways that you couldn't see 00:15:31 before
for - definition - unhobbling - AI
-
the inference efficiency improved by nearly three orders of magnitude or 1,000x in less than 2 years
for - stats - AI evolution - Math benchmark - 2022 to 2024
stats - AI evolution - Math benchmark - 2022 to 2024 - accuracy rose from 50% (2022) to 90% (2024) - inference efficiency (not accuracy) improved ~1000x, or 3 orders of magnitude (OOM), in under 2 years
-
there is essentially this Benchmark 00:09:58 called the math benchmark a set of difficult mathematic problems from a high school math competitions and when the Benchmark was released in 2021 gpt3 only got 5%
for - stats - AI - evolution - Math benchmark
stats - AI - evolution - Math benchmark - 2021 - GPT3 scored 5% - 2022 - scored 50% - 2024 - Gemini 1.5 Pro scored 90%
-
having an automated AI research engineer by 2027 00:05:14 to 2028 is not something that is far far off
for - progress trap - AI - milestone - automated AI researcher
progress trap - AI - milestone - automated AI researcher - This is a serious concern that must be debated - An AI researcher that does research on itself has no moral compass and can encode undecipherable code into future generations of AI, providing no back door for humans if something goes wrong. - For instance, if AI reached the conclusion that humans need to be eliminated in order to save the biosphere, - it can disseminate its strategies covertly under secret communications with unbreakable code
-
it is strikingly plausible that by 2027 models 00:03:36 will be able to do the work of an AI researcher/engineer that doesn't require believing in sci-fi it just requires believing in straight lines on a graph
for - quote - AI prediction for 2027 - Leopold Aschenbrenner
quote - AI prediction for 2027 - Leopold Aschenbrenner - (see quote below) - it is strikingly plausible that by 2027 - models will be able to do the work of an AI researcher/engineer - that doesn't require believing in sci-fi - it just requires believing in straight lines on a graph
-
the talk of the town has shifted from 10 billion dollar compute clusters 00:01:16 to hundred billion dollar compute clusters to even trillion dollar clusters and every 6 months another zero is added to the boardroom plans
for - AI - future spending - trillion dollars - superintelligence by 2030
Tags
- stats - AI - evolution - Math benchmark
- stats - comparison of cognitive powers - AGI AI agents vs human researcher
- progress trap - AI and human enslavement?
- progress trap - AI - one nightmare scenario
- definition - unhobbling - AI
- stats - AI evolution - equivalent of 100 million human researchers working 24/7
- AI evolution - projections for capabilities by 2030
- AI evolution - prediction - 2027 - all cognitive jobs can be done by AI
- AI - progress trap - nightmare scenario - dictator controlling superintelligence
- stat - AI evolution - prediction 2027 - training time - ~5 OOM decrease
- definition - AI - The Alignment Problem
- AI - security risk - next 1 to 2 years is vulnerable time to keep AI secrets out of hands of authoritarian regimes
- stats - AI evolution - Math benchmark - 2022 to 2024
- AI evolution - prediction - 2027 - AI agent will replace AI chatbot
- key insight - low security at top AI labs - high risk of information theft ending up in wrong hands
- quote - AI prediction for 2027 - Leopold Aschenbrenner
- progress trap - double bind - AI - ubiquity
- progress trap - AI - example - give prompt for AI to earn money
- AI - future spending - trillion dollars - superintelligence by 2030
- AI - Security - Open AI statement in response to this essay
- AI - inside industry predictions to 2034
- AI evolution - nightmare scenario - US govt may seize Open AI assets if it arrives at superintelligence
- progress trap - debate - nobody is discussing the dangers of exponential AI research by AGI agents
- article - Situational Awareness - The Decade Ahead - Leopold Aschenbrenner
- progress trap - AI - milestone - automated AI researcher - concerns
- key insight - AGI as automated AI researchers to create superintelligence
- progress trap - AI triggering massive economic growth - planetary boundaries
- progress trap - AI and even more powerful weapons of destruction
- AI - security - Open AI - poor security - high risk for humanity
- AI - security risk - once automated AI research is known, bad actors can easily build superintelligence
- AI - security risk - model weight files - are a key leverage point for bad actors
- to - YouTube - 4 hour in-depth interview with Leopold Aschenbrenner on the disruptive and existential impacts of A.I. super-intelligence
- Leopold Aschenbrenner - inside information on disruptive Generative AI to 2034
Annotators
URL
-
-
www.youtube.com www.youtube.com
-
for - progress trap - AI - threat of superintelligence - interview - Leopold Aschenbrenner - former Open AI employee - from - YouTube - review of Leopold Aschenbrenner's essay on Situational Awareness - https://hyp.is/ofu1EDC3Ee-YHqOyRrKvKg/docdrop.org/video/om5KAKSSpNg/
-
-
docdrop.org docdrop.org
-
quite frankly a lot of artists and 00:21:16 producers are probably using it just for that they come up with something inspiration they go they make something new
for - Generative AI music - producers and artists using for inspiration
comment I would agree with this. Especially since the AI music currently sounds lo-fi
-
what if a band decides to take one of the udio generated songs and re-record it entirely will they own the full copyright to that very new recording now if I 00:21:03 was udio the answer would probably be like no you made that thing using our platform
for - AI music issues - rerecording an AI music generated song - copyright question
-
the AI created Music learned from got inspiration from the hit songs and came up with a great new hit song for you and then kind of you 00:13:21 know what we'll call those those artifacts or the little similarities here and there might get picked up by Content ID on YouTube
for - AI music - youtube content ID algorithms can identify it
-
here's a way to do direct to 00:16:46 Consumer sell and can make some money and don't just be like so worried about being on the music platform streaming and now you're diluted because the AI
for - new music sales model - direct to consumer - helps mitigate AI music
-
there's a huge disparity between state of law application of tech and what's 00:15:42 actually happening
for - AI - law - too slow
-
to your point for 00:13:46 every problem there's going to be a solution and AI is going to have it and then for every solution for that there's going to be a new problem
for - AI - progress trap - nice simple explanation of how progress traps propagate
-
this is more of a unfair competition 00:10:36 issue I think as a clearer line than the copyright stuff
for - progress trap - Generative AI - copyright infringement vs Unfair business practice argument
-
now there's going to be even more AI music pouring 00:09:04 into platforms which saturated Market in an already oversaturated Market
for - progress trap - AI music - oversaturated market
-
these conversations are happening daily people are scrambling we're trying to keep up 00:07:32 with AI in real time scrambling to find out what we're going to do think about all the different businesses that are affected by this
for - AI Disruption - Realtime - music industry is scrambling
-
Google DeepMind is coming up with their new Google Music AI Sandbox that is making loops from prompts and they have Wyclef Jean
for - AI music - Google DeepMind - Music AI Sandbox - Wyclef Jean endorsing
-
backstory of udio like I didn't know that will.i.am and United Masters were investors in udio
for - AI music - Udio - investors - Will.I.Am - United Masters
-
diluting the general royalty pool
for - progress trap - AI music - dilution of general royalty pool - due to large volume
-
the volume of how much music is being created over 800,000 00:01:56 tracks a day are being created using udio
for - stats - AI music platform Udio - tracks created per day - over 800000
-
terms of service which is the contract that you sign when you get on their platform does say that you can monetize what you make so meaning you can put into distribution 00:00:41 the music that you make
for - AI music - Udio - terms of service - users can sell the music made on Udio
Tags
- AI music - youtube content ID algorithms can identify it
- AI Disruption - Realtime - music industry is scrambling
- progress trap - Generative AI - copyright infringement vs Unfair business practice argument
- AI - law - too slow
- new music sales model - direct to consumer - helps mitigate AI music
- AI music issues - rerecording an AI music generated song - copyright question
- generative AI music - producers and artists using for inspiration
- AI music - Google DeepMind - Music AI Sandbox - Wyclef Jean endorsing
- AI music - Udio - terms of service - users can sell the music made on Udio
- AI - progress trap - nice simple explanation of how progress traps propagate
- progress trap - AI music - oversaturated market
- progress trap - AI music - dilution of general royalty pool - due to large volume
- stats - AI music platform Udio - tracks created per day - over 800000
- AI music - Udio - investors - Will.I.Am - United Masters
Annotators
URL
-
-
www.yalelawjournal.org www.yalelawjournal.org
-
These arguments are meant to present a cautionary tale of unintended consequences.
for - progress trap - AI - Generative AI - IP - Yale Law Journal
-
-
docdrop.org docdrop.org
-
for - progress trap - AI music - critique - Folia Sound Studio - to - P2P Foundation - Michel Bauwens - Commons Transition Plan - Netarchical Capitalism - Predatory Capitalism
to - P2P Foundation - Michel Bauwens - Commons Transition Plan - Netarchical Capitalism - Predatory Capitalism https://hyp.is/o-Hp-DCAEe-8IYef613YKg/wiki.p2pfoundation.net/Commons_Transition_Plan
-
I think that Noam Chomsky said in the New York Times around a year ago that generative AI is not any 00:18:37 intelligence it's just plagiarism software that learned by stealing human work to transform it and sell it as much as possible as cheap as possible
for - AI music theft - citation - Noam Chomsky - quote - Noam Chomsky - AI as plagiarism on a grand scale
to - P2P Foundation - commons transition plan - Michel Bauwens - netarchical capitalism - predatory capitalism - https://wiki.p2pfoundation.net/Commons_Transition_Plan#Solving_the_value_crisis_through_a_social_knowledge_economy
Tags
- to - P2P Foundation - Michel Bauwens - netarchical capitalism - predatory capitalism
- AI music theft - citation - Noam Chomsky
- progress trap - AI music - critique
- quote - Noam Chomsky - AI as plagiarism on a grand scale
- to - P2P Foundation - Michel Bauwens - Commons Transition Plan - Netarchical Capitalism - Predatory Capitalism
Annotators
URL
-
-
wiki.p2pfoundation.net wiki.p2pfoundation.net
-
from - youtube - Folia Sound Studio critique of AI music tools - Suno - Udio https://hyp.is/NGScyjB_Ee-rSYNhe9Fuug/docdrop.org/video/wEQ9Vg2YKcU/
-
-
-
for - AI - replicating music - AI - music app - Udio
-
-
www.reworked.co www.reworked.co
-
via [[Lee Bryant]] Postshift. Which use cases to look at for AI. Imo the narrower the better. Vgl [[small band AI personal assistant]]
-
-
www.anthropic.com www.anthropic.comClaude1
-
https://web.archive.org/web/20240617122834/https://www.anthropic.com/claude
What https://unherd.com/2024/05/im-in-love-with-my-ai-girlfriend/ used as AI model / app, jailbroken.
Seems it was the paid version, as linked article mentions Opus, which is available for 20usd/m. Has an API and an iOS app (no Android).
-
-
unherd.com unherd.com
-
Column by a travel writer on how anthropomorphizing AI can go off the rails quickly. Note that the author doesn't really explain how he interacted, except for vague indications (a jailbroken Claude 3 Opus model, seemingly running on his phone as an app?)
Via [[Euan Semple]] https://euansemple.blog/2024/06/08/jesus-tittyfucking-christ-on-a-cracker-is-that-a-pagan-shrine/
-
-
-
- openai use LiveKit to deliver realtime voice
- playground: https://cloud.livekit.io/projects/
-
-
-
-
www.codium.ai www.codium.ai
Tags
Annotators
URL
-
-
getdecipher.com getdecipher.com
-
www.second.dev www.second.dev
-
debot.lodder.dev debot.lodder.devDeBot1
-
A project of the Open State Foundation.
-
-
www.cursor.com www.cursor.comCursor1
-
useanything.com useanything.com
Tags
Annotators
URL
-
-
-
if we just had a big enough spreadsheet we could get the data in and then we could get you know something like AI or some you know some other computational 00:12:32 process in to help us deal with all this complexity because our little brains can't handle it and my feeling about this is that 00:12:44 actually no
for - adjacency - AI - Nora Bateson - solving wicked problems - no - Human Intelligence - HI - yes - @gyuri
-
- May 2024
-
Local file Local file
-
normalized difference vegetation index (NDVI)
The Normalized Difference Vegetation Index (NDVI) is a metric widely used in remote sensing to quantify the vegetation in a given area from satellite or aircraft imagery. The index is based on how plants reflect light at different wavelengths.
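The index itself is a one-line formula, NDVI = (NIR - Red) / (NIR + Red), producing values in [-1, 1] with dense vegetation scoring high. A minimal numpy sketch with made-up reflectance values:

```python
# Minimal NDVI computation: NDVI = (NIR - Red) / (NIR + Red), in [-1, 1].
# Dense vegetation scores high because plants reflect NIR and absorb red light.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Guard against division by zero on pixels with no signal in either band.
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))

# Tiny synthetic example: one vegetated pixel, one bare-soil pixel.
nir_band = np.array([[0.80, 0.30]])
red_band = np.array([[0.10, 0.25]])
print(ndvi(nir_band, red_band))  # approx [[0.78, 0.09]]
```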
-
-
www.helmut-schmidt.de www.helmut-schmidt.de
-
The future prize winner's speech: Meredith Whittaker warns in her speech of the power of the tech industry and explains why it is worth thinking positively right now.
Meredith Whittaker on the origin of the AI wave and its consequences. Need to read this. #toread Current AI is 1980s insights now feasible on top of the massive data of bigtech silos. And the Clinton administration's 1990s stance on privacy and advertising as the faultlines that enabled #socmed platform silos.
-
-
-
FastCut adds animated captions, b-rolls & sound effects to your videos.
Tags
Annotators
URL
-
-
media.dltj.org media.dltj.org
-
And one way we've seen artificial intelligence used in research practices is in extracting information from copyrighted works. So researchers are using this to categorize or classify relationships in or between sets of data. Now sometimes this is called using analytical AI and it evolves processes that are considered part of text and data mining. So we know that text data mining research methodologies can but they don't necessarily need to rely on artificial intelligence systems to extract this information.
Analytical AI: categorize and contextualize
As distinct from generative AI...gun example in motion pictures follows in the presentation.
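A toy sketch of what such analytical (non-generative) classification looks like in practice, in the spirit of the gun-in-motion-pictures example: a model that only categorizes passages, never generates text. The snippets and labels are invented:

```python
# Toy sketch of "analytical AI" in a text-and-data-mining workflow: a model
# that only categorizes passages (here: does a scene involve a firearm?),
# never generates text. Snippets and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The protagonist draws a revolver in the final scene.",
    "A quiet dinner conversation about the harvest.",
    "The sheriff fires a warning shot over the crowd.",
    "Two friends walk along the shore discussing poetry.",
]
labels = ["firearm", "no_firearm", "firearm", "no_firearm"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["He draws a revolver and fires a shot."]))  # ['firearm']
```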
-
-
aisafety.dance aisafety.dance
-
AI with intact skills, but broken goals, would be an AI that skillfully acts towards corrupted goals.
-
Without intuition, AI can't understand common sense or humane values. Thus, AI might achieve goals in logically-correct but undesirable ways.
-
-
www.linkedin.com www.linkedin.com
-
Matthew van der Hoorn Yes totally agree, but it could be used for creating a draft to work with; that's always the angle I try to take, but I hear what you are saying Matthew!
Reply to Nidhi Sachdeva: Nidhi Sachdeva, PhD Just went through the micro-lesson itself. In the context of teachers using it to generate instruction examples, I do not argue against that. The teacher does not have to learn the content, or so I hope.
However, I would argue that the learners themselves should try to come up with examples or analogies, etc. But this depends on the learner's learning skills, which should be taught in schools in the first place.
-
***Deep Processing*** -> It's important in learning. It's when our brain constructs meaning and says, "Ah, I get it, this makes sense." -> It's when new knowledge establishes connections to your pre-existing knowledge. -> When done well, it's what makes the knowledge easily retrievable when you need it. How do we achieve deep processing in learning? 👉🏽 STORIES, EXPLANATIONS, EXAMPLES, ANALOGIES and more - they all promote deep meaningful processing. 🤔 BUT, it's not always easy to come up with stories and examples. It's also time-consuming. You can ask your AI buddies to help with that. We have it now, let's leverage it. Here's a microlesson developed on 7taps Microlearning about this topic.
Reply to Nidhi Sachdeva: I agree mostly, but I would advice against using AI for this. If your brain is not doing the work (the AI is coming up with the story/analogy) it is much less effective. Dr. Sönke Ahrens already said: "He who does the effort, does the learning."
I would bet that Cognitive Load Theory also would show that there is much less optimized intrinsic cognitive load (load stemming from the building or automation of cognitive schemas) when another person, or the AI, is thinking of the analogies.
https://www.linkedin.com/feed/update/urn:li:activity:7199396764536221698/
-
-
citl.indiana.edu citl.indiana.edu
-
If students know that the AI has some responsibility for determining their grades, that AI will have considerably more authority in the classroom or in any interactions with students.
warning about AI grading
-
-
-
if I met a robot that looked very much like a beautiful girl and everything went fine together with her and me but
for - comparison - human vs AI robot - Denis Noble
-
-
meta.stackexchange.com meta.stackexchange.com
-
One of the key elements was "attribution is non-negotiable". OpenAI, historically, has done a poor job of attributing parts of a response to the content that the response was based on.
-
I feel violated, cheated upon, betrayed, and exploited.
-
I wouldn't focus too much on "posted only after human review" - it's worth noting that's worth nothing. We literally just saw a case of obviously ridiculous AI images in a scientific paper breezing through peer review with no one caring, so quality will necessarily go down, because Brandolini's law combined with AI is a death sentence for communities like SE, and I doubt they'll employ people to review content with the money they'll make
-
What could possibly go wrong? Dear Stack Overflow denizens, thanks for helping train OpenAI's billion-dollar LLMs. Seems that many have been drinking the AI koolaid or mixing psychedelics into their happy tea. So much for being part of a "community", seems that was just happy talk for "being exploited to generate LLM training data..." The corrupting influence of the profit-motive is never far away.
-
If you ask ChatGPT to cite it will provide random citations. That's different from actually training a model to cite (e.g. use supervised finetuning on citations with human raters checking whether sources match, which would also allow you to verify how accurately a model cites). This is something OpenAI could do, it just doesn't.
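A sketch of what a single training example for such citation-focused finetuning might look like: the target answer carries source IDs that human raters can check against the quoted passages. All field names here are invented, not OpenAI's actual format:

```python
# Sketch of one training example for citation-grounded supervised finetuning:
# the target answer carries source IDs a human rater can verify against the
# quoted passages. All field names are invented, not OpenAI's actual format.
import json

example = {
    "question": "How do I reverse a list in Python?",
    "sources": [
        {"id": "so-451", "text": "list.reverse() reverses the list in place."},
        {"id": "so-982", "text": "Slicing with [::-1] returns a reversed copy."},
    ],
    "target": (
        "Use list.reverse() to reverse in place [so-451], or "
        "lst[::-1] to get a reversed copy [so-982]."
    ),
    "rater_check": "every [id] must cite a source that supports its claim",
}

print(json.dumps(example, indent=2))
```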
-
There are plenty of cases where genAI cites stuff incorrectly, that says something different, or citations that simply do not exist at all. Guaranteeing citations are included is easy, but guaranteeing correctness is an unsolved problem
Tags
- generative AI: attribution
- generative AI: stealing people's content and using for training without attribution
- AI-assisted but still depening on human review
- changing the rules
- feeling exploited
- generative AI: not citing sources
- good point
- StackExchange: negative
- generative AI: incorrectness
- Brandolini's law
- generative AI: citing sources
- ChatGPT: not citing sources
- AI: training
Annotators
URL
-
-
stackoverflow.blog stackoverflow.blog
-
Plenty of companies are still figuring out how to integrate “traditional AI” (that is, non-generative AI; tools like machine learning and rule-based algorithms)
-