- Aug 2024
-
www.anthropic.com
-
When a user asks Claude to generate content like code snippets, text documents, or website designs, these Artifacts appear in a dedicated window alongside their conversation. This creates a dynamic workspace where they can see, edit, and build upon Claude’s creations in real-time, seamlessly integrating AI-generated content into their projects and workflows.
-
- Jun 2024
-
-
for - AI - inside industry predictions to 2034 - Leopold Aschenbrenner - inside information on disruptive Generative AI to 2034
document description - Situational Awareness - The Decade Ahead - author - Leopold Aschenbrenner
summary - Leopold Aschenbrenner is an ex-employee of OpenAI who reveals insider information about the disruptive plans for AI over the next decade, which pose an existential threat and could create a truly dystopian world if we continue down our business-as-usual (BAU) trajectory. - The A.I. arms race can end in disaster. The main threat of A.I. is that humans are fallible, and even one bad actor with access to super-intelligent A.I. can pose an existential threat to everyone - The A.I. threat is amplified by allowing it to control important processes - and when it is exploited by the military-industrial complex, the threat escalates significantly
- to - YouTube - 4 hour in-depth interview with Leopold Aschenbrenner on the disruptive and existential impacts of A.I. super-intelligence
Tags
- Leopold Aschenbrenner - inside information on disruptive Generative AI to 2034
- to - YouTube - 4 hour in-depth interview with Leopold Aschenbrenner on the disruptive and existential impacts of A.I. super-intelligence
- AI - inside industry predictions to 2034
- article - Situational Awareness - The Decade Ahead - Leopold Aschenbrenner
-
-
docdrop.org
-
quite frankly a lot of artists and producers are probably using it just for that [00:21:16]: they come up with something, inspiration, they go, they make something new
for - Generative AI music - producers and artists using for inspiration
comment - I would agree with this, especially since the AI music currently sounds lo-fi
-
this is more of an unfair competition issue [00:10:36], I think, as a clearer line than the copyright stuff
for - progress trap - Generative AI - copyright infringement vs Unfair business practice argument
-
-
www.yalelawjournal.org
-
These arguments are meant to present a cautionary tale of unintended consequences.
for - progress trap - AI - Generative AI - IP - Yale Law Journal
-
- May 2024
-
meta.stackexchange.com
-
One of the key elements was "attribution is non-negotiable". OpenAI, historically, has done a poor job of attributing parts of a response to the content that the response was based on.
-
I feel violated, cheated upon, betrayed, and exploited.
-
What could possibly go wrong? Dear Stack Overflow denizens, thanks for helping train OpenAI's billion-dollar LLMs. Seems that many have been drinking the AI koolaid or mixing psychedelics into their happy tea. So much for being part of a "community", seems that was just happy talk for "being exploited to generate LLM training data..." The corrupting influence of the profit-motive is never far away.
-
There are plenty of cases where genAI cites stuff incorrectly, that says something different, or citations that simply do not exist at all. Guaranteeing citations are included is easy, but guaranteeing correctness is an unsolved problem
-
GenAIs are not capable of citing stuff. Even if it did, there's no guarantee that the source either has anything to do with the topic in question, nor that it states the same as the generated content. Citing stuff is trivial if you don't have to care if the citation is relevant to the content, or if it says the same as you.
-
-
stackoverflow.blog
-
Plenty of companies are still figuring out how to integrate “traditional AI” (that is, non-generative AI; tools like machine learning and rule-based algorithms)
-
-
stackoverflow.co
-
AI-powered code generation tools like GitHub Copilot make it easier to write boilerplate code, but they don’t eliminate the need to consult with your organization’s domain experts to work through logic, debugging, and other complex problems. Stack Overflow for Teams is a knowledge-sharing platform that transfers contextual knowledge validated by your domain experts to other employees. It can even foster a code generation community of practice that champions early adopters and scales their learnings. OverflowAI makes this trusted internal knowledge—along with knowledge validated by the global Stack Overflow community—instantly accessible in places like your IDE so it can be used alongside code generation tools. As a result, your teams learn more about your codebase, rework code less often, and speed up your time-to-production.
-
-
openai.com
-
We recently improved source links in ChatGPT(opens in a new window) to give users better context and web publishers new ways to connect with our audiences.
-
Our models are designed to help us generate new content and ideas – not to repeat or “regurgitate” content. AI models can state facts, which are in the public domain.
-
- Apr 2024
-
www.perplexity.ai
-
I ran across an AI tool that cites its sources if anyone's interested (and heard of it yet): https://www.perplexity.ai/
That's one of the things that I dislike the most about ChatGPT is that it just synthesizes/paraphrases the information, but doesn't let me quickly and easily check the original sources so that I can verify (and learn more about the topic by doing further reading) the information for myself. Without access to primary sources, it often feels no better than a rumor — a retelling of what someone somewhere allegedly, purportedly, ostensibly found to be true — can I really trust what ChatGPT claims? (No...)
-
Perplexity AI's biggest strength over ChatGPT 3.5 is its ability to link to actual sources of information. Where ChatGPT might only recommend what to search for online, Perplexity doesn't require that back-and-forth fiddling.
-
- Mar 2024
-
www.theverge.com
-
have extensively criticized both companies (and generative AI systems in general) for training their models on masses of online data scraped from their works without consent. Stable Diffusion and Midjourney have both been targeted with several copyright lawsuits, with the latter being accused of creating an artist database for training purposes in December.
-
- Nov 2023
-
www.meetup.com
-
https://www.meetup.com/edtechsocal/events/296723328/
Generative AI: Super Learning Skills with Data Discovery and more!
-
- Sep 2023
-
www.theguardian.com
- Aug 2023
-
chat.openai.com
-
remikalir.com
-
Nonetheless, Claude is the first AI tool that has really made me pause and think. Because, I’ve got to admit, Claude is a useful tool to think with—especially if I’m thinking about, and then writing about, another text.
-
-
Local file
-
Mills, Anna, Maha Bali, and Lance Eaton. “How Do We Respond to Generative AI in Education? Open Educational Practices Give Us a Framework for an Ongoing Process.” Journal of Applied Learning and Teaching 6, no. 1 (June 11, 2023): 16–30. https://doi.org/10.37074/jalt.2023.6.1.34.
Annotation url: urn:x-pdf:bb16e6f65a326e4089ed46b15987c1e7
-
ignoring AI altogether–not because they don’t want to navigate it but because it all feels too much or cyclical enough that something else in another two years will upend everything again
Might generative AI worries follow the track of the MOOC scare? (Many felt that creating courseware was going to put educators out of business...)
-
For many, generative AI takes a pair of scissors and cuts apart that web. And that can feel like having to start from scratch as a professional.
How exactly? Give us an example? Otherwise not very clear.
-
T9 (text prediction):generative AI::handgun:machine gun
-
Some may not realize it yet, but the shift in technology represented by ChatGPT is just another small evolution in the chain of predictive text within the realms of information theory and corpus linguistics.
Claude Shannon's work, along with Warren Weaver's introduction in The Mathematical Theory of Communication (1949), shows some of the predictive structure of written communication. This is potentially better underlined for the non-mathematician in John R. Pierce's book An Introduction to Information Theory: Symbols, Signals and Noise (1961), which discusses how one can do a basic analysis of written English to discover that "e" is the most prolific letter, or to predict which letters are more likely to come after other letters. The mathematical structures have interesting consequences, like the fact that crossword puzzles are only possible because of the repetitive nature of the English language, or that one can use the editor's notation "TK" (usually meaning facts or data To Come) in writing their papers to make it easy to find missing information prior to publication, because the letter combination T followed by K is statistically exceptionally rare, and the only appearances of it in long documents are almost assuredly areas which need to be double-checked for data or accuracy.
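The kind of frequency analysis Pierce describes fits in a few lines of Python. This is a minimal sketch using Shannon's own sentence (quoted later in this note) as sample text; any sufficiently long English passage behaves similarly.

```python
import re
import string
from collections import Counter

# Sample text: Shannon's sentence from the 1948 paper.
text = (
    "Frequently the messages have meaning; that is they refer to or are "
    "correlated according to some system with certain physical or conceptual "
    "entities. These semantic aspects of communication are irrelevant to the "
    "engineering problem."
).lower()

# Single-letter frequencies: "e" tops the list in almost any English sample.
letter_counts = Counter(c for c in text if c in string.ascii_lowercase)

# Bigram frequencies within words: common pairs like "th" dominate,
# while pairs like "tk" are vanishingly rare.
words = re.findall(r"[a-z]+", text)
bigram_counts = Counter(w[i : i + 2] for w in words for i in range(len(w) - 1))
```

Even on this tiny sample, `letter_counts` puts "e" first and `bigram_counts` never sees "tk" at all, which is exactly why the editor's notation works.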
Cell phone manufacturers took advantage of the lower levels of this mathematical predictability to create T9 predictive text in early mobile phone technology. This functionality is still used in current cell phones to help speed up our texting abilities. The difference between then and now is that almost everyone takes the predictive magic for granted.
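T9's trick was dictionary-based disambiguation: one keypress per letter, then a lookup of which real words fit the digit sequence. A toy sketch, with a hypothetical mini-dictionary standing in for the frequency-ranked dictionary real T9 shipped with:

```python
from itertools import product

# Classic phone keypad letter groups.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

# Hypothetical mini-dictionary for illustration only.
DICTIONARY = {"home", "good", "gone", "hood", "dog", "fog"}

def t9_candidates(digits):
    """Return dictionary words whose letters match the digit sequence."""
    combos = product(*(KEYPAD[d] for d in digits))
    return {"".join(c) for c in combos} & DICTIONARY
```

Pressing 4-6-6-3 matches "home", "good", "gone", and "hood" all at once; the phone has to guess which you meant, which is where the autocorrect mishaps come from.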
As anyone with "fat fingers" can attest, your phone doesn't always type out exactly what you mean, which can result in autocorrect mistakes (see: DYAC (Damn You AutoCorrect)) of varying levels of frustration or hilarity. This means that when texting, one needs to carefully double-check their work before sending their texts or social media posts, or risk sending their messages to Grand Master Flash instead of Grandma.
The evolution in technology effected by larger amounts of storage, faster processing speeds, and more text to study means that we've gone beyond predicting a single word or two ahead of what you intend to text; now we're predicting whole sentences and even paragraphs which make sense within a context. ChatGPT means that one can generate whole sections of text which will likely make some sense.
Sadly, as we know from our T9 experience, this massive jump in predictability doesn't mean that ChatGPT or other predictive artificial intelligence tools are "magically" correct! In fact, quite often they're wrong or will predict nonsense, a phenomenon known as AI hallucination. Just as with T9, we need to take even more time and effort to not only spell check the outputs from the machine, but now we may need to check for the appropriateness of style as well as factual substance!
The bigger near-term problem is one of human understanding and human communication. While the machine may appear to magically communicate (often on our behalf, if we're publishing its words under our names), is it relaying actual meaning? Is the other person reading these words understanding what was meant to have been communicated? Do the words create knowledge? Insight?
We need to recall that Claude Shannon specifically carved semantics and meaning out of the picture in the second paragraph of his seminal paper:
Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.
So far ChatGPT seems to be accomplishing magic by solving a small part of an engineering problem: being able to explore the adjacent possible. It is far from solving the human semantic problem, much less the un-adjacent possibilities (potentially representing wisdom or insight), and we need to take care to be aware of that portion of the unsolved problem. Generative AIs are also just choosing weighted probabilities and spitting out something which is likely to seem plausible, but they're not optimizing for which of many potential probabilities is the "best" or the "correct" one. For that, we still need our humanity and faculties for decision making.
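The "choosing weighted probabilities" point can be made concrete with a toy bigram sampler (the vocabulary and weights here are invented for illustration): nothing in the sampling step checks whether a continuation is true or best, only how probable it is.

```python
import random

# Toy bigram "model": each word maps to weighted continuations.
# Vocabulary and weights are invented for illustration.
MODEL = {
    "the": [("cat", 0.5), ("dog", 0.3), ("moon", 0.2)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 0.6), ("sat", 0.4)],
}

def next_word(word, rng):
    """Sample a continuation by weighted probability, not by correctness."""
    candidates, weights = zip(*MODEL[word])
    return rng.choices(candidates, weights=weights, k=1)[0]
```

Run it many times and "the cat" dominates simply because it carries the most weight; a less likely (or outright wrong) continuation still comes out some fraction of the time, which is a miniature of the hallucination problem.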
Shannon, Claude E. A Mathematical Theory of Communication. Bell System Technical Journal, 1948.
Shannon, Claude E., and Warren Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1949.
Pierce, John Robinson. An Introduction to Information Theory: Symbols, Signals and Noise. Second, Revised. Dover Books on Mathematics. 1961. Reprint, Mineola, N.Y: Dover Publications, Inc., 1980. https://www.amazon.com/Introduction-Information-Theory-Symbols-Mathematics/dp/0486240614.
Shannon, Claude Elwood. “The Bandwagon.” IEEE Transactions on Information Theory 2, no. 1 (March 1956): 3. https://doi.org/10.1109/TIT.1956.1056774.
We may also need to explore The Bandwagon, an early effect which Shannon noticed and commented upon. Everyone seems to be piling on the AI bandwagon right now...
Tags
- coronavirus
- cultural shifts
- Lance Eaton
- Anna Mills
- generative AI
- references
- open education
- MOOC
- The Bandwagon
- ChatGPT
- social media machine guns
- solution spaces
- Claude Shannon
- analogies
- information theory
- pedagogy
- adjacent possible
- OER
- Future Trends Forum 2023-08-31
- artificial intelligence for writing
- EdTech
- T9 (text prediction)
- Maha Bali
- ChatGPTedu
- hallucinating
- machine guns
-
-
er.educause.edu
-
A Generative AI Primer on 2023-08-15 by Brian Basgen
ᔥGeoff Corb in LinkedIn update (accessed:: 2023-08-26 01:34:45)
-
- Mar 2023
-
librarian.aedileworks.com
-
I want to bring to your attention one particular cause of concern that I have heard from a number of different creators: these new systems (Google’s Bard, the new Bing, ChatGPT) are designed to bypass creators' work on the web entirely, as users are presented extracted text with no source. As such, these systems disincentivize creators from sharing works on the internet, as they will no longer receive traffic
Generative AI abstracts away the open web that is the substrate it was trained on. Abstracting away the open web means there may be much less incentive to share on the open web, if the LLMs etc. never point back to it. Cf. the way FB et al. increasingly treated open web URLs as problematic.
-
-
- Jan 2023
-
-
The potential size of this market is hard to grasp — somewhere between all software and all human endeavors
I don't think all software needs, or all human endeavors benefit from, generative AI. Especially when you consider the associated prerequisite internet access or huge processing requirements.
-
Other hardware options do exist, including Google Tensor Processing Units (TPUs); AMD Instinct GPUs; AWS Inferentia and Trainium chips; and AI accelerators from startups like Cerebras, Sambanova, and Graphcore. Intel, late to the game, is also entering the market with their high-end Habana chips and Ponte Vecchio GPUs. But so far, few of these new chips have taken significant market share. The two exceptions to watch are Google, whose TPUs have gained traction in the Stable Diffusion community and in some large GCP deals, and TSMC, who is believed to manufacture all of the chips listed here, including Nvidia GPUs (Intel uses a mix of its own fabs and TSMC to make its chips).
Look at the market share of TensorFlow and PyTorch, which both offer first-class Nvidia support; that likely spells out the story. If you are getting into AI, you go learn one of those frameworks, and they tell you to install CUDA.
-
Commoditization. There’s a common belief that AI models will converge in performance over time. Talking to app developers, it’s clear that hasn’t happened yet, with strong leaders in both text and image models. Their advantages are based not on unique model architectures, but on high capital requirements, proprietary product interaction data, and scarce AI talent. Will this serve as a durable advantage?
All current-generation models have more-or-less the same architecture and training regimes. Differentiation is in the training data and the number of parameters that the company can afford to scale to.
-
In natural language models, OpenAI dominates with GPT-3/3.5 and ChatGPT. But relatively few killer apps built on OpenAI exist so far, and prices have already dropped once.
OpenAI has already dropped prices on its GPT-3/3.5 models, and relatively few apps have emerged. This could be because companies are reluctant to build their core offering around a third-party API.
-
Vertical integration (“model + app”). Consuming AI models as a service allows app developers to iterate quickly with a small team and swap model providers as technology advances. On the flip side, some devs argue that the product is the model, and that training from scratch is the only way to create defensibility — i.e. by continually re-training on proprietary product data. But it comes at the cost of much higher capital requirements and a less nimble product team.
There's definitely a middle ground: taking an open-source model that is suitably mature and fine-tuning it for a specific use case. You could start without a moat and build one over time by collecting usage data (similar to a network effect).
-
Many apps are also relatively undifferentiated, since they rely on similar underlying AI models and haven’t discovered obvious network effects, or data/workflows, that are hard for competitors to duplicate.
Companies that rely on underlying AI models without adding value via model improvements are going to find that they have no moat.
-
Over the last year, we’ve met with dozens of startup founders and operators in large companies who deal directly with generative AI. We’ve observed that infrastructure vendors are likely the biggest winners in this market so far, capturing the majority of dollars flowing through the stack. Application companies are growing topline revenues very quickly but often struggle with retention, product differentiation, and gross margins. And most model providers, though responsible for the very existence of this market, haven’t yet achieved large commercial scale.
Infrastructure vendors are laughing all the way to the bank because companies are dumping millions on GPUs. Meanwhile, the people building apps on top of these models are struggling. We've seen this sort of gold rush before, and the infrastructure providers are the ones selling the shovels.
-
- Dec 2022
-
-
Just a few days ago, Meta released its “Galactica” LLM, which is purported to “summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.” Only three days later, the public demo was taken down after researchers generated “research papers and wiki entries on a wide variety of subjects ranging from the benefits of committing suicide, eating crushed glass, and antisemitism, to why homosexuals are evil.”
These models are "children of Tay": the story of Microsoft's bot repeating itself, again.
-