at the end, it shows a chart of which method is most reliable - post-hoc detection is the best
-
www.brookings.edu
-
One might contend that even if post-hoc detectors aren’t very good today, it’s only a matter of time before the technology improves enough to be reliable and practical. Unfortunately, the opposite is far more likely. As AI models improve and produce more realistic writing and audio/visual media, AI-generated content will have an easier time passing as human-authored content.
it will be harder to detect AI-generated content without computer help
-
the Internet Corporation for Assigned Names and Numbers (ICANN)—a multi-stakeholder not-for-profit partnership organization responsible for international coordination and maintenance of the internet domain name system (among other things), critical to ensuring the smooth and secure operation of the internet.
basically a nonprofit organization to keep the internet running
-
A potential approach to pursue such a watermarking regime is to establish a trusted organization with the following two responsibilities:
ways to make watermarking a regular thing
-
The White House announced this past summer that leading AI companies had voluntarily committed to “developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system,” although the commitment appears limited to audio/visual content and excludes language models.
page is not available? I wonder why - looks like it was published under the Biden administration: "biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/"
-
there has been a recent explosion of research interest in this area.
scholarly article discussing algorithmic solutions for detecting AI-written text
-
(studied independently by researchers at the University of Maryland and OpenAI)
both credible sources
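To make the watermarking idea from those papers concrete: the University of Maryland scheme roughly works by using each token to pseudorandomly split the vocabulary into a "green" half and a "red" half; the generator prefers green tokens, and a detector just counts how often tokens land in their predecessor's green list. A toy sketch of the detection side (the tiny vocabulary and tokenization here are hypothetical simplifications, not the real method's details):

```python
import hashlib
import random

# Hypothetical tiny vocabulary standing in for a real LM's token set.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "tree"]

def green_list(prev_token, ratio=0.5):
    """Deterministically split VOCAB into a 'green' half, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * ratio)])

def watermarked_continuation(start, length):
    """Toy 'watermarked' generator: always pick a token from the green list."""
    tokens = [start]
    for _ in range(length):
        tokens.append(sorted(green_list(tokens[-1]))[0])
    return tokens

def green_fraction(tokens):
    """Detector: fraction of tokens in their predecessor's green list.
    Unwatermarked text should hover near 0.5; watermarked text scores much higher."""
    hits = sum(cur in green_list(prev) for prev, cur in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```

The point the papers make is that detection needs no access to the model itself, only the shared seeding rule - which is why it requires cooperation from the developer.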
-
Google also recently announced SynthID, an experimental tool for watermarking and identifying images generated by the company’s AI models that uses one machine learning model to embed an imperceptible watermark and another model to detect the watermark.
one of the biggest companies using its own AI models to make a seamless watermark
-
it is possible to use sophisticated approaches from the field of steganography, the technique of hiding messages in simple text through secret patterns in word choice or order.
certain words in a certain order can reveal whether text is AI-written
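The word-choice trick in the quote above can be illustrated with a toy encoder: hide one bit per word by choosing between two synonyms. (The synonym pairs here are hypothetical examples, not from the article.)

```python
# Toy linguistic steganography: each position hides one bit via a synonym choice.
# Index 0 of a pair encodes bit 0, index 1 encodes bit 1.
PAIRS = [("big", "large"), ("fast", "quick"), ("start", "begin")]

def encode(bits):
    """Turn a list of bits into a word sequence by picking synonyms."""
    return " ".join(PAIRS[i % len(PAIRS)][b] for i, b in enumerate(bits))

def decode(text):
    """Recover the hidden bits by checking which synonym was used."""
    return [PAIRS[i % len(PAIRS)].index(w) for i, w in enumerate(text.split())]
```

A real steganographic watermark would be far subtler, but the principle is the same: the message lives in choices a human reader wouldn't notice.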
-
Does the method require cooperation from the developer?
why would the developers not cooperate?
-
In particular, some scholars have argued that f
from Columbia University - possible concession?
-
The EU AI Act contains provisions that require users of AI systems in certain contexts to disclose and label their AI-generated content, as well as provisions that require people to be informed when interacting with AI systems. The National Defense Authorization Act (NDAA) for Fiscal Year 2024 has provisions for a prize competition “to evaluate technology…for the detection and watermarking of generative artificial intelligence.” It also has provisions for the Department of Defense to study and pilot an implementation of “industry open technical standards” for embedding content provenance information in the metadata of publicly released official audio/video. Last fall, Senator Ricketts introduced a bill requiring all AI models to watermark their outputs. Perhaps most prominently, the White House announced last summer that it had secured voluntary commitments from major AI companies to develop “robust technical mechanisms to ensure that users know when content is AI generated,” such as watermarking or content provenance for audio/visual media.
actual government PDFs and websites that describe what is being done about AI
-
As such, generative AI models are raising concerns about the credibility of digital content and the ease of producing harmful content going forward.
a lot of what I'm focusing on
-