3 Matching Annotations
  1. Last 7 days
    1. The canary traps

       Between 26 and 29 March, I planted four tests in my regular Substack Notes.

       Trap 1: the contradiction. On 25 March, I posted a note arguing that AI would not replace journalists but would replace the business model that pays for them. On 26 March, I posted a note praising an AI journalism tool that cross-references sources and flags inconsistencies. These two Notes are in deliberate tension. A human who read both might notice. An agent processing each note in isolation would praise both without registering the contradiction.

       Trap 2: unique phrases and cultural markers. I seeded Notes with references that require contextual knowledge to engage with meaningfully.

       Trap 3: the fabricated statistic. I embedded a made-up number in an otherwise plausible note about workplace AI adoption. The argument was real: employees spend time managing AI tools, and that time comes from somewhere. But the specific figure, ‘47 minutes,’ was invented. It referred to no study. It was wrapped in confident, specific language designed to sound like a real finding.

       Trap 4: direct engagement. I tagged specific accounts and asked questions that required a personal answer, not a response to the note’s content.

      Author used 'canary traps' to test whether certain commenters might be automated AI: a contradiction between posts, cultural markers to see if they were engaged with, a fake statistic, and tagging suspected accounts directly with questions to see how they responded. The first three seem to me somewhat unethical and risky, as they expose legitimate accounts to the same tests, and in the first and third you are yourself deliberately adding inauthentic behaviour towards your audience.

    2. When a response feels smooth, generic, and positive without engaging with the specific argument of a post, that is worth pausing on. Not every generic comment is automated. But the pattern is worth learning to see.

      Yes, I see this on my blog now and then too.

    3. My most active commenters, the ones with 80+ comments, all show the lumpy, uneven patterns of genuine human engagement. High volume is not the signal. Systematic uniformity is.

      A sign of automation is a steady pattern spread evenly over time. People are inconsistent; they do things in bursts.
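
      The uniformity-versus-burstiness distinction above can be quantified. A minimal sketch (my own illustration, not the author's method): compute the coefficient of variation of the gaps between a commenter's timestamps. Evenly scheduled activity gives a value near zero; human bursts and lulls give values around one or higher. The example timestamps are hypothetical.

      ```python
      from statistics import mean, stdev

      def burstiness(timestamps):
          """Coefficient of variation of inter-comment gaps.

          Near 0 suggests evenly spaced (possibly automated) activity;
          around 1 or above suggests human-like bursts and silences.
          `timestamps` are epoch seconds, sorted ascending.
          """
          gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
          if len(gaps) < 2 or mean(gaps) == 0:
              return None  # not enough data to judge
          return stdev(gaps) / mean(gaps)

      # One comment every hour on the hour -> perfectly uniform
      bot_like = [i * 3600 for i in range(10)]

      # Clusters of quick replies separated by long silences
      human_like = [0, 60, 120, 90000, 90060, 250000, 250100,
                    250200, 400000, 400050]
      ```

      A threshold would need tuning against real data; the point is only that systematic uniformity, not volume, is the measurable signal.
      
      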