21 Matching Annotations
  1. Last 7 days
    1. To build a te reo speech recognition model, it needed an abundance of transcribed audio. To transcribe the audio, it needed the advanced speakers whose small numbers it was trying to compensate for in the first place. There were, however, plenty of beginning and intermediate speakers who could read te reo words aloud better than they could recognize them in a recording.

      Truly a Catch-22.

    2. they would ask people to record themselves reading a series of sentences designed to capture the full range of sounds in the language

      Built on consent, rather than scraped without their knowledge.

    3. Relying on such services in daily work and life thus coerces some communities to speak dominant languages instead of their own

      Both A.I. and colonization cause language to trend towards globalization--that is, the spread of dominant languages and decline of minority languages.

    1. terra nullius

      Terra nullius = territory that legally belongs to no state; as the article states, it's a legal fiction that was used to justify colonization.

    2. It’s exploitation by design. A modern colonial infrastructure that extracts knowledge and culture, repackages it, with all traces of origin erased.

      "Exploitation by design" is a good way to put it.

    1. We have not yet built but will build a technology that is so horrible that it can kill us. But clearly, the only people skilled to address this work are us, the very people who have built it, or who will build it

      Very much giving "we created a problem, but don't worry, we can create a solution to our problem!" But worse because the problem doesn't even really exist yet.

    2. “So for these individuals, they think that the biggest problems in the world are can AI set off a nuclear weapon?”

      For the privileged, the biggest problem with A.I. is a hypothetical doomsday scenario that A.I. currently has no capacity whatsoever to bring about, rather than the harms it is causing in the present.

    3. found that the political right was more often amplified in Twitter’s algorithm.

      This tracks with the popularity of right wing content I've been seeing for a while (for example, manosphere podcasts).

    4. The results of the investigation were never released,

      Google tells a different story than Gebru does, but won't release the investigation results? Hmm...

    5. These technologies don’t operate on their own. They’re trained by humans, and the material fed into them matters — and the people making the decisions about how the machines are trained are crucial, too

      This!!! A.I. essentially regurgitates what it's been fed, and if it is fed data from humans with certain biases, it will return biased outputs.

    6. criminal sentencing and policing.

      Reminds me of the most recent case I saw of this technology's ramifications in criminal policing: Trevis Williams, who was wrongly arrested for a sex crime he didn't commit because of a false match from the NYPD's A.I. facial recognition technology.

      The article also mentions Robert Williams, whom I remember pretty clearly because his case went more viral, since it is largely considered the first of these false matches. The ramifications of that lack of diversity have already happened.

    1. it’s just in your brain

      Is this implying that he wants to create an A.I. brain chip (something like Neuralink)??? Why is this guy so cartoonishly unethical?

    2. We’re going to target the digital LSATs;

      As someone who just took the LSAT...wow.

      Considering how LSAC is cracking down on all students testing in China due to cheating misconduct, I wonder what will happen here once this supposed LSAT cheating tool hits the market.

    3. “My grades were amazing,” she said. “It changed my life.”

      Grades have always been something students have tried to "game," and when A.I. can help you get good grades with less effort, of course people gravitate towards it. Perhaps this is an argument for "ungrading."

    4. “It’s the best place to meet your co-founder and your wife

      This is such a gross way of thinking (the wife part, especially considering how the "Mrs. degree" used to be a thing).

  2. Sep 2025
    1. AI output is unreliable and unpredictable. It can be good, but it can also be inaccurate or misleading,

      A.I. can make up facts and sources on a whim; these fabrications are known as A.I. hallucinations.

    1. So now, in many ways, humanities majors can produce some of the most interesting “code.”

      this is the "education" that Mollick suggests is best for most effective use of A.I.