30 Matching Annotations
  1. Aug 2024
  2. Jan 2024
    1. Facial Recognition Technology: Current Capabilities, Future Prospects, and Governance (2024). 120 pages | 8.5 x 11 | Paperback | ISBN 978-0-309-71320-7 | DOI 10.17226/27397

      http://nap.nationalacademies.org/27397

  3. Dec 2023
    1. Texas has a law called CUBI and Illinois has BIPA. They prevent me from even doing the scan on somebody to determine if they’re in the set. I think these are bad laws. They prevent a very useful, reasonable, completely sensible thing. The thing that people are worried about, I don’t think anyone is building. No one has been trying to build this ‘who are these random people?’

      Meta’s CTO doesn’t know about Clearview AI

      There are companies that are trying to build systems to recognize random faces.

  4. Sep 2023
    1. there are currently no laws or standards that govern how to use certain kinds of products, machine learning products or AI products, and for what purpose, right, so there are no restrictions. So we don't know if these algorithms that are being used by law enforcement are breaking certain laws; we don't know if algorithms that are being used for hiring are breaking Equal Employment Opportunity

      Here Gebru questions the common belief that law enforcement and employers are trustworthy.
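
      As a concrete illustration of the kind of audit Gebru says is missing: for hiring, one widely used heuristic is the EEOC "four-fifths" rule, under which a group's selection rate below 80% of the highest group's rate is treated as evidence of adverse impact. A minimal, hypothetical Python sketch (the data layout and function names are invented for this example):

      ```python
      # Hypothetical adverse-impact check on a hiring algorithm's outcomes,
      # using the EEOC "four-fifths" rule of thumb. All data here is invented.

      def selection_rates(outcomes):
          """outcomes: dict mapping group -> (number_selected, number_of_applicants)."""
          return {g: sel / total for g, (sel, total) in outcomes.items() if total}

      def four_fifths_check(outcomes):
          """Flag groups whose selection rate is below 80% of the highest group's rate."""
          rates = selection_rates(outcomes)
          if not rates:
              return {}
          highest = max(rates.values())
          if highest == 0:
              return {}
          return {g: (rate / highest, rate / highest < 0.8) for g, rate in rates.items()}

      if __name__ == "__main__":
          toy = {"group_a": (50, 100), "group_b": (30, 100)}  # invented numbers
          print(four_fifths_check(toy))  # group_b ratio 0.6 -> flagged
      ```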

  5. Jul 2023
    1. we have all sorts of stupid biases when it comes to leadership selection.
      • facial bias
        • experiments show that children and adults alike, who did not recognise any of the faces shown, picked the actual winners and runners-up of real elections as their preferred leaders
        • China exploits the "white-guy-in-a-tie" problem to win deals.
          • Companies hire a white person with zero experience to wear a nice suit and tie and pose as a businessman who has just flown in from Silicon Valley.
  6. Mar 2023
  7. Mar 2022
    1. computers might therefore easily outperform humans at facial recognition and do so in a much less biased way than humans. And at this point, government agencies will be morally obliged to use facial recognition software since it will make fewer mistakes than humans do.

      Banning it now because it isn't as good as humans leaves little room for a time when the technology is better than humans, when the algorithm's calculations are less biased than human perception and interpretation. So we need rigorous methodologies for testing and documenting algorithmic models, as well as psychological studies, to know when the boundary of machine-better-than-human is crossed.
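
      One way to make such testing concrete is disaggregated evaluation: measure the same error rate separately for each demographic group, for both the algorithm and a human baseline, and compare. A minimal, hypothetical Python sketch (the record format and names are invented for this example) computing per-group false-match rates and a disparity ratio:

      ```python
      # Hypothetical disaggregated evaluation of a face matcher: false-match rate
      # per demographic group plus a worst/best disparity ratio. Data is invented.
      from collections import defaultdict

      def false_match_rates(records):
          """records: iterable of (group, predicted_match, true_match) tuples, one per comparison."""
          impostor_trials = defaultdict(int)  # comparisons that should NOT match
          false_matches = defaultdict(int)    # impostor comparisons the model accepted
          for group, predicted, actual in records:
              if not actual:
                  impostor_trials[group] += 1
                  if predicted:
                      false_matches[group] += 1
          return {g: false_matches[g] / impostor_trials[g] for g in impostor_trials}

      if __name__ == "__main__":
          sample = [  # (group, model_said_match, ground_truth_match)
              ("group_a", False, False), ("group_a", True, False), ("group_a", True, True),
              ("group_b", False, False), ("group_b", False, False), ("group_b", True, True),
          ]
          rates = false_match_rates(sample)
          print(rates)  # {'group_a': 0.5, 'group_b': 0.0}
          worst, best = max(rates.values()), min(rates.values())
          print("disparity ratio:", worst / best if best else float("inf"))
      ```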

  8. Dec 2021
  9. Oct 2021
  10. Sep 2021
  11. Aug 2021
  12. Jul 2021
  13. Jun 2021
  14. Feb 2021
  15. Oct 2020
  16. Jun 2020
  17. May 2020
  18. Mar 2020
    1. The system has been criticised due to its method of scraping the internet to gather images and storing them in a database. Privacy activists say the people in those images never gave consent. “Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said in a recent interview with CoinDesk. “It’s kind of a bizarre argument to make because [your face is the] most public thing out there.”
    2. According to the Swedish Police Authority's guidelines, an impact assessment must be carried out before new police tools are introduced if they involve sensitive processing of personal data. No such assessment has been made for the tool in question.

      Swedish police have used Clearview AI without the required impact assessment (‘konsekvensbedömning’) having been performed.

      In other words, Swedish police have used a facial-recognition system without being allowed to do so.

      This is a clear breach of human rights.

      The Swedish police have lied about this, as reported by Dagens Nyheter.

  19. Jan 2020
  20. Apr 2019
    1. “In contrast to Dr. Wood’s claims, bias found in one system is cause for concern in the other, particularly in use cases that could severely impact people’s lives, such as law enforcement applications,” they wrote.

      This is more important than most people probably realise. Once these systems are deployed at substantial scale, which isn't far away, recognition bias will decide whether a person lives or dies.

    2. U.S. securities regulators shot down attempts by Amazon.com Inc to stop its investors from considering two shareholder proposals about the company’s controversial sale of a facial recognition service, a sign of growing scrutiny of the technology.

      Surveillance capitalism at its worst; this behemoth tries to keep the people who own it from making decisions.

      Capitalism is like Skynet, an organism that's taken flight on its own, bound to make solipsistic and egoistic judgments and choices.

  21. Oct 2018
    1. Only the most mundane uses of biometrics and facial recognition are concerned with only identifying a specific person, matching a name to a face or using a face to unlock a phone. Typically these systems are invested in taking the extra steps of assigning a subject to an identity category in terms of race, ethnicity, gender, sexuality, and matching those categories with guesses about emotions, intentions, relationships, and character to shore up forms of discrimination, both judicial and economic.
    2. Questions about the inclusivity of engineering and computer science departments have been going on for quite some time. Several current “innovations” coming out of these fields, many rooted in facial recognition, are indicative of how scientific racism has long been embedded in apparently neutral attempts to measure people — a “new” spin on age-old notions of phrenology and biological determinism, updated with digital capabilities.
  22. Dec 2017
    1. Starting Tuesday, any time someone uploads a photo that includes what Facebook thinks is your face, you’ll be notified even if you weren’t tagged.

      This is eerily similar to the book The Circle, where facial recognition is run over all photos and video on the web, including CCTV. No more secrets.

  23. Oct 2016