42 Matching Annotations
  1. Nov 2023
    1. The nightmares of AI discrimination and exploitation are the lived reality of those I call the excoded

      Defining 'excoded'

    2. AI raises the stakes because now, that data is not only used to make decisions about you, but also to make deeply powerful inferences about people and communities. That data is training models that can be deployed and mobilized through automated systems that affect our fundamental rights and determine whether you get a mortgage, a job interview, or even how much you’re paid. Thinking individually is only part of the equation now; you really need to think in terms of collective harm. Do I want to give up this data and have it be used to make decisions about people like me—a woman, a mother, a person with particular political beliefs?

      Adding your data to AI models is a collective decision

  2. Feb 2023
    1. Staff and students are rarely in a position to understand the extent to which data is being used, nor are they able to determine the extent to which automated decision-making is leveraged in the curation or amplification of content.

      Is this a data (or privacy) literacy problem? A lack of regulation by experts in this field?

  3. Jan 2023
    1. View closed captioning or live transcription during a meeting or webinar: Sign in to the Zoom desktop client. Join a meeting or webinar. Click the Show Captions button.
    2. If closed captioning or live transcripts are available during a meeting or webinar, you can view these as a participant
    1. User: To enable automated captioning for your own use: Sign in to the Zoom web portal. In the navigation menu, click Settings. Click the Meeting tab. Under In Meeting (Advanced), click the Automated captions toggle to enable or disable it. If a verification dialog displays, click Enable or Disable to verify the change. Note: If the option is grayed out, it has been locked at either the group or account level. You need to contact your Zoom admin. (Optional) Click the edit option to select which languages you want to be available for captioning. Note: Step 7 may not appear for some users until September 2022, as a set of captioning enhancements is rolling out to users over the course of August.
  4. Dec 2022
    1. It’s tempting to believe incredible human-seeming software is in a way superhuman, Bloch-Wehba warned, and incapable of human error. “Something scholars of law and technology talk about a lot is the ‘veneer of objectivity’ — a decision that might be scrutinized sharply if made by a human gains a sense of legitimacy once it is automated,” she said.

      Veneer of Objectivity

      Quote by Hannah Bloch-Wehba, TAMU law professor

  5. May 2022
    1. This model was tasked with predicting whether a future comment on a thread will be abusive. This is a difficult task without any features provided on the target comment. Despite the challenges of this task, the model had a relatively high AUC of over 0.83, and was able to achieve double-digit precision and recall at certain thresholds.

      Predicting Abusive Conversation Without Target Comment

      This is fascinating. The model is predicting if the next, new comment will be abusive by examining the existing conversation, and doing this without knowing what the next comment will be.
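
      A minimal sketch (not the authors' code) of what "AUC of over 0.83" and "precision and recall at certain thresholds" mean operationally: one set of model scores yields a single AUC, while precision and recall change with the chosen cutoff. The labels and scores below are made-up placeholders.

        # Hedged sketch with hypothetical data: y_true marks whether the next
        # comment actually turned out to be abusive; y_scores are a model's
        # predictions made from the existing thread alone.
        import numpy as np
        from sklearn.metrics import precision_score, recall_score, roc_auc_score

        y_true   = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])
        y_scores = np.array([0.10, 0.30, 0.80, 0.20, 0.25, 0.90, 0.40, 0.15, 0.55, 0.05])

        # AUC is threshold-free: it measures how well the scores rank abusive
        # next-comments above non-abusive ones.
        print("AUC:", round(roc_auc_score(y_true, y_scores), 3))

        # Precision and recall depend on where you cut the scores, which is why
        # they are reported "at certain thresholds".
        for threshold in (0.3, 0.5, 0.7):
            y_pred = (y_scores >= threshold).astype(int)
            print(f"threshold={threshold}",
                  "precision:", round(precision_score(y_true, y_pred), 2),
                  "recall:", round(recall_score(y_true, y_pred), 2))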

  6. Apr 2022
    1. And therefore, we accept the dictates of algorithms in deciding, for example, what the next song we should listen to on Spotify is. We accept that an algorithm dictates this because we no longer recognize our non-algorithmic nature; we take ourselves to be the sort of beings that don’t make spontaneous, irreducible decisions about what song to listen to next, but simply outsource the duty for this sort of thing, once governed by inspiration, to a machine that is not capable of inspiration.

      Outsourcing decisions to algorithms

  7. Mar 2022
    1. The growing prevalence of AI systems, as well as their growing impact on every aspect of our daily life, creates a great need to ensure that AI systems are "responsible" and incorporate important social values such as fairness, accountability and privacy.

      An AI is the sum of its programming along with its training data. Its "perspective" on social values such as fairness, accountability, and privacy is a function of the data used to create it.

  8. Dec 2021
  9. Jul 2021
  10. Jun 2021
  11. Feb 2021
    1. Keeping bootstrap-sass in sync with upstream changes from Bootstrap used to be an error-prone and time-consuming manual process. With Bootstrap 3 we have introduced a converter that automates this.
  12. Nov 2020
  13. Oct 2020
    1. the actual upgrade path should be very simple for most people since the deprecated things are mostly edge cases and any common ones can be codemodded
  14. Sep 2020
  15. Jul 2020
  16. May 2020
    1. I originally did not use this approach because many pages that require translation are behind authentication that cannot/should not be run through these proxies.
    2. It shouldn't be a problem to watch the remote scripts for changes using Travis and repack and submit a new version automatically (depends on licensing). It does not put the script under your control, but at least it's in the package and can be reviewed.
    1. You might try this extension: https://github.com/andreicristianpetcu/google_translate_this It does the same thing in the same way as Page Translator and likely will be blocked by Mozilla, but this is a cat and mouse game worth playing if you rely on full-page in-line language translation.
  17. Mar 2020
    1. For automated testing, include the parameter is_test=1 in your tests. That will tell Akismet not to change its behaviour based on those API calls – they will have no training effect. That means your tests will be somewhat repeatable, in the sense that one test won’t influence subsequent calls.
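
      A minimal sketch of what such a test call might look like, assuming the Python requests library and a placeholder Akismet API key (the key, blog URL, and comment values are hypothetical):

        # Hedged sketch: a comment-check request with is_test=1 so the call
        # has no training effect on Akismet. Key and values are placeholders.
        import requests

        API_KEY = "your-akismet-key"  # hypothetical placeholder
        ENDPOINT = f"https://{API_KEY}.rest.akismet.com/1.1/comment-check"

        def check_comment(content: str, user_ip: str) -> bool:
            """Return True if Akismet judges the comment to be spam."""
            resp = requests.post(ENDPOINT, data={
                "blog": "https://example.com",  # the site being protected
                "user_ip": user_ip,
                "comment_content": content,
                "is_test": 1,  # repeatable: this call will not train Akismet
            })
            return resp.text.strip() == "true"  # Akismet replies "true"/"false"

        print(check_comment("Buy cheap meds now!!!", "127.0.0.1"))
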
  18. Jan 2020
  19. Dec 2019
  20. Dec 2015
    1. With SmartBooks, students can see the important content highlighted

      Like an algorithmic version of Hypothesis? Is McGraw-Hill part of the Coalition? Looks like it isn’t. Is it a “for us or against us” situation?

  21. Feb 2014
    1. Alternatively, Daphne Koller and Andrew Ng, who are the founders of Coursera, a Stanford MOOC startup, have decided to use peer evaluation to assess writing. Koller and Ng (2012) specifically used the term “calibrated peer review” to refer to a method of peer review distinct from an application developed by UCLA with National Science Foundation funding called Calibrated Peer Review™ (CPR). For Koller and Ng, “calibrated peer review” is a specific form of peer review in which students are trained on a particular scoring rubric for an assignment using practice essays before they begin the peer review process.
  22. Jan 2014
    1. A rigorous understanding of these developmental processes requires automated methods that quantitatively record and analyze complex morphologies and their associated patterns of gene expression at cellular resolution.

      Rigorous understanding requires automated methods using quantitative recording and analysis.