29 Matching Annotations
  1. Mar 2019
    1. We have developed quite a few concepts and methods for using the computer system to help us plan and supervise sophisticated courses of action, to monitor and evaluate what we do, and to use this information as direct feedback for modifying our planning techniques in the future.

      This reminds me of "personalized learning."

  2. Feb 2019
    1. I think it could be a big mistake to have the population at large play around with algorithms.

      Interesting that a trader, the person who'd most likely be on the winning side of inexperienced people playing with algorithmic finance, would be hesitant to release it on the world at large.

    1. In other words, when YouTube fine-tunes its algorithms, is it trying to end compulsive viewing, or is it merely trying to make people compulsively watch nicer things?

      YouTube's business interests are clearly rewarded by compulsive viewing. If it is even possible to distinguish "nicer" things, YouTube might have to go against its business interests if less-nice things DO lead to more compulsive viewing. Go even deeper, as Rob suggests below, and ask whether viewing itself can shape both how (compulsively?) and what (nice or not-nice?) we view.

    1. Algorithms will privilege some forms of ‘knowing’ over others, and the person writing that algorithm is going to get to decide what it means to know… not precisely, like in the former example, but through their values. If they value knowledge that is popular, then knowledge slowly drifts towards knowledge that is popular.

      I'm so glad I read Dave's post after having just read Rob Horning's great post, "The Sea Was Not a Mask", also addressing algorithms and YouTube.

  3. Jan 2019
    1. Do we want technology to keep giving more people a voice, or will traditional gatekeepers control what ideas can be expressed?

      Part of the unstated problem here is that Facebook has supplanted the "traditional gatekeepers," and its black box feed algorithm is now the gatekeeper that decides what people in the network see or don't see. Things that crazy people used to decry to a non-listening crowd in the town commons are now blasted from the rooftops, spread far and wide by Facebook's algorithm, and can potentially sway major elections.

      I hope they talk about this.

  4. Oct 2018
    1. A more active stance by librarians, journalists, educators, and others who convey truth-seeking habits is essential.

      In some sense these people can also be viewed as aggregators and curators of sorts. How can their work be aggregated and used to compete with the poor algorithms of social media?

    1. Once products and, more important, people are coded as having certain preferences and tendencies, the feedback loops of algorithmic systems will work to reinforce these often flawed and discriminatory assumptions. The presupposed problem of difference will become even more entrenched, the chasms between people will widen.
    1. We want to make our model temporally-aware, as further insights can be gathered by analyzing the temporal dynamics of the user interactions.

      sounds exciting

    2. Reproducibility: We ran our experiment on a single computer, running a 3.2 GHz Intel Core i7 CPU, using PyTorch version 0.2.0.45. We run the optimization on GPU NVIDIA GTX 670. We trained our model with the following parameters: = 0.04, = 0.01, K = 120. All code will be made available at publication time.

      reproducibility

  5. Sep 2018
  6. Aug 2018
    1. interest in understanding how web pages are ranked is foiled: in particular, users cannot know whether or not a high ranking is the result of payment – and again, such secrecy reduces trust and thereby the usability and accessibility of important information
    2. The basic dilemma is simple. If the algorithms are open – then webmasters (and anyone else) interested in having their websites appear at the top of a search result will be able to manipulate their sites so as to achieve that result: but such results would then be misleading in terms of genuine popularity, potential relevance to a searcher’s interests, etc., thereby reducing users’ trust in the search engine results and hence reducing the usability and accessibility of important information. On the other hand, if the algorithms are secret, then the legitimate public
  7. Jul 2018
    1. Leading thinkers in China argue that putting government in charge of technology has one big advantage: the state can distribute the fruits of AI, which would otherwise go to the owners of algorithms.
  8. Jun 2018
    1. use algorithms to decide on what individual users most wanted to see. Depending on our friendships and actions, the system might deliver old news, biased news, or news which had already been disproven.
    2. 2016 was the year of politicians telling us what we should believe, but it was also the year of machines telling us what we should want.
  9. Apr 2018
    1. ConvexHull

      In mathematics, the convex hull or convex envelope of a set X of points in the Euclidean plane or in a Euclidean space (or, more generally, in an affine space over the reals) is the smallest convex set that contains X. For instance, when X is a bounded subset of the plane, the convex hull may be visualized as the shape enclosed by a rubber band stretched around X. -Wikipedia
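      For a concrete picture, here is a minimal sketch using SciPy's Qhull wrapper; the sample points are arbitrary and only illustrate the "rubber band" intuition.

      ```python
      # Compute the convex hull of a small planar point set X.
      import numpy as np
      from scipy.spatial import ConvexHull

      X = np.array([[0, 0], [2, 0], [1, 1], [2, 2], [0, 2], [1, 0.5]])
      hull = ConvexHull(X)

      # Vertices of the smallest convex set containing X, in counterclockwise order.
      print(X[hull.vertices])
      ```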

  10. Mar 2018
  11. Jan 2018
    1. You know Goethe's (or hell, Disney's) story of The Sorcerer's Apprentice? Look it up. It'll help. Because Mark Zuckerberg is both the sorcerer and the apprentice. The difference with Zuck is that he doesn't have all the mastery that's in the sorcerer's job description. He can't control the spirits released by machines designed to violate personal privacy, produce echo chambers, and to rationalize both by pointing at how popular it all is with the billions who serve as human targets for messages (while saying as little as possible about the $billions that bad acting makes for the company).

      This is something I worry about with the IndieWeb movement sometimes. What will be the ultimate effect of everyone having their own site instead of relying on social media? In some sense the interactions may map one-to-one onto actual people (presuming there aren't armies of bot-sites). The other big piece of the puzzle that I often leave out is the black box algorithms the social silos run, which have a significant influence on their users. Foreseeably no one would choose to run such a black box algorithm on their own site, and by forgoing it they take a much more measured and human approach to what they consume and spread, in part because, I hope, they'll take more ownership of their own site.

  12. May 2017
    1. How do we reassert humanity’s moral compass over these alien algorithms? We may need to develop a version of Isaac Asimov’s “Three Laws of Robotics” for algorithms.

      A proposed solution to bad effects of info algorithms.

  13. Apr 2017
  14. Mar 2017
    1. “Design it so that Google is crucial to creating a response rather than finding one,”

      With "Google" becoming generic for "search" today, it is critical that students understand that Google, a commercial entity, will present different results in search to different people based on previous searches. Eli Pariser's work on the filter bubble is helpful for demonstrating this.

  15. Feb 2017
    1. Algorithms are aimed at optimizing everything. They can save lives, make things easier and conquer chaos. Still, experts worry they can also put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, cut choices, creativity and serendipity, and could result in greater unemployment
  16. Aug 2016
    1. A team at Facebook reviewed thousands of headlines using these criteria, validating each other’s work to identify a large set of clickbait headlines. From there, we built a system that looks at the set of clickbait headlines to determine what phrases are commonly used in clickbait headlines that are not used in other headlines. This is similar to how many email spam filters work.

      Though details are scarce, the very idea that Facebook would tackle this problem with both humans and algorithms is reassuring. The common argument about human filtering is that it doesn’t scale. The common argument about algorithmic filtering is that it requires good signal (though some transhumanists keep saying that things are getting better). So it’s useful to know that Facebook used so hybrid an approach. Of course, even algo-obsessed Google has used human filtering. Or, at least, human judgment to tweak their filtering algorithms. (Can’t remember who was in charge of this. Was a semi-frequent guest on This Week in Google… Update: Matt Cutts) But this very simple “we sat down and carefully identified stuff we think qualifies as clickbait before we fed the algorithm” is refreshingly clear.
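      As a thought experiment, here is a minimal sketch of the phrase-frequency idea described in the quote, assuming a small hand-labelled set of clickbait and non-clickbait headlines; it is only an illustration of the general approach, not Facebook's actual system, and all names and thresholds here are made up.

      ```python
      # Count word bigrams in labelled clickbait vs. other headlines and flag phrases
      # that appear much more often in the clickbait set (a crude, spam-filter-like signal).
      from collections import Counter

      def bigrams(headline):
          words = headline.lower().split()
          return [" ".join(words[i:i + 2]) for i in range(len(words) - 1)]

      def learn_clickbait_phrases(clickbait, other, min_ratio=3.0):
          cb = Counter(p for h in clickbait for p in bigrams(h))
          ok = Counter(p for h in other for p in bigrams(h))
          # Keep phrases far more frequent in clickbait than elsewhere (+1 smoothing).
          return {p for p, n in cb.items() if n / (ok.get(p, 0) + 1) >= min_ratio}

      def looks_like_clickbait(headline, flagged_phrases):
          return any(p in flagged_phrases for p in bigrams(headline))
      ```

      A production system would of course weigh many more signals, but the split between human labelling and phrase statistics mirrors what the quoted passage describes.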

  17. Jun 2016
  18. Apr 2016
    1. While there are assets that have not been assigned to a cluster
         If only one asset remaining then
           Add a new Cluster
           Only member is the remaining asset
         Else
           Find the asset with the Highest Average Correlation (HC) to all assets not yet assigned to a Cluster
           Find the asset with the Lowest Average Correlation (LC) to all assets not yet assigned to a Cluster
           If Correlation between HC and LC > Threshold
             Add a new Cluster made of HC and LC
             Add to Cluster all other assets that have not yet been assigned to a Cluster and have an Average Correlation to HC and LC > Threshold
           Else
             Add a Cluster made of HC
             Add to Cluster all other assets that have not yet been assigned to a Cluster and have a Correlation to HC > Threshold
             Add a Cluster made of LC
             Add to Cluster all other assets that have not yet been assigned to a Cluster and have Correlation to LC > Threshold
           End if
         End if
       End While

      Fast Threshold Clustering Algorithm

      Looking for equivalent source code to apply in smart content delivery and wireless network optimisation such as Ant Mesh via @KirkDBorne's status https://twitter.com/KirkDBorne/status/479216775410626560 http://cssanalytics.wordpress.com/2013/11/26/fast-threshold-clustering-algorithm-ftca/
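      In the meantime, here is a rough Python sketch of the pseudocode above, assuming the correlations arrive as a pandas DataFrame; the function name, interface, and the use of the mean of the HC and LC correlations for the "Average Correlation to HC and LC" test are my own assumptions, not from the original post.

      ```python
      # Rough sketch of the Fast Threshold Clustering Algorithm (FTCA) described above.
      # Input: a symmetric asset-correlation DataFrame; output: a list of clusters (lists of column names).
      import pandas as pd

      def ftca(corr: pd.DataFrame, threshold: float = 0.5):
          unassigned = list(corr.columns)
          clusters = []

          while unassigned:
              if len(unassigned) == 1:
                  clusters.append([unassigned.pop()])
                  continue

              # Average correlation of each unassigned asset to the other unassigned assets.
              sub = corr.loc[unassigned, unassigned]
              avg_corr = (sub.sum(axis=1) - 1.0) / (len(unassigned) - 1)
              hc = avg_corr.idxmax()  # Highest Average Correlation (HC)
              lc = avg_corr.idxmin()  # Lowest Average Correlation (LC)

              if corr.loc[hc, lc] > threshold:
                  cluster = [hc, lc]
                  rest = [a for a in unassigned if a not in cluster]
                  # Assumption: "Average Correlation to HC and LC" = mean of the two correlations.
                  cluster += [a for a in rest
                              if (corr.loc[a, hc] + corr.loc[a, lc]) / 2 > threshold]
                  clusters.append(cluster)
              else:
                  rest = [a for a in unassigned if a not in (hc, lc)]
                  hc_cluster = [hc] + [a for a in rest if corr.loc[a, hc] > threshold]
                  rest = [a for a in rest if a not in hc_cluster]
                  lc_cluster = [lc] + [a for a in rest if corr.loc[a, lc] > threshold]
                  clusters.extend([hc_cluster, lc_cluster])

              assigned = {a for c in clusters for a in c}
              unassigned = [a for a in unassigned if a not in assigned]

          return clusters
      ```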

  19. Jan 2016
  20. May 2015
    1. Financial algorithms execute trades based on many variables, sometimes performing autonomously. And they move faster than human thought. Since the markets operate on uncertainties and probabilities, the algorithms presumably responded to the uncertainties and probabilities implied by the false tweet, but Karppi says it's impossible to know the specific genetics of these algorithms.