35 Matching Annotations
  1. Last 7 days
    1. concepts that we have never yet imagined

      Has this been achieved by people, or have algorithms taken on this task and automated this process beyond our ability to directly interact with these concepts?

  2. Apr 2019
    1. When Wojcicki took over, in 2014, YouTube was a third of the way to the goal, she recalled in investor John Doerr’s 2018 book Measure What Matters. “They thought it would break the internet! But it seemed to me that such a clear and measurable objective would energize people, and I cheered them on,” Wojcicki told Doerr. “The billion hours of daily watch time gave our tech people a North Star.” By October, 2016, YouTube hit its goal.

      Obviously they took the easy route. You may need to measure what matters, but getting to that goal by any means necessary or using indefensible shortcuts is the fallacy here. They could have had that North Star, but it's the means they used by which to reach it that were wrong.

      This is another great example of tech ignoring basic ethics to get to a monetary goal. (Another good one is Mark Zuckerberg's "connecting people" mantra, when what it should be is "connecting people for good" or "creating positive connections".)

    2. The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.

      This is a great summation of the issue.

    1. One reason is that products are often designed in ways that make us act impulsively and against our better judgment. For example, suppose you have a big meeting at work tomorrow. Ideally, you want to spend some time preparing for it in the evening and then get a good night’s rest. But before you can do either, a notification pops up on your phone indicating that a friend tagged you on Facebook. “This will take a minute,” you tell yourself as you click on it. But after logging in, you discover a long feed of posts by friends. A few clicks later, you find yourself watching a YouTube video that one of them shared. As soon as the video ends, YouTube suggests other related and interesting videos. Before you know it, it’s 1:00 a.m., and it’s clear that you will need an all-nighter to get ready for the following morning’s meeting. This has happened to most of us.

      This makes me think about the question of social and moral responsibility: I understand that YouTube and Facebook didn't develop these algorithms with nefarious intent, but it is a very drug-like experience, and I know I'm not the only one who can relate to it.

  3. Mar 2019
    1. Roth, now 67, gravitated to matching markets, where parties must choose one another, through applications, courtship and other means. In 1995, he wrote a mathematical algorithm that greatly improved the efficiency of the system for matching medical school graduates to hospitals for their residencies. That work led him to improve the matching models for law clerkships, the hiring of newly minted economists, Internet auctions and sororities. “I’m a market designer,” he says. “Currently, I’m focused on kidneys. We’re trying to match more donor kidneys to people globally.”

      Interesting for many, many fields.
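
      The article doesn't spell out the mechanics, but the residency match Roth redesigned is built on deferred acceptance (Gale–Shapley). Below is a minimal one-to-one sketch with hypothetical preference lists; the real NRMP adds program capacities, couples, and other constraints.

      ```python
      # A minimal sketch of applicant-proposing deferred acceptance
      # (Gale–Shapley). Toy data only; names and preferences are hypothetical.
      def deferred_acceptance(applicant_prefs, program_prefs):
          free = list(applicant_prefs)                # applicants still unmatched
          next_choice = {a: 0 for a in applicant_prefs}
          match = {}                                  # program -> applicant
          rank = {p: {a: i for i, a in enumerate(prefs)}
                  for p, prefs in program_prefs.items()}
          while free:
              a = free.pop()
              p = applicant_prefs[a][next_choice[a]]  # a's best untried program
              next_choice[a] += 1
              if p not in match:
                  match[p] = a                        # tentatively accepted
              elif rank[p][a] < rank[p][match[p]]:
                  free.append(match[p])               # displaced applicant re-enters
                  match[p] = a
              else:
                  free.append(a)                      # rejected; tries next choice
          return {a: p for p, a in match.items()}

      applicants = {"alice": ["mercy", "city"], "bob": ["city", "mercy"]}
      programs = {"mercy": ["bob", "alice"], "city": ["alice", "bob"]}
      print(deferred_acceptance(applicants, programs))
      # -> {'bob': 'city', 'alice': 'mercy'}
      ```

      The key property is that no applicant-program pair would both prefer each other over their assigned match, which is what makes the mechanism stable enough to survive in practice.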

    1. If you do not like the price you’re being offered when you shop, do not take it personally: many of the prices we see online are being set by algorithms that respond to demand and may also try to guess your personal willingness to pay. What’s next? A logical next step is that computers will start conspiring against us. That may sound paranoid, but a new study by four economists at the University of Bologna shows how this can happen.
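
      The excerpt doesn't reproduce the Bologna study's setup, but the mechanism it points at can be sketched: two independent Q-learning sellers that price against each other can, through nothing but reward feedback, settle into sustained high prices with no explicit agreement. In this toy version the demand model, price grid, and hyperparameters are all my assumptions, not the paper's.

      ```python
      # Toy illustration (not the Bologna authors' code): two Q-learning
      # sellers each observe the rival's last price and pick a price.
      import random

      PRICES = [1.0, 1.5, 2.0, 2.5]          # discrete price grid (assumed)
      ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1      # learning rate, discount, exploration

      def demand(p_own, p_rival):
          # Assumed demand: the cheaper seller captures most buyers.
          if p_own < p_rival: return 10
          if p_own > p_rival: return 2
          return 6

      # Q[i][state][action]; state = rival's previous price.
      Q = [{s: {a: 0.0 for a in PRICES} for s in PRICES} for _ in range(2)]
      last = [random.choice(PRICES), random.choice(PRICES)]

      for _ in range(200_000):
          acts = []
          for i in range(2):
              s = last[1 - i]
              if random.random() < EPS:
                  acts.append(random.choice(PRICES))          # explore
              else:
                  acts.append(max(Q[i][s], key=Q[i][s].get))  # exploit
          for i in range(2):
              s, a, s2 = last[1 - i], acts[i], acts[1 - i]
              reward = a * demand(a, acts[1 - i])
              Q[i][s][a] += ALPHA * (reward + GAMMA * max(Q[i][s2].values())
                                     - Q[i][s][a])
          last = acts

      print("learned prices:", last)  # inspect where the two sellers settle
      ```

      Neither agent is told about the other or given any goal beyond its own profit; any tacit coordination emerges from each reacting to the other's observed prices, which is exactly what makes the study's result unsettling.
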
    1. We have developed quite a few concepts and methods for using the computer system to help us plan and supervise sophisticated courses of action, to monitor and evaluate what we do, and to use this information as direct feedback for modifying our planning techniques in the future.

      This reminds me of "personalized learning."

  4. Feb 2019
    1. I think it could be a big mistake to have the population at large play around with algorithms.

      Interesting that a trader, the person most likely to be on the winning side of inexperienced people playing with algorithmic finance, would be hesitant to unleash it on the world at large.

    1. In other words, when YouTube fine-tunes its algorithms, is it trying to end compulsive viewing, or is it merely trying to make people compulsively watch nicer things?

      YouTube's business interests are clearly rewarded by compulsive viewing. If it is even possible to distinguish "nicer" things, YouTube might have to go against its business interests if less-nice things DO lead to more compulsive viewing. Go even deeper, as Rob suggests below, and ask whether viewing itself can shape both how (compulsively?) and what (nice or not-nice?) we view.

    1. Algorithms will privilege some forms of ‘knowing’ over others, and the person writing that algorithm is going to get to decide what it means to know… not precisely, like in the former example, but through their values. If they value knowledge that is popular, then knowledge slowly drifts towards knowledge that is popular.

      I'm so glad I read Dave's post after having just read Rob Horning's great post, "The Sea Was Not a Mask", also addressing algorithms and YouTube.

  5. Jan 2019
    1. Do we want technology to keep giving more people a voice, or will traditional gatekeepers control what ideas can be expressed?

      Part of the unstated problem here is that Facebook has supplanted the "traditional gatekeepers," and its black box feed algorithm is now the gatekeeper that decides what people in the network see or don't see. Things that crazy people used to decry to a non-listening crowd in the town commons are now blasted from the rooftops, spread far and wide by Facebook's algorithm, and can potentially sway major elections.

      I hope they talk about this.

  6. Oct 2018
    1. A more active stance by librarians, journalists, educators, and others who convey truth-seeking habits is essential.

      In some sense these people can also be viewed as aggregators and curators of sorts. How can their work be aggregated and used to compete with the poor algorithms of social media?

    1. Once products and, more important, people are coded as having certain preferences and tendencies, the feedback loops of algorithmic systems will work to reinforce these often flawed and discriminatory assumptions. The presupposed problem of difference will become even more entrenched, the chasms between people will widen.
    1. We want to make our model temporally-aware, as further insights can be gathered by analyzing the temporal dynamics of the user interactions.

      sounds exciting

    2. Reproducibility: We ran our experiment on a single computer, running a 3.2 GHz Intel Core i7 CPU, using PyTorch version 0.2.0.4. We ran the optimization on an NVIDIA GTX 670 GPU. We trained our model with the following parameters: = 0.04, = 0.01, K = 120. All code will be made available at publication time.

      reproducibility

  7. Sep 2018
  8. Aug 2018
    1. interest in understanding how web pages are ranked is foiled: in particular, users cannot know whether or not a high ranking is the result of payment – and again, such secrecy reduces trust and thereby the usability and accessibility of important information
    2. The basic dilemma is simple. If the algorithms are open – then webmasters (and anyone else) interested in having their websites appear at the top of a search result will be able to manipulate their sites so as to achieve that result: but such results would then be misleading in terms of genuine popularity, potential relevance to a searcher’s interests, etc., thereby reducing users’ trust in the search engine results and hence reducing the usability and accessibility of important information. On the other hand, if the algorithms are secret, then the legitimate public
  9. Jul 2018
    1. Leading thinkers in China argue that putting government in charge of technology has one big advantage: the state can distribute the fruits of AI, which would otherwise go to the owners of algorithms.
  10. Jun 2018
    1. use algorithms to decide on what individual users most wanted to see. Depending on our friendships and actions, the system might deliver old news, biased news, or news which had already been disproven.
    2. 2016 was the year of politicians telling us what we should believe, but it was also the year of machines telling us what we should want.
  11. Apr 2018
    1. ConvexHull

      In mathematics, the convex hull or convex envelope of a set X of points in the Euclidean plane or in a Euclidean space (or, more generally, in an affine space over the reals) is the smallest convex set that contains X. For instance, when X is a bounded subset of the plane, the convex hull may be visualized as the shape enclosed by a rubber band stretched around X. -Wikipedia
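
      Since the quote is definitional, a sketch of how a hull is actually computed may help. This is Andrew's monotone chain algorithm, one standard O(n log n) method for the planar case.

      ```python
      # Minimal sketch of Andrew's monotone chain convex hull (2-D points).
      def cross(o, a, b):
          # z-component of (a - o) x (b - o); positive means a left turn.
          return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

      def convex_hull(points):
          pts = sorted(set(points))
          if len(pts) <= 2:
              return pts
          lower, upper = [], []
          for p in pts:                       # build lower hull left to right
              while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                  lower.pop()
              lower.append(p)
          for p in reversed(pts):             # build upper hull right to left
              while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                  upper.pop()
              upper.append(p)
          return lower[:-1] + upper[:-1]      # endpoints are shared; drop dupes

      print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 0.5)]))
      # -> [(0, 0), (2, 0), (2, 2), (0, 2)]  (interior points vanish)
      ```

      The cross-product test is the whole trick: popping whenever three consecutive candidates fail to make a left turn is what keeps each chain convex, the code version of the rubber band snapping tight.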

  12. Mar 2018
  13. Jan 2018
    1. You know Goethe's (or hell, Disney's) story of The Sorcerer's Apprentice? Look it up. It'll help. Because Mark Zuckerberg is both the sorcerer and the apprentice. The difference with Zuck is that he doesn't have all the mastery that's in the sorcerer's job description. He can't control the spirits released by machines designed to violate personal privacy, to produce echo chambers, and to rationalize both by pointing at how popular it all is with the billions who serve as human targets for messages (while saying as little as possible about the $billions that bad acting makes for the company).

      This is something I worry about with the IndieWeb movement sometimes. What will be the ultimate effect of everyone having their own site instead of relying on social media? In some sense it may map one-to-one onto people (presuming there aren't armies of bot sites) interacting directly. The other big piece of the puzzle that I often leave out is the black box algorithms the social silos run, which have a significant influence on their users. Foreseeably, people wouldn't choose to run such black box algorithms on their own sites, and as a result they'd take a much more measured and human approach to what they consume and spread, in part because, I hope, they'd take more ownership of their own sites.

  14. May 2017
    1. How do we reassert humanity’s moral compass over these alien algorithms? We may need to develop a version of Isaac Asimov’s “Three Laws of Robotics” for algorithms.

      A proposed solution to bad effects of info algorithms.

  15. Apr 2017
  16. Mar 2017
    1. “Design it so that Google is crucial to creating a response rather than finding one,”

      With "Google" becoming generic for "search" today, it is critical that students understand that Google, a commercial entity, will present different results in search to different people based on previous searches. Eli Pariser's work on the filter bubble is helpful for demonstrating this.

  17. Feb 2017
    1. Algorithms are aimed at optimizing everything. They can save lives, make things easier and conquer chaos. Still, experts worry they can also put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, cut choices, creativity and serendipity, and could result in greater unemployment.
  18. Aug 2016
    1. A team at Facebook reviewed thousands of headlines using these criteria, validating each other’s work to identify a large set of clickbait headlines. From there, we built a system that looks at the set of clickbait headlines to determine what phrases are commonly used in clickbait headlines that are not used in other headlines. This is similar to how many email spam filters work.

      Though details are scarce, the very idea that Facebook would tackle this problem with both humans and algorithms is reassuring. The common argument about human filtering is that it doesn’t scale. The common argument about algorithmic filtering is that it requires good signal (though some transhumanists keep saying that things are getting better). So it’s useful to know that Facebook used so hybrid an approach. Of course, even algo-obsessed Google has used human filtering. Or, at least, human judgment to tweak their filtering algorithms. (Can’t remember who was in charge of this. Was a semi-frequent guest on This Week in Google… Update: Matt Cutts) But this very simple “we sat down and carefully identified stuff we think qualifies as clickbait before we fed the algorithm” is refreshingly clear.
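
      The description above is concrete enough to sketch. This is my stand-in for whatever Facebook actually built: smoothed log-odds over bigrams, spam-filter style, with hypothetical training headlines.

      ```python
      # Minimal sketch of the phrase-frequency idea: find bigrams
      # over-represented in labeled clickbait, then score new headlines.
      from collections import Counter
      import math

      def bigrams(text):
          w = text.lower().split()
          return [" ".join(w[i:i + 2]) for i in range(len(w) - 1)]

      def train(clickbait, normal):
          cb, ok = Counter(), Counter()
          for h in clickbait: cb.update(bigrams(h))
          for h in normal:    ok.update(bigrams(h))
          vocab = set(cb) | set(ok)
          cb_n, ok_n = sum(cb.values()), sum(ok.values())
          # Laplace-smoothed log-odds: positive = clickbait-flavored phrase.
          return {p: math.log((cb[p] + 1) / (cb_n + len(vocab)))
                   - math.log((ok[p] + 1) / (ok_n + len(vocab)))
                  for p in vocab}

      def score(headline, weights):
          return sum(weights.get(p, 0.0) for p in bigrams(headline))

      weights = train(
          clickbait=["you won't believe what happened next",
                     "this one weird trick doctors hate"],
          normal=["senate passes budget bill",
                  "quarterly earnings beat forecasts"])
      print(score("you won't believe this trick", weights) > 0)  # True
      ```

      The human-labeled set does the heavy lifting here, which is the point of the passage: the algorithm only generalizes judgments people already made carefully.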

  19. Jun 2016
  20. Apr 2016
    1. While there are assets that have not been assigned to a cluster
           If only one asset remaining then
               Add a new Cluster
               Only member is the remaining asset
           Else
               Find the asset with the Highest Average Correlation (HC) to all assets not yet assigned to a Cluster
               Find the asset with the Lowest Average Correlation (LC) to all assets not yet assigned to a Cluster
               If Correlation between HC and LC > Threshold
                   Add a new Cluster made of HC and LC
                   Add to Cluster all other assets that have not yet been assigned to a Cluster and have an Average Correlation to HC and LC > Threshold
               Else
                   Add a Cluster made of HC
                   Add to Cluster all other assets that have not yet been assigned to a Cluster and have a Correlation to HC > Threshold
                   Add a Cluster made of LC
                   Add to Cluster all other assets that have not yet been assigned to a Cluster and have a Correlation to LC > Threshold
               End if
           End if
       End While

      Fast Threshold Clustering Algorithm

      Looking for equivalent source code to apply in smart content delivery and wireless network optimisation such as Ant Mesh via @KirkDBorne's status https://twitter.com/KirkDBorne/status/479216775410626560 http://cssanalytics.wordpress.com/2013/11/26/fast-threshold-clustering-algorithm-ftca/
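
      Below is a runnable sketch of the pseudocode above, assuming the input is a pandas correlation matrix (e.g., returns.corr()) indexed by asset name; the default threshold is illustrative, not from the source post.

      ```python
      # Runnable sketch of the Fast Threshold Clustering Algorithm (FTCA).
      import pandas as pd

      def ftca(corr: pd.DataFrame, threshold: float = 0.5):
          unassigned = list(corr.index)
          clusters = []
          while unassigned:
              if len(unassigned) == 1:
                  clusters.append([unassigned.pop()])
                  continue
              # Average correlation of each unassigned asset to the others
              # (subtract the self-correlation of 1 on the diagonal).
              sub = corr.loc[unassigned, unassigned]
              avg = (sub.sum(axis=1) - 1.0) / (len(unassigned) - 1)
              hc, lc = avg.idxmax(), avg.idxmin()
              if hc == lc:                  # all remaining tie; edge-case guard
                  clusters.append([hc])
                  unassigned.remove(hc)
                  continue
              if corr.loc[hc, lc] > threshold:
                  # One cluster seeded by both HC and LC.
                  members = [hc, lc] + [
                      a for a in unassigned if a not in (hc, lc)
                      and (corr.loc[a, hc] + corr.loc[a, lc]) / 2 > threshold]
                  clusters.append(members)
                  unassigned = [a for a in unassigned if a not in members]
              else:
                  # Two clusters, seeded by HC and LC respectively.
                  for seed in (hc, lc):
                      members = [a for a in unassigned
                                 if a == seed or (a not in (hc, lc)
                                     and corr.loc[a, seed] > threshold)]
                      clusters.append(members)
                      unassigned = [a for a in unassigned if a not in members]
          return clusters
      ```

      Because assets are claimed greedily by the most and least correlated seeds on each pass, the result depends only on the correlation matrix and the threshold, which is what makes the method fast and deterministic compared to iterative clustering.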

  21. Jan 2016
  22. May 2015
    1. Financial algorithms execute trades based on many variables, sometimes performing autonomously. And they move faster than human thought. Since the markets operate on uncertainties and probabilities, the algorithms presumably responded to the uncertainties and probabilities implied by the false tweet, but Karppi says it's impossible to know the specific genetics of these algorithms.