  1. Last 7 days
    1. Francis Fukuyama et al., Middleware for Dominant Digital Platforms: A Technological Solution to a Threat to Democracy, Stanford Cyber Policy Center, 3, https://fsi-live.s3.us-west-1.amazonaws.com/s3fs-public/cpc-middleware_ff_v2.pdf.
    2. Every year, in my platform-regulation class, I draw a Venn diagram on the board with three interlocking circles: privacy, speech, and competition. Then we identify all the issues that fall at the intersection of two or more circles. Interoperability, including for content-moderation purposes, is always smack in the middle. It touches every circle. This is what makes it hard. We have to solve problems in all those areas to make middleware work. But this is also what makes the concept so promising. If—or when—we do manage to meet this many-sided challenge, we will unlock something powerful.

      Interesting point about interoperability sitting at the intersection of all three circles. Are there other features that also touch all three?

    3. Fukuyama's answer is no. Middleware providers will not see privately shared content from a user's friends. This is a good answer if our priority is privacy. It lets my cousin decide which companies to trust with her sensitive personal information. But it hobbles middleware as a tool for responding to her claims about vaccines. And it makes middleware providers far less competitive, since they will not be able to see much of the content we want them to curate.

      Is it all right to let this sort of thing go on at the smaller-scale, personally shared level? I would suggest that the issue is not these small-scale conversations, which can happen linearly; we need to focus instead on the larger-scale amplification of misinformation by sources. Get rid of the algorithmic amplification of the fringe bits, which is polarizing and toxic. Only allow the amplification of more broadly accepted, fact-based, edited, and curated information.

    4. If we cannot afford real, diverse, and independent assessment, we will not realize the promise of middleware.
    5. Facebook deploys tens of thousands of people to moderate user content in dozens of languages. It relies on proprietary machine-learning and other automated tools, developed at enormous cost. We cannot expect comparable investment from a diverse ecosystem of middleware providers. And while most providers presumably will not handle as much content as Facebook does, they will still need to respond swiftly to novel and unpredictable material from unexpected sources. Unless middleware services can do this, the value they provide will be limited, as will users' incentives to choose them over curation by the platforms themselves.

      Does heavy curation even need to exist? If a social company were able to push a linear feed of content to people without the algorithmic forced engagement, then the smaller, fringe material wouldn't have the reach. The majority of the problem would be immediately solved with this single feature.
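The note above proposes replacing engagement-driven ranking with simple chronological delivery. The difference is easy to make concrete; the `Post` structure and the engagement signal below are hypothetical illustrations, not any platform's actual ranking model.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int     # seconds since epoch
    engagement: float  # likes/shares signal an amplifying ranker would use

def linear_feed(posts):
    """Reverse-chronological feed: newest first, no amplification."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts):
    """Engagement-ranked feed: high-interaction posts surface first,
    however old or fringe they are."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

posts = [
    Post("alice", timestamp=100, engagement=2.0),
    Post("fringe", timestamp=50, engagement=90.0),  # old but provocative
    Post("bob", timestamp=200, engagement=5.0),
]

print([p.author for p in linear_feed(posts)])      # chronological order
print([p.author for p in engagement_feed(posts)])  # provocative content first
```

In the chronological feed the provocative post simply ages out of view; in the engagement-ranked feed it stays on top for as long as it keeps drawing reactions.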

    6. Second, how is everyone going to get paid? Without a profit motive for middleware providers, the magic will not happen, or it will not happen at large enough scale. Something about business models—or, at a minimum, the distribution of ads and ad revenue—will have to change. That leaves the two thorny issues I do know a fair amount about: curation costs and user privacy.
    7. First, how technologically feasible is it for competitors to remotely process massive quantities of platform data? Can newcomers really offer a level of service on par with incumbents?

      Do they really need to process all the data?

    8. The First Amendment precludes lawmakers from forcing platforms to take down many kinds of dangerous user speech, including medical and political misinformation.

      Compare social media with the newspaper business from this perspective.

      People joined social media not knowing the end effects, but now they don't have a choice of platform after the fact. Social platforms accelerate disinformation using algorithms.

      Because there is choice amongst newspapers, people can easily move and if they'd subscribed to a racist fringe newspaper, they could easily end their subscription and go somewhere else. This is patently not the case for any social media. There's a high hidden personal cost for connectivity that isn't taken into account. The government needs to regulate this and not the speech portion.

      Social media should be considered a common carrier and regulated as such. Forcing this was an easier and more logical process for telephones, electricity, and other utilities because their cost of implementation was orders of magnitude higher. The data formats and storage for social media should be standardized (potentially even in three or more formats), and that standard should be what the common-carrier obligation imposes. Would this properly skirt the First Amendment issues?
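The standardized-format idea in the note above has at least one real precedent in the W3C's ActivityStreams 2.0 vocabulary. The sketch below shows what a common-carrier interchange record could look like; the field names are loosely modeled on ActivityStreams but simplified, and the validation rule is a hypothetical illustration, not part of any actual standard.

```python
import json

# A minimal, hypothetical interchange record loosely modeled on
# W3C ActivityStreams 2.0; the exact fields here are illustrative only.
def export_post(author, published, content):
    """Serialize a post into the shared schema for transfer off-platform."""
    return json.dumps({
        "type": "Note",
        "attributedTo": author,
        "published": published,
        "content": content,
    })

def import_post(raw):
    """Parse a record from another platform, checking the shared schema."""
    record = json.loads(raw)
    required = {"type", "attributedTo", "published", "content"}
    if not required <= record.keys():
        raise ValueError("record does not conform to the shared schema")
    return record

raw = export_post("alice@example.com", "2021-07-01T14:58:05Z", "Hello")
post = import_post(raw)
print(post["attributedTo"])
```

Because both export and import speak the same schema, a user could in principle carry the same record from one platform to another — the "multi-homing" interoperability discussed elsewhere in these annotations.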

    9. Fukuyama's work, which draws on both competition analysis and an assessment of threats to democracy, joins a growing body of proposals that also includes Mike Masnick's "protocols not platforms," Cory Doctorow's "adversarial interoperability," my own "Magic APIs," and Twitter CEO Jack Dorsey's "algorithmic choice."

      Nice overview of the work in this space at the moment for fixing the monopoly problem in social media. I hadn't heard about Fukuyama's or Daphne Keller's versions before.

      I'm not sure I think Dorsey's is actually a thing. I suspect it is actually vaporware from the word go.

      IndieWeb has been working slowly at the problem as well.

    10. Francis Fukuyama has called "middleware": content-curation services that could give users more control over the material they see on internet platforms such as Facebook or Twitter.
    1. whether the advent of modern communications media has much enhanced our understanding of the world in which we live.

      But it may be seriously questioned whether the advent of modern communications media has much enhanced our understanding of the world in which we live.

      Now that I'm thinking about it, I sort of want the ability to more easily capture, annotate, and save audio while I'm listening to radio or even television. Pausing the media and having the ability to replay it (TiVo and some DVRs provide this capability) and do other things with it would be truly fantastic, especially for saving tidbits for later use and consumption.

    1. Platforms of the Facebook walled-factory type are unsuited to the work of building community, whether globally or locally, because such platforms are unresponsive to their users, and unresponsive by design (design that is driven by a desire to be universal in scope). It is virtually impossible to contact anyone at Google, Facebook, Twitter, or Instagram, and that is so that those platforms can train us to do what they want us to do, rather than be accountable to our desires and needs.

      This is one of the biggest underlying problems that centralized platforms often have. It's also a solid reason why EdTech platforms are pernicious as well.

    2. As Astra Taylor explains in her vital book The People's Platform, this process has often been celebrated by advocates of new platforms.

      Worth taking a look at?

    3. It is common to refer to universally popular social media sites like Facebook, Instagram, Snapchat, and Pinterest as "walled gardens." But they are not gardens; they are walled industrial sites, within which users, for no financial compensation, produce data which the owners of the factories sift and then sell. Some of these factories (Twitter, Tumblr, and more recently Instagram) have transparent walls, by which I mean that you need an account to post anything but can view what has been posted on the open Web; others (Facebook, Snapchat) keep their walls mostly or wholly opaque.

      Would it be useful to distinguish and differentiate the silos based on their level of access? Some are transparent silos while others are not?

      Could we define a spectrum from silo to open? Perhaps axes based on audience or access? Privacy to fully open? How many axes might there be?

  2. Jul 2021
    1. What motivated my newsletter reading habits normally? In large part, affection and light voyeurism. I subscribed to the newsletters of people I knew, who treated the form the way they had once treated personal blogs. I skimmed the dadlike suggestions of Sam Sifton in the New York Times’ Cooking newsletter (skillet chicken and Lana Del Rey’s “Chemtrails Over the Country Club” — sure, okay). I subscribed briefly to Alison Roman’s recipe newsletter before deciding that the ratio of Alison Roman to recipes was much too high. On a colleague’s recommendation, I subscribed to Emily Atkin’s climate newsletter and soon felt guilty because it was so long and came so often that I let it pile up unread. But in order to write about newsletters, I binged. I went about subscribing in a way no sentient reader was likely to do — omnivorously, promiscuously, heedless of redundancy, completely open to hate-reading. I had not expected to like everything I received. Still, as the flood continued, I experienced a response I did not expect. I was bored.

      The question of motivation about newsletter subscriptions is an important one. Some of the thoughts here mirror some of my feelings about social media in general.

      Why?

    2. “Substack is longform media Twitter, for good and for ill,” wrote Ashley Feinberg in the first installment of her Substack.

      Definitely a hot take, but a truthful sounding one.

    3. This came in the context of weighing what she stood to gain and lose in leaving a staff job at BuzzFeed. She knew the worth of what editors, fact-checkers, designers, and other colleagues brought to a piece of writing. At the same time, she was tired of working around the “imperatives of social media sharing.” Clarity and concision are not metrics imposed by the Facebook algorithm, of course — but perhaps such concerns lose some of their urgency when readers have already pledged their support.

      Continuing with the idea above about the shift of Sunday morning talk shows and the influence of Hard Copy: is social media exerting a negative influence on mainstream content and conversation as a result of its algorithmic gut-reaction pressure? How can we fight this effect?

    4. Early on, circa 2015, there was a while when every first-person writer who might once have written a Tumblr began writing a TinyLetter. At the time, the writer Lyz Lenz observed that newsletters seemed to create a new kind of safe space. A newsletter’s self-selecting audience was part of its appeal, especially for women writers who had experienced harassment elsewhere online.

      What sort of spaces do newsletters create based upon their modes of delivery? What makes them "safer" for marginalized groups? Is there a mitigation of algorithmic speed and reach that helps? Is it a more tacit building of community and conversation? How can these benefits be built into an IndieWeb space?

      How can a platform provide "reach" while simultaneously creating negative feedback for trolls and bad actors?

    1. Offline we exist by default; online we have to post our way into selfhood.
    2. A platform like Twitter makes our asynchronous posts feel like real-time interaction by delivering them in such rapid succession, and that illusion begets another more powerful one, that we’re all actually present within the feed.

      This same sort of illusion also occurs in email where we're always assumed to be constantly available to others.

      via Alan Jacobs in re-setting my mental clock – Snakes and Ladders (07/01/2021 14:58:05)

  3. Jun 2021
    1. Deepti Gurdasani on Twitter: “I’m still utterly stunned by yesterday’s events—Let me go over this in chronological order & why I’m shocked. - First, in the morning yesterday, we saw a ‘leaked’ report to FT which reported on @PHE_uk data that was not public at the time🧵” / Twitter. (n.d.). Retrieved June 27, 2021, from https://twitter.com/dgurdasani1/status/1396373990986375171

    1. Dr. Syra Madad. (2021, February 7). What we hear most often “talk to your health care provider if you have any questions/concerns on COVID19 vaccines” Vs Where many are actually turning to for COVID19 vaccine info ⬇️ This is also why it’s so important for the media to report responsibly based on science/evidence [Tweet]. @syramadad. https://twitter.com/syramadad/status/1358509900398272517

    1. Professor, interested in plagues, and politics. Re-locking my twitter acct when is 70% fully vaccinated.

      Example of a professor/researcher who has apparently made his Tweets public but intends to re-lock them once the majority of the threat is over.

  4. May 2021
    1. Milman Parry, hailed as "the Darwin of Homeric scholarship," was among the first men to conceive of literature not merely in terms of genre, but of media.

      Literature isn't merely genre, but media.

    1. Charlotte Jee recently wrote a lovely fictional intro to a piece on a “feminist Internet” that crystallized something I can’t quite believe I never saw before; if girls, women and non-binary people really got to choose where they spent their time online, we would never choose to be corralled into the hostile, dangerous spaces that endanger us and make us feel so, so bad. It’s obvious when you think about it. The current platforms are perfectly designed for misogyny and drive literally countless women from public life, or dissuade them from entering it. Online abuse, doxing, blue-tick dogpiling, pro-stalking and rape-enabling ‘features’ (like Strava broadcasting runners’ names and routes, or Slack’s recent direct-messaging fiasco) only happen because we are herded into a quasi-public sphere where we don’t make the rules and have literally nowhere else to go.

      A strong list of toxic behaviors that are meant to keep people from having a voice in the online commons. We definitely need to design these features out of our social software.

    2. The European Commission has prepared to legislate to require interoperability, and it calls being able to use your data wherever and whenever you like “multi-homing”. (Not many other people like this term, but it describes something important – the ability for people to move easily between platforms

      An interesting neologism to describe something that many want.

    1. In 1962, a book called Silent Spring by Rachel Carson documenting the widespread ecological harms caused by synthetic pesticides went off like a metaphorical bomb in the nascent environmental movement.

      Where is the Silent Spring in the data, privacy, and social media space?

    2. For example, we know one of the ways to make people care about negative externalities is to make them pay for it; that’s why carbon pricing is one of the most efficient ways of reducing emissions. There’s no reason why we couldn’t enact a data tax of some kind. We can also take a cautionary tale from pricing externalities, because you have to have the will to enforce it. Western Canada is littered with tens of thousands of orphan wells that oil production companies said they would clean up and haven’t, and now the Canadian government is chipping in billions of dollars to do it for them. This means we must build in enforcement mechanisms at the same time that we’re designing principles for data governance, otherwise it’s little more than ethics-washing.

      Building in pre-payments or a tax on data leaks to prevent companies neglecting negative externalities could be an important stick in government regulation.

      While it should apply across the board, it should be particularly onerous for for-profit companies.

    3. Amidst the global pandemic, this might sound not dissimilar to public health. When I decide whether to wear a mask in public, that’s partially about how much the mask will protect me from airborne droplets. But it’s also—perhaps more significantly—about protecting everyone else from me. People who refuse to wear a mask because they’re willing to risk getting Covid are often only thinking about their bodies as a thing to defend, whose sanctity depends on the strength of their individual immune system. They’re not thinking about their bodies as a thing that can also attack, that can be the conduit that kills someone else. People who are careless about their own data because they think they’ve done nothing wrong are only thinking of the harms that they might experience, not the harms that they can cause.

      What lessons might we draw from public health and epidemiology to improve our privacy lives in an online world? How might we wear social media "masks" to protect our friends and loved ones from our own posts?

    1. Sanders, J. G., Tosi, A., Obradovic, S., Miligi, I., & Delaney, L. (2021). Lessons from lockdown: Media discourse on the role of behavioural science in the UK COVID-19 response. Frontiers in Psychology, 12. https://doi.org/10.3389/fpsyg.2021.647348