722 Matching Annotations
  1. Last 7 days
    1. Professor, interested in plagues, and politics. Re-locking my twitter acct when is 70% fully vaccinated.

      Example of a professor/researcher who has apparently made his Tweets public, but intends to re-lock them once the majority of the threat is over.

  2. Jun 2021
  3. May 2021
    1. Charlotte Jee recently wrote a lovely fictional intro to a piece on a “feminist Internet” that crystallized something I can’t quite believe I never saw before; if girls, women and non-binary people really got to choose where they spent their time online, we would never choose to be corralled into the hostile, dangerous spaces that endanger us and make us feel so, so bad. It’s obvious when you think about it. The current platforms are perfectly designed for misogyny and drive literally countless women from public life, or dissuade them from entering it. Online abuse, doxing, blue-tick dogpiling, pro-stalking and rape-enabling ‘features’ (like Strava broadcasting runners’ names and routes, or Slack’s recent direct-messaging fiasco) only happen because we are herded into a quasi-public sphere where we don’t make the rules and have literally nowhere else to go.

      A strong list of toxic behaviors that are meant to keep people from having a voice in the online commons. We definitely need to design these features out of our social software.

    2. The European Commission has prepared to legislate to require interoperability, and it calls being able to use your data wherever and whenever you like “multi-homing”. (Not many other people like this term, but it describes something important – the ability for people to move easily between platforms

      An interesting neologism to describe something that many want.

    1. In 1962, a book called Silent Spring by Rachel Carson documenting the widespread ecological harms caused by synthetic pesticides went off like a metaphorical bomb in the nascent environmental movement.

      Where is the Silent Spring in the data, privacy, and social media space?

    2. For example, we know one of the ways to make people care about negative externalities is to make them pay for it; that’s why carbon pricing is one of the most efficient ways of reducing emissions. There’s no reason why we couldn’t enact a data tax of some kind. We can also take a cautionary tale from pricing externalities, because you have to have the will to enforce it. Western Canada is littered with tens of thousands of orphan wells that oil production companies said they would clean up and haven’t, and now the Canadian government is chipping in billions of dollars to do it for them. This means we must build in enforcement mechanisms at the same time that we’re designing principles for data governance, otherwise it’s little more than ethics-washing.

      Building in pre-payments or a tax on data leaks to prevent companies from neglecting negative externalities could be an important stick in government regulation.

      While it should apply across the board, it should be particularly onerous for for-profit companies.

    3. Amidst the global pandemic, this might sound not dissimilar to public health. When I decide whether to wear a mask in public, that’s partially about how much the mask will protect me from airborne droplets. But it’s also—perhaps more significantly—about protecting everyone else from me. People who refuse to wear a mask because they’re willing to risk getting Covid are often only thinking about their bodies as a thing to defend, whose sanctity depends on the strength of their individual immune system. They’re not thinking about their bodies as a thing that can also attack, that can be the conduit that kills someone else. People who are careless about their own data because they think they’ve done nothing wrong are only thinking of the harms that they might experience, not the harms that they can cause.

      What lessons might we draw from public health and epidemiology to improve our privacy lives in an online world? How might we wear social media "masks" to protect our friends and loved ones from our own posts?

    1. “For one of the most heavily guarded individuals in the world, a publicly available Venmo account and friend list is a massive security hole. Even a small friend list is still enough to paint a pretty reliable picture of someone's habits, routines, and social circles,” Gebhart said.

      Massive how? He's such a public figure that most of these connections are already widely reported in the media or easily guessable by a private investigator. The bigger issue is the related transaction data, which might open him up to other abuses or potential leverage, as in the other examples.

    1. Social media platforms work on a sort of flywheel of engagement. View->Engage->Comment->Create->View. Paywalls inhibit that flywheel, and so I think any hope for a return to the glory days of the blogosphere will result in disappointment.

      The analogy of social media as a flywheel of engagement is an apt one, and it also plays into the idea of “flow”: if you're bored with one post, the next might be better.

    1. Yet apart from a few megastar “influencers”, most creators receive no reward beyond the thrill of notching up “likes”.

      But what are these people really making? Besides one or two of the highest paid, what is a fair-to-middling influencer really making?

    1. A strong and cogent argument for why we should not be listening to the overly loud cries from Tristan Harris and the Center for Humane Technology. The boundary of criticism they're setting is not extreme enough to make the situation significantly better.

      It's also a strong argument for deciding who should, and shouldn't, be allowed at the table when making decisions and evaluating criticism.

    2. These companies do not mean well, and we should stop pretending that they do.
    3. But “humane technology” is precisely the sort of pleasant sounding but ultimately meaningless idea that we must be watchful for at all times. To be clear, Harris is hardly the first critic to argue for some alternative type of technology, past critics have argued for: “democratic technics,” “appropriate technology,” “convivial tools,” “liberatory technology,” “holistic technology,” and the list could go on.

      A reasonable summary list of alternatives. Note how dreadful and unmemorable most of these names are. Most noticeable is that, as far as I know, no one ever built actual tools that accomplish any of these theoretical things.

      It also makes it more noticeable that the Center for Humane Technology seems to be arguing against something rather than for something.

    4. Big tech can patiently sit through some zingers about their business model, as long as the person delivering those one-liners comes around to repeating big tech’s latest Sinophobic talking point while repeating the “they meant well” myth.
    5. Thus, these companies have launched a new strategy to reinvigorate their all American status: engage in some heavy-handed techno-nationalism by attacking China. And this Sinophobic, and often flagrantly racist, shift serves to distract from the misdeeds of the tech companies by creating the looming menace of a big foreign other. This is a move that has been made by many of the tech companies, it is one that has been happily parroted by many elected officials, and it is a move which Harris makes as well.

      Perhaps the better move is to frame these companies as behemoths on the scale of foreign countries, but ones which have far more power and should be scrutinized even more heavily than China itself. What if the enemy is already within and its name is Facebook or Google?

    1. Darren Dahly. (2021, February 24). @SciBeh One thought is that we generally don’t ‘press’ strangers or even colleagues in face to face conversations, and when we do, it’s usually perceived as pretty aggressive. Not sure why anyone would expect it to work better on twitter. Https://t.co/r94i22mP9Q [Tweet]. @statsepi. https://twitter.com/statsepi/status/1364482411803906048

    1. Ira, still wearing a mask, Hyman. (2020, November 26). @SciBeh @Quayle @STWorg @jayvanbavel @UlliEcker @philipplenz6 @AnaSKozyreva @johnfocook Some might argue the moral dilemma is between choosing what is seen as good for society (limiting spread of disinformation that harms people) and allowing people freedom of choice to say and see what they want. I’m on the side of making good for society decisions. [Tweet]. @ira_hyman. https://twitter.com/ira_hyman/status/1331992594130235393

    1. build and maintain a sense of professional community. Educator and TikTok user Jeremy Winkle outlines four ways teachers can do this: provide encouragement, share resources, provide quick professional development, and ask a question of the day (Winkler).

      I love all of these ideas. They're all-around edifying!

    1. ReconfigBehSci. (2021, February 17). The global infodemic has driven trust in all news sources to record lows with social media (35%) and owned media (41% the least trusted; traditional media (53%) saw largest drop in trust at 8 points globally. Https://t.co/C86chd3bb4 [Tweet]. @SciBeh. https://twitter.com/SciBeh/status/1362022502743105541

  4. Apr 2021
    1. Others are asking questions about the politics of weblogs – if it’s a democratic medium, they ask, why are there so many inequalities in traffic and linkage?

      This still exists in the social media space, but has gotten even worse with the rise of algorithmic feeds.

    1. I managed to do half the work. But that’s exactly it: It’s work. It’s designed that way. It requires a thankless amount of mental and emotional energy, just like some relationships.

      This is a great example of how services like Facebook can be like the abusive significant other you can never leave.

    2. I realized it was foolish of me to think the internet would ever pause just because I had. The internet is clever, but it’s not always smart. It’s personalized, but not personal. It lures you in with a timeline, then fucks with your concept of time. It doesn’t know or care whether you actually had a miscarriage, got married, moved out, or bought the sneakers. It takes those sneakers and runs with whatever signals you’ve given it, and good luck catching up.
    3. Pinterest doesn’t know when the wedding never happens, or when the baby isn’t born. It doesn’t know you no longer need the nursery. Pinterest doesn’t even know if the vacation you created a collage for has ended. It’s not interested in your temporal experience. This problem was one of the top five complaints of Pinterest users.
    4. So on a blindingly sunny day in October 2019, I met with Omar Seyal, who runs Pinterest’s core product. I said, in a polite way, that Pinterest had become the bane of my online existence. “We call this the miscarriage problem,” Seyal said, almost as soon as I sat down and cracked open my laptop. I may have flinched. Seyal’s role at Pinterest doesn’t encompass ads, but he attempted to explain why the internet kept showing me wedding content. “I view this as a version of the bias-of-the-majority problem. Most people who start wedding planning are buying expensive things, so there are a lot of expensive ad bids coming in for them. And most people who start wedding planning finish it,” he said. Similarly, most Pinterest users who use the app to search for nursery decor end up using the nursery. When you have a negative experience, you’re part of the minority, Seyal said.

      What a gruesome name for an all-too-frequent internet problem: the "miscarriage problem".

    5. To hear technologists describe it, digital memories are all about surfacing those archival smiles. But they’re also designed to increase engagement, the holy grail for ad-based business models.

      It would be far better to have apps focus on better reasons for "on this day" features. I'd love to have something focused on spaced repetition for building up my memory of other things. Reminders at a week, a month, three months, and six months would be useful for some posts.
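
      Below is a minimal sketch of that kind of resurfacing schedule. The `Post` record and its fields are hypothetical, the intervals are the ones named above, and none of this reflects any existing app's API.

      ```python
      # Hypothetical sketch: resurface a post on a spaced-repetition schedule
      # (1 week, 1 month, 3 months, 6 months after it was created).
      from dataclasses import dataclass
      from datetime import date, timedelta

      RESURFACE_INTERVALS = [timedelta(weeks=1), timedelta(days=30),
                             timedelta(days=90), timedelta(days=180)]

      @dataclass
      class Post:
          id: str
          created: date

      def due_for_resurfacing(post: Post, today: date) -> bool:
          """True if `today` falls on one of the post's spaced-repetition dates."""
          return any(post.created + interval == today for interval in RESURFACE_INTERVALS)

      # Example: a post from 2021-01-01 would resurface on 2021-01-08,
      # 2021-01-31, 2021-04-01, and 2021-06-30.
      posts = [Post("breakfast-photo", date(2021, 1, 1))]
      memories_today = [p for p in posts if due_for_resurfacing(p, date(2021, 1, 8))]
      ```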

    6. Our smartphones pulse with memories now. In normal times, we may strain to remember things for practical reasons—where we parked the car—or we may stumble into surprise associations between the present and the past, like when a whiff of something reminds me of Sunday family dinners. Now that our memories are digital, though, they are incessant, haphazard, intrusive.
    7. I still have a photograph of the breakfast I made the morning I ended an eight-year relationship and canceled a wedding. It was an unremarkable breakfast—a fried egg—but it is now digitally fossilized in a floral dish we moved with us when we left New York and headed west. I don’t know why I took the photo, except, well, I do: I had fallen into the reflexive habit of taking photos of everything. Not long ago, the egg popped up as a “memory” in a photo app. The time stamp jolted my actual memory.

      Example of unwanted spaced repetition via social media.

    1. This year’s Slow Art Day — April 10 — comes at a time when museums find themselves in vastly different circumstances.

      Idea: Implement a slow web week for the IndieWeb, perhaps to coincide with the summit at the end of the week.

      People would eschew reading material from social media and only consume websites and personal blogs for a week. The tough part is actually implementing it: many people would have a tough time finding interesting reading material on short notice. What are good discovery endpoints for that? WordPress.com's reader? Perhaps support from the feed reader community?

  5. Mar 2021
    1. There's a reasonably good overview here of some ideas about fixing the harms social media is doing to democracy, and it's well framed by history.

      Much of it appears to be a synopsis from the perspective of one who's only managed to attend Pariser and Stroud's recent Civic Signals/New_Public Festival.

      It could have included some touches of other research in the social space, including work in the Activity Streams and IndieWeb communities, to provide some alternate viewpoints.

    2. Tang has sponsored the use of software called Polis, invented in Seattle. This is a platform that lets people make tweet-like, 140-character statements, and lets others vote on them. There is no “reply” function, and thus no trolling or personal attacks. As statements are made, the system identifies those that generate the most agreement among different groups. Instead of favoring outrageous or shocking views, the Polis algorithm highlights consensus. Polis is often used to produce recommendations for government action.

      An example of social media for proactive government action.
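
      The consensus-highlighting idea in the quote can be illustrated with a toy sketch: rank each statement by its lowest agreement rate across opinion groups, so only statements that every group tends to agree with rise to the top. This is not Polis's actual algorithm, and the data layout below is purely an assumption.

      ```python
      # Toy illustration (not Polis's real algorithm): rank statements by their
      # weakest agreement across groups, so cross-group consensus floats up.
      # votes[statement_id][group_id] = ballots, +1 for agree, -1 for disagree
      votes = {
          "s1": {"group_a": [1, 1, 1, -1], "group_b": [1, 1, -1, 1]},   # broad agreement
          "s2": {"group_a": [1, 1, 1, 1],  "group_b": [-1, -1, -1, 1]}, # divisive
      }

      def agreement_rate(ballots):
          return sum(1 for v in ballots if v > 0) / len(ballots)

      def cross_group_consensus(statement_votes):
          """Score a statement by its lowest agreement rate across all groups."""
          return min(agreement_rate(b) for b in statement_votes.values())

      ranked = sorted(votes, key=lambda s: cross_group_consensus(votes[s]), reverse=True)
      # ranked == ["s1", "s2"]: both groups mostly agree with s1, while s2 divides them.
      ```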

    3. Matias has his own lab, the Citizens and Technology Lab at Cornell, dedicated to making digital technologies that serve the public and not just private companies.

      [[J. Nathan Matias]] Citizens and Technology Lab

      I recall having looked at some of this research and not thinking it was as strong as is indicated here. I also seem to recall he had a connection with Tristan Harris?

    4. What Fukuyama and a team of thinkers at Stanford have proposed instead is a means of introducing competition into the system through “middleware,” software that allows people to choose an algorithm that, say, prioritizes content from news sites with high editorial standards.

      This is the second reference I've seen recently (Jack Dorsey mentioning a version was the first) to there being a marketplace of algorithms.

      Does this help introduce enough noise into the system to confound the drive to the extremes for the average person? What should we suppose from the perspective of probability theory?
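
      To make the middleware idea concrete, here is a hedged sketch of what a pluggable ranking layer could look like. The `Item` fields, the editorial-quality scores, and the registry of rankers are hypothetical illustrations, not anything Fukuyama's group has specified.

      ```python
      # Sketch of "middleware": the platform exposes a common interface, and the
      # user chooses which ranking algorithm orders their feed.
      from dataclasses import dataclass
      from typing import Callable

      @dataclass
      class Item:
          url: str
          source: str
          engagement: int

      # Hypothetical per-outlet editorial-quality scores supplied by a middleware vendor.
      EDITORIAL_SCORES = {"example-wire.com": 0.9, "outrage-blog.example": 0.2}

      def rank_by_engagement(feed: list[Item]) -> list[Item]:
          return sorted(feed, key=lambda i: i.engagement, reverse=True)

      def rank_by_editorial_standards(feed: list[Item]) -> list[Item]:
          return sorted(feed, key=lambda i: EDITORIAL_SCORES.get(i.source, 0.0), reverse=True)

      # The "marketplace": users pick a ranker by name instead of being stuck
      # with the platform's default.
      MIDDLEWARE: dict[str, Callable[[list[Item]], list[Item]]] = {
          "engagement": rank_by_engagement,
          "editorial": rank_by_editorial_standards,
      }

      def build_feed(items: list[Item], user_choice: str) -> list[Item]:
          return MIDDLEWARE[user_choice](items)
      ```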

    5. One person writing a tweet would still qualify for free-speech protections—but a million bot accounts pretending to be real people and distorting debate in the public square would not.

      Do bots have, or deserve, a right not only to free speech but also to free reach?

    6. The scholars Nick Couldry and Ulises Mejias have called it “data colonialism,” a term that reflects our inability to stop our data from being unwittingly extracted.

      I've not run across data colonialism before.

    1. What I’d like more of is a social web that sits between these two extremes, something with a small town feel. So you can see people are around, and you can give directions and a friendly nod, but there’s no need to stop and chat, and it’s not in your face. It’s what I’ve talked about before as social peripheral vision (that post is about why it should be build into the OS).

      I love the idea of social peripheral vision online.

    2. A status emoji will appear in the top right corner of your browser. If it’s smiling, there are other people on the site right now too.

      This is pretty cool looking. I'll have to add it as an example to my list: Social Reading User Interface for Discovery.

      We definitely need more things like this on the web.

      It makes me wish the Reading.am indicator were there without needing to click on it.

      I wonder how this sort of activity might be built into social readers as well?
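
      For what it's worth, a presence indicator like the one described above could be approximated with something as small as the following sketch: clients send periodic heartbeats, and anyone seen recently counts as "here now". The class and names are hypothetical, not the site's actual implementation.

      ```python
      # Hypothetical presence tracker: each visitor's client pings a heartbeat
      # every ~30 seconds; anyone whose last ping is recent counts as present.
      import time

      class PresenceTracker:
          def __init__(self, timeout_seconds: float = 60.0):
              self.timeout = timeout_seconds
              self.last_seen: dict[str, float] = {}  # visitor_id -> unix timestamp

          def heartbeat(self, visitor_id: str) -> None:
              """Record that a visitor is currently on the page."""
              self.last_seen[visitor_id] = time.time()

          def visitors_now(self) -> int:
              cutoff = time.time() - self.timeout
              return sum(1 for ts in self.last_seen.values() if ts >= cutoff)

      # The browser widget would then show a smiling emoji whenever more than
      # one visitor is present.
      tracker = PresenceTracker()
      tracker.heartbeat("visitor-a")
      tracker.heartbeat("visitor-b")
      emoji = "🙂" if tracker.visitors_now() > 1 else "😐"
      ```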

    3. If somebody else selects some text, it’ll be highlighted for you.

      Suddenly social annotation has taken an interesting twist. @Hypothes_is better watch out! ;)

    1. ReconfigBehSci. (2020, December 5). As everyone’s focus turns to vaccine hesitancy, we will need to take a close look not just at social media but at Amazon- the “top” recommendations I get when typing in ‘vaccine’ are all anti-vaxx https://t.co/ug5QAcKT9Q [Tweet]. @SciBeh. https://twitter.com/SciBeh/status/1335181088818388992

    1. So Substack has an editorial policy, but no accountability. And they have terms of service, but no enforcement.

      This is also the case for many other toxic online social media platforms. A fantastic framing.

    1. Q: So, this means you don’t value hearing from readers? A: Not at all. We engage with readers every day, and we are constantly looking for ways to hear and share the diversity of voices across New Jersey. We have built strong communities on social platforms, and readers inform our journalism daily through letters to the editor. We encourage readers to reach out to us, and our contact information is available on this How To Reach Us page.

      We have built strong communities on social platforms

      They have? Really?! I think it's more likely that the social platforms have built strong communities which happen to be talking about and sharing the paper's content. The paper doesn't have any content moderation or control capabilities on any of these platforms.

      Now it may be the case that there is a broader diversity of voices on those platforms than in the paper's own comments sections. This means that a small proportion of potential trolls won't drown out the signal with noise, as may happen in the comments sections on the paper's own site.

      If the paper is really listening on the other platforms, how are they doing it? Isn't reading some or all of it a large portion of content moderation? How do they get notifications of people mentioning them (is it only direct @mentions)?

      Couldn't/wouldn't an IndieWeb version of this help them or work better?

    1. A question on CSS or accessibility or even content management is a rare thing indeed. This isn't a community centred on helping people build their own websites, as I had first imagined[5]. Instead, it's a community attempting to shift the power in online socialising away from Big Tech and back towards people[6].

      There is more of the latter than the former, to be certain, but I don't think it's by design.

      Many of the people there are already experts in some of these sub-areas, so there aren't as many questions on those fronts. Often there are other resources that are better suited to these issues, and links to them can be found within the wiki.

      The social portions are far more difficult, so this is where folks are a bit more focused.

      I think when the community grows, we'll see more of these questions about CSS, HTML, and accessibility. (In fact, I wish more people were concerned about accessibility and why it is important.)