470 Matching Annotations
  1. Jan 2023
  2. Dec 2022
    1. Rule 3: NO SELF PROMOTION, RECRUITING, OR DM SPAMMING. Members love our group because it's SAFE. We are very strict on banning members who blatantly self-promote their product or services in the group OR secretly private message members to recruit them.
    2. Rule 2: NO POSTS FROM FAN PAGES / ARTICLES / VIDEO LINKS. Our mission is to cultivate the highest quality content inside the group. If we allowed videos, fan page shares, & outside websites, our group would turn into a spam fest. Original written content only.
    3. Rule 1: NO POSTING LINKS INSIDE OF POSTS - FOR ANY REASON. We've seen way too many groups become a glorified classified ad & members don't like that. We don't want the quality of our group negatively impacted because of endless links everywhere. NO LINKS.
    1. Using actual fake-news headlines presented as they were seen on Facebook, we show that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week. Moreover, this “illusory truth effect” for fake-news headlines occurs despite a low level of overall believability and even when the stories are labeled as contested by fact checkers or are inconsistent with the reader’s political ideology. These results suggest that social media platforms help to incubate belief in blatantly false news stories and that tagging such stories as disputed is not an effective solution to this problem.
    1. On Facebook, we identified 51,269 posts (0.25% of all posts) sharing links to Russian propaganda outlets, generating 5,065,983 interactions (0.17% of all interactions); 80,066 posts (0.4% of all posts) sharing links to low-credibility news websites, generating 28,334,900 interactions (0.95% of all interactions); and 147,841 posts sharing links to high-credibility news websites (0.73% of all posts), generating 63,837,701 interactions (2.13% of all interactions). As shown in Figure 2, we notice that the number of posts sharing Russian propaganda and low-credibility news exhibits an increasing trend (Mann-Kendall 𝑃 < .001), whereas after the invasion of Ukraine both time series yield a significant decreasing trend (more prominent in the case of Russian propaganda); high-credibility content also exhibits an increasing trend in the Pre-invasion period (Mann-Kendall 𝑃 < .001), which becomes stable (no trend) in the period afterward.
    2. We estimated the contribution of verified accounts to sharing and amplifying links to Russian propaganda and low-credibility sources, noticing that they have a disproportionate role. In particular, superspreaders of Russian propaganda are mostly accounts verified by both Facebook and Twitter, likely due to Russian state-run outlets having associated accounts with verified status. In the case of generic low-credibility sources, a similar result applies to Facebook but not to Twitter, where we also notice a few superspreader accounts that are not verified by the platform.
    1. I often think back to MySpace’s downfall. In 2007, I penned a controversial blog post noting a division that was forming as teenagers self-segregated based on race and class in the US, splitting themselves between Facebook and MySpace. A few years later, I noted the role of the news media in this division, highlighting how media coverage about MySpace as scary, dangerous, and full of pedophiles (regardless of empirical evidence) helped make this division possible. The news media played a role in delegitimizing MySpace (aided and abetted by a team at Facebook, which was directly benefiting from this delegitimization work).

      danah boyd argued in two separate pieces that teenagers self-segregated between MySpace and Facebook based on race and class and that the news media coverage of social media created fear, uncertainty, and doubt which fueled the split.


  3. Nov 2022
    1. Some of the sensitive data collection analyzed by The Markup appears linked to default behaviors of the Meta Pixel, while some appears to arise from customizations made by the tax filing services, someone acting on their behalf, or other software installed on the site. For example, Meta Pixel collected health savings account and college expense information from H&R Block’s site because the information appeared in webpage titles and the standard configuration of the Meta Pixel automatically collects the title of a page the user is viewing, along with the web address of the page and other data. It was able to collect income information from Ramsey Solutions because the information appeared in a summary that expanded when clicked. The summary was detected by the pixel as a button, and in its default configuration the pixel collects text from inside a clicked button. The pixels embedded by TaxSlayer and TaxAct used a feature called “automatic advanced matching.” That feature scans forms looking for fields it thinks contain personally identifiable information like a phone number, first name, last name, or email address, then sends detected information to Meta. On TaxSlayer’s site this feature collected phone numbers and the names of filers and their dependents. On TaxAct it collected the names of dependents.

      Meta Pixel default behavior is to parse and send sensitive data

      Wait, wait, wait... the software has a feature that scans for personally identifiable information and sends the detected info to Meta? And in other cases, the users of the Meta Pixel decided to send private information to Meta?
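      The two collection behaviors described above can be sketched in a few lines. This is purely an illustrative model, not Meta's actual Pixel code or API: the function names, the PII regex, and the field names are all hypothetical, showing only how harvesting a page title by default and pattern-matching form fields for "advanced matching" might work.

```python
import re

# Hypothetical sketch of the two behaviors described: (1) default
# collection of page title and URL, (2) "automatic advanced matching"
# that scans form field names for PII-like patterns. All names and the
# regex are illustrative assumptions, not Meta's implementation.
PII_FIELD_PATTERN = re.compile(r"(email|phone|first_?name|last_?name)", re.IGNORECASE)

def collect_defaults(page_title: str, page_url: str) -> dict:
    """Default behavior: the page title and address are always sent."""
    return {"title": page_title, "url": page_url}

def advanced_matching(form_fields: dict) -> dict:
    """Scan field names for PII-like patterns; keep only the matches."""
    return {name: value for name, value in form_fields.items()
            if PII_FIELD_PATTERN.search(name)}

payload = collect_defaults("H&R Block - Health Savings Account",
                           "https://example.com/hsa")
payload.update(advanced_matching(
    {"first_name": "Ada", "filing_notes": "n/a", "phone": "555-0100"}))
# The title alone leaks the HSA context; first_name and phone are
# picked up by the field scan, while filing_notes is ignored.
```

      The point of the sketch is that even the "default" path leaks context whenever a site puts sensitive terms in its page titles, which is exactly what The Markup found on H&R Block's site.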

  4. Oct 2022
    1. Trolls, in this context, are humans who hold accounts on social media platforms, more or less for one purpose: To generate comments that argue with people, insult and name-call other users and public figures, try to undermine the credibility of ideas they don’t like, and to intimidate individuals who post those ideas. And they support and advocate for fake news stories that they’re ideologically aligned with. They’re often pretty nasty in their comments. And that gets other, normal users, to be nasty, too.

      Not only are programmed accounts created, but also troll accounts that propagate disinformation and spread fake news with the intent to wreak havoc among people. In short, once they start with a malicious comment, some people will engage with it, which leads to more rage comments and disagreements. That is what they do: they trigger people to engage with their comments so the comments spread further and produce more fake news. These troll accounts are usually prominent during elections; in the Philippines, for example, some speculate that certain candidates have built troll farms just to spread fake news all over social media, which some people then engage with.

    2. So, bots are computer algorithms (set of logic steps to complete a specific task) that work in online social network sites to execute tasks autonomously and repetitively. They simulate the behavior of human beings in a social network, interacting with other users, and sharing information and messages [1]–[3]. Because of the algorithms behind bots’ logic, bots can learn from reaction patterns how to respond to certain situations. That is, they possess artificial intelligence (AI). 

      In all honesty, since I don't usually dwell on technology and coding, I thought that when you say "bot" it is controlled by another user, a legitimate person; I never knew that it was programmed and created to learn the usual posting patterns of people, be it on Twitter, Facebook, or other social media platforms. I think it is important to properly understand how bots work to avoid misinformation and disinformation, most importantly in this time of prominent social media use.

  5. Sep 2022
    1. Facebook users claim to hate the service, but they keep using it, leading many to describe Facebook as "addictive." But there's a simpler explanation: people keep using Facebook though they hate it because they don't want to lose their connections to the people they love.

      Facebook isn't addictive; people don't want to face the switching cost

  6. Aug 2022
    1. The landscape of social media is ever-changing, especially among teens who often are on the leading edge of this space. A new Pew Research Center survey of American teenagers ages 13 to 17 finds TikTok has rocketed in popularity since its North American debut several years ago and now is a top social media platform for teens among the platforms covered in this survey. Some 67% of teens say they ever use TikTok, with 16% of all teens saying they use it almost constantly. Meanwhile, the share of teens who say they use Facebook, a dominant social media platform among teens in the Center’s 2014-15 survey, has plummeted from 71% then to 32% today.

      Instagram up, Facebook down, TikTok and Snapchat are big

      This echoes Meta’s concerns that Facebook was losing ground in this age demographic, and likely also the reasoning to make Instagram more TikTok-like. This may also dovetail with the recently announced change to the Facebook algorithm to be even more sticky and TikTok-like.

    1. "Facebook is fundamentally an advertising machine"—it hasn't been about bringing people closer together in a long time (if that was ever its real mission). And as a better advertising machine comes along—TikTok—Facebook is forced to redesign its user interaction to be more addictive just to stand still. Will a more human-scale social network...or series of social networks...replace it?

  7. Jul 2022
    1. Recommendation media is the new standard for content distribution. Here’s why friend graphs can‘t compete in an algorithmic world.
    1. Mark last week as the end of the social networking era, which began with the rise of Friendster in 2003, shaped two decades of internet growth, and now closes with Facebook's rollout of a sweeping TikTok-like redesign.
    1. Data Policy and related material sometimes, on the contrary, demonstrate an oversupply of very high level, generalised information at the expense of a more concise and meaningful delivery of the essential information necessary for the data subject to understand the processing being undertaken and to exercise his/her rights in a meaningful way. Furthermore, while Facebook has chosen to provide its transparency information by way of pieces of text, there are other options available, such as the possible incorporation of tables, which might enable Facebook to provide the information required in a clear and concise manner, particularly in the case of an information requirement comprising a number of linked elements. The importance of concision cannot be overstated nonetheless. Facebook is entitled to provide additional information to its user above and beyond that required by Article 13 and can provide whatever additional information it wishes. However, it must first comply with more specific obligations under the GDPR, and then secondly ensure that the additional information does not have the effect of creating information fatigue or otherwise diluting the effective delivery of the statutorily required information. That is simply what the GDPR requires.

      DPC again schools Facebook in reality.

    2. In expressing disagreement with the proposed literal interpretation of Article 13(1)(c) GDPR set out in the Preliminary Draft Decision, Facebook submitted that “Facebook Ireland’s interpretation directly tracks the actual wording of the relevant GDPR provision which stipulates only that two items of information be provided about the processing (i.e. purposes and legal bases). It says nothing about processing operations.”102 Facebook submitted that because, in its view, Article 13(1) GDPR applies “at the time data is collected”, and therefore refers only to “prospective processing”. It submits that, on this basis, Article 13(1)(c) GDPR does not relate to ongoing processing operations, but is concerned solely with information on “intended processing”.103 Facebook’s position is therefore that Article 13(1)(c) GDPR is future-gazing or prospective only in its application and that such an interpretation is supported by a literal reading of the GDPR

      This is both a ballsy, and utterly stupid argument. The kind of argument that well-paid lawyers will make in order to keep getting paid.

    3. For these reasons, I conclude that, as a matter of fact, Facebook did not rely, or purport to rely, on the Complainant’s consent as a legal basis for the processing of personal data under the Terms of Service

      First conclusion: No consent. It's 6(1)(b) time.

    4. In light of this confirmation by the data controller that it does not seek to rely on consent in this context, there can be no dispute that, as a matter of fact, Facebook is not relying on consent as the lawful basis for the processing complained of. It has nonetheless been argued on the Complainant’s behalf that Facebook must rely on consent, and that Facebook led the Complainant to believe that it was relying on consent

      Here Helen bitchslaps Max by noting that despite what they hope and wish for, FB is relying on contract, and not consent.

    5. On this basis, the issues that I will address in this Draft Decision are as follows: Issue 1 – Whether clicking on the “accept” button constitutes or must be considered consent for the purposes of the GDPR; Issue 2 – Reliance on Article 6(1)(b) as a lawful basis for personal data processing; Issue 3 – Whether Facebook provided the requisite information on the legal basis for processing on foot of Article 6(1)(b) GDPR and whether it did so in a transparent manner.

      Key issues identified in the draft opinion. Compare later if this differs in final.

    1. This article leans negative, but it is fair: once a product has been around for a decade, it faces too many internal and external challenges; old users are losing interest while competitors steal the attention of new ones. Facebook's product has thus become an awkward case study in how to add new features to an aging product. Armstrong writes:

      If you open your Facebook app, I think you'd be shocked by the number of products now housed there... Yet each product solves a different job, so the value proposition of downloading Facebook becomes increasingly blurry.

      Another accurate observation concerns Messenger. This app, split off from Facebook's private-messaging feature, is now being folded back into the main app; there is even an ambitious effort to merge the instant-messaging parts of Facebook, Instagram, and WhatsApp, three products that otherwise have little in common, into one.


      Armstrong sums up this history as follows: over the past few years, we have learned a few things about so-called social networks. First, we don't actually care what most people think. The social part isn't as appealing as first imagined, because much of people's daily lives is boring. We simply don't care, not at all, about the political views of someone from high school. Content from acquaintances isn't that interesting. We have also figured out that posting most of our social existence online carries negative opportunity costs. All of us, every single one of us, has used profanity in public, told a racist joke, used a slur, or said something plain stupid. Since 2007 it has become increasingly clear that the benefits of sharing your life online are not worth the risk of saying something that feels fine now but could destroy your life ten years later. In short, the social graph is just a poor substitute for the interest graph. What people seek from their so-called "social" media is simply media they are interested in.

  8. Jun 2022
    1. What's become clear is that our relationships are experiencing a profound reset. Across generations, having faced a stark new reality, a decades-long trend reversed as people are now shifting their energy away from maintaining a wide array of casual connections to cultivating a smaller circle of the people who matter most.

      ‘how the demand for deeper human connection has sparked a profound reset in our relationships’.

      The Meta Foresight (formerly Facebook IQ) team conducted a survey of 36,000 adults across 12 markets.

      Among their key findings:

      72% of respondents said that the pandemic caused them to reprioritize their closest friends
      Young people are most open to using more immersive tech to foster connections (including augmented and virtual reality), though all users indicated that tech will play a bigger role in enhancing personal connections moving forward
      37% of people surveyed globally reported reassessing their life priorities as a result of the pandemic
    1. And when corporations start to dominate the Internet, they become de facto governments. Slowly but surely, the tech companies began to act like old power. They use the magic of tech to consolidate their own power, using money to increase their influence, blocking the redistribution of power from the entrenched elites to ordinary people.

      "Money is its own kind of power"

      The corporations built by white, male, American, and vaguely libertarian people became a focal point of power because of the money they had to influence governments and society. They started looking like "old power."


      Facebook took advantage of tech's tradition of openness [importing content from MySpace], but as soon as it got what it wanted, it closed its platform off.

  9. May 2022
    1. We believe that Facebook is also actively encouraging people to use tools like Buffer Publish for their business or organization, rather than personal use. They are continuing to support the use of Facebook Pages, rather than personal Profiles, for things like scheduling and analytics.

      Of course they're encouraging people to do this. Pushing them to the business side is where they're making all the money.

    1. Facebook provides some data portability, but makes an odd plea for regulation to make more functionality possible.

      Why do this when they could choose to do the right thing? They don't need to be forced and could certainly try to enforce security. It wouldn't be any worse than unveiling the tons of personal data they've managed not to protect in the past.

    1. ...devises a train system that brings the victims to Auschwitz as quickly and smoothly as possible, and in doing so forgets what happens to them in Auschwitz.

      Not comparable, but also an example of technology being misused because it is mistaken for an end in itself: Facebook and human trafficking; see the Jan Böhmermann episode on the Facebook leaks.



    1. Content moderation takes place within this ecosystem.

      The essay makes the point that "Facebook has many faces - it is not a monolith". But algorithmic content moderation is monolithic. Let's see whether this tension is investigated.

  10. Apr 2022
    1. identical to the content of information which was previously declared to be unlawful, or to block access to that information, irrespective of who requested the storage of that information;

      This establishes that identical or equivalent content, once struck down, can be made to stay down by automatic tools.

    1. Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’), in particular Article 15(1), must be interpreted as meaning that it does not preclude a court of a Member State from:
      – ordering a host provider to remove information which it stores, the content of which is identical to the content of information which was previously declared to be unlawful, or to block access to that information, irrespective of who requested the storage of that information;
      – ordering a host provider to remove information which it stores, the content of which is equivalent to the content of information which was previously declared to be unlawful, or to block access to that information, provided that the monitoring of and search for the information concerned by such an injunction are limited to information conveying a message the content of which remains essentially unchanged compared with the content which gave rise to the finding of illegality and containing the elements specified in the injunction, and provided that the differences in the wording of that equivalent content, compared with the wording characterising the information which was previously declared to be illegal, are not such as to require the host provider to carry out an independent assessment of that content, and
      – ordering a host provider to remove information covered by the injunction or to block access to that information worldwide within the framework of the relevant international law.

      C‑18/18 Eva Glawischnig-Piesczek v Facebook

      Key for the Art 17 AG opinion: this line of argument justifies the ex ante blocking of manifestly infringing content.

  11. Mar 2022
    1. Ben Collins. (2022, February 28). Quick thread: I want you all to meet Vladimir Bondarenko. He’s a blogger from Kiev who really hates the Ukrainian government. He also doesn’t exist, according to Facebook. He’s an invention of a Russian troll farm targeting Ukraine. His face was made by AI. https://t.co/uWslj1Xnx3 [Tweet]. @oneunderscore__. https://twitter.com/oneunderscore__/status/1498349668522201099

    1. First is that it actually lowers paid acquisition costs. It lowers them because the Facebook Ads algorithm rewards engaging advertisements with lower CPMs and lots of distribution. Facebook does this because engaging advertisements are just like engaging posts: they keep people on Facebook. 

      Engaging advertisements on Facebook benefit from lower acquisition costs because the Facebook algorithm rewards more interesting advertisements with lower CPMs and wider distribution. This is done, as all things surveillance capitalism driven, to keep eyeballs on Facebook.

      This isn't too dissimilar to large cable networks that provide free high quality advertising to mass manufacturers in late night slots. The network generally can't sell all of their advertising inventory, particularly in low viewing hours, so they'll offer free or incredibly cheap commercial rates to their bigger buyers (like Coca-Cola or McDonald's, for example) to fill space and have more professional looking advertisements between the low quality advertisements from local mom and pop stores and the "as seen on TV" spots. These higher quality commercials help keep the audience engaged and prevent viewers from changing the channel.

  12. Jan 2022
  13. Dec 2021
    1. She thinks the companies themselves are behind this, trying to manipulate their users into having certain opinions and points of view.

      The irony is that this is, itself, somewhat a conspiracy theory.

      Though, I think a nuanced understanding may be closer:

      • The real purpose is not to influence people to believe anything. It's money. It's ad spend and data collection to sell. We need to demonstrate to advertisers that their ads are actually getting seen. The more they get seen, the more money we make. And the more time spent on the service, the more data we have to sell... which is as valuable as the ad spend.
      • Companies jigger algorithms to maximize time spent on the service.
      • As the Bible is clear, the heart of man is wicked, and the kinds of things that maximize time spent are themselves attitudes of evil, malice, wickedness, and hatred, and the list of things Paul repeatedly tells us to avoid. Go figure.
      • So, people feel the platforms are basically like smoking, and yet, they can't stop.
    2. Only 10 percent say Facebook has a positive impact on society, while 56 percent say it has a negative impact and 33 percent say its impact is neither positive nor negative. Even among those who use Facebook daily, more than three times as many say the social network has a negative rather than a positive impact.

      Here's the rub. Only 1 out of 10 Americans surveyed think Facebook is a good idea.

      Over half of Americans surveyed actually think Facebook is bad for them and society as a whole. And yet, the general sense is now that life is impossible without it.

      How does the church respond to this? Do we tell people to get off or "use in moderation?"

  14. Nov 2021
    1. Source: De Agostini Picture Library / Getty

      This is a searing image for what this article is about:

      Muted, dull painting of what appears to be a 17th-century gallows being erected in front of a line of soldiers with guns and bayonets and a crowd with shovels. Instead of a gallows, the structure being erected is a large Facebook thumbs-up image on a pole. Various flags with the Facebook logo fly around the scene.

      Could be entitled "A different kind of social justice."

    1. The October cover of TIME magazine is a photo of Zuckerberg overlaid with «Delete "Facebook"?». The alert uses an iOS-style dialog box but is paired with a mouse cursor as the selection method. It is entirely understandable why it was done this way, but it still sparked some discussion, the most interesting of which was a group of former Apple employees recounting their old design process at Apple. Majd Taby, the designer previously responsible for redesigning me.com, said his job at the time was to port the iPad's styles wholesale to the me.com web app, the end result being iPad-style alerts selected with a mouse, exactly like the effect shown on this cover. Sebastiaan de With, previously responsible for the iCloud web apps, said that inside Apple there was no sharing of UIKit or design source files at all; every team doing something similar had to start from scratch. Bear in mind this was the era of skeuomorphic design, when drawing a full set of UI took an enormous amount of time, and all of this was in the name of "secrecy". The first thing Martin Pedrick was told on joining the user interface group was not to share design source files through any resource-sharing tool.

    1. via David Dylan Thomas in Come and get yer social justice metaphors! (11/05/2021 11:26:10)

  15. Oct 2021
    1. https://www.theatlantic.com/ideas/archive/2021/10/facebook-papers-democracy-election-zuckerberg/620478/

      Adrienne LaFrance outlines the reasons we need to either abandon Facebook or cause some more extreme regulation of it and how it operates.

      While she outlines the ills, she doesn't make a specific plea for a solution to the problem. There's definitely a raging fire in the theater, but no one seems to know what to do about it. We're just sitting here watching the structure burn down around us. We need clearer plans for what must be done to solve this problem.

    2. When the most powerful company in the world possesses an instrument for manipulating billions of people—an instrument that only it can control, and that its own employees say is badly broken and dangerous—we should take notice.
    3. Facebook could say that its platform is not for everyone. It could sound an alarm for those who wander into the most dangerous corners of Facebook, and those who encounter disproportionately high levels of harmful content. It could hold its employees accountable for preventing users from finding these too-harmful versions of the platform, thereby preventing those versions from existing.

      The "moral majority" has screamed for years about the dark corners of the internet, and now they seem to be actively supporting a company that actively pushes people to those very extremes.

    4. Facebook could shift the burden of proof toward people and communities to demonstrate that they’re good actors—and treat reach as a privilege, not a right.

      Nice to see someone else essentially saying something along the lines that "free speech" is not the same as "free reach".

      Traditional journalism has always had thousands of gatekeepers who filtered and weighed who got the privilege of reach. Now anyone with an angry, vile, or upsetting message can get it for free. This is one of the worst parts of what Facebook allows.

    5. “While we have other systems that demote content that might violate our specific policies, like hate speech or nudity, this intervention reduces all content with equal strength. Because it is so blunt, and reduces positive and completely benign speech alongside potentially inflammatory or violent rhetoric, we use it sparingly.”)

      If it's neither moral nor legal for one to shout "fire" in a crowded theater, why is it somehow both legal and moral for a service like Facebook to allow their service to scream "fire, fire, fire" within a crowded society?

    6. Facebook wants people to believe that the public must choose between Facebook as it is, on the one hand, and free speech, on the other. This is a false choice.
    7. One example is a program that amounts to a whitelist for VIPs on Facebook, allowing some of the users most likely to spread misinformation to break Facebook’s rules without facing consequences.
    8. “I am worried that Mark’s continuing pattern of answering a different question than the question that was asked is a symptom of some larger problem,” wrote one Facebook employee in an internal post in June 2020, referring to Zuckerberg. “I sincerely hope that I am wrong, and I’m still hopeful for progress. But I also fully understand my colleagues who have given up on this company, and I can’t blame them for leaving. Facebook is not neutral, and working here isn’t either.”

      Glad to see that others are recognizing that Mark Zuckerberg seems to be the one with the flaws that are killing Facebook.

    9. An internal message characterizing Zuckerberg’s reasoning says he wanted to avoid new features that would get in the way of “meaningful social interactions.” But according to Facebook’s definition, its employees say, engagement is considered “meaningful” even when it entails bullying, hate speech, and reshares of harmful content.

      Meaningful social interactions don't need algorithmic help.

    10. At the time, Facebook was already weighting the reactions other than “like” more heavily in its algorithm—meaning posts that got an “angry” reaction were more likely to show up in users’ News Feeds than posts that simply got a “like.” Anger-inducing content didn’t spread just because people were more likely to share things that made them angry; the algorithm gave anger-inducing content an edge. Facebook’s Integrity workers—employees tasked with tackling problems such as misinformation and espionage on the platform—concluded that they had good reason to believe targeting posts that induced anger would help stop the spread of harmful content.
    11. Facebook offers a collection of one-tap emoji reactions. Today, they include “like,” “love,” “care,” “haha,” “wow,” “sad,” and “angry.” Company researchers had found that the posts dominated by “angry” reactions were substantially more likely to go against community standards, including prohibitions on various types of misinformation, according to internal documents.

      "Angry" reactions can be a measure of posts being against community standards and providing misinformation.

      What other signals might misinformation carry that could be used to guard against them at a corporate level?
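
      The reaction weighting described above can be illustrated with a toy scoring function. The 5:1 weights below are hypothetical assumptions chosen only to make the effect visible; this is a sketch of the general technique of reaction-weighted ranking, not Facebook's actual code or real weights.

```python
# Toy sketch of reaction-weighted feed ranking. The weights are
# hypothetical assumptions, not Facebook's real values.
REACTION_WEIGHTS = {"like": 1, "love": 5, "care": 5, "haha": 5,
                    "wow": 5, "sad": 5, "angry": 5}

def engagement_score(reactions: dict) -> int:
    """Weighted sum of reaction counts; unknown kinds default to 1."""
    return sum(REACTION_WEIGHTS.get(kind, 1) * n
               for kind, n in reactions.items())

calm_post = {"like": 100}               # 100 reactions -> score 100
angry_post = {"like": 20, "angry": 20}  # 40 reactions -> score 120
# With these weights, the post with less than half the total reactions
# outranks the calm one purely because its reactions are "angry".
assert engagement_score(angry_post) > engagement_score(calm_post)
```

      This is why, as the quote notes, anger-inducing content didn't spread only because people shared it more: any ranker with weights shaped like this structurally gives it an edge.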

    12. that many of Facebook’s employees believe their company operates without a moral compass.

      Not just Facebook, but specifically Mark Zuckerberg who appears to be on the spectrum and isn't capable of being moral in a traditional sense.

    13. Facebook has dismissed the concerns of its employees in manifold ways. One of its cleverer tactics is to argue that staffers who have raised the alarm about the damage done by their employer are simply enjoying Facebook’s “very open culture,” in which people are encouraged to share their opinions, a spokesperson told me.
      1. Share opinions
      2. Opinions viewed as "fact"
      3. "Facts" spread as news.
      4. Platform accelerates "news".
      5. Bad things happen
      6. Profit
    1. Not only is Zuckerberg being called out for negligence, but it’s obvious that his ridiculous proposal “Instagram for Kids”, a social platform targeting children under the age of 13, is projected to only exacerbate the problem.
    1. Facebook's vision of the future is built on AR (augmented reality), and clearly the interaction methods of existing AR devices are not yet good enough to support everyday use. To realize this vision, their team has the following ideas for entirely new interaction methods:

      • Ultra-low-friction input: no resistance between thought and action, for example using electrical signals from the wrist
      • Personalized AI that understands context
      • A device that can be worn all day
  16. Sep 2021
    1. Ben Collins on Twitter: “A quick thread: It’s hard to explain just how radicalized ivermectin and antivax Facebook groups have become in the last few weeks. They’re now telling people who get COVID to avoid the ICU and treat themselves, often by nebulizing hydrogen peroxide. So, how did we get here?” / Twitter. (n.d.). Retrieved September 26, 2021, from https://twitter.com/oneunderscore__/status/1441395300002848769?s=20

    1. We may think of Pinterest as a visual form of commonplacing, as people choose and curate images (and very often inspirational quotations) that they find motivating, educational, or idealistic (Figure 6). Whenever we choose a passage to cite while sharing an article on Facebook or Twitter, we are creating a very public commonplace book on social media. Every time we post favorite lyrics from a song or movie to social media or a blog, we are nearing the concept of Renaissance commonplace book culture.

      I'm not the only one who's thought this. Pinterest, Facebook, Twitter (and other social media and bookmarking software) can be considered forms of commonplace books.

  17. Aug 2021
  18. Jul 2021
    1. In the past, Facebook was a company led by the dual core of Mark Zuckerberg and Sheryl Sandberg, but since Trump took office this dual core has changed considerably; employees generally believe the company's power structure has shifted from that dual core to a single core plus everyone else. The reason: Sheryl Sandberg failed to handle Facebook's relationship with Washington during Trump's term.

      This excerpt comes from the then-unpublished book An Ugly Truth: Inside Facebook's Battle for Domination.

    1. Facebook AI. (2021, July 16). We’ve built and open-sourced BlenderBot 2.0, the first #chatbot that can store and access long-term memory, search the internet for timely information, and converse intelligently on nearly any topic. It’s a significant advancement in conversational AI. https://t.co/H17Dk6m1Vx https://t.co/0BC5oQMEck [Tweet]. @facebookai. https://twitter.com/facebookai/status/1416029884179271684