52 Matching Annotations
  1. Mar 2024
    1. We need a better catch-all term for the ills perpetrated on humanity and society by technology companies' extractive practices and general blindness to their own effects while they become rich. It should have a terrifically pejorative tone.

      Something which subsumes the crazy bound up in some of the following:

      - social media machine guns
      - toxic technology
      - mass produced toxicity
      - attention economy
      - bad technology
      - surveillance capitalism
      - technology and the military
      - weapons of math destruction

      It should be the polar opposite of:

      - techno-utopianism

  2. Feb 2024
    1. Joy, Bill. “Why the Future Doesn’t Need Us.” Wired, April 1, 2000. https://www.wired.com/2000/04/joy-2/.

      Annotation url: urn:x-pdf:753822a812c861180bef23232a806ec0

      Annotations: https://jonudell.info/h/facet/?user=chrisaldrich&url=urn%3Ax-pdf%3A753822a812c861180bef23232a806ec0&max=100&exactTagSearch=true&expanded=true

    2. the prevention of knowledge-enabled mass destruction
    3. The GNR technologies do not divide clearly into commercial and military uses; given their potential in the market, it’s hard to imagine pursuing them only in national laboratories. With their widespread commercial pursuit, enforcing relinquishment will require a verification regime similar to that for biological weapons, but on an unprecedented scale. This, inevitably, will raise tensions between our individual privacy and desire for proprietary information, and the need for verification to protect us all. We will undoubtedly encounter strong resistance to this loss of privacy and freedom of action.

      While Joy looks at the Biological and Chemical Weapons Conventions as well as nuclear nonproliferation ideas, the entirety of what he's looking at is also embedded in the idea of gun control in the United States as well. We could choose better, but we actively choose against our better interests.

      What role does toxic capitalism have in pushing us towards these antithetical goals? The gun industry and gun lobby have had tremendous interest on that front. Surely ChatGPT and other LLM and AI tools will begin pushing on the profitmaking levers shortly.

    4. We have embodied our relinquishment of biological and chemical weapons in the 1972 Biological Weapons Convention (BWC) and the 1993 Chemical Weapons Convention (CWC).
  3. Jan 2024
    1. Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.

      Is this the coinage of the phrase "knowledge-enabled mass destruction"?

  4. Dec 2023
    1. It's easy to get lost in complexity here, but I prefer to keep it simple: our *only* problem is overpopulation, which is caused by pacifism = civilization. *All* other problems are only symptoms of overpopulation. These "financial weapons of mass destruction" (Warren Buffett) have the only purpose of mass murder = to kill the 95% useless eaters. So yes, this is a "controlled demolition" aka "global suicide cult". Most of us will die, but we are happy...

      financial weapons of mass destruction: the useful idiots believe that they can defeat risk (or generally, defeat death) by centralization on a global scale. they want to build a system that is "too big to fail" and which will "live forever". they use all kinds of tricks to make their slaves "feel safe" and "feel happy", while subconsciously, everything is going to hell in the long run. so this is just another version of "stupid and evil people trying to rule the world". hubris comes before the fall, nothing new. their system will never work, but idiots must try... because "fake it till you make it" = constructivism, mind over matter, fantasy defeats reality, ...

      the video and soundtrack are annoying, they add zero value to the monolog.

  5. Nov 2023
    1. the jarrow have even worse things to tell us they're offering us tobacco and they want to show us how to chew it 00:07:28 it's not good for us they give us alcohol we don't want that either but they still try and make us drink it we don't want any it's bad
      • for: example - cultural destruction - Jawara - cigarettes and alcohol, example - indigenous genocide, example - forced addiction

      • comment

      • example - cultural destruction
      • example - indigenous genocide
      • example: forced addiction
        • Growing up in Canada in an indigenous community, this struck a nerve. In my childhood, I experienced how the Haida First Nations people of the Queen Charlotte Islands were reduced from a once proud and self-reliant culture to a dependent one living in government housing, the land they lived on denied to them, forced onto small parcels of "Indian Reservations", their dignity stripped, and made dependent on alcohol and cigarettes.
        • It seems that modernity is simply an arrogant and corrupting force on indigeneity.
        • We see the beginnings of indigenous genocide as ignorant modern citizens who interact with the Jawara attempt to hook them on the extremely destructive and addictive substances of our culture: alcohol and cigarettes.
    2. there are armed poachers who shoot at us they steal they kill our pigs we think about it all the time 00:06:53 after the wild pigs it's deer their numbers have decreased dramatically since the poachers forced the jarrow to hunt for them wild game is being sold illegally on the 00:07:12 indian market
      • for: cultural destruction - Jawara - poachers, modernity - disruption of ecological cycle, example - ecosystem disruption

      • comment

      • example: ecosystem disruption
      • example: human cultural ecosystem in balance
      • The uncontrolled influence of the outside world always follows. Governments are too shortsighted to understand that this always happens and feel they can control the situation. They cannot. Greed breeds resourcefulness.
        • In a matter of years, poachers have disrupted the Jawara's traditional diet, forcing them to overhunt deer and disrupting the entire ecological cycle that existed until then. It's an example of how modernity ruthlessly and rapidly disrupts ecosystems; in this case, ecosystems where humans had integrated in a balanced way.
  6. Jun 2023
  7. Feb 2023
    1. Many authors noted that generations tended to fall into clichés, especially when the system was confronted with scenarios less likely to be found in the model's training data. For example, Nelly Garcia noted the difficulty in writing about a lesbian romance — the model kept suggesting that she insert a male character or that she have the female protagonists talk about friendship. Yudhanjaya Wijeratne attempted to deviate from standard fantasy tropes (e.g. heroes as cartographers and builders, not warriors), but Wordcraft insisted on pushing the story toward the well-worn trope of a warrior hero fighting back enemy invaders.

      Examples of artificial intelligence pushing toward pre-existing biases based on training data sets.

  8. Jan 2023
    1. We read, for example, of Philistine incursions into the hill country, to Michmash in Benjamin (1 Samuel 13:23), and the Rephaim Valley near Jerusalem (2 Samuel 5:17–22). It was in one of these border disputes that the city at Khirbet Qeiyafa was conquered and destroyed.
    2. Ekron was destroyed in 603 BCE by the Babylonians.
    3. Gath was destroyed at the end of the 9th century BCE by Hazael, the Aramean king of Damascus.
  9. May 2022
    1. We don’t know how many media outlets have been run out of existence because of brand safety technology, nor how many media outlets will never be able to monetize critical news coverage because the issues important to their communities are marked as “unsafe.”
    1. With Alphabet Inc.’s Google, and Facebook Inc. and its WhatsApp messaging service used by hundreds of millions of Indians, India is examining methods China has used to protect domestic startups and take control of citizens’ data.

      Governments owning citizens' data directly?? Why not have the government empower citizens to own their own data?

  10. Mar 2022
    1. The current mass media such as television, books, and magazines are one-directional, and are produced by a centralized process. This can be positive, since respected editors can filter material to ensure consistency and high quality, but more widely accessible narrowcasting to specific audiences could enable livelier decentralized discussions. Democratic processes for presenting opposing views, caucusing within factions, and finding satisfactory compromises are productive for legislative, commercial, and scholarly pursuits.

      Social media has to some extent democratized the access to media, however there are not nearly enough processes for creating negative feedback to dampen ideas which shouldn't or wouldn't have gained footholds in a mass society.

      We need more friction in some portions of the social media space to prevent un-useful, negative, and destructive ideas from swamping out the positive ones. The accelerative force of algorithmic feeds on the most extreme ideas in particular has been one of the most caustic developments of the last quarter century.

    2. Since any powerful tool, such as a genex, can be used for destructive purposes, the cautions are discussed in Section 5.

      Given the propensity for technologists in the late 90s and early 00s to wear rose-colored glasses with respect to their technologies, it's nice to see at least some nod to potential misuses and bad actors within the design of future tools.

  11. Feb 2022
  12. Nov 2021
    1. The Ouroboros is a Greek word meaning “tail devourer,” and is one of the oldest mystical symbols in the world. It can be perceived as enveloping itself, where the past (the tail) appears to disappear but really moves into an inner domain or reality, vanishing from view but still existing.

      Mark Smith asked me if I was familiar with the term ouroboros. I replied, “No.” So he sent me this link.

      This symbolizes the cyclic Nature of the Universe: creation out of destruction, Life out of Death.

  13. Oct 2021
    1. https://www.theatlantic.com/ideas/archive/2021/10/facebook-papers-democracy-election-zuckerberg/620478/

      Adrienne LaFrance outlines the reasons we need to either abandon Facebook or cause some more extreme regulation of it and how it operates.

      While she outlines the ills, she doesn't make a specific plea about the solution of the problem. There's definitely a raging fire in the theater, but no one seems to know what to do about it. We're just sitting here watching the structure burn down around us. We need clearer plans for what must be done to solve this problem.

  14. Mar 2021
  15. Feb 2021
  16. Dec 2020
    1. The company’s early mission was to “give people the power to share and make the world more open and connected.” Instead, it took the concept of “community” and sapped it of all moral meaning. The rise of QAnon, for example, is one of the social web’s logical conclusions. That’s because Facebook—along with Google and YouTube—is perfect for amplifying and spreading disinformation at lightning speed to global audiences. Facebook is an agent of government propaganda, targeted harassment, terrorist recruitment, emotional manipulation, and genocide—a world-historic weapon that lives not underground, but in a Disneyland-inspired campus in Menlo Park, California.

      The original goal, with a bit of moderation, may have worked. Regression to the mean forces it to a bad place, but when you algorithmically accelerate things toward our basest desires, you make it orders of magnitude worse.

      This should be thought of as pure social capitalism. We need the moderating force of government regulation to dampen our worst instincts, much the way the United States' mixed economy works (or at least used to work, as it seems that raw capitalism is destroying the United States too).

  17. Oct 2020
    1. But these lookalike audiences aren’t just potential new customers — they can also be used to exclude unwanted customers in the future, creating a sort of ad targeting demographic blacklist.
    2. How consumers would be expected to navigate this invisible, unofficial credit-scoring process, given that they’re never informed of its existence, remains an open question.
    3. “It sure smells like the prescreening provisions of the FCRA,” Reidenberg told The Intercept. “From a functional point of view, what they’re doing is filtering Facebook users on creditworthiness criteria and potentially escaping the application of the FCRA.”
    4. In an initial conversation with a Facebook spokesperson, they stated that the company does “not provide creditworthiness services, nor is that a feature of Actionable Insights.” When asked if Actionable Insights facilitates the targeting of ads on the basis of creditworthiness, the spokesperson replied, “No, there isn’t an instance where this is used.” It’s difficult to reconcile this claim with the fact that Facebook’s own promotional materials tout how Actionable Insights can enable a company to do exactly this. Asked about this apparent inconsistency between what Facebook tells advertising partners and what it told The Intercept, the company declined to discuss the matter on the record,
    1. YouTube doesn’t give an exact recipe for virality. But in the race to one billion hours, a formula emerged: Outrage equals attention.

      Talk radio has had this formula for years and they've almost had to use it to drive any listenership as people left radio for television and other media.

      I can still remember the different "loudness" level of talk between Bill O'Reilly's primetime show on Fox News and the louder level on his radio show.

    2. A 2015 clip about vaccination from iHealthTube.com, a “natural health” YouTube channel, is one of the videos that now sports a small gray box.

      Does this box appear on the video itself? Apparently not: screengrabs show the gray box on the YouTube watch page, but nothing on the embedded version.

    3. When Wojcicki took over, in 2014, YouTube was a third of the way to the goal, she recalled in investor John Doerr’s 2018 book Measure What Matters. “They thought it would break the internet! But it seemed to me that such a clear and measurable objective would energize people, and I cheered them on,” Wojcicki told Doerr. “The billion hours of daily watch time gave our tech people a North Star.” By October 2016, YouTube hit its goal.

      Obviously they took the easy route. You may need to measure what matters, but getting to that goal by any means necessary, or by using indefensible shortcuts, is the fallacy here. They could have had that North Star; it's the means by which they reached it that were wrong.

      This is another great example of tech ignoring basic ethics to get to a monetary goal. (Another good one is Mark Zuckerberg's "connecting people" mantra, when what it should have been is "connecting people for good" or "creating positive connections".)

    4. The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.

      This is a great summation of the issue.

    5. Somewhere along the last decade, he added, YouTube prioritized chasing profits over the safety of its users. “We may have been hemorrhaging money,” he said. “But at least dogs riding skateboards never killed anyone.”
    1. A more active stance by librarians, journalists, educators, and others who convey truth-seeking habits is essential.

      In some sense these people can also be viewed as aggregators and curators of sorts. How can their work be aggregated and be used to compete with the poor algorithms of social media?

    1. Meta co-founder and CEO Sam Molyneux writes that “Going forward, our intent is not to profit from Meta’s data and capabilities; instead we aim to ensure they get to those who need them most, across sectors and as quickly as possible, for the benefit of the world.”

      Odd statement from a company that was just acquired by the Facebook founder's CZI (Chan Zuckerberg Initiative).

    1. Meanwhile, politicians from the two major political parties have been hammering these companies, albeit for completely different reasons. Some have been complaining about how these platforms have potentially allowed for foreign interference in our elections.3 3. A Conversation with Mark Warner: Russia, Facebook and the Trump Campaign, Radio IQ|WVTF Music (Apr. 6, 2018), https://www.wvtf.org/post/conversation-mark-warner-russia-facebook-and-trump-campaign#stream/0 (statement of Sen. Mark Warner (D-Va.): “I first called out Facebook and some of the social media platforms in December of 2016. For the first six months, the companies just kind of blew off these allegations, but these proved to be true; that Russia used their social media platforms with fake accounts to spread false information, they paid for political advertising on their platforms. Facebook says those tactics are no longer allowed—that they've kicked this firm off their site, but I think they've got a lot of explaining to do.”). Others have complained about how they’ve been used to spread disinformation and propaganda.4 4. Nicholas Confessore & Matthew Rosenberg, Facebook Fallout Ruptures Democrats’ Longtime Alliance with Silicon Valley, N.Y. Times (Nov. 17, 2018), https://www.nytimes.com/2018/11/17/technology/facebook-democrats-congress.html (referencing statement by Sen. Jon Tester (D-Mont.): “Mr. Tester, the departing chief of the Senate Democrats’ campaign arm, looked at social media companies like Facebook and saw propaganda platforms that could cost his party the 2018 elections, according to two congressional aides. If Russian agents mounted a disinformation campaign like the one that had just helped elect Mr. Trump, he told Mr. Schumer, ‘we will lose every seat.’”). Some have charged that the platforms are just too powerful.5 5. Julia Carrie Wong, #Breaking Up Big Tech: Elizabeth Warren Says Facebook Just Proved Her Point, The Guardian (Mar. 
11, 2019), https://www.theguardian.com/us-news/2019/mar/11/elizabeth-warren-facebook-ads-break-up-big-tech (statement of Sen. Elizabeth Warren (D-Mass.)) (“Curious why I think FB has too much power? Let's start with their ability to shut down a debate over whether FB has too much power. Thanks for restoring my posts. But I want a social media marketplace that isn't dominated by a single censor. #BreakUpBigTech.”). Others have called attention to inappropriate account and content takedowns,6 6. Jessica Guynn, Ted Cruz Threatens to Regulate Facebook, Google and Twitter Over Charges of Anti-Conservative Bias, USA Today (Apr. 10, 2019), https://www.usatoday.com/story/news/2019/04/10/ted-cruz-threatens-regulate-facebook-twitter-over-alleged-bias/3423095002/ (statement of Sen. Ted Cruz (R-Tex.)) (“What makes the threat of political censorship so problematic is the lack of transparency, the invisibility, the ability for a handful of giant tech companies to decide if a particular speaker is disfavored.”). while some have argued that the attempts to moderate discriminate against certain political viewpoints.

      Most of these problems can all fall under the subheading of the problems that result when social media platforms algorithmically push or accelerate content on their platforms. An individual with an extreme view can publish a piece of vile or disruptive content and because it's inflammatory the silos promote it which provides even more eyeballs and the acceleration becomes a positive feedback loop. As a result the social silo benefits from engagement for advertising purposes, but the community and the commons are irreparably harmed.

      If this one piece were removed, then the commons would be much healthier, fringe ideas and abuse that are abhorrent to most would be removed, and the broader democratic views of the "masses" (good or bad) would prevail. Without the algorithmic push of fringe ideas, that sort of content would be marginalized in the same way we want our inane content like this morning's coffee or today's lunch marginalized.

      To analogize it, we've provided social media machine guns to the most vile and fringe members of our society and the social platforms are helping them drag the rest of us down.

      If all ideas and content were given the same linear, non-promoted presentation, we would all be much better off, and we wouldn't need as much human curation.

    2. It would allow end users to determine their own tolerances for different types of speech but make it much easier for most people to avoid the most problematic speech, without silencing anyone entirely or having the platforms themselves make the decisions about who is allowed to speak.

      But platforms are making huge decisions about who is allowed to speak. While they're generally allowing everyone to have a voice, they're also very subtly privileging many voices over others. While they're providing space for even the least among us to have a voice, they're making far too many of the worst and most powerful among us logarithmically louder.

      It's not broadly obvious, but their algorithms are plainly handing massive megaphones to people who society broadly thinks shouldn't have a voice at all. These megaphones come in the algorithmic amplification of fringe ideas which accelerate them into the broader public discourse toward the aim of these platforms getting more engagement and therefore more eyeballs for their advertising and surveillance capitalism ends.

      The issue we ought to be looking at is the dynamic range between people and the messages they're able to send through social platforms.

      We could also analogize this to the voting situation in the United States. When we disadvantage the poor, disabled, differently abled, or marginalized people from voting while simultaneously giving the uber-rich outsized influence because of what they're able to buy, we're imposing the same sorts of problems. Social media is just able to do this at an even larger scale and magnify the effects to make their harms more obvious.

      If I follow 5,000 people on social media and one of them is a racist-policy-supporting, white nationalist president, those messages will get drowned out because I can only consume so much content. But when the algorithm consistently pushes that content to the top of my feed and attention, it is only going to accelerate it and create more harm. If I get a linear presentation of the content, then I'd have to actively search that content out for it to cause me that sort of harm.
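      The difference between a linear feed and an engagement-driven one can be illustrated with a toy ranking sketch. This is a deliberately simplified model, not any platform's actual algorithm; the post counts, engagement numbers, and the "1 in 1,000" extreme-content ratio are invented for illustration:

```python
import random

def chronological(posts):
    """Linear presentation: newest first, no promotion."""
    return sorted(posts, key=lambda p: p["time"], reverse=True)

def engagement_ranked(posts):
    """Algorithmic presentation: most-engaged first."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

random.seed(42)
posts = []
for t in range(5000):
    extreme = (t % 1000 == 0)  # 1 in 1,000 posts is extreme/inflammatory
    posts.append({
        "time": t,
        "extreme": extreme,
        # Outrage drives engagement: extreme posts draw far more reactions.
        "engagement": random.randint(50, 100) if extreme else random.randint(0, 10),
    })

top = 20  # what a reader actually scrolls through
chron_extreme = sum(p["extreme"] for p in chronological(posts)[:top])
algo_extreme = sum(p["extreme"] for p in engagement_ranked(posts)[:top])
print(chron_extreme, algo_extreme)  # → 0 5
```

      In the linear feed, the rare extreme posts are diluted by everything else; in the engagement-ranked feed, every one of them floats to the top of the reader's attention, which is the positive feedback loop described above.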

    1. A spokeswoman for Summit said in an e-mail, “We only use information for educational purposes. There are no exceptions to this.” She added, “Facebook plays no role in the Summit Learning Program and has no access to any student data.”

      As if Facebook needed it. The fact that this statement is made sort of goes to papering over the idea that Summit itself wouldn't necessarily do something as nefarious or worse with it than Facebook might.

    1. Having low scores posted for all coworkers to see was “very embarrassing,” said Steph Buja, who recently left her job as a server at a Chili’s in Massachusetts. But that’s not the only way customers — perhaps inadvertently — use the tablets to humiliate waitstaff. One diner at Buja’s Chili’s used Ziosk to comment, “our waitress has small boobs.”According to other servers working in Ziosk environments, this isn’t a rare occurrence.

      This is outright sexual harassment and appears to be actively creating a hostile work environment. I could easily see a class action against large chains and/or against the app maker itself. Aggregating the data and using it in a smart way is fine, but I suspect no one in the chain is actively thinking about what they're doing; they're just selling an idea down the line.

      The maker of the app should be doing a far better job of filtering this kind of crap out and aggregating the data in a smarter way and providing a better output since the major chains they're selling it to don't seem to be capable of processing and disseminating what they're collecting.

    2. Systems like Ziosk and Presto allow customers to channel frustrations that would otherwise end up on public platforms like Yelp — which can make or break a restaurant — into a closed system that the restaurant controls.

      I like that they're trying to own and control their own data, but it seems like they've relied on a third party company to do most of the thinking for them and they're not actually using the data they're gathering in the proper ways. This is just painfully deplorable.

    1. I literally couldn’t remember when I’d last looked at my RSS subscriptions. On the surface, that might seem like a win: Instead of painstakingly curating my own incoming news, I can effortlessly find an endless supply of interesting, worthwhile content that the algorithm finds for me. The problem, of course, is that the algorithm isn’t neutral: It’s the embodiment of Facebook and Twitter’s technology, data analysis, and most crucial, business model. By relying on the algorithm, instead of on tags and RSS, I’m letting an army of web developers, business strategists, data scientists, and advertisers determine what gets my attention. I’m leaving myself vulnerable to misinformation, and manipulation, and giving up my power of self-determination.
    1. Safiya Noble, Algorithms of Oppression (New York: New York University Press, 2018). See also Mozilla’s 2019 Internet Health Report at https://internethealthreport.org/2019/lets-ask-more-of-ai/.
    1. eight years after release, men are 43% more likely to be taken back under arrest than women; African-Americans are 42% more likely than whites, and high-school dropouts are three times more likely to be rearrested than college graduates.

      but are these possibly the result of external factors (like racism?)

  18. Jul 2020
  19. Jun 2020
  20. Oct 2018
    1. Some anxieties relate to practical issues, timeframes, and possible abuses. Concerns about these are reasonable and certainly need debate: the required technologies may be difficult to achieve, some may elude us indefinitely or turn out to be beyond our grasp. Some may be all too possible, if they fall into the wrong hands.

      As people create new technologies for the benefit of others, many technologies also lead to human destruction. Take, for example, the creation of nuclear weapons: created to end destruction, yet the destruction has never ended because of a lack of trust and compromise. As humans continue to create new and advanced technology for the world, when is there a time to stop? Are people truly in need of "improvement"? As there are many good things in life, there is always something to contradict them.

  21. Oct 2017
    1. that we owe what ecologists like David Tilman call an ‘extinction debt’ (Tilman et al., 1994, pp. 65–6)—and that this debt will be paid.

      For more on the concept of 'extinction debt'; read this article: http://www.nature.com/nature/journal/v371/n6492/abs/371065a0.html

      Essentially, my understanding is that 'extinction debt' refers to species becoming extinct in the future because of things that happened in the past. Tilman points to the destruction of a species' habitat as the main cause of that species becoming extinct. Makes absolute sense when I think about it.