61 Matching Annotations
  1. Oct 2024
  2. Jul 2024
    1. example is weaponry

      for - progress trap - example - weapons leading to nuclear weapons

      progress trap - example - weapons leading to nuclear weapons - These are the most ironic inventions of civilization - We spend significant percentages of our budgets maintaining and escalating them; meanwhile, we all know we cannot use them, as doing so would mean billions would die

      quote progress trap - nuclear weapons - When you're talking nuclear weapons that can never be used you're investing in something that's completely useless - that you're maybe burying in the ground in the form of missile silos or - you're putting into submarines or - into aircraft or missiles - You can never use these things and they are draining off the surplus that might otherwise be used into - wealth redistribution and into - long-term sustainability

  3. Jun 2024
    1. military power and technology progress have been tightly linked historically, and with extraordinarily rapid technological progress will come military revolutions

      for - progress trap - AI and even more powerful weapons of destruction

      progress trap - AI and even more powerful weapons of destruction - The podcaster's excitement seems to overshadow any concern about the tragic unintended consequences of weapons even more powerful than nuclear warheads. - With human base emotions still stuck in the past and our species' continued reliance on violence to solve problems, more powerful weapons are not the solution - indeed, they only make the problem worse - Here is where Ronald Wright's quote is so apt: - We humans are running modern software on 50,000-year-old hardware systems - Our cultural evolution, of which AI is a part, is happening so quickly that - it is racing ahead of our biological evolution - We aren't able to adapt fast enough for the rapid cultural changes that AI is going to create, and it may very well destroy us

  4. Apr 2024
    1. 14:58 "criminals are hiding among legitimate asylum seekers"

      haha, no. there are ZERO "legitimate asylum seekers"

  5. Mar 2024
    1. We need a better catch-all term for the ills perpetrated on humanity and society by technology companies' extractive practices and general blindness to their own effects while they become rich. It should have a terrifically pejorative tone.

      Something which subsumes the crazy bound up in some of the following:

      - social media machine guns
      - toxic technology
      - mass produced toxicity
      - attention economy
      - bad technology
      - surveillance capitalism
      - technology and the military
      - weapons of math destruction

      It should be the polar opposite of:

      - techno-utopianism

    1. Silent weapons for quiet wars
      Operations Research Technical Manual
      TW-SW7905.1

      Welcome Aboard

      This publication marks the 25th anniversary of the Third World War, called the "Quiet War", being conducted using subjective biological warfare, fought with "silent weapons".

      This book contains an introductory description of this war, its strategies, and its weaponry.

      May 1979 #74-1120

      Security

      It is patently impossible to discuss social engineering or the automation of a society, i.e., the engineering of social automation systems (silent weapons) on a national or worldwide scale without implying extensive objectives of social control and destruction of human life, i.e., slavery and genocide.

      This manual is in itself an analog declaration of intent. Such a writing must be secured from public scrutiny. Otherwise, it might be recognized as a technically formal declaration of domestic war. Furthermore, whenever any person or group of persons in a position of great power and without full knowledge and consent of the public, uses such knowledge and methodologies for economic conquest - it must be understood that a state of domestic warfare exists between said person or group of persons and the public.

      The solution of today's problems requires an approach which is ruthlessly candid, with no agonizing over religious, moral or cultural values.

      You have qualified for this project because of your ability to look at human society with cold objectivity, and yet analyze and discuss your observations and conclusions with others of similar intellectual capacity without the loss of discretion or humility. Such virtues are exercised in your own best interest. Do not deviate from them.

      https://ia802300.us.archive.org/10/items/silent-weapons-for-quiet-wars_202110/Silent%20Weapons%20for%20Quiet%20Wars.pdf

  6. Feb 2024
    1. The GNR technologies do not divide clearly into commercial and military uses; given their potential in the market, it’s hard to imagine pursuing them only in national laboratories. With their widespread commercial pursuit, enforcing relinquishment will require a verification regime similar to that for biological weapons, but on an unprecedented scale. This, inevitably, will raise tensions between our individual privacy and desire for proprietary information, and the need for verification to protect us all. We will undoubtedly encounter strong resistance to this loss of privacy and freedom of action.

      While Joy looks at the Biological and Chemical Weapons Conventions as well as nuclear nonproliferation ideas, the entirety of what he's looking at is also embedded in the idea of gun control in the United States. We could choose better, but we actively choose against our better interests.

      What role does toxic capitalism have in pushing us towards these antithetical goals? The gun industry and gun lobby have had a tremendous interest on that front. Surely ChatGPT and other LLM and AI tools will begin pushing on the profit-making levers shortly.

    2. We have embodied our relinquishment of biological and chemical weapons in the 1972 Biological Weapons Convention (BWC) and the 1993 Chemical Weapons Convention (CWC).

  7. Jan 2024
    1. also remember "non-conventional" wars like "weapons of mass migration", targeting north america and western europe. the young white males in america and europe will be drafted for "already lost wars" against russia/hamas/ethiopia (suicide mission), and the young black males (migrant invaders) will finally conquer the young white females, creating the "brown race" of slaves for the global elite (the same elite that is preaching the "racism is bad" gospel)

    1. Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.

      coinage of the phrase knowledge-enabled mass destruction here?

  8. Dec 2023
    1. it's easy to get lost in complexity here, but i prefer to keep it simple: our *only* problem is overpopulation, which is caused by pacifism = civilization. *all* other problems are only symptoms of overpopulation. these "financial weapons of mass destruction" (warren buffett) have the only purpose of mass murder = to kill the 95% useless eaters. so yes, this is a "controlled demolition" aka "global suicide cult". most of us will die, but we are happy...

      financial weapons of mass destruction: the useful idiots believe that they can defeat risk (or generally, defeat death) by centralization on a global scale. they want to build a system that is "too big to fail" and which will "live forever". they use all kinds of tricks to make their slaves "feel safe" and "feel happy", while subconsciously, everything is going to hell in the long run. so this is just another version of "stupid and evil people trying to rule the world". hubris comes before the fall, nothing new. their system will never work, but idiots must try... because "fake it till you make it" = constructivism, mind over matter, fantasy defeats reality, ...

      the video and soundtrack are annoying; they add zero value to the monolog.

  9. Nov 2023
  10. Mar 2023
    1. Hatti at War

      Iron at this time was meteoric in origin and incredibly valuable, so it wouldn't have been used until after the collapse of the Hittite Empire when iron smelting began its rise; bronze weapons would have been more common as a result.

  11. Feb 2023
    1. Many authors noted that generations tended to fall into clichés, especially when the system was confronted with scenarios less likely to be found in the model's training data. For example, Nelly Garcia noted the difficulty in writing about a lesbian romance — the model kept suggesting that she insert a male character or that she have the female protagonists talk about friendship. Yudhanjaya Wijeratne attempted to deviate from standard fantasy tropes (e.g. heroes as cartographers and builders, not warriors), but Wordcraft insisted on pushing the story toward the well-worn trope of a warrior hero fighting back enemy invaders.

      Examples of artificial intelligence pushing toward pre-existing biases based on training data sets.

  12. Aug 2022
    1. "Whistling" or "screaming" arrows (shaojian) made by the horseback archers of the steppes were described by the Chinese chronicler Sima Qian in about 100 B.C. A small, perforated bone or wood sound chamber—the whistle—was attached to the shaft behind the arrowhead.
    2. Alexander had learned from King Porus during his 326 B.C. Indian campaign that elephants have sensitive hearing and poor eyesight, which makes them averse to unexpected loud, discordant sounds.
    3. Since 2016, American diplomats in Cuba, Russia, China and elsewhere have experienced "Havana Syndrome," associated with mysterious neurological and brain injuries thought to be inflicted by unknown high-powered microwave or targeted sonic energy systems. Sound wave transmitters are not only psychologically toxic but can cause pain and dizziness, burns, irreversible damage to inner ears and possibly neurological and internal injuries.
    4. Numerous other technologies to produce booming detonations to disorient and frighten enemies were described in ancient Chinese war manuals. These explosive devices employed gunpowder, invented in China around A.D. 850, reaching Europe about 1250.

      What does the history of shock and awe look like?

    5. Bloodcurdling war cries are a universal way of striking terror in foes. Maori war chants, the Japanese battle cry "Banzai!" (Long Live the Emperor) in World War II, the Ottomans' "Vur Ha!" (Strike), the Spanish "Desperta Ferro!" (Awaken the Iron), and the "Rebel Yell" of Confederate soldiers are examples. In antiquity, the sound of Greek warriors bellowing "Alala!" while banging swords on bronze shields was likened to hooting owls or a screeching flock of monstrous birds.
    6. Perseus of Macedon prepared for a Roman attack with war elephants in 168 B.C. by having artisans build wooden models of elephants on wheels.
    7. in 202 B.C., blasts of Roman war trumpets panicked Carthaginian general Hannibal's war elephants in the Battle of Zama, ending the Second Punic War.
    8. In 280 B.C., the Romans first encountered war elephants, brought to Italy by Greek King Pyrrhus.
    9. Captured as a boy from Bisaltia in northeastern Greece, a prisoner named Naris heard about the marvelous dancing horses in the Kardian barbershop where he worked.

      Naris used his knowledge of the trained dancing horses of the Kardians of Thrace against them in battle to win.

    10. Deploying sound in war has evolved over millennia, from natural animal sounds and music to today's advanced sonic devices.

      Can't help but think about US forces blasting heavy metal to drive out Manuel Antonio Noriega, the former military leader of Panama.

  13. Jun 2022
    1. War assault weapons have no place except with military?

      It's strange how the same people who imagine a disarmed populace as a good thing are playing catch-up to arm Ukrainian civilians against a military. I've lost count of the children massacred by militaries, which are apparently the only groups of people magically trustworthy enough to be armed.

      If you left a murderer alone with a room full of kids and a knife for 77 minutes, you'd have the same result - and if you're a student of recent history, you'd know that's exactly the kind of attack that has happened time and again in gun-free victim zones around the world.

      To address this issue properly, citizens must understand the #JustPowers Clause of The Declaration of Independence, the foundation that the US Constitution is laid upon and a universal document that recognizes the rights of ALL humans.

      Put simply, it states that neither you, nor I, nor anyone else may justly grant powers to others that we do not have.

      If you or I stole our neighbors' firearms, even if we claimed it was for "safety" or "the common good", we'd face criminal charges. We all know this, and the evidence is in our conduct.

      Instead, why not focus on just powers, such as holding adults who are responsible for the safety of others accountable for negligence?

  14. May 2022
    1. We don’t know how many media outlets have been run out of existence because of brand safety technology, nor how many media outlets will never be able to monetize critical news coverage because the issues important to their communities are marked as “unsafe.”
    1. With Alphabet Inc.’s Google, and Facebook Inc. and its WhatsApp messaging service used by hundreds of millions of Indians, India is examining methods China has used to protect domestic startups and take control of citizens’ data.

      Governments owning citizens' data directly?? Why not have the government empower citizens to own their own data?

  15. Mar 2022
    1. The current mass media such as television, books, and magazines are one-directional, and are produced by a centralized process. This can be positive, since respected editors can filter material to ensure consistency and high quality, but more widely accessible narrowcasting to specific audiences could enable livelier decentralized discussions. Democratic processes for presenting opposing views, caucusing within factions, and finding satisfactory compromises are productive for legislative, commercial, and scholarly pursuits.

      Social media has, to some extent, democratized access to media; however, there are not nearly enough processes for creating negative feedback to dampen ideas which shouldn't or wouldn't have gained footholds in a mass society.

      We need more friction in some portions of the social media space to prevent un-useful, negative, and destructive ideas from swamping out the positive ones. The acceleration that algorithmic feeds give to the most extreme ideas in particular is one of the most caustic developments of the last quarter century.

    2. Since any powerful tool, such as a genex, can be used for destructive purposes, the cautions are discussed in Section 5.

      Given the propensity of technologists in the late '90s and early '00s to view their technologies through rose-colored glasses, it's nice to see at least some nod to potential misuses and bad actors within the design of future tools.

  16. Feb 2022
  17. Dec 2021
    1. https://publish.obsidian.md/danallosso/Bloggish/Actual+Books

      I've often heard the phrase, usually in historical settings, "little book" as well and presupposed it to be a diminutive describing the ideas. I appreciate that Dan Allosso points out here that the phrase may describe the book itself and that the fact that it's small means that it can be more easily carried and concealed.

      There's also something much more heartwarming about a book as a concealed weapon (from an intellectual perspective) than a gun or knife.

  18. Oct 2021
    1. https://www.theatlantic.com/ideas/archive/2021/10/facebook-papers-democracy-election-zuckerberg/620478/

      Adrienne LaFrance outlines the reasons we need either to abandon Facebook or to impose far more stringent regulation on it and how it operates.

      While she outlines the ills, she doesn't make a specific plea for a solution to the problem. There's definitely a raging fire in the theater, but no one seems to know what to do about it. We're just sitting here watching the structure burn down around us. We need clearer plans for what must be done to solve this problem.

  19. Mar 2021
  20. Dec 2020
    1. The company’s early mission was to “give people the power to share and make the world more open and connected.” Instead, it took the concept of “community” and sapped it of all moral meaning. The rise of QAnon, for example, is one of the social web’s logical conclusions. That’s because Facebook—along with Google and YouTube—is perfect for amplifying and spreading disinformation at lightning speed to global audiences. Facebook is an agent of government propaganda, targeted harassment, terrorist recruitment, emotional manipulation, and genocide—a world-historic weapon that lives not underground, but in a Disneyland-inspired campus in Menlo Park, California.

      The original goal, with a bit of moderation, might have worked. Regression to the mean forces it to a bad place, but when you algorithmically accelerate things toward our base desires, you make it orders of magnitude worse.

      This should be thought of as pure social capitalism. We need the moderating force of government regulation to dampen our worst instincts, much the way the United States' mixed economy works (or at least used to work, as it seems that raw capitalism is destroying the United States too).

  21. Oct 2020
    1. But these lookalike audiences aren’t just potential new customers — they can also be used to exclude unwanted customers in the future, creating a sort of ad targeting demographic blacklist.
    2. How consumers would be expected to navigate this invisible, unofficial credit-scoring process, given that they’re never informed of its existence, remains an open question.
    3. “It sure smells like the prescreening provisions of the FCRA,” Reidenberg told The Intercept. “From a functional point of view, what they’re doing is filtering Facebook users on creditworthiness criteria and potentially escaping the application of the FCRA.”
    4. In an initial conversation with a Facebook spokesperson, they stated that the company does “not provide creditworthiness services, nor is that a feature of Actionable Insights.” When asked if Actionable Insights facilitates the targeting of ads on the basis of creditworthiness, the spokesperson replied, “No, there isn’t an instance where this is used.” It’s difficult to reconcile this claim with the fact that Facebook’s own promotional materials tout how Actionable Insights can enable a company to do exactly this. Asked about this apparent inconsistency between what Facebook tells advertising partners and what it told The Intercept, the company declined to discuss the matter on the record,
    1. YouTube doesn’t give an exact recipe for virality. But in the race to one billion hours, a formula emerged: Outrage equals attention.

      Talk radio has had this formula for years; it has almost had to use it to drive any listenership as people left radio for television and other media.

      I can still remember the difference in "loudness" between Bill O'Reilly's primetime show on Fox News and the even louder register of his radio show.

    2. A 2015 clip about vaccination from iHealthTube.com, a “natural health” YouTube channel, is one of the videos that now sports a small gray box.

      Does this box appear on the video itself? Apparently not...

      Examples:

      But nothing on the embedded version:

      A screengrab of what this looks like:

    3. When Wojcicki took over, in 2014, YouTube was a third of the way to the goal, she recalled in investor John Doerr’s 2018 book Measure What Matters. “They thought it would break the internet! But it seemed to me that such a clear and measurable objective would energize people, and I cheered them on,” Wojcicki told Doerr. “The billion hours of daily watch time gave our tech people a North Star.” By October, 2016, YouTube hit its goal.

      Obviously they took the easy route. You may need to measure what matters, but getting to that goal by any means necessary, or by indefensible shortcuts, is the fallacy here. They could have had that North Star, but it's the means by which they chose to reach it that were wrong.

      This is another great example of tech ignoring basic ethics to get to a monetary goal. (Another good one is Mark Zuckerberg's "connecting people" mantra, when what it should be is "connecting people for good" or "creating positive connections".)

    4. The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.

      This is a great summation of the issue.

    5. Somewhere along the last decade, he added, YouTube prioritized chasing profits over the safety of its users. “We may have been hemorrhaging money,” he said. “But at least dogs riding skateboards never killed anyone.”
    1. A more active stance by librarians, journalists, educators, and others who convey truth-seeking habits is essential.

      In some sense these people can also be viewed as aggregators and curators of sorts. How can their work be aggregated and used to compete with the poor algorithms of social media?

    1. Meta co-founder and CEO Sam Molyneux writes that “Going forward, our intent is not to profit from Meta’s data and capabilities; instead we aim to ensure they get to those who need them most, across sectors and as quickly as possible, for the benefit of the world.”

      Odd statement from a company that was just acquired by the Facebook founder's CZI (Chan Zuckerberg Initiative).

    1. Meanwhile, politicians from the two major political parties have been hammering these companies, albeit for completely different reasons. Some have been complaining about how these platforms have potentially allowed for foreign interference in our elections.3 3. A Conversation with Mark Warner: Russia, Facebook and the Trump Campaign, Radio IQ|WVTF Music (Apr. 6, 2018), https://www.wvtf.org/post/conversation-mark-warner-russia-facebook-and-trump-campaign#stream/0 (statement of Sen. Mark Warner (D-Va.): “I first called out Facebook and some of the social media platforms in December of 2016. For the first six months, the companies just kind of blew off these allegations, but these proved to be true; that Russia used their social media platforms with fake accounts to spread false information, they paid for political advertising on their platforms. Facebook says those tactics are no longer allowed—that they've kicked this firm off their site, but I think they've got a lot of explaining to do.”). Others have complained about how they’ve been used to spread disinformation and propaganda.4 4. Nicholas Confessore & Matthew Rosenberg, Facebook Fallout Ruptures Democrats’ Longtime Alliance with Silicon Valley, N.Y. Times (Nov. 17, 2018), https://www.nytimes.com/2018/11/17/technology/facebook-democrats-congress.html (referencing statement by Sen. Jon Tester (D-Mont.): “Mr. Tester, the departing chief of the Senate Democrats’ campaign arm, looked at social media companies like Facebook and saw propaganda platforms that could cost his party the 2018 elections, according to two congressional aides. If Russian agents mounted a disinformation campaign like the one that had just helped elect Mr. Trump, he told Mr. Schumer, ‘we will lose every seat.’”). Some have charged that the platforms are just too powerful.5 5. Julia Carrie Wong, #Breaking Up Big Tech: Elizabeth Warren Says Facebook Just Proved Her Point, The Guardian (Mar. 11, 2019), https://www.theguardian.com/us-news/2019/mar/11/elizabeth-warren-facebook-ads-break-up-big-tech (statement of Sen. Elizabeth Warren (D-Mass.)) (“Curious why I think FB has too much power? Let's start with their ability to shut down a debate over whether FB has too much power. Thanks for restoring my posts. But I want a social media marketplace that isn't dominated by a single censor. #BreakUpBigTech.”). Others have called attention to inappropriate account and content takedowns,6 6. Jessica Guynn, Ted Cruz Threatens to Regulate Facebook, Google and Twitter Over Charges of Anti-Conservative Bias, USA Today (Apr. 10, 2019), https://www.usatoday.com/story/news/2019/04/10/ted-cruz-threatens-regulate-facebook-twitter-over-alleged-bias/3423095002/ (statement of Sen. Ted Cruz (R-Tex.)) (“What makes the threat of political censorship so problematic is the lack of transparency, the invisibility, the ability for a handful of giant tech companies to decide if a particular speaker is disfavored.”). while some have argued that the attempts to moderate discriminate against certain political viewpoints.

      Most of these problems can all fall under the subheading of the problems that result when social media platforms algorithmically push or accelerate content. An individual with an extreme view can publish a piece of vile or disruptive content, and because it's inflammatory, the silos promote it, which provides even more eyeballs, and the acceleration becomes a positive feedback loop. As a result the social silo benefits from engagement for advertising purposes, but the community and the commons are irreparably harmed.

      If this one piece were removed, then the commons would be much healthier, fringe ideas and abuse that are abhorrent to most would be removed, and the broader democratic views of the "masses" (good or bad) would prevail. Without the algorithmic push of fringe ideas, that sort of content would be marginalized in the same way we want our inane content like this morning's coffee or today's lunch marginalized.

      To analogize it, we've provided social media machine guns to the most vile and fringe members of our society and the social platforms are helping them drag the rest of us down.

      If all ideas and content were given the same linear, non-promoted presentation, we would all be much better off, and we wouldn't have the need for as much human curation.

    2. It would allow end users to determine their own tolerances for different types of speech but make it much easier for most people to avoid the most problematic speech, without silencing anyone entirely or having the platforms themselves make the decisions about who is allowed to speak.

      But platforms are making huge decisions about who is allowed to speak. While they're generally allowing everyone to have a voice, they're also very subtly privileging many voices over others. While they're providing space for even the least among us to have a voice, they're making far too many of the worst and most powerful among us algorithmically louder.

      It's not immediately obvious, but their algorithms are plainly handing massive megaphones to people whom society broadly thinks shouldn't have a voice at all. These megaphones come in the form of algorithmic amplification of fringe ideas, which accelerates them into the broader public discourse, all toward the aim of these platforms getting more engagement and therefore more eyeballs for their advertising and surveillance-capitalism ends.

      The issue we ought to be looking at is the dynamic range between people and the messages they're able to send through social platforms.

      We could also analogize this to the voting situation in the United States. When we make it harder for the poor, the disabled, the differently abled, or the marginalized to vote, while simultaneously giving the uber-rich outsized influence because of what they're able to buy, we're imposing the same sorts of problems. Social media is just able to do this at an even larger scale and magnify the effects to make their harms more obvious.

      If I follow 5,000 people on social media and one of them is a racist-policy-supporting, white nationalist president, those messages will get drowned out because I can only consume so much content. But when the algorithm consistently pushes that content to the top of my feed and attention, it is only going to accelerate it and create more harm. If I get a linear presentation of the content, then I'd have to actively search that content out for it to cause me that sort of harm.
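
      To make the dynamic-range point concrete, here is a minimal, hypothetical sketch (in Python) contrasting a purely chronological feed with an engagement-weighted one. It is not any platform's actual ranking code; the post data, scores, and field names are invented for illustration.

      ```python
      from dataclasses import dataclass

      @dataclass
      class Post:
          author: str
          text: str
          timestamp: int     # larger = more recent
          engagement: float  # likes + shares + comments (a rough proxy for outrage)

      # Hypothetical sample: ~5,000 mundane posts and one highly inflammatory one.
      posts = [Post(f"friend_{i}", "coffee, lunch, weather", timestamp=i, engagement=2.0)
               for i in range(4999)]
      posts.append(Post("extreme_voice", "inflammatory fringe claim",
                        timestamp=100, engagement=50_000.0))

      def chronological_feed(posts, n=10):
          """Linear presentation: newest first, no amplification."""
          return sorted(posts, key=lambda p: p.timestamp, reverse=True)[:n]

      def engagement_ranked_feed(posts, n=10):
          """Engagement-weighted presentation: the most provocative content rises."""
          return sorted(posts, key=lambda p: p.engagement, reverse=True)[:n]

      print("chronological top slot:", chronological_feed(posts)[0].author)        # an ordinary, recent post
      print("engagement-ranked top slot:", engagement_ranked_feed(posts)[0].author)  # the single extreme voice
      ```

      Under the chronological ordering the one extreme account is buried among the other 4,999 accounts; under the engagement-weighted ordering it occupies the top slot of the feed every time.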

    1. A spokeswoman for Summit said in an e-mail, “We only use information for educational purposes. There are no exceptions to this.” She added, “Facebook plays no role in the Summit Learning Program and has no access to any student data.”

      As if Facebook needed it. The fact that this statement is made at all papers over the possibility that Summit itself might do something just as nefarious with the data as Facebook would, or worse.

    1. Having low scores posted for all coworkers to see was “very embarrassing,” said Steph Buja, who recently left her job as a server at a Chili’s in Massachusetts. But that’s not the only way customers — perhaps inadvertently — use the tablets to humiliate waitstaff. One diner at Buja’s Chili’s used Ziosk to comment, “our waitress has small boobs.” According to other servers working in Ziosk environments, this isn’t a rare occurrence.

      This is outright sexual harassment and appears to be actively creating a hostile work environment. I could easily see a class action against large chains and/or against the app maker themselves. Aggregating the data and using it in a smart way is fine, but I suspect no one in the chain is actively thinking about what they're doing; they're just selling an idea down the line.

      The maker of the app should be doing a far better job of filtering this kind of crap out, aggregating the data in a smarter way, and providing better output, since the major chains they're selling it to don't seem to be capable of processing and disseminating what they're collecting.

    2. Systems like Ziosk and Presto allow customers to channel frustrations that would otherwise end up on public platforms like Yelp — which can make or break a restaurant — into a closed system that the restaurant controls.

      I like that they're trying to own and control their own data, but it seems like they've relied on a third-party company to do most of the thinking for them, and they're not actually using the data they're gathering properly. This is just painfully deplorable.

    1. I literally couldn’t remember when I’d last looked at my RSS subscriptions. On the surface, that might seem like a win: Instead of painstakingly curating my own incoming news, I can effortlessly find an endless supply of interesting, worthwhile content that the algorithm finds for me. The problem, of course, is that the algorithm isn’t neutral: It’s the embodiment of Facebook and Twitter’s technology, data analysis, and most crucial, business model. By relying on the algorithm, instead of on tags and RSS, I’m letting an army of web developers, business strategists, data scientists, and advertisers determine what gets my attention. I’m leaving myself vulnerable to misinformation, and manipulation, and giving up my power of self-determination.
    1. Safiya Noble, Algorithms of Oppression (New York: New York University Press, 2018). See also Mozilla’s 2019 Internet Health Report at https://internethealthreport.org/2019/lets-ask-more-of-ai/.
    1. eight years after release, men are 43% more likely to be taken back under arrest than women; African-Americans are 42% more likely than whites, and high-school dropouts are three times more likely to be rearrested than college graduates.

      but are these possibly the result of external factors (like racism)?

  22. Jan 2019
  23. Aug 2018
    1. There's also potential for confusion within the CRDC itself. While this particular item refers clearly to "a shooting," the previous item asks about a long list of incidents, some involving "a firearm or explosive device" and others involving "a weapon."
  24. Jul 2018
    1. But they found stark differences in shooting outcomes depending on the caliber of gun used. They divided the calibers of guns used in the shootings into three categories: small, which included .22-, .25- and .32-caliber handguns; medium, including .380s, .38s and 9mms; and large, including .40s, .44 magnums, .45s, 10mms and 7.62 x 39s.
  25. Nov 2017
    1. In January 2004, David Kay, the former top U.S. weapons inspector, tells Congress: "We were almost all wrong." A presidential commission concludes in March 2005 "not one bit" of prewar intelligence on Iraqi weapons of mass destruction panned out
  26. Aug 2017
  27. Dec 2016