586 Matching Annotations
  1. Oct 2022
    1. If the link is in a Proud Boys forum, would you not take any action against it, even if it’s like, “Click this link to help plan”? Are you asking if we have people out there clicking every link and checking if the forum comports with the ideological position that Signal agrees with? Yeah. I think in the most abstract way, I’m asking if you have a content moderation team. No, we don’t have that. We are also not a social media platform. We don’t amplify content. We don’t have Telegram channels where you can broadcast to thousands and thousands of people. We have been really careful in our product development side not to develop Signal as a social network that has algorithmic amplification that allows that “one to millions” amplification of content. We are a messaging platform. We don’t have a content moderation team because (1) we are fully private, we don’t see your content, we don’t know who you’re talking about; and (2) we are not a content platform, so it is a different paradigm.

      Signal president Meredith Whittaker on Signal's product vision and the difference between Signal and Telegram.

      They deliberately steered the product away from "one to millions" amplification of content, such as Telegram's channels.

    1. The February 15 report of the OSCE Special Monitoring Mission to Ukraine recorded some 41 explosions in the ceasefire areas. This increased to 76 explosions on Feb 16, 316 on Feb 17, 654 on Feb 18, 1413 on Feb 19, a total of 2026 on Feb 20 and 21, and 1484 on Feb 22. The OSCE mission reports showed that the great majority of impact explosions of the artillery were on the separatist side of the ceasefire line.

      The OSCE (Organization for Security and Co-operation in Europe) recorded a dramatic increase in ceasefire violations, with the great majority of impact explosions landing on the separatist side of the ceasefire line in Luhansk and Donetsk.

  2. Sep 2022
    1. The US then began integrating Ukraine into NATO, such that by June of 2020 it was recognised as an “Enhanced Opportunities Partner”. A year later, the two countries signed a “Charter on Strategic Partnership”, which declared that the US supports Ukraine’s “aspirations to join NATO”.

      After the Maidan revolution the US started to integrate Ukraine into NATO through unofficial means.

      By June 2020 they were recognised as an Enhanced Opportunities Partner (https://www.nato.int/cps/en/natohq/news_176327.htm)

      A year later in 2021 the US and Ukraine signed a "Charter on Strategic Partnership" which declared that the US supports Ukraine's aspirations to join NATO (https://www.state.gov/u-s-ukraine-charter-on-strategic-partnership/).

    1. Working backwards, Google isn’t legally compelled to give Mark a hearing about his digital life (Sixth Amendment); they are wrong not to. Google isn’t legally compelled to give Mark due process before permanently deleting his digital life (Fifth Amendment); they are wrong not to. Google isn’t legally compelled to not search all of the photographs uploaded to Google (by default, if you click through all of the EULA’s); they are…well, this is where it gets complicated.

      Ben Thompson makes the case that although Google is acting within legal bounds, morally their behavior is wrong and incompatible with the spirit of the Fifth, Sixth and possibly Fourth Amendments.

    2. Again, Google is not covered by the Bill of Rights; all of these Amendments, just like the First, only apply to the government. The reason why this case is useful, though, is it is a reminder that specific legal definitions are distinct from questions of right or wrong.

      Ben Thompson reminds us that the Bill of Rights applies to the government, but not to Google. At the same time he makes the point that legal definitions are distinct from moral questions.

    3. In short, the questions about Google’s behavior are not about free speech; they do, though, touch on other Amendments in the Bill of Rights. For example: The Fourth Amendment bars “unreasonable searches and seizures”; while you can make the case that search warrants were justified once the photos in question were discovered, said photos were only discovered because Mark’s photo library was indiscriminately searched in the first place. The Fifth Amendment says no person shall be deprived of life, liberty, or property, without due process of law; Mark lost all of his data, email account, phone number, and everything else Google touched forever with no due process at all. The Sixth Amendment is about the rights to a trial; Mark was not accused of any crime in the real world, but when it came to his digital life Google was, as I noted, “judge, jury, and executioner” (the Seventh Amendment is, relatedly, about the right to a jury trial for all controversies exceeding $20).

      Ben Thompson argues that the questions about Google's behavior in a false-positive CSAM case do not pertain to free speech or the First Amendment, but that they do pertain to other Amendments in the Bill of Rights.

    4. The Google case is not about the First Amendment, either legally or culturally. The First Amendment is not absolute, and CSAM is an obvious example. In 1957’s Roth v. United States the Supreme Court held that obscene speech was not protected by the First Amendment; Justice William Brennan Jr. wrote: All ideas having even the slightest redeeming social importance — unorthodox ideas, controversial ideas, even ideas hateful to the prevailing climate of opinion — have the full protection of the guaranties, unless excludable because they encroach upon the limited area of more important interests. But implicit in the history of the First Amendment is the rejection of obscenity as utterly without redeeming social importance. This rejection for that reason is mirrored in the universal judgment that obscenity should be restrained, reflected in the international agreement of over 50 nations, in the obscenity laws of all of the 48 States, and in the 20 obscenity laws enacted by the Congress from 1842 to 1956.

      Ben Thompson argues that the First Amendment is not absolute and that there are established exceptions, such as laws prohibiting obscene speech. But adjudicating these laws is done by taking scienter — whether you knowingly committed the act — into account.

    5. I found this paragraph in a New York Times article about Elon Musk’s attempts to buy Twitter striking: The plan jibes with Mr. Musk’s, Mr. Dorsey’s and Mr. Agrawal’s beliefs in unfettered free speech. Mr. Musk has criticized Twitter for moderating its platform too restrictively and has said more speech should be allowed. Mr. Dorsey, too, grappled with the decision to boot former President Donald J. Trump off the service last year, saying he did not “celebrate or feel pride” in the move. Mr. Agrawal has said that public conversation provides an inherent good for society. Their positions have increasingly become outliers in a global debate over free speech online, as more people have questioned whether too much free speech has enabled the spread of misinformation and divisive content. In other words, the culture has changed; the law persists, but it does not and, according to the New York Times, ought not apply to private companies.

      Ben Thompson argues that it is precisely culture that has now changed, seemingly in favor of being less tolerant towards the expression of certain opinions.

    6. Munroe, though, assumes the opposite: liberty, in this case the freedom of speech, is an artifact of law, only stretching as far as government action, and no further. Pat Kerr, who wrote a critique of this comic on Medium in 2016, argued that this was the exact wrong way to think about free speech: Coherent definitions of free speech are actually rather hard to come by, but I would personally suggest that it’s something along the lines of “the ability to voluntarily express (and receive) opinions without suffering excessive penalties for doing so”. This is a liberal principle of tolerance towards others. It’s not an absolute, it isn’t comprehensive, it isn’t rigorously defined, and it isn’t a law. What it is is a culture.

      By highlighting Pat Kerr's argument that free speech (though lacking a widely accepted definition) is about the tolerance we show others in expressing their opinions, Ben Thompson equates free speech with culture.

    7. Ben Thompson discusses the tradeoffs of Google pre-emptively scanning mobile pictures to combat child pornography, with reference to a recent case in which a concerned father sent a picture of his son's penis to their family doctor and had his digital life wiped out by Google (and not reinstated even after law enforcement cleared him).

    8. Even if you grant the arguments that this awesome exercise of surveillance is warranted, given the trade-offs in question, that makes it all the more essential that the utmost care be taken in case the process gets it wrong. Google ought to be terrified it has this power, and be on the highest alert for false positives; instead the company has gone in the opposite direction, setting itself as judge, jury, and executioner, even when the people we have collectively entrusted to lock up criminals ascertain there was no crime. It is beyond arrogant, and gives me great unease about the company generally, and its long-term investments in AI in particular.

      Ben Thompson argues that Google should be incredibly wary of finding itself in a position where it can lawfully act as the judge, jury, and executioner presiding over someone's digital life. And yet it doesn't seem to be.

    9. This Article is a manifestation of Madison’s hope. Start with the reality that it seems quaint in retrospect to think that any of the Bill of Rights would be preserved absent the force of law. This is one of the great lessons of the Internet and the rise of Aggregators: when suppressing speech entailed physically disrupting printing presses or arresting pamphleteers, then restricting government, which retains a monopoly on real world violence, was sufficient to preserve speech. Along the same lines, there was no need to demand due process or a restriction on search and seizure on any entity but the government, because only the government could take your property or send you to jail.

      Ben Thompson makes the point that during the time of printing presses and pamphleteers, when free speech laws were drafted, the threat to free speech could come only from one entity: the government (with its monopoly on violence). Thus, placing restrictions on one entity — the government — would be sufficient to safeguard free speech.

    10. Aggregators, though, make private action much more possible and powerful than ever before: yes, if you are kicked off of Twitter or Facebook, you can still say whatever you want on a street corner; similarly, if you lose all of your data and phone and email, you are still not in prison — and thank goodness that is the case! At the same time, it seems silly to argue that getting banned from a social media platform isn’t an infringement on individual free speech rights, even if it is the corporations’ own free speech rights that enable them to do just that legally, just as it is silly to argue that losing your entire digital life without recourse isn’t a loss of property without due process. The big Internet companies are manifesting Madison’s fears of the majority operating against the minority, and there is nothing the Bill of Rights can do about it.

      Ben Thompson argues that in a world of aggregators, restricting one entity — the government — no longer safeguards free speech, because getting banned from a platform effectively infringes on that right (even though the platforms are within their rights to do so). The dynamic has changed: there is more than one entity to rein in.

  3. Aug 2022
    1. Right, it’s a problem of authority. When people don’t trust those charged with conveying the truth, they won’t accept it. And at some point, like I said, we’ll have to reconfigure our democracy. Our politicians and institutions are going to have to adjust to the new world in which the public can’t be walled off or controlled. Leaders can’t stand at the top of pyramids anymore and talk down to people. The digital revolution flattened everything. We’ve got to accept that.

      Martin Gurri holds that we need to reconfigure our democracy for a world in which the public can no longer be walled off or controlled by politicians and institutions, because the digital revolution has flattened everything.

    2. Sean Illing I do want to at least point to an apparent paradox here. As you’ve said, because of the internet, there are now more voices and more perspectives than ever before, and yet at the same time there’s a massive “herding effect,” as a result of which we have more people talking about fewer subjects. And that partly explains how you get millions of people converging on something like QAnon. Martin Gurri Yeah, and that’s very mysterious to me. I would not have expected that outcome. I thought we were headed to ever more dispersed information islands and that that would create a fragmentation in individual beliefs. But instead, I’ve noticed a trend toward conformism and a crystallizing of very few topics. Some of this is just an unwillingness to say certain things because you know if you said them, the internet was going to come after you. But I think Trump had a lot to do with it. The amount of attention he got was absolutely unprecedented. Everything was about him. People were either against him or for him, but he was always the subject. Then came the pandemic and he simply lost the capacity to absorb and manipulate attention. The pandemic just moved him completely off-kilter. He never recovered.

      Martin Gurri holds that there's an emergent herding effect in the public conversation, driven by the internet, which clusters our conversations around relatively few topics.

    3. But that’s not really how truth works. Truth is essentially an act of trust, an act of faith in some authority that is telling you something that you could not possibly come to realize yourself. What’s a quark? You believe that there are quarks in the universe, probably because you’ve been told by people who probably know what they’re talking about that there are quarks. You believe the physicists. But you’ve never seen a quark. I’ve never seen a quark. We accept this as truth because we’ve accepted the authority of the people who told us it’s true.

      Martin Gurri argues that there's something like a "practical truth" that exists. We believe a quark exists not because we know it ourselves to be true, but because we trust experts that say it is true.

    4. It should be a truism that material conditions matter much less than expectations. That was true during the Great Depression and it’s true today. The rhetoric of the rant on the web feeds off extreme expectations — any imperfection in the economy will be treated as a crisis and a true crisis will be seen as the Apocalypse. Take the example of Chile. For 40 years, it had high economic growth, rising into the ranks of the wealthiest nations. During this time, Chile enjoyed a healthy democracy, in which political parties of left and right alternated in office. Everyone benefited. Yet in 2019, with many deaths and much material destruction, the Chilean public took to the streets in revolt against the established order. Its material expectations had been deeply frustrated, despite the country’s economic and political successes.

      Martin Gurri talks about how material conditions matter less than expectations. For 40 years Chile had enjoyed high growth and a healthy democracy, yet in 2019 the people revolted: their material expectations had been deeply frustrated despite the country's successes.

    5. With few exceptions, most market democracies have recovered from the 2008 financial crisis. But the public has not recovered from the shock of watching supposed experts and politicians, the people who posed as the wise pilots of our prosperity, sound and act totally clueless while the economy burned. In the past, when the elites controlled the flow of information, the financial collapse might have been portrayed as a sort of natural disaster, a tragedy we should unify around our leadership to overcome. By 2008, that was already impossible. The networked public perceived the crisis (rightly, I think) as a failure of government and of the expert elites.

      Martin Gurri argues that had the financial crisis of 2008 happened in the 20th century, the elites, through their control of the flow of information, might have portrayed it as a natural disaster we should rally around our leadership to overcome. But with the advent of the internet, we got the "networked public", and the elites and government lost their monopoly on information.

    1. As humans lower their time preference, they develop a scope for carrying out tasks over longer time horizons. Instead of spending all our time producing goods for immediate consumption, we can choose to spend time creating superior goods that take longer to complete but benefit us more in the long run. Only by lowering time preference can humans produce goods that are not meant to be consumed themselves but are instead used in the production of other goods.

      Only when humans are able to lower their time preference are they able to focus on producing goods that benefit them in the long term, rather than those that are meant for immediate consumption.

    1. All in all the internet seems to be getting smaller and smaller, I don't use any social media apart from HN and Reddit, and I only use Reddit because I seem to still be addicted to it since it's probably one of the most censored of all of them.10 years ago as a 20 year old I benefited greatly from how the internet was, here is an example: I grew up on the idea that there was nothing wrong with porn, and there isn't per se, and no one ever spoke about addiction like behavior when it came to watching it, then one day I discover a controversial post on Reddit and dove down the rabbit hole and lo and behold I had the same problems as this community of people trying to quit watching it, and I benefited from their experiences and knowledge, same about discovering communities against social media like Facebook, which pushed me to research the subject and deleting my account, etc. but now it seems like any controversial community is quickly banned or pushed aside in its own unfindable bubble and that to me is a great loss.I want to see people have an opposite opinion than mine, and I want to be able to get into heated non censored discussions in comment sections and get suggestions about articles, studies and content to challenge my views.

      Opinions different from your own exist only in "unfindable bubbles" on today's internet.

    2. Google and Youtube search are heavily censored, for example if you open Youtube and type "JRE alex" then Alex Jones will be the last suggestion despite his episode having the most views, if you type "JRE Robert" then Youtube will suggest Robert Downey Jr and other guests whose name starts with Robert, but it won't show Robert Malone, and if you write "JRE Robert Malon" it still won't suggest it.

      Popular episodes of Joe Rogan are being censored by YouTube. You won't find them with a normal search.

    1. Why does TikTok export so well? Because video is universal (or at least relatively so). A funny clip in one country is often funny in another. Moreover, music — at the heart of a platform like TikTok — has global appeal. The result is that content in an English-speaking country can go viral just as easily in a French-speaking one. Reddit doesn’t have this luxury. Since most of its content is written in English, its appeal is fundamentally limited in non-English speaking countries. 

      TikToks can easily go viral across the world, because video and music can be interpreted regardless of language.

    1. Yet, as even their supplementary table shows, there were no quarantines anywhere in Lodi until 24 February, maybe five days after the estimated R(t) peak there. The tepid early measures in Bergamo and Cremona, meanwhile, weren’t imposed until 2 March, long after the observed growth in infections had entered its terminal decline in both region

      A study on the COVID outbreak in northern Italy shows that the effective reproduction number (Rt) had already peaked and begun declining in Lodi, Bergamo, and Cremona before any quarantine measures were implemented there.

  4. Jul 2022
    1. The provision of Acceptance and Commitment Therapy whets patients’ appetite for meaning only to deprive them of real nourishment by extracting the very substance on which meaning depends: its orientation toward the absolute.

      What provides meaning is its orientation toward the absolute.

    2. The idea that “we as therapists shouldn’t talk about right and wrong” has become the very different idea that there is no right and wrong in the first place.

      James Mumford says that the therapists' idea that they shouldn't talk to patients about right and wrong (meant to help create a safe space) has morphed into the claim that there IS no right and wrong.

      I would add that this seems to have bled over into society.

    3. Harmless, surely? Who would deny that it’s vital that my values be ones I’ve properly signed up for rather than had simply foisted upon me — by my parents, my teachers, my culture? But this truism — that I will more likely be able to live out a set of values if I have consciously adopted them — doesn’t exhaust the sense of what’s being said. My psychologist is implying something more radical when he insists on the pivotal importance of choosing your own values. When he claims that “values are subjective,” he is painting a picture of the world according to which the only values that exist are ones we have created. To say values are subjective is to say there is nothing independent of our own minds that answers to our talk of right and wrong. It is to say that our ethical beliefs do not track a reality which is “there anyway.” According to his picture, values are determined, not discovered, and selfhood — what it means to be a person — is therefore fundamentally about choice, not vision. It is about picking a course of action arbitrarily, not about seeing a reality that transcends you — goodness — and integrating with it.

      By saying that values are subjective you are saying that values are determined, not discovered: that there is no "reality that transcends you" containing good and bad with which you can integrate.

  5. Jun 2022
    1. A solution for deciding canonicity that derives entirely from mathematics generates the remarkable property that the answer is independent from whoever computes it. This is the sense in which a consensus mechanism is capable of being objective. There is one important caveat though: one must assume that all parties agree on a singular reference point, such as the genesis block or its hash digest. An objective consensus mechanism is one that enables any party to extrapolate the canonical view of history from this reference point.

      A solution for deciding canonicity that derives entirely from mathematics has the property that the answer is entirely independent from who computes it.

      This is the sense in which a consensus mechanism can be objective. There is one important caveat. We must assume that all parties agree on a singular reference point, such as the genesis block.

    2. It is easy to derive canonicity from trusted authorities or, according to some, from a digital voting scheme backed by a citizen identity scheme. However, trusted authorities are security holes, and relying on the government to provide trusted identification services becomes a tool of politics rather than one that is independent of it. Moreover, both solutions assume agreement about the identities and the trustworthiness of third parties. We want to reduce trust assumptions; ideally we have a solution that derives entirely from mathematics.

      Canonicity can be derived from third parties, but third parties are security holes. It would be preferable to derive canonicity from mathematics alone.

    3. Blockchains address this problem in two ways. First, they enforce a complete ordering on all transactions, which generates a tree of alternative views of history. Second, they define canon for histories, along with a fork-choice rule that selects the canonical branch from the tree of histories.

      Blockchains solve the double spending problem in two ways.

      (1) They enforce a complete ordering of all transactions, which results in a tree of possible histories. (2) They define a fork-choice rule that selects the canonical branch from the tree of all possible histories.
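
      A minimal sketch of these two ideas in Python, assuming a toy longest-chain rule as the canon (real chains use richer fork-choice rules; all names here are illustrative):

      ```python
      # Every block points to a parent, forming a tree of alternative
      # histories rooted at genesis. A fork-choice rule then selects the
      # canonical branch; here, simply the longest chain.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass(frozen=True)
      class Block:
          ident: str
          parent: Optional["Block"]  # None only for the genesis block

      def chain(tip):
          """Walk back from a tip to genesis, returning the ordered history."""
          out = []
          while tip is not None:
              out.append(tip.ident)
              tip = tip.parent
          return list(reversed(out))

      def fork_choice(tips):
          """Longest-chain rule: anyone starting from the same genesis
          block computes the same canonical tip."""
          return max(tips, key=lambda t: len(chain(t)))

      genesis = Block("genesis", None)
      a1 = Block("a1", genesis)   # branch A
      a2 = Block("a2", a1)
      b1 = Block("b1", genesis)   # competing branch B

      print(chain(fork_choice([a2, b1])))  # ['genesis', 'a1', 'a2']
      ```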

    1. If we don't create good large-scale aggregates of social data, then we risk ceding market share to opaque and centralized social credit scores instead.

      Vitalik argues that if we don't create good decentralized social credit score systems, that vacuum will be filled by centralized alternatives.

    2. This is equivalent to the famous double-spend problem in designing decentralized currencies, except instead of the goal being to prevent a previous owner of a coin from being able to send it again, here the goal is to prevent the previous key controlling an account from being able to change the key. Just like creating a decentralized currency, doing account management in a decentralized way requires something like a blockchain. A blockchain can timestamp the key change messages, providing common knowledge over whether B or C came first.

      Decentralized account management may also run into a problem analogous to the double spend problem. Someone with key A signs a message they are now using key B, and an attacker gets a hold of that key and signs a message they are using key C. An observer has no way of knowing whether the message about B or C happened first.
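
      A minimal sketch of why a common ordering resolves the conflict, with an append-only log standing in for the blockchain (the names are illustrative, not from the source):

      ```python
      # Without an agreed ordering, an observer can't tell whether the
      # legitimate rotation (A -> B) or the attacker's (A -> C) came
      # first. An append-only, timestamped log plays the blockchain's
      # role: its order is common knowledge, so the first change wins.

      log = []  # (old_key, new_key) pairs, in canonical order

      def rotate(old_key, new_key, current):
          """Accept a key change only if it comes from the key that
          currently controls the account, as determined by log order."""
          if old_key != current:
              raise ValueError(old_key + " no longer controls the account")
          log.append((old_key, new_key))
          return new_key

      current = "A"
      current = rotate("A", "B", current)      # legitimate rotation, logged first
      try:
          current = rotate("A", "C", current)  # attacker replays stolen key A
      except ValueError as e:
          print(e)                             # A no longer controls the account
      print(current)                           # B
      ```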

    1. Loans in the crypto world tend to be overcollateralized, requiring users to put up more value in crypto than what they receive in a loan. Although this works reasonably well for users who have already accumulated capital and want to use that capital in a different format (i.e. borrowing fiat currency against their crypto holdings), it doesn’t work well for the more standard reason people take out loans: because they don’t already have the money they need. Needless to say in an ecosystem whose advocates like to promise will “bank the unbanked” and help the marginalized, this is a bit of a setback. The need for these overcollateralized loans again stems from a lack of indicators to a person’s trustworthiness like those that are used in traditional finance, such as credit scores or banking records. Overcollateralized crypto loans are also made even more necessary on some anonymity-preserving loan platforms that choose not to require know-your-customer (KYC), who otherwise would see an influx of anonymous users borrowing money and making off with it.

      A consequence of not being able to narrow down who is behind an address is that the risk of lending to that address goes up. This is expressed in overcollateralized loans.

      The permissionless network lowers the barrier to entry for people around the world to access finance, but the lack of a barrier increases the risk for the one granting a loan.
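
      A worked sketch of the mechanics with hypothetical numbers (the 150% minimum ratio is illustrative; each lending protocol sets its own):

      ```python
      # Overcollateralization: lacking credit scores, the protocol demands
      # that borrowers lock up more value than they receive.

      COLLATERAL_RATIO = 1.5  # hypothetical 150% minimum

      def max_borrow(collateral_value_usd):
          """Largest loan the locked collateral supports."""
          return collateral_value_usd / COLLATERAL_RATIO

      def is_liquidatable(collateral_value_usd, debt_usd):
          """If the collateral's market value falls below the required
          ratio, the position can be liquidated even though the borrower
          never missed a payment."""
          return collateral_value_usd < debt_usd * COLLATERAL_RATIO

      print(max_borrow(150.0))              # lock $150 of ETH -> borrow $100
      print(is_liquidatable(120.0, 100.0))  # ETH drops to $120 -> True
      ```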

    2. Unlike in offline organizations and societies where centrally-controlled identifiers or even just in-person attendance are fairly successfully used to ensure one individual gets one vote, this has been a very difficult nut to crack in the crypto world, where one individual can trivially create endless new wallet addresses—known as a Sybil attack.1

      Centralized organizations have long been able to ensure one vote per participant, but in decentralized organizations it has been difficult to discern whether someone is using multiple addresses to vote.

    1. It is NOT an exception if the username is not valid or the password is not correct. Those are things you should expect in the normal flow of operation. Exceptions are things that are not part of the normal program operation and are rather rare.

      Exceptions are things that are not part of the normal program operation and are rather rare.
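
      A sketch of the distinction in Python (the names are illustrative): a failed login is modeled as an ordinary return value, not an exception.

      ```python
      USERS = {"alice": "s3cret"}  # toy in-memory user store

      def login(username, password):
          """Bad credentials are part of the normal flow of operation,
          so report them with a return value instead of raising."""
          return USERS.get(username) == password

      if not login("alice", "wrong"):
          print("Invalid username or password")  # ordinary control flow
      ```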

    2. Exceptions should be reserved for what's truly exceptional.

      Exceptions should be reserved for what is truly exceptional.

    3. Make sure the exceptions are at the same level of abstraction as the rest of your routine.

      Make sure that exceptions are at the same level of abstraction as the rest of your routine.
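
      A sketch of what that looks like in practice, with hypothetical names: a profile-loading routine shouldn't leak the storage layer's KeyError to its callers, but should translate it into an exception at its own level of abstraction.

      ```python
      class ProfileNotFound(Exception):
          """Exception at the routine's own level of abstraction."""

      _PROFILES = {"alice": {"theme": "dark"}}  # toy storage layer

      def load_profile(username):
          try:
              return _PROFILES[username]
          except KeyError:  # low-level detail of the dict implementation
              raise ProfileNotFound(username) from None  # domain-level signal

      try:
          load_profile("bob")
      except ProfileNotFound as e:
          print("no profile for", e)
      ```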

    4. Don't use exceptions if the error can be handled locally

      Don't use exceptions if the error can be handled locally.

    5. Use exceptions to notify about things that should not be ignored.

      Use exceptions for things that should not be ignored.
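
      A sketch combining this note and the previous one, with illustrative names: a recoverable input problem is handled locally, while a condition callers must not silently ignore is raised.

      ```python
      def parse_port(value):
          # Recoverable, expected problem: handle it locally with a
          # default rather than raising.
          try:
              port = int(value)
          except ValueError:
              port = 8080  # sensible local fallback

          # An out-of-range port must not be silently ignored: raising
          # forces every caller to confront it.
          if not 0 < port < 65536:
              raise ValueError("port out of range: " + str(port))
          return port

      print(parse_port("abc"))  # 8080, handled locally
      try:
          parse_port("99999")
      except ValueError as e:
          print(e)              # callers can't ignore this one
      ```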

    6. In linguistics this is sometimes called presupposition failure. The classic example is due to Bertrand Russell: "Is the King of France bald" can't be answered yes or no, (resp. "The King of France is bald" is neither true nor false), because it contains a false presupposition, namely that there is a King of France. Presupposition failure is often seen with definite descriptions, and that's common when programming. E.g. "The head of a list" has a presupposition failure when a list is empty, and then it's appropriate to throw an exception.

      Presupposition failure is a term from linguistics. The classic example, due to Bertrand Russell, is the question: Is the King of France bald? It contains a false presupposition, since there is no King of France, so it can be answered neither yes nor no.
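
      A sketch of the list example: head presupposes a non-empty list, so calling it on an empty list is a presupposition failure, and raising is appropriate.

      ```python
      def head(items):
          """'The head of the list' presupposes the list is non-empty;
          when it isn't, no return value would be truthful, so raise."""
          if not items:
              raise ValueError("head of empty list: presupposition failed")
          return items[0]

      print(head([1, 2, 3]))  # 1
      try:
          head([])            # the presupposition fails
      except ValueError as e:
          print(e)
      ```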

    7. if the function's assumptions about its inputs are violated, it should throw an exception instead of returning normally.

      If a function's assumptions about its inputs are violated, throw an exception.
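
      The same rule in miniature, with a hypothetical function whose input contract is explicit:

      ```python
      def survival_odds(probability):
          # The function assumes a probability in [0, 1]; a violated
          # assumption should raise instead of returning a meaningless
          # number as if nothing were wrong.
          if not 0.0 <= probability <= 1.0:
              raise ValueError("not a probability: " + str(probability))
          return 1.0 - probability

      try:
          survival_odds(1.7)  # violated assumption
      except ValueError as e:
          print(e)            # instead of quietly returning -0.7
      ```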

  6. Apr 2022
    1. Let's try to examine the roots of the Ukrainian conflict. It starts with those who for the last eight years have been talking about "separatists" or "independentists" from Donbass. This is a misnomer. The referendums conducted by the two self-proclaimed Republics of Donetsk and Lugansk in May 2014, were not referendums of "independence" (независимость), as some unscrupulous journalists have claimed, but referendums of "self-determination" or "autonomy" (самостоятельность). The qualifier "pro-Russian" suggests that Russia was a party to the conflict, which was not the case, and the term "Russian speakers" would have been more honest. Moreover, these referendums were conducted against the advice of Vladimir Putin.

      The referenda of Donetsk and Lugansk were not about independence but about self-determination.

  7. Dec 2021
    1. Most of the descriptions I’ve seen focus on mechanisms - block chains, smart contracts, tokens, etc - but I would argue those are implementation details and some are much more likely to succeed than others. (E.g. I think using private keys for authentication/authorization is obviously better if you can get over the UX hump - SSH has shown us that for decades.)

      Most descriptions of Web3 focus on mechanisms — blockchains, smart contracts, etc — but those are implementation details.
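
      A minimal sketch of key-based authentication of the SSH sort, using Python's cryptography package (the challenge-response framing here is illustrative):

      ```python
      # Challenge-response with an Ed25519 keypair: the service stores
      # only the public key; possession of the private key is the
      # credential, so no password ever crosses the wire.

      import os
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      private_key = Ed25519PrivateKey.generate()  # stays with the user
      public_key = private_key.public_key()       # registered with the service

      challenge = os.urandom(32)               # fresh nonce from the service
      signature = private_key.sign(challenge)  # user proves key possession

      try:
          public_key.verify(signature, challenge)  # raises if forged
          print("authenticated")
      except InvalidSignature:
          print("rejected")
      ```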

  8. Nov 2021
    1. It remains unclear whether the reduction in the neutralization sensitivity of the N501Y.V2 strain to vaccine-induced antibodies is enough to seriously reduce vaccine efficacy. First, mRNA vaccines also induce virus-specific helper T cells and cytotoxic T cells, both of which might be involved in protection against challenge. Also, the mRNA vaccines, in particular, induce such a strong NAb response that there could be enough “spare capacity” to deal with reductions in the sensitivity of the variant to NAbs. In other words, N501Y.V2 (and the related virus from Brazil) may be less sensitive to NAbs, but not to an extent that will cause widespread vaccine failure.

      Variants that show reduced sensitivity to NAbs don't necessarily mean mRNA vaccine failure

      New variants may emerge that show reduced sensitivity to NAbs.

      This may not result in vaccine failure because:

      1. The mRNA vaccines induce such a strong NAb response, there will be enough spare capacity to deal with the virus.
      2. The mRNA vaccines also induce other virus specific protection such as helper T cells and cytotoxic T cells, which may not be affected by the reduction in NAb sensitivity.
    1. The study demonstrated the capacity of a third dose to broaden antibody-based immunity and boost protection against circulating variants of concern. However, it is interesting that neutralizing responses against the Beta variant, known to markedly escape vaccine-elicited antibody responses4, were only fractionally better in those receiving a Beta-specific booster immunization.

      Choi et al. showed that a third dose broadened antibody-based immunity and boosted protection against circulating variants; yet the neutralizing response against the Beta variant was only fractionally better in those who received a Beta-specific booster.

      @gerdosi thinks this points to Original Antigenic Sin.

    1. Interestingly, all four vaccine breakthrough infection subjects who had previous COVID-19 were seropositive for anti-membrane IgG during acute infection, while no breakthrough subjects without prior COVID-19 had detectable anti-membrane antibodies in the acute infection period (Figure 1I).

      Vaccinated individuals who experience a breakthrough infection do not develop antibodies to the parts of the virus that are not encoded by the vaccine.

    1. As that tagline suggests, an assumption runs quietly through Needle Points that Covid vaccines are by and large safe, necessary, and generally beneficial for personal and public health. Therefore, opposition to them must be explained in psychological or sociological terms, because we all know that, scientifically speaking, opposition is baseless.

      This assumption runs through all appeals to the unvaccinated.

    1. The survey was vague -- the only product-specific query asked about a “Discord-native crypto wallet” -- but it showed that Discord was aware of the web3 community’s growing usage of its product and at least exploring how it might play in the space. 

      Discord might be mulling a native wallet.

    2. Discord’s bot ecosystem extends into crypto. In a recent piece on DAOs, The Generalist outlined a few integrations that have caught on with the web3 world. In particular, products like Collab.Land — which allows holders of unique tokens or NFTs to access private channels — have become essential. Other players in this subspace include Tip (accept crypto tips!) and Piggy (an RPG with crypto rewards).

      Discord integrates with web3. One example is channels that are only accessible to people holding a specific NFT.

    3. Discord allows for intra-group socialization, but also adds a social layer on top of this structure.

      Discord allows for intra-group socialization (like Slack), but also allows socialization across groups.

    4. Whereas Slack was clearly designed to be the home for one company and its employees -- each time you get invited to a new Slack workspace, you need to re-enter your email and go through the signup flow -- Discord was built for promiscuity. Discord users are expected to jump from server to server, and to slide into any other Discord user’s DMs. 

      Slack was designed for a monogamous relationship between a user and their company; Discord was designed for promiscuity.

    1. Readministration of influenza vaccine has become an annual event for much of the population, in response to both waning immunity and the appearance of variants, termed antigenic drift, necessitating updated vaccines. Even when there is no substantial drift, revaccination is recommended because of waning immunity. But antigenic drift is a constant issue and is monitored globally, with vaccine composition updated globally twice a year on the basis of recommendations from a World Health Organization consultation.

      Influenza vaccines need to be updated yearly to counter (1) waning immunity and (2) antigenic drift.

      Antigenic drift is monitored globally and the WHO makes recommendations for the updates.

    2. Thus, the value of influenza vaccines, now given to as many as 70% of people in some age groups, lies not in eliminating outbreaks but in reducing them and preventing severe complications.

      The goal of influenza vaccines is to prevent severe complications and to reduce outbreaks — not to prevent them.

      As many as 70% of some age groups get influenza vaccines.

    3. Vaccine effectiveness against laboratory-confirmed symptomatic infection is never higher than 50 to 60%, and in some years it is much lower.

      Influenza vaccine effectiveness against symptomatic infection is never higher than 50-60%, and in some years it is much lower.

    4. Eliminating Covid-19 seemed theoretically possible, because the original 2002 SARS virus ultimately disappeared.

      Eliminating SARS-CoV-2 was deemed plausible, because SARS-CoV-1 had been eliminated.

    5. The effect on asymptomatic infections was a welcome surprise, because it has been thought that most vaccines for respiratory illnesses, including influenza, are “leaky” — that is, they allow some degree of asymptomatic infection and are better at preventing symptomatic infection.

      Most vaccines for respiratory illnesses are leaky.

      The efficacy the mRNA vaccines showed in preventing asymptomatic transmission was therefore a welcome surprise.

    1. The spike protein was a target of human SARS-CoV-2 CD8+ T cell responses, but it is not dominant. SARS-CoV-2 M was just as strongly recognized, and significant reactivity was noted for other antigens, mostly nsp6, ORF3a, and N, which comprised nearly 50% of the total CD8+ T cell response, on average. Thus, these data indicate that candidate COVID-19 vaccines endeavoring to elicit CD8+ T cell responses against the spike protein will be eliciting a relatively narrow CD8+ T cell response compared to the natural CD8+ T cell response observed in mild to moderate COVID-19 disease.

      When looking at CD8+ T cell responses, the spike protein was not immuno-dominant. M was just as strongly recognized and significant reactivity was observed for other antigens.

    2. In the case of CD4+ T cell responses, data for other coronaviruses found that spike accounted for nearly two-thirds of reported CD4+ T cell reactivity, with N and M accounting for limited reactivity, and no reactivity in one large study of human SARS-CoV-1 responses (Li et al., 2008). Our SARS-CoV-2 data reveal that the pattern of immunodominance in COVID-19 is different. In particular, M, spike, and N proteins were clearly co-dominant, each recognized by 100% of COVID-19 cases studied here. Significant CD4+ T cell responses were also directed against nsp3, nsp4, ORF3s, ORF7a, nsp12, and ORF8. These data suggest that a candidate COVID-19 vaccine consisting only of SARS-CoV-2 spike would be capable of eliciting SARS-CoV-2-specific CD4+ T cell responses of similar representation to that of natural COVID-19 disease, but the data also indicate that there are many potential CD4+ T cell targets in SARS-CoV-2, and inclusion of additional SARS-CoV-2 structural antigens such as M and N would better mimic the natural SARS-CoV-2-specific CD4+ T cell response observed in mild to moderate COVID-19 disease.

      When looking at CD4+ T cell responses (CD4+ T cells coordinate the immune response by activating other immune cells), other proteins besides spike, such as M and N, were co-dominant. Significant responses were also detected against other parts of SARS-CoV-2.

    1. 4) More viral replication produces more particles that stimulate stronger immune responses inside and outside of cells. The immune system is able to recognize and differentiate active viral replication inside cells compared to replication of self-DNA and transcription into mRNA. As viruses infect neighboring cells and spread, this results in a strong signal to local immune cells that help activate T and B cells. Although an mRNA vaccine mimics this signal, spike proteins can’t replicate beyond the spike-encoding mRNA contained in the vaccine, and as a result the signal isn’t as strong and doesn’t affect as many cells, limiting the strength and durability of downstream immunity. This is overcome to some extent with a second dose and with a booster vaccination, which will improve the quality of antibody binding in some indviduals, but not others.

      Viral replication triggers a stronger immune response because many more cells are involved in triggering it than the limited number of cells that express spike after vaccination.

    2. 3) Most SARS-CoV-2 vaccines only stimulate immunity against the spike protein. The spike protein of coronaviruses allows for virus attachment to and invasion of host cells. A strong immune response to the spike protein will result in the production of antibodies that prevent the virus from binding the viral receptor (ACE2) on human cells, thus preventing or slowing viral spread. The vaccine consists of mRNA that only codes for the SARS-CoV-2 spike protein, and is packaged to allow cells to uptake spike mRNA and translate the message into protein. That makes those muscle cells look like they’ve been infected to the immune system, which responds with activation and multiplication of spike-recognizing T and B cells. In contrast to this limited scope of immunity in response to vaccination, T and B cells are activated in response to infection that recognize all parts of the virus, including the nucleocapsid and other viral proteins. Although antibodies to these proteins are less likely to block viral entry of host cells, more T cells will recognize these antigens and will be able to kill infected cells due to a broader activation of the immune repertoire. However, this also increases the opportunity for autoimmune pathology (as does any strong immune response), which is an important contributor to severe SARS-CoV-2 infection. In other words, stronger protective immunity comes with a tradeoff of a higher potential for immune destruction and long-term effects.

      Spike-based vaccines only induce an immune response to spike epitopes, and not other parts of the virus such as the nucleocapsid.

    3. 2) Viral antigen may persist after infection, but is less likely to persist after vaccination. This is an important difference between influenza vaccine-induced and infection-induced immunity. Even after symptoms have resolved and live virus has been cleared, the lungs still harbor a reservoir of influenza proteins and nucleic acids that continuously stimulate the development of immunity for extended periods of time. That doesn’t happen in response to vaccine injection, where inactivated virus stimulates an immune response that is cleared quickly and efficiently. Scientists are working on ways to develop vaccines that mimic this antigen persistence to stimulate longer-lasting immunity to influenza vaccination, with some proposing viral antigen packaged in slow-degrading nanoparticles. It is very likely that antigen persistence also occurs during SARS-CoV-2 infection, as viral mRNA and antigens have been detected for months in the small intestines of previously infected individuals. It is unknown how viral nucleic acids and proteins persist after clearance of infection, but it appears to be an important factor in the development of durable antiviral immune memory. In contrast, spike proteins produced by mRNA vaccination may only persist for a few days, thus limiting the time for stimulation and subsequent memory development.

      Viral antigens in influenza are more likely to persist and stimulate continued maturation of the immune response after a natural infection than after vaccination.

    4. In response to a vaccine, the immune response starts in the deltoid muscle of the arm. The spike protein of the virus is produced in muscle cells, and spike-recognizing T and B cells in the arm-draining lymph nodes (in the armpit) are activated. The T cells that are activated do not express lung-homing molecules, and neither do the memory T cells that develop later. Activated B cells secrete virus-neutralizing antibodies, but little mucosal IgA is produced. If an infection occurs, memory cells from vaccination will respond quickly, but there won’t be many located in or immediately targeted to the lung, and viral-binding IgA won’t immediately block airway cell-invading viruses.

      In response to the vaccine, the immune response starts in the deltoid muscle of the arm. The spike protein is produced in muscle cells, which activates spike-recognizing T and B cells in the arm-draining lymph nodes.

      Unlike the T cells that are activated during a respiratory infection, these T cells do not express lung-homing molecules, nor do the memory T cells that develop later.

      If an infection occurs, there won't be many memory cells in the lung, and little mucosal IgA is produced to fend it off.

    5. In response to a respiratory viral infection, an immune response begins after viruses infect and spread among cells in the airways. This results in the activation of many airway and mucosal-specific immune responses. In the lungs, the lymphatic system drains to lung-associated lymph nodes, where T cells and B cells become activated after recognizing their specific antigen, which consists of pieces of viral proteins that can bind to the T or B cell surface receptors. In lung-associated lymph nodes, these cells are “imprinted” by activation of specific molecules that help them migrate to lung tissues. B cells get specific signals to make antibodies, including a specific type called IgA that is secreted into airways. When an individual recovers from infection, some of these immune cells become long lasting lung-resident and memory cells that can be activated and targeted much more quickly during a reinfection and thus limit spread in the lungs and disease severity.

      The immune response to a respiratory viral infection starts with airway- and mucosal-specific immune responses.

      The lymphatic system of the lungs drains to lung-associated lymph nodes, where B and T cells become activated when they recognize specific viral antigens. There they are "imprinted" with molecules that help them migrate to lung tissue.

      B cells produce IgA — a type of antibody associated with mucosal immunity — and secrete it into the airways.

      Some of these immune cells become long-lasting lung residents and memory cells that, due in part to their new residence, can be activated quickly and easily upon reinfection.

  9. Sep 2021
    1. Neither the vaccinated nor unvaccinated individuals are to be blamed.

      Neither the vaccinated nor the unvaccinated should be blamed.

    2. As I’ve been explaining in one of my previous articles (https://trialsitenews.com/why-is-the-ongoing-mass-vaccination-experiment-driving-a-rapid-evolutionary-response-of-sars-cov-2/), this selection was most likely due to overcrowding (e.g., in favelas or slums in certain cities in Brazil or South-Africa) or possibly even due to prolonged infection-prevention measures in other regions (as prolonged infection-prevention measures lead to suppression of innate immunity and could now, indeed, provide a competitive advantage to more infectious variants).

      The selection that led to the emergence of these variants was most likely due to either overcrowding in certain cities in Brazil or South-Africa, or possibly due to prolonged infection-prevention measures in other regions. Prolonged infection-prevention measures lead to a suppression of innate immunity, which confers a competitive advantage to more infectious variants.

    3. Boosters and/ or extending mass vaccination campaigns to younger age groups will only expedite the occurrence of viral resistance to the vaccines and cause substantial harm to both the unvaccinated and vaccinated.

      Boosters and vaccination of younger age groups will expedite the emergence of variants that are resistant to vaccine-induced immunity.

    4. The ‘more humane’ response, therefore, is to treat people at an early stage of the disease instead of preventing herd immunity from getting established.

      A better strategy for bringing the pandemic under control is treating people at an early stage of the disease, instead of pursuing a mass vaccination campaign which prevents herd immunity from being established.

    5. It should suffice to ask him how mass vaccination is going to tame the dramatic expansion of increasingly infectious viral variants as it is now generally acknowledged that mass vaccination will not enable herd immunity and as it is too well understood that no pandemic can be tamed without achieving herd immunity.

      Goldman doesn't provide an answer to the question of how mass vaccination is going to bring under control the expansion of increasingly infectious variants.

      It is now generally acknowledged that mass vaccination will not enable herd immunity.

      It is well understood that no pandemic can be brought under control without achieving herd immunity.

    6. On the contrary, the unvaccinated are the only hope for the human population to build herd immunity, either by virtue of their innate immunity (if asymptomatically infected) or by virtue of their naturally acquired immunity (if symptomatically infected).

      The unvaccinated are the only hope for the population to build herd immunity through either innate immunity or through naturally acquired immunity.

    7. Deaths under the unvaccinated will not lead to diminished viral infectivity as the unvaccinated are not a breeding ground for more infectious variants.

      [Refuting Goldman's point about the unvaccinated being the breeding ground for variants]

      Deaths among the unvaccinated will not lead to diminished viral infectivity, because the unvaccinated are not a breeding ground for more infectious variants.

    8. Why would the unvaccinated even survive if – according to Goldman – they’re not vaccinated and hence, not protected? It’s, of course, thanks to their innate immunity which they should try to boost and, more importantly, preserve by avoiding repeated exposure to the circulating (more infectious) variants.

      [Refuting this point]

      The unvaccinated survive by virtue of their innate immunity, which Goldman doesn't consider.

      Innate immunity can be preserved by avoiding repeated exposure to the circulating more-infectious variants.

      The unvaccinated should try to boost and preserve their innate immunity to avoid disease.

    9. Darwinian selection may also yet solve the problem with a much crueler calculus. The unvaccinated will either get sick and survive, and therefore be the equivalent of vaccinated, or they will die and therefore be removed as breeding grounds for the virus.

      Goldman claims that the problem of the emergence of a variant which escapes vaccine-induced immunity might solve itself in a crueler way. The unvaccinated will either survive and be equivalent to being vaccinated, or they will die and no longer be a breeding ground for the virus.

    10. For lack of any fundamental knowledge in immunology, Goldman doesn’t understand that exactly the opposite applies!

      [Refuting this point]

      The emergence of a variant which evades vaccine-induced immunity can be avoided by abandoning universal vaccination.

    11. This dire prediction need not occur if universal vaccination is adopted, or mandated, to protect everyone, including those who are already vaccinated.

      Goldman claims that the emergence of a variant which escapes vaccine-induced immunity — which would put the vaccinated at risk once again — would not occur if universal vaccination is adopted or mandated.

    12. Again, there is only one single culprit: MASS vaccination across all age groups during a pandemic of more infectious variants.

      [Agreeing with the premise, but identifying a different cause]

      If a variant emerges which escapes vaccine-induced immunity, the cause will have been mass vaccination across all age groups during a pandemic of more-infectious variants.

    13. Progress we have made in overcoming the pandemic will be lost. New vaccines will have to be developed. Lockdowns and masks will once again be required. Many more who are currently protected, especially among the vulnerable, will die.

      Goldman claims that if this happens, our progress in overcoming the pandemic will be lost. We will have to develop new vaccines, lockdowns and masks will once again be required. Many who are currently protected against severe disease and death will die.

    14. A variant could arise that is resistant to current vaccines, rendering those already vaccinated susceptible again.

      Goldman claims a variant could arise that is resistant to the current vaccines, rendering the vaccinated susceptible to severe disease once again.

    15. Because of mass vaccination, there is now a large part of the population that exerts increasing S-directed immune selection pressure that provides more infectious variants to gain a strong competitive advantage and reproduce more effectively on a background of highly S-specific neutralizing antibodies.

      [GVD disputes this claim]

      Mass vaccination has created a situation where a large part of the population is exerting spike-protein-directed immune selection pressure on the virus, which confers a competitive advantage to more-infectious variants.

    16. The real danger is a future variant, which will be the legacy of those people who are not getting vaccinated providing a breeding ground for the virus to continue to generate variants.

      Goldman claims that a future variant is the primary risk to be considered, and that if such a variant emerges it will have been the result of the unvaccinated, because they provide a breeding ground for the virus to continue generating variants.

    17. Nevertheless, the Delta variant is exhibiting increased frequency of breakthrough infections among the vaccinated (4).

      Goldman claims we're seeing an increased frequency of breakthrough infections among the vaccinated.

    18. The more infectious variants that started circulating before mass vaccination had already been subject to S-directed immune selection pressure! How could one otherwise explain that all these variants developed mutations that were converging towards immunodominant domains in the S protein?

      [Refuting Goldman's point]

      [Definition] Immunodominant: the ability of a specific antigen or epitope to induce a measurable or clinically meaningful immune response when other structurally related antigens do not.

      The more-infectious variants that started circulating before mass vaccination had already been subjected to S-directed immune pressure.

      This is the most plausible explanation for these variants developing mutations that converged towards immunodominant domains in the spike protein.

    19. So far, we have been lucky that the variants that have emerged can still be somewhat controlled by current vaccines, probably because these variants evolved in mostly unvaccinated populations and were not subject to selective pressure of having to grow in vaccinated hosts.

      Goldman claims we're lucky that the vaccines are still effective against the variants that emerged.

      He claims this is most likely due to these variants having evolved in unvaccinated populations, not subject to the selective pressure of vaccinated hosts.

    20. Yes, natural selection of more infectious variants happens within the vaccinated population, but not in the non-vaccinated population. This already explains why there was a fall in cases when the lockdown measures in the UK were abandoned and society opened up again. Opening-up society resulted in absorption of more infectious variants (i.e., the Delta variant) by non-vaccinated people. In this population, the Delta variant had no longer a competitive advantage (as unvaccinated individuals can effectively deal with ALL Sars-CoV-2 lineages).

      [Partially agreeing with Goldman]

      Natural selection of more-infectious variants (such as escape variants) happens in the vaccinated population, but not in the unvaccinated.

      When the UK re-opened it resulted in an absorption of more infectious variants by non-vaccinated people.

      Due to this absorption, and the non-specific response mounted by the unvaccinated, the Delta variant no longer had a competitive advantage. The result was that cases fell.

    21. When this occurs within a background of a largely vaccinated population, natural selection will favor a variant that is resistant to the vaccine.

      Goldman claims that against a background of a largely vaccinated population, natural selection will favor variants that are resistant to the vaccine.

    22. Goldman’s interpretation does not take into account that unvaccinated people do have protective immunity, either due to innate or naturally acquired immunity.

      [conclusion]

      Goldman's interpretation does not take into account that unvaccinated people do have protective immunity, either innate or naturally acquired.

    23. The unvaccinated part of the population is, therefore, anything but a reservoir for the virus! On the contrary, their capacity to eliminate the virus in a non-selective manner will lead to a diminished concentration of more infectious immune escape variants in the unvaccinated population, and even in the overall population provided the unvaccinated part of the population represents a significant part of the overall population!(which is now increasingly becoming problematic).

      Because the unvaccinated mount a non-specific response, they get rid of any advantage that more-infectious variants might have had.

      If the unvaccinated constitute a significant portion of the population, this could lead to diminished concentrations of escape variants in the general population.

    24. In contrast, the unvaccinated do not provide such competitive advantage to more infectious variants as they eliminate Sars-CoV-2 lineages without exerting immune selection pressure on viral infectiousness (i.e., on spike protein). This is because unvaccinated either get asymptomatically infected, i.e., they overcome the infection thanks to their innate immunity, which is known to be multi-specific ( i.e., NOT variant-specific) or they contract symptomatic infection, which equally results in multi-variant-specific acquired immunity. In none of these cases does an unvaccinated person exert any immune selection pressure on viral infectiousness, i.e., on spike protein.

      There are essentially two possible courses of infection for the unvaccinated:

      1. They get asymptomatically infected, where they overcome the virus through their innate immunity, which is multi-specific.

      2. They get symptomatically infected, which results in an acquired immune response which is multi-variant-specific.

      In neither case is a narrowly spike-specific response mounted, so the infection of an unvaccinated person does not exert targeted immune selection pressure on viral infectiousness (i.e., on the spike protein).

      Thus, the unvaccinated do not give more infectious variants a competitive advantage, because they do not exert immune selection pressure on viral infectiousness.

    25. When people get jabbed in large numbers with S(pike)-based vaccines, this undoubtedly leads to massive S-directed immune selection pressure in the vaccinated part of the population.

      Large numbers of people getting vaccinated with a spike-based vaccine leads to massive spike-directed immune selection pressure among the vaccinated portion of the population.

    26. It seems logical that more infectious variants can only enjoy a competitive advantage on a background that exerts selective immune pressure on viral infectiousness, i.e. on spike protein (as the latter is responsible for viral infectiousness).

      More infectious variants will only experience a competitive advantage in an environment where there is selective pressure on infectivity.

      The infectivity of SARS-CoV-2 is mostly determined by the spike protein.

    27. As Goldman has no clue about immunology, he does not understand that the overall (i.e., population-level) immune status of the population constitutes the barrier that is critical to Darwin’s selection and survival of the fittest (as virus replication and transmission critically depends on the ‘resistance’ mounted by the host immune system).

      Variants emerge as a result of natural selection, which is governed by barriers the virus experiences in replication and transmission.

      At the population level these barriers are determined by the individual hosts' immune status - the ability of the host to demonstrate an immune response or to defend itself against disease or foreign substances.

      Thus, by leaving this information out, Goldman incorrectly concludes that a population of unvaccinated individuals is somehow sufficient for variants to emerge.

    28. SARS-CoV-2 has shown that it can mutate into many variants of the original agent (3). An unvaccinated pool of individuals provides a reservoir for the virus to continue to grow and multiply, and therefore more opportunities for such variants to emerge.

      Goldman claims that:

      SARS-CoV-2 has shown that it can mutate into many variants.

      An unvaccinated population provides opportunities for the virus to replicate and transmit and thus opportunities for such variants to emerge.

    29. Goldman doesn’t seem to realize that protection against disease has nothing to do with Darwin’s principles of natural selection and survival of the fittest. In case of viruses, the latter have to do with replication and transmission. So, what viruses care about is barriers that prevent them from replicating / transmitting, not from external influences that prevent them from being more or less pathogenic. This is to say that natural selection of viruses in the presence of neutralizing antibodies does not occur as a result of vaccine-mediated pressure on viral pathogenicity.

      Natural selection and survival of the fittest in viruses is governed by the barriers viruses experience to their ability to replicate and transmit, not by barriers to their ability to make us more or less sick (pathogenicity).

      Therefore, natural selection of SARS-CoV-2 is not driven by vaccine-mediated protection against severe illness, i.e., by pressure on pathogenicity.

      If natural selection does indeed occur, it will necessarily be the result of barriers the virus experienced in its ability to replicate and transmit.

    30. In addition, Goldman doesn’t seem to realize that more infectious variants were already circulating before mass vaccination started.

      More infectious variants were already circulating before mass vaccination started.

    31. In 1859, Charles Darwin published On the Origin of Species (2), in which he outlined the principles of natural selection and survival of the fittest. The world presently has the unwelcome opportunity to see the principles of evolution as enumerated by Darwin play out in real time, in the interactions of the human population with SARS-CoV-2. The world could have easily skipped this unpleasant lesson, had there not been such large numbers of the human population unwilling to be vaccinated against this disease.

      Goldman claims that the world — thanks to the unvaccinated — will now witness Darwin's principles of natural selection and survival of the fittest play out in the interactions between the human population and SARS-CoV-2.

    32. Imai et al. (1) have characterized yet another variant of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus responsible for COVID-19, this one originating in Brazil. The good news is that it appears that vaccines currently available are still expected to provide protection against this variant. However, what about the next variant, one we have not seen yet? Will we still be protected?

      With new variants emerging, Goldman questions whether the vaccinated will remain protected.

    1. It might strike you as odd that a technical decision about vaccine booster shots would be a “blow” to a government, and frankly it is odd. But that is where we now are: politicians are forming their own strong views about vaccines, quite apart from their expert committees. They have a clear bias towards showing taking action and moving fast, especially since they stand accused of doing neither at the start of the pandemic; they also watch opinion polls, which in this case showed a 76% majority of people in favour of booster shots.

      Why is a technical decision about vaccine boosters framed as a blow to governments?

    1. And vaccines sit in an awkward spot at the intersection of science, medicine and public health, which do not mix. Science is about examining things as carefully as possible with no agenda. Public health is ALL agenda, identifying one single course of action and trying to make people follow it.

      Tension between science and public health.

      Science is about examining all evidence without an agenda. Public health is all about agenda, identifying one course of action and trying to make people follow it.

    1. So, the mutation rate tells us (technically, this can be defined in context, but usually for a virus…) how many single nucleotide polymorphisms (SNPs, like "snips") we expect to see from one viral generation to the next. But the mutation frequency measures the abundance of SNPs relative to the virions in a generational pool.

      The Mutation Rate tells us how many Single Nucleotide Polymorphisms are introduced from one generation to the next.

      The Mutation Frequency tells us how many Single Nucleotide Polymorphisms already exist relative to a generational pool.

    2. Reading the Competing Interests Statement gave me a rash.AP, PJL, ES, MJN, JC, AJV, and VS are employees of nference and have financial interests in the company. nference is collaborating with Moderna, Pfizer, Janssen, and other bio-pharmaceutical companies on data science initiatives unrelated to this study. These collaborations had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. JCO receives personal fees from Elsevier and Bates College, and receives small grants from nference, Inc, outside the submitted work. ADB is supported by grants from NIAID (grants AI110173 and AI120698), Amfar (#109593), and Mayo Clinic (HH Shieck Khalifa Bib Zayed Al-Nahyan Named Professorship of Infectious Diseases). ADB is a paid consultant for Abbvie, Gilead, Freedom Tunnel, Pinetree therapeutics Primmune, Immunome and Flambeau Diagnostics, is a paid member of the DSMB for Corvus Pharmaceuticals, Equilium, and Excision Biotherapeutics, has received fees for speaking for Reach MD and Medscape, owns equity for scientific advisory work in Zentalis and nference, and is founder and President of Splissen Therapeutics. MDS received grant funding from Pfizer via Duke University for a vaccine side effect registry. JH, JCO, AV, MDS and ADB are employees of the Mayo Clinic. The Mayo Clinic may stand to gain financially from the successful outcome of the research. This research has been reviewed by the Mayo Clinic Conflict of Interest Review Board and is being conducted in compliance with Mayo Clinic Conflict of Interest policies.

      The Puranik et al. paper contains a competing interest statement which mentions that some of the authors work for a company that collaborates with Moderna and Pfizer.

    3. The paper found a [much] lower efficacy reduction than is being seen in Israel for Pfizer's vaccine, or in the UK where both mRNA vaccines are in use. But I'm not going to go much further than that because I doubt many serious people will take this particularly seriously, except as biased and conflicted secondary evidence of efficacy against variants that is contradicted by that from other nations.

      The paper seems biased because of the stated competing interests, the needless inclusion of "highly-effective mRNA vaccines" in their title and their results showing a much lower efficacy reduction than seen in Israel and the UK.

    4. The other side of the argument seems to have a steep uphill climb, and I've so far not talked to anyone in genetics who feels otherwise.

      The argument that the unvaccinated are the source of the variants is much more difficult to support, and Mathew has so far not spoken to anyone in genetics who supports it.

    5. Multiple recent papers have emerged that relate to the debate over whether vaccinated or unvaccinated people drive the emergence of SARS-CoV-2 variants since I wrote my first article on the topic. Let's take a look at what they tell us, and how vaccine partisans are misinterpreting them as publicly as possible.

      Multiple papers have come out which relate to the question of whether variants are driven by the unvaccinated population or the vaccinated population. The results of these papers are being misinterpreted in public by some.

    6. Sadly, this is how the public winds up being misinformed at about every turn during the pandemic.

      These incorrect interpretations of the data, biased research, and further amplification on Twitter are unfortunately how the public winds up being misinformed at nearly every turn during the pandemic.

    7. Predictably, Edward Nirenberg seems to have jumped on the incorrect interpretation as well. Since he is young, with ample opportunity to shed the notion that he understands more than he does (trained to that point no doubt by the educational institutions that fail us all), he has plenty of time to establish a locus of reality. However, his influential pandemic-era writing reads like a peacock-display of understanding much more than he does while cross-troping Reality Show political phrases like "deplatform [disease]" and "[vaccine] nationalism". Sigh. I wonder what evolutionary theory he would cite if he cited any.

      Edward Nirenberg, an influential pandemic-era writer — whose writing reads like he understands more than he actually does — further amplified Eric Topol's erroneous take on Twitter.

    8. Now, here he is, declaring a myth debunked while overgeneralizing an incorrect interpretation of data that says exactly the opposite of what he seems to think it does. And his many thousands of followers have no sense of how little he understands statistics, much less statistical genetics.

      Eric Topol declares a myth debunked to his many thousands of followers, based on the erroneous assumption that Yeh and Contreras' interpretation generalizes, even though that interpretation is itself wrong.

      Meanwhile his followers don't realize how little he understands about statistics or statistical genetics.

    9. The problems get substantially compounded by the viral variant of abused reputation. Specifically, Scripps Research Institute founder Eric Topol, a man whose great achievement in genetics was an undergraduate paper opining about prospects for genetic therapy, got ahold of the paper and tweeted out what looks to be an even worse interpretation than that of the authors: that the result, if true, even as misinterpreted, necessarily generalizes:

      One implication of Yeh and Contreras' misinterpretation is Eric Topol amplifying it by tweeting out an even worse interpretation, assuming that the erroneous conclusion also generalizes.

    10. The correct interpretation is that vaccination campaigns channeled mutations through the bottleneck toward their moment of immune escape.

      The correct interpretation of the Tajima's D values going negative, given that the variants emerged in geographies where vaccine trials were taking place, is that the vaccine campaigns created a bottleneck which selected for mutations that could escape immunity.

    11. This is the second graph in the paper, and it shows values of Tajima's D that go negative in India and the UK in particular---just prior to Delta variant breakouts! While I hate to quote Wikipedia, there is a simple table that explains what I noted above about the genomic "resets":

      In Yeh and Contreras' graph, Tajima's D goes negative in India and the UK just prior to Delta variant breakouts.

      A negative Tajima's D value indicates a recent selective sweep — a population expansion after a recent bottleneck.

    12. Still, as I said, I am glad to know the information in the paper. While their confusion over the meaning of the information seems consistent, their computations and graphs give us important confirmation about what is really going on. This includes their Tajima's D computations.

      While the authors' [Yeh and Contreras] interpretation of their results is incorrect, their computations and graphs give us important confirmation of what is really going on.

    13. Looking back at the introduction, I see a hint that these authors are not particularly deep in the statistical genetics field.

      There are hints in the Yeh and Contreras paper which point to the authors not being well versed in the statistical genetics field.

    14. This graph does not tell us that the rate of mutation (SNPs per generation) changes in any way. In fact, that rate likely does not to any appreciable degree, though I suspect that many readers confirming their media-seeded "unvaccinated are variant factories" biases interpret it that way.

      The graph does not show that the rate of mutation changes in any way, and it probably doesn't.

    15. The obvious conclusion is that the self-similar viral pools are more likely to be vaccine resistant.

      Because self-similarity is a result of selection pressure, we can conclude the vaccinated viral pool is under selection pressure.

      Jesse: Mathew assumes this selection pressure is exerted by vaccine-induced immunity, which makes that viral pool more likely to be vaccine resistant.

    16. So, which virions do you imagine are being selected for in a highly vaccinated pool?I'll give you one guess, and it's not the virions most easily neutralized by vaccination.

      In a highly vaccinated population, the virions that are selected for are the ones which most readily escape vaccine-induced immunity.

    17. What we see in the graph above is that greater vaccination results in greater selection pressure to eliminate those virions least like the others.

      We can infer from the graph that high vaccination coverage results in greater selection pressure to eliminate those virions least like the others.

      Jesse: I'm not sure if we can directly infer selection pressure from such a graph, but it seems like a plausible argument. But it would seem that the selection pressure is not simply directed at removing non-self-similarity, but rather directed towards avoiding vaccine-induced immunity, and the self-similarity of the virion population is a consequence of that.

    18. But vaccine trials involve thousands of individuals in a population of many millions.There is no conceivable way for a few thousand vaccinated to drive the evolution of new variants.But this takes no conditional into account. While geography is certainly not the most restrictive conditional (after all, we're talking about...vaccine-resistant variants as per antibody escape, not T cell or PK cell escape), this response disrespects the coincidence factor. Take the individual probability coincidence factor to the fourth power and we get something like a black swan.

      Morris' argument is incorrect because he does not take into account the coincidence of multiple factors, such as:

      1. Individuals under selection pressure (e.g. those undergoing the trials) are more likely to be sources of variants
      2. The odds of variants emerging in exactly the geographies where the vaccine trials occurred, four times in a row, are vanishingly small.
    19. The closest thing to that was an email exchange with Biostatistics Professor Jeffrey Morris, who began with the position, "All of these variants were around before vaccination started so, if vaccination produces any variants, they haven’t appeared yet." After I explained his mistake, he shifted to,But vaccine trials involve thousands of individuals in a population of many millions.There is no conceivable way for a few thousand vaccinated to drive the evolution of new variants.

      Biostatistics Professor Jeffrey Morris argued that there is no way a few thousand vaccinated can drive the evolution of new variants, therefore the variants could not have been caused by the vaccine trials.

    20. And given conditions such as when and where the variants emerged...I'll throw my wager on those receiving COVID-19 vaccines being the source...of...variants that fall into the category of vaccine-resistant strains.

      Given the temporal and geographical vicinity of the emergence of the variants to the vaccine trials, it's more likely that the vaccinated are the source of the variants.

    21. On July 30, Rella et al reported in their paper Rates of SARS-CoV-2 transmission and vaccination impact the fate of vaccine-resistant strains on the results of computer simulations testing for variant emergence during the ups-and-downs of seasonal infection waves. From the abstract:As expected, we found that a fast rate of vaccination decreases the probability of emergence of a resistant strain. Counterintuitively, when a relaxation of non-pharmaceutical interventions happened at a time when most individuals of the population have already been vaccinated the probability of emergence of a resistant strain was greatly increased. Consequently, we show that a period of transmission reduction close to the end of the vaccination campaign can substantially reduce the probability of resistant strain establishment.

      Rella et al. reported on the results of computer simulations testing for variant emergence during the ups-and-downs of seasonal infection waves.

      They found that fast vaccination decreases the probability of emergence of a resistant strain.

      They also found that, counterintuitively, when non-pharmaceutical interventions are relaxed at a time when most of the population has already been vaccinated, the likelihood of a resistant strain emerging is greatly increased.

      Lastly, Rella et al. claim that reducing transmission close to the end of the vaccination campaign can substantially reduce the likelihood of a resistant strain establishing itself.
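
      To make the dynamic concrete, here is a minimal toy sketch in Python (all parameters assumed for illustration; Rella et al.'s actual model is stochastic and more detailed). A wild-type strain spreads while the population is progressively vaccinated; rare mutations seed a resistant strain that can also infect the vaccinated, and relaxing transmission reduction late in the campaign hands that strain a large host pool.

      ```python
      # Toy two-strain sketch of the Rella et al. dynamic (illustrative only).
      def simulate(days=400, pop=1_000_000, beta=0.25, gamma=0.1,
                   vax_per_day=2_000, p_mut=1e-5, relax_day=None):
          S, V = pop - 100.0, 0.0        # unvaccinated susceptibles / vaccinated
          i_wt, i_res = 100.0, 0.0       # currently infectious, by strain
          for day in range(days):
              # relaxing non-pharmaceutical interventions doubles transmission
              b = beta * 2 if relax_day is not None and day >= relax_day else beta
              jabs = min(S, vax_per_day)                # vaccinate susceptibles
              S, V = S - jabs, V + jabs
              new_wt = b * i_wt * S / pop               # wild-type infects only S
              new_res = b * i_res * (S + V) / pop       # resistant infects S and V
              mut = new_wt * p_mut                      # rare escape mutations
              frac_S = S / max(S + V, 1.0)
              S = max(0.0, S - new_wt - new_res * frac_S)
              V = max(0.0, V - new_res * (1.0 - frac_S))
              i_wt += new_wt - mut - gamma * i_wt
              i_res += new_res + mut - gamma * i_res
          return i_res

      # Relaxing measures near the end of the campaign strictly increases the
      # resistant strain's final prevalence in this toy model:
      print(simulate())               # interventions kept in place
      print(simulate(relax_day=300))  # relaxed once most people are vaccinated
      ```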

    22. For those not familiar, Tajima's D is a statistical test conjured by Japanese researcher Fumihiro Tajima to test the "neutral mutation hypothesis". Like my wife sometimes does, Tajima studied mutations in fruit flies (Drosophila).

      Tajima's D is a statistical test of the neutral mutation hypothesis, devised by Fumihiro Tajima.
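
      For reference, here is a self-contained implementation of the statistic using the standard published formula (Tajima 1989); the alignment at the bottom is a made-up toy example, not data from any of the papers discussed.

      ```python
      import math

      def tajimas_d(sequences):
          """Tajima's D for an aligned set of equal-length sequences.
          D < 0 suggests an excess of rare alleles (e.g. after a selective
          sweep or a population expansion following a bottleneck); D > 0
          suggests balancing selection or a population contraction."""
          n = len(sequences)
          # segregating sites: positions where more than one base occurs
          seg = [i for i in range(len(sequences[0]))
                 if len({s[i] for s in sequences}) > 1]
          S = len(seg)
          if S == 0:
              return 0.0
          # pi: mean number of pairwise differences between sequences
          pairs = n * (n - 1) / 2
          pi = sum(sum(a[i] != b[i] for i in seg)
                   for j, a in enumerate(sequences)
                   for b in sequences[j + 1:]) / pairs
          # standard normalizing constants from Tajima's 1989 paper
          a1 = sum(1 / i for i in range(1, n))
          a2 = sum(1 / i ** 2 for i in range(1, n))
          b1 = (n + 1) / (3 * (n - 1))
          b2 = 2 * (n ** 2 + n + 3) / (9 * n * (n - 1))
          c1 = b1 - 1 / a1
          c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
          e1 = c1 / a1
          e2 = c2 / (a1 ** 2 + a2)
          return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1))

      # Toy alignment full of rare singleton mutations -> negative D
      print(tajimas_d(["AAAA", "AAAT", "AATA", "AAAA", "ATAA", "AAAA"]))
      ```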

    23. Mutations just happen. They are then selected according to contextual fitness in their environments. Left to random chance (sans vaccine), the locus of genetic diversity can get large, and so in the rare instances a single virion piles up enough SNPs to affect a functional domain (one functional specifically in escaping immunity), it nearly always get outcompeted locally before it can establish itself in another host. Even worse---en route to piling up multiple SNPs, such virions become increasingly unstable (less fit) as per Muller's ratchet.

      Mutations are random and happen all the time. They are then selected locally according to the contextual fitness of their environment.

      Without selection pressure the locus of genetic diversity can get large. In this setting if a virion piles up enough mutations to affect a functional domain, it nearly always gets outcompeted locally before it can establish itself in another host.

      Additionally, as a virion piles up mutations, it's increasingly likely to become unstable (less fit) as it is subjected to Muller's ratchet.

    24. They leap---quite incorrectly---to the notion that the suppression of mutation frequency equates to suppression of emergent mutations.

      Restating an earlier argument:

      The authors misinterpret their own results by equating a suppression of mutation frequency with a suppression of emergent mutations.

    25. Note that when selection [for environment] occurs, the genetic variance of the genome "resets" relative to a new baseline because the other branches are "forgotten" in the sense that they are not present in the remaining population. Thus, we get low mutation frequency.

      When selection for environment has occurred, the mutation frequency drops (because a certain chunk of the viral pool doesn't pass through the sieve).

    26. This graph tells us that the SARS-CoV-2 samples from highly vaccinated nations are more similar to one another than are those from less vaccinated nations.

      A repeat from the argument above.

      The graph tells us SARS-CoV-2 samples from highly vaccinated countries are more self-similar than those from less vaccinated countries.

    27. Before we move any further, we need to discuss one term defined in the paper, and another one which is not. For many readers unfamiliar with statistical genetics, the distinction will help to disambiguate between a correct understanding of the results of this graph, and an intuitive but false one.Mutation frequency. Simply put, mutation frequency is the measured frequency of mutations (as they exist) in a population. A low mutation frequency represents a population with sequences that are highly similar, while a high mutation frequency represents a population with sequences that are less similar to each other.Mutation rate. The mutation rate is the measured frequency of mutations over time. The higher the mutation rate, the more likely that the "offspring" of a virus differ (at any particular location or base pair) from the immediate progenitor (or per time/ancestral distance from any progenitor assuming nothing like a speciation event).

      A distinction in jargon needs to be made in order to disambiguate the results of the Yeh and Contreras paper.

      Mutation frequency is the measured abundance of mutations in a population as it currently exists. It is a measure of how self-similar a gene pool is: the lower the frequency, the more similar the sequences.

      Mutation rate is the measured frequency of mutations over time. It is a measure of how quickly a gene pool is mutating.
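
      A toy sketch of the distinction (the reference genome REF, the sequences, and the by-index parent/offspring pairing are all made up for illustration, not taken from the paper): mutation frequency is computed on a pool as it exists, mutation rate on parent/offspring pairs across one generation.

      ```python
      REF = "ACGTACGTAC"  # hypothetical reference genome

      def snps(seq, ref=REF):
          """Positions where seq differs from the reference (its SNPs)."""
          return {i for i, (a, b) in enumerate(zip(seq, ref)) if a != b}

      def mutation_frequency(pool):
          """Mean SNP count per sequence in the pool as it exists.
          Low frequency = a highly self-similar pool."""
          return sum(len(snps(s)) for s in pool) / len(pool)

      def mutation_rate(parents, offspring):
          """Mean number of *new* SNPs per genome across one generation
          (offspring paired with parents by index -- a toy assumption)."""
          return sum(len(snps(c) - snps(p))
                     for p, c in zip(parents, offspring)) / len(offspring)

      parents   = ["ACGTACGTAC", "ACGTACGTAC", "ACGAACGTAC"]
      offspring = ["ACGTACGTAC", "ACTTACGTAC", "ACGAACGTAC"]
      print(mutation_frequency(offspring))      # ~0.67 SNPs per sequence
      print(mutation_rate(parents, offspring))  # ~0.33 new SNPs per generation
      ```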

    28. Three days ago Yeh and Contreras posted this preprint on medRxiv entitled Full vaccination suppresses SARS-CoV-2 delta variant mutation frequency. The abstract goes a step further, claiming this is the "first evidence that full vaccination against COVID-19 suppresses emergent mutations of SARS-CoV-2 delta variants". The problem is that this is a plainly incorrect interpretation of the results. While this paper isn't Lyu and Wehby absurd, it almost appears (to me) designed to mislead.

      The Yeh and Contreras delta variant mutation frequency paper claims vaccines suppress the emergence of SARS-CoV-2 delta variants, but they are incorrectly interpreting their own results.

    29. And while I haven't performed those calculations, my belief is strongly that either there is >99.9% chance the variants are driven primarily by the vaccinated or a >99.9% chance that the variants are driven primarily by the unvaccinated.

      It's either very likely the variants are driven primarily by the vaccinated or very likely the variants are driven by the unvaccinated.

    30. Also, the Variants of Interest strongly display the quality of escape from antibody classes:

      The Variants of Interest show escape from 2 out of 3 classes of antibodies.

    31. Variants of Interest never seemed to emerge until vaccine trials were held, then emerged in the vicinities of where those trials were held.

      Variants of interest did not emerge until vaccine trials started, and then only emerged in those locations.

    32. Those who claim the sloganesque "Unvaccinated are Variant Factories" would claim that vaccine-resistent strains are a subset of Variants of Interest. While they're not wrong, technically, let us remind ourselves that the Variants of Interest never seemed to emerge until vaccine trials were held, then emerged in the vicinities of where those trials were held.

      Those who claim the unvaccinated are variant factories see vaccine-resistant strains as a subset of Variants of Interest.

    1. More importantly, it makes no sense.

      The "unvaccinated are variant factories" hypothesis makes no sense.

    2. Sadly, either those running the mass vaccination program don't get it, or they just don't care to be honest about it. Just in time for Independence Day, CNN interviewed a single professor and doctor specializing in infectious diseases who declared that the unvaccinated are "variant factories".

      Those running the mass vaccination program don't get it or don't care to be honest about it.

    3. According to Muller's ratchet, we should expect the ordinary process of evolutionary mutation to also lead to the virus tripping over itself. As random mutations that do not immediately harm the ability of a virus to survive pile up, the probability that further mutation results in an organism that can no longer survive piles up. This further puts weakening evolutionary pressure on a highly virulent asexual organism. This tendency works to our advantage---so long as we don't screw it up.

      Muller's ratchet holds that, in an asexual lineage, offspring carry at least as many deleterious mutations as their parents. As the mutations pile up, it becomes ever less likely that further mutation results in a combination that can survive. This works to our advantage so long as we don't screw it up.
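
      A minimal simulation sketch of the ratchet (illustrative parameters, not from the source): because offspring never carry fewer mutations than their parents, the least-loaded class can be lost by drift but never recovered, so the minimum mutation load only clicks upward.

      ```python
      import random

      def mullers_ratchet(pop_size=200, generations=1000, u=0.3, s=0.02, seed=0):
          """Asexual population: each offspring inherits its parent's deleterious
          mutations and gains a new one with probability u. Fitness is (1-s)**k
          for k mutations, so heavily loaded lineages reproduce less."""
          random.seed(seed)
          loads = [0] * pop_size                     # mutation count per individual
          minima = []
          for _ in range(generations):
              weights = [(1 - s) ** k for k in loads]
              parents = random.choices(loads, weights=weights, k=pop_size)
              loads = [k + (random.random() < u) for k in parents]
              minima.append(min(loads))              # the ratchet: non-decreasing
          return minima

      print(mullers_ratchet()[::100])  # minimum load ticks up, never back down
      ```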

    4. The Alpha variant emerged in the UK in October, which was when Oxford-AstraZeneca was holding vaccine trials there.The Beta variant emerged in South Africa, and was first detected in December, 2020, at the tail end of trial periods for both Oxford-AstraZeneca and Pfizer vaccines. This variant carries three mutations in the spike protein.The Gamma variant was first detected in Japan, but soon after in Brazil, making the origin a little harder to determine. But since Japan has had far lower viral spread than Brazil, it makes the most sense that Brazil was the source. Both Oxford-AstraZeneca and Pfizer trialed their vaccines in Brazil.The Delta variant was first detected in India in October, 2020. India hosted numerous vaccine trials including one for Oxford-AstraZeneca and one for Covishield.

      The Variants of Interest (Alpha, Beta, Gamma, Delta) all seem to have emerged in temporal and geographical vicinity to vaccine trials.

    5. The Delta variant was first detected in India in October, 2020. India hosted numerous vaccine trials including one for Oxford-AstraZeneca and one for Covishield.

      The Delta variant was first detected in India in October 2020. India hosted vaccine trials for Oxford-AstraZeneca and Covishield.

    6. The Gamma variant was first detected in Japan, but soon after in Brazil, making the origin a little harder to determine. But since Japan has had far lower viral spread than Brazil, it makes the most sense that Brazil was the source. Both Oxford-AstraZeneca and Pfizer trialed their vaccines in Brazil.

      The Gamma variant was first detected in Japan, and soon after in Brazil. Brazil was probably the source because it saw high viral spread. Oxford-AstraZeneca and Pfizer held trials in Brazil.

    7. The Beta variant emerged in South Africa, and was first detected in December, 2020, at the tail end of trial periods for both Oxford-AstraZeneca and Pfizer vaccines. This variant carries three mutations in the spike protein.

      The Beta variant was first detected in December 2020 in South Africa, which coincided with the tail end of trial periods for both Oxford-AstraZeneca and Pfizer.

    8. The Alpha variant emerged in the UK in October, which was when Oxford-AstraZeneca was holding vaccine trials there.

      The Alpha variant emerged in the UK in October at the same time AstraZeneca was holding vaccine trials there.

    9. It seems more likely that the sudden emergence of this "variant factory" story is a coordinated response to warnings about leaky vaccines put forth by vaccine expert Geert Vander Bossche, echoed in basic principle by evolutionary biologists Bret Weinstein and Heather Heying, along with evidence of dwindling vaccine efficacy.

      The emergence of the variant factory narrative is likely a coordinated response to the warnings about leaky vaccines put forward by Geert Vanden Bossche and echoed by Bret Weinstein and Heather Heying.

    10. Were this the case, wouldn't it make sense that the "experts" (a plurality of pure illusion manufactured by the multitude of media parrots) would have warned us about this story last year while standing next to both Dr. Anthony Fauci and President "Operation Warp Speed" Donald Trump? Doesn't it seem odd that this story would suddenly make the headlines in July, after seven months of mass vaccination?

      If experts indeed consider the unvaccinated variant factories, it's odd this narrative only started to make headlines after 7 months of mass vaccination.

    11. And while COVID-19 cases and SARS-CoV-2 infections do not necessarily go hand-in-hand, it has certainly been true that CFR has generally declined almost everywhere in the world as the pandemic has moved on, regardless of health care practices.

      This is supported by the fact that the CFR for COVID-19 has generally declined almost everywhere in the world.

    12. As a pandemic wages on, the default expectation is for surviving strains of a virus to be those that find a way to push the boundaries of infectivity in order to keep the infection rate, R, above 1, while lowering the infection fatality rate (IFR) substantially.

      The default expectation for the current viral pandemic is that surviving viral strains will be those that are able to find ways to keep the infection rate (R) above 1 while lowering the infection fatality rate (IFR).

    13. One of the most dreadful propositions of mass vaccination is that large scale vaccination during a pandemic or epidemic promotes the selection of mutations that escape immunity. Generally speaking, most viruses tend to evolve toward greater ability to survive and thrive in a host, but with lessened ability to harm that host. After all, a harmed host is more likely to perish, generally along with all the many living things hitching a ride inside. For these and other reasons, it has been understood by the scientific community that imperfect vaccination can enhance the transmission of viruses (Read et al). This is sometimes referred to as the imperfect vaccine hypothesis or the "leaky" vaccine hypothesis.

      Large scale vaccination during a pandemic or epidemic may promote the selection of mutations that escape immunity by virtue of the vaccine or the campaign being imperfect/leaky.

      This has been understood by the scientific community and is known as the imperfect vaccine hypothesis.

    14. These examples both call into question the strategy of targeting the spike protein, but also give us a hint at the potential for disaster. What would happen if one of these variants included an additional mutation that makes COVID-19 explosively more deadly as happened with a leaky vaccine targeting Marek's disease in chickens. The result was an explosively more deadly viral variant that has caused $2 billion in damage to the poultry industry because the escape variants get so hot that they kill every infected bird within just 10 days.

      These results call into question the strategy of targeting the spike protein. What if an additional mutation made the virus explosively more deadly, as happened with the leaky vaccine for Marek's disease in chickens?

    15. These are most likely not just variants. These appear to be escape variants.

      These variants aren't random genetic drift, they appear to be escape variants.

    16. In another paper (McCallum et al) the Epsilon variant (B.1.427/B.1.429) showed substantial escape from immunity:Plasma from individuals vaccinated with a Wuhan-1 isolate-based mRNA vaccine or convalescent individuals exhibited neutralizing titers, which were reduced 2-3.5 fold against the B.1.427/B.1.429 variant relative to wildtype pseudoviruses. The L452R mutation reduced neutralizing activity of 14 out of 34 RBD-specific monoclonal antibodies (mAbs). The S13I and W152C mutations resulted in total loss of neutralization for 10 out of 10 NTD-specific mAbs since the NTD antigenic supersite was remodeled by a shift of the signal peptide cleavage site and formation of a new disulphide bond, as revealed by mass spectrometry and structural studies.

      A paper by McCallum et al. found that the Epsilon variant exhibits substantial escape from immunity.

    17. Now, let us consider the specific scientific literature examining some of these variants. Virologist Delphine Planas of the Institut Pasteur, along with colleagues, have found that antibodies of vaccinated patients have greatly diminished efficacy in fighting off the Delta strain (emphasis added):Sera from convalescent patients collected up to 12 months post symptoms were 4 fold less potent against variant Delta, relative to variant Alpha (B.1.1.7). Sera from individuals having received one dose of Pfizer or AstraZeneca vaccines barely inhibited variant Delta. Administration of two doses generated a neutralizing response in 95% of individuals, with titers 3 to 5 fold lower against Delta than Alpha. Thus, variant Delta spread is associated with an escape to antibodies targeting non-RBD and RBD Spike epitopes.That seems indicative of vaccine-specific escape.

      Researchers at the Institut Pasteur showed that sera from convalescent and vaccinated individuals were several-fold less potent at neutralizing Delta than Alpha.

      They posit that the spread of Delta is associated with escape from antibodies targeting both RBD and non-RBD spike epitopes.

    18. But those that have emerged did so in geographies where vaccine trials were held---that is several variants from a far smaller genetic pool.

      The variants that did emerge did so where vaccine trials were held.

    19. It is noteworthy that variants of interest did not emerge during the early stages of the pandemic, despite mass spread of SARS-CoV-2 around the globe. That's a pretty huge sample size of unvaccinated people.

      Mathew claims that:

      Variants of interest did not emerge during the early stages of the pandemic, despite SARS-CoV-2 spreading around the globe.

    20. The reason public health authorities did not talk about evolutionary escape of variants six or ten or fifteen months ago is that, generally speaking, that conversation does not favor the logic of a mass vaccination program in the middle of a pandemic.

      Public health authorities did not mention the possibility of evolutionary escape of variants, because it doesn't support the narrative of mass vaccination.

    21. To be clear: every host is an evolutionary factory for viruses. What should concern us is the nature of streamlining of the process.

      All hosts are variant factories; what should concern us is the degree to which a selective sieve is introduced that filters out certain genotypes.

    22. In a highly vaccinated population, mutations occur at random, but the genetic spread among versions of the virus is narrowed to those that can evade immunity, which has now been made more uniform among the vaccinated population. This further encourages such lineages even when they would not have won out within individual hosts in competition among its cousins. Such evasion increases chances of reinfection.

      In highly vaccinated populations mutations also occur at random, but the spread of variants is narrowed to those that evade immunity. Variants that would not have outcompeted their cousins within unvaccinated hosts can win out in vaccinated ones.

    23. In an unvaccinated population, mutations occur at random producing a wide genetic spread with very few progeny resulting in long lasting lineages (Muller's ratchet), with a selection pressure that favors those variants that can (a) win the competition of replication among its cousins within a host, and (b) not kill the host so that it can thrive in new hosts.

      In an unvaccinated population one would expect there to be random mutations and a spread in genetic variation and selection pressure favoring the variants that can (a) win the competition of replication and (b) not kill their host.

    1. Published clinical data on the safety of mRNA-LNP vaccines are scarce, in comparison with siRNA, and are limited to local administration (ID and IM).

      Safety of mRNA vaccines.

    2. Although LNPs are promising delivery systems, safety issues need to be addressed to enable proper clinical development of LNP-formulated mRNA vaccines. LNPs’ potential toxicity could be complex and might manifest in systemic effects due to innate immune activation (induction of pro-inflammatory cytokine production), and/or in local, cellular toxicity due to accumulation of lipids in tissues (Hassett et al. 2019; Semple et al. 2010; Sabnis et al. 2018). Toxicity could potentially be abrogated, or reduced, by the administration of prophylactic anti-inflammatory steroids or other molecules and/or using biodegradable lipids (Hassett et al. 2019; Abrams et al. 2010; Tabernero et al. 2013; Tao et al. 2011). LNPs can also activate the complement system and might potentially elicit a hypersensitivity reaction known as complement activation-related pseudoallergy (CARPA) (Dezsi et al. 2014; Mohamed et al. 2019; Szebeni 2005, 2014), which can be alleviated using different strategies such as steroid and anti-allergic premedication (i.e., dexamethasone, acetaminophen, and antihistaminic drugs) or the use of low infusion rates during intravenous administration (Mohamed et al. 2019; Szebeni et al. 2018). Alternatively, co-delivery of regulatory cytokines (i.e., IL-10) using LNPs might be a viable strategy to reduce potential LNP-associated adverse events.

      Safety of mRNA lipid nanoparticles (LNPs)

  10. Jul 2021
    1. Powerful suppliers, including suppliers of labor, can squeeze profitability out of an industry that is unable to pass on cost increases in its own prices.

      Suppliers with bargaining power can squeeze the profitability out of an industry by raising prices on industry participants that cannot pass on cost increases in their own prices.

    2. It is the threat of entry, not whether entry actually occurs, that holds down profitability.
    3. The threat of entry in an industry depends on the height of entry barriers that are present and on the reaction entrants can expect from incumbents. If entry barriers are low and newcomers expect little retaliation from the entrenched competitors, the threat of entry is high and industry profitability is moderated.

      The threat of entry depends on the barriers (i.e. moat) that are present and the reaction entrants can expect from incumbents. If both are low, the threat of new entrants is high.

    4. Particularly when new entrants are diversifying from other markets, they can leverage existing capabilities and cash flows to shake up competition, as Pepsi did when it entered the bottled water industry, Microsoft did when it began to offer internet browsers, and Apple did when it entered the music distribution business.

      When new entrants enter a market, they can often leverage existing cash flows and capabilities e.g. Apple when it entered the music distribution business.

    5. Industry structure drives competition and profitability, not whether an industry produces a product or service, is emerging or mature, high tech or low tech, regulated or unregulated.

      Profitability is not driven by market maturation but by industry structure carved out by the five forces.

    1. One of the fundamental goals of a blockchain is resolving the “double spend” problem. In a nutshell, this means preventing someone from sending the same coin to two people. However, beyond just simple spend transactions, it applies any time two transactions want to update the same state. This could be someone trying to duplicate Bitcoin, or two people trying to buy the same CryptoKitty. For the sake of generality, we’ll call it the “double update” problem. Fundamentally it’s about ordering: when we see two things, how do we decide which is first, and what happens to the second one?

      The double spend problem is a subset of what can be called the double update problem. How do we order two updates to the same state?
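
      A toy illustration in Python (not any particular chain's algorithm) of why ordering resolves the double update problem: two updates target the same state, whichever the system orders first wins, and the second is rejected as a conflict.

      ```python
      class Ledger:
          """Minimal replicated-state sketch: updates are applied in order,
          and an update that conflicts with an earlier one is rejected."""
          def __init__(self):
              self.balances = {"alice": 1, "bob": 0, "carol": 0}

          def transfer(self, sender, receiver, amount):
              if self.balances.get(sender, 0) < amount:
                  return False  # conflicts with a previously ordered update
              self.balances[sender] -= amount
              self.balances[receiver] = self.balances.get(receiver, 0) + amount
              return True

      ledger = Ledger()
      # Alice tries to send her single coin to two people; ordering decides
      # which update is "first" -- the second becomes a rejected double spend.
      print(ledger.transfer("alice", "bob", 1))    # True
      print(ledger.transfer("alice", "carol", 1))  # False
      ```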

  11. Jun 2021
    1. Furthermore, multiple coexisting or alternate mechanisms of action likely explain the clinical effects observed, such as the competitive binding of ivermectin with the host receptor-binding region of SARS-CoV-2 spike protein, as proposed in 6 molecular modeling studies.21–26

      The mechanism through which ivermectin works on SARS-CoV-2 may be competitive binding with the host receptor-binding region of the SARS-CoV-2 spike protein, as proposed in 6 molecular modeling studies.

    1. DID infrastructure can be thought of as a global key-value database in which the database is all DID-compatible blockchains, distributed ledgers, or decentralized networks. In this virtual database, the key is a DID, and the value is a DID document. The purpose of the DID document is to describe the public keys, authentication protocols, and service endpoints necessary to bootstrap cryptographically-verifiable interactions with the identified entity.

      DID infrastructure can be thought of as a key-value database.

      The database is a virtual database consisting of various different blockchains.

      The key is the DID and the value is the DID document.

      The purpose of the DID document is to hold public keys, authentication protocols and service endpoints necessary to bootstrap cryptographically-verifiable interactions with the identified entity.
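
      As an illustration, a DID document ("value") returned by resolving a DID ("key") might look roughly like this — a made-up example loosely following the W3C DID Core shape, shown as a Python dict; the identifiers, key material, and endpoint are all hypothetical.

      ```python
      did_document = {
          "@context": "https://www.w3.org/ns/did/v1",
          "id": "did:example:123456789abcdefghi",        # the DID ("key")
          "verificationMethod": [{                       # public keys
              "id": "did:example:123456789abcdefghi#key-1",
              "type": "Ed25519VerificationKey2018",
              "controller": "did:example:123456789abcdefghi",
              "publicKeyBase58": "H3C2AVvLMv6gmMNam3uVAjZp",  # hypothetical key
          }],
          "authentication": [                            # authentication protocol
              "did:example:123456789abcdefghi#key-1",
          ],
          "service": [{                                  # service endpoints
              "id": "did:example:123456789abcdefghi#agent",
              "type": "DIDCommMessaging",
              "serviceEndpoint": "https://agent.example.com/endpoint",
          }],
      }
      ```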

    1. DigiNotar was a Dutch certificate authority owned by VASCO Data Security International, Inc.[1][2] On September 3, 2011, after it had become clear that a security breach had resulted in the fraudulent issuing of certificates, the Dutch government took over operational management of DigiNotar's systems.[3]

      Dutch Certificate Authority gets hacked.

    1. New Trusted Third Parties Can Be Tempting Many are the reasons why organizations may come to favor costly TTP based security over more efficient and effective security that minimizes the use of TTPs: Limitations of imagination, effort, knowledge, or time amongst protocol designers – it is far easier to design security protocols that rely on TTPs than those that do not (i.e. to fob off the problem rather than solve it). Naturally design costs are an important factor limiting progress towards minimizing TTPs in security protocols. A bigger factor is lack of awareness of the importance of the problem among many security architects, especially the corporate architects who draft Internet and wireless security standards. The temptation to claim the "high ground" as a TTP of choice are great. The ambition to become the next Visa or Verisign is a power trip that's hard to refuse. The barriers to actually building a successful TTP business are, however, often severe – the startup costs are substantial, ongoing costs remain high, liability risks are great, and unless there is a substantial "first mover" advantage barriers to entry for competitors are few. Still, if nobody solves the TTP problems in the protocol this can be a lucrative business, and it's easy to envy big winners like Verisign rather than remembering all the now obscure companies that tried but lost. It's also easy to imagine oneself as the successful TTP, and come to advocate the security protocol that requires the TTP, rather than trying harder to actually solve the security problem. Entrenched interests. Large numbers of articulate professionals make their living using the skills necessary in TTP organizations. For example, the legions of auditors and lawyers who create and operate traditional control structures and legal protections. They naturally favor security models that assume they must step in and implement the real security. In new areas like e-commerce they favor new business models based on TTPs (e.g. Application Service Providers) rather than taking the time to learn new practices that may threaten their old skills. Mental transaction costs. Trust, like taste, is a subjective judgment. Making such judgement requires mental effort. A third party with a good reputation, and that is actually trustworthy, can save its customers from having to do so much research or bear other costs associated with making these judgments. However, entities that claim to be trusted but end up not being trustworthy impose costs not only of a direct nature, when they breach the trust, but increase the general cost of trying to choose between trustworthy and treacherous trusted third parties.

      There are strong incentives to stick with trusted third parties

      1. It's more difficult to design protocols that work without a TTP
      2. It's tempting to imagine oneself as a successful TTP
      3. Entrenched interests — many professions depend on the TTP status quo (e.g. lawyers, auditors)
      4. Mental transaction costs — It can be mentally easier to trust a third party, rather than figuring out who to trust.
    2. The high costs of implementing a TTP come about mainly because traditional security solutions, which must be invoked where the protocol itself leaves off, involve high personnel costs. For more information on the necessity and security benefits of these traditional security solutions, especially personnel controls, when implementing TTP organizations, see this author's essay on group controls. The risks and costs borne by protocol users also come to be dominated by the unreliability of the TTP – the DNS and certificate authorities being two quite commom sources of unreliability and frustration with the Internet and PKIs respectively.

      The high costs of TTPs stem mainly from the personnel costs involved in the traditional, centralized security solutions they require.

    3. The certificate authority has proved to be by far the most expensive component of this centralized public key infrastructure (PKI). This is exacerbated when the necessity for a TTP deemed by protocol designers is translated, in PKI standards such as SSL and S/MIME, into a requirement for a TTP. A TTP that must be trusted by all users of a protocol becomes an arbiter of who may and may not use the protocol. So that, for example, to run a secure SSL web server, or to participate in S/MIME, one must obtain a certifcate from a mutually trusted certificate authority. The earliest and most popular of these has been Verisign. It has been able to charge several hundred dollars for end user certificates – far outstripping the few dollars charged (implicitly in the cost of end user software) for the security protocol code itself. The bureaucratic process of applying for and renewing certificates takes up far more time than configuring the SSL options, and the CA's identification process is subject to far greater exposure than the SSL protocol itself. Verisign amassed a stock market valuation in the 10's of billions of U.S. dollars (even before it went into another TTP business, the Internet Domain Name System(DNS) by acquiring Network Solutions). How? By coming up with a solution – any solution, almost, as its security is quite crude and costly compared to the cryptographic components of a PKI – to the seemingly innocuous assumption of a "trusted third party" made by the designers of public key protocols for e-mail and the Web.

      The most expensive (and wasteful) part of a centralized Public Key Infrastructure (PKI) is the Certificate Authority (the Trusted Third Party).

      Verisign became a billion-dollar company by charging hundreds of dollars for issuing certificates, even though its security was quite crude compared to the cryptographic components of a PKI. The bureaucratic process of applying for and renewing a certificate also takes far longer than configuring one for actual use.

      Meanwhile, the cost paid for the protocol code itself, captured implicitly in the software's price, is a mere few bucks.

    4. Personal Property Has Not and Should Not Depend On TTPs For most of human history the dominant form of property has been personal property. The functionality of personal property has not under normal conditions ever depended on trusted third parties. Security properties of simple goods could be verified at sale or first use, and there was no need for continued interaction with the manufacturer or other third parties (other than on occasion repair personel after exceptional use and on a voluntary and temporary basis). Property rights for many kinds of chattel (portable property) were only minimally dependent on third parties – the only problem where TTPs were neededwas to defend against the depredations of other third parties. The main security property of personal chattel was often not other TTPs as protectors but rather its portability and intimacy. Here are some examples of the ubiquity of personal property in which there was a reality or at least a strong desire on the part of owners to be free of dependence on TTPs for functionality or security: Jewelry (far more often used for money in traditional cultures than coins, e.g. Northern Europe up to 1000 AD, and worn on the body for better property protection as well as decoration) Automobiles operated by and house doors opened by personal keys. Personal computers – in the original visions of many personal computing pioneers (e.g. many members of the Homebrew Computer Club), the PC was intended as personal property – the owner would have total control (and understanding) of the software running on the PC, including the ability to copy bits on the PC at will. Software complexity, Internet connectivity, and unresolved incentive mismatches between software publishers and users (PC owners) have substantially eroded the reality of the personal computer as personal property. This desire is instinctive and remains today. It manifests in consumer resistance when they discover unexpected dependence on and vulnerability to third parties in the devices they use. Suggestions that the functionality of personal property be dependent on third parties, even agreed to ones under strict conditions such as creditors until a chattel loan is paid off (a smart lien) are met with strong resistance. Making personal property functionality dependent on trusted third parties (i.e. trusted rather than forced by the protocol to keep to the agreement governing the security protocol and property) is in most cases quite unacceptable.

      Personal property did not depend on trusted third parties

      For most of human history personal property did not depend on Trusted Third Parties (TTPs). To the extent TTPs were needed at all, it was to defend property from the depredations of other third parties.

      Jewelry, automobile keys, house keys — these all show that humans had a preference for having sovereign access to their property, without relying on third parties.

      This preference remains with us today and you can see it manifest itself in people's anger when they discover that part of their product is not owned by them.

    5. The main security property of personal chattel was often not other TTPs as protectors but rather its portability and intimacy.

      The main security property of personal chattel was not protection by Trusted Third Parties (TTPs), but its portability and intimacy.

    1. So, what problem is blockchain solving for identity if PII is not being stored on the ledger? The short answer is that blockchain provides a transparent, immutable, reliable and auditable way to address the seamless and secure exchange of cryptographic keys. To better understand this position, let us explore some foundational concepts.

      What problem is blockchain solving in the SSI stack?

      It is an immutable (often permissionless) and auditable way to address the seamless and secure exchange of cryptographic keys.

    1. But, as I have said many times here at AVC, I believe that business model innovation is more disruptive that technological innovation. Incumbents can adapt to and adopt new technological changes (web to mobile) way easier than they can adapt to and adopt new business models (selling software to free ad-supported software). So this new protocol-based business model feels like one of these “changes of venue” as my partner Brad likes to call them. And that smells like a big investable macro trend to me.

      Business model innovation is more disruptive than technological innovation.

    2. This is super important because the more open protocols we have, the more open systems we will have.

      Societal benefits of cryptocurrencies

      The more open protocols we have, the more open systems we have.

    1. From a comment by Muneeb Ali:

The original Internet protocols defined how data is delivered, but not how it's stored. This led to the centralization of data.

The original Internet protocols also didn't provide end-to-end security. This led to massive security breaches. (There were other reasons for security breaches as well, but everything was based on a very weak security model to begin with.)

    2. Because we didn’t know how to maintain state in a decentralized fashion it was the data layer that was driving the centralization of the web that we have observed.

      We didn't know how to maintain state in a decentralized fashion, and this is what drove centralization.

    3. I can’t emphasize enough how radical a change this is to the past. Historically the only way to make money from a protocol was to create software that implemented it and then try to sell this software (or more recently to host it). Since the creation of this software (e.g. web server/browser) is a separate act many of the researchers who have created some of the most successful protocols in use today have had little direct financial gain. With tokens, however, the creators of a protocol can “monetize” it directly and will in fact benefit more as others build businesses on top of that protocol.

      Tokens allow protocol creators to profit from their creation, whereas in the past they would need to create an app that implemented the protocol to do so.

    4. Organizationally decentralized but logically centralized state will allow for the creation of protocols that can undermine the power of the centralized incumbents.

      Organizationally decentralized but logically centralized

    1. The important innovation provided by the blockchain is that it makes the top right quadrant possible. We already had the top left. Paypal for instance maintains a logically centralized database for its payments infrastructure. When I pay someone on Paypal their account is credited and mine is debited. But up until now all such systems had to be controlled by a single organization.

      The top right quadrant is the innovation that blockchain represents.

2. The 2×2 quadrant figure from the article:

                               organizationally centralized   organizationally decentralized
      logically centralized    e.g. Paypal                    *new*: e.g. Bitcoin
      logically decentralized  e.g. Excel                     e.g. e-mail

      Organizationally decentralized, logically centralized

      Organizationally centralized are systems that are controlled by a single organization. Organizationally decentralized are systems that are not under control of any one entity.

      Logically decentralized are systems that have multiple databases, where participants control their own database entirely. Excel is logically decentralized. Logically centralized are systems that appear as if they have a single global database (irrespective of how it's implemented).
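One way to make the quadrant concrete is the toy sketch below (consensus is deliberately assumed away; a real system like Bitcoin needs proof of work or similar to agree on the log's order): if independently operated nodes each apply the same ordered transaction log, all copies agree, and the system behaves as if there were a single global database.

```python
# Toy sketch: organizationally decentralized, logically centralized.
# The hard part, agreeing on the log's order, is assumed away here.

class Node:
    """A full copy of the ledger, run by an independent operator."""

    def __init__(self):
        self.balances = {}

    def apply(self, tx):
        sender, receiver, amount = tx
        self.balances[sender] = self.balances.get(sender, 0) - amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

# Organizationally decentralized: three operators, none in control.
nodes = [Node(), Node(), Node()]

# Logically centralized: every node applies the same ordered log...
log = [("alice", "bob", 5), ("bob", "carol", 2)]
for tx in log:
    for node in nodes:
        node.apply(tx)

# ...so every copy agrees, as if there were one global database.
assert all(n.balances == nodes[0].balances for n in nodes)
print(nodes[0].balances)  # {'alice': -5, 'bob': 3, 'carol': 2}
```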

  12. May 2021
    1. It’s estimated there will be over 20 billion connected devices by 2020, all of which will require management, storage, and retrieval of data. However, today’s blockchains are ineffective data receptacles, because every node on a typical network must process every transaction and maintain a copy of the entire state. The result is that the number of transactions cannot exceed the limit of any single node. And blockchains get less responsive as more nodes are added, due to latency issues.

There's a limit on how much data a blockchain can handle because every node must process every transaction and maintain a copy of the entire state, so total throughput cannot exceed the capacity of a single node.
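A back-of-the-envelope illustration of that ceiling; the figures are rough assumptions in Bitcoin's ballpark, not numbers from the quoted article:

```python
# Throughput bound when every node must process every transaction:
# adding nodes adds replication, not capacity.
block_size_bytes = 1_000_000   # assumed ~1 MB block
avg_tx_bytes = 250             # assumed typical transaction size
block_interval_s = 600         # assumed ~10-minute block interval

tx_per_second = (block_size_bytes / avg_tx_bytes) / block_interval_s
print(f"{tx_per_second:.1f} tx/s")  # ~6.7 tx/s, regardless of node count
```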

    2. Another concern was the requirement for a dedicated network. The logic of blockchain is that information is shared, which requires cooperation between companies and heavy lifting to standardize data and systems. The coopetition paradox applied; few companies had the appetite to lead development of a utility that would benefit the entire industry. In addition, many banks have been distracted by broader IT transformations, leaving little headspace to champion a blockchain revolution.

      The coopetition paradox occurred in blockchain development. Companies didn't want to lead investment in technology that would benefit the entire industry.

    3. By late 2017, many people working at financial companies felt blockchain technology was either too immature, not ready for enterprise level application, or was unnecessary. Many POCs added little benefit, for example beyond cloud solutions, and in some cases led to more questions than answers. There were also doubts about commercial viability, with little sign of material cost savings or incremental revenues.

      By late 2017 many blockchain proof of concepts did not add much and the technology seemed unnecessary or too immature.

    1. The Internet was built without a way to know who and what you are connecting to. This limits what we can do with it and exposes us to growing dangers. If we do nothing, we will face rapidly proliferating episodes of theft and deception that will cumulatively erode public trust in the Internet.

      Kim Cameron posits that the internet was built without an identity layer. You have no way of knowing who and what you are connecting to.

    1. Today, the sector of the economy with the lowest IT intensity is farming, where IT accounts for just 1 percent of all capital spending. Here, the potential impact of the IoT is enormous. Farming is capital- and technology-intensive, but it is not yet information-intensive. Advanced harvesting technology, genetically modified seeds, pesticide combinations, and global storage and distribution show how complex modern agriculture has become, even without applying IT to that mix

      The sector with the lowest IT intensity is farming, where IT accounts for just 1 percent of all capital spending.

    2. The IoT creates the ability to digitize, sell and deliver physical assets as easily as with virtual goods today. Using everything from Bluetooth beacons to Wi-Fi-connected door locks, physical assets stuck in an analog era will become digital services. In a device driven democracy, conference rooms, hotel rooms, cars and warehouse bays can themselves report capacity, utilization and availability in real-time. By taking raw capacity and making it easy to be utilized commercially, the IoT can remove barriers to fractionalization of industries that would otherwise be impossible. Assets that were simply too complex to monitor and manage will present business opportunities in the new digital economy.

      IoT ushers in a device driven democracy where conference rooms, hotel rooms and cars can self-report capacity, utilization and availability in real-time.

      IoT can make it easier to fractionalize industries that would otherwise be impossible.

    3. In this model, users control their own privacy and rather than being controlled by a centralized authority, devices are the master. The role of the cloud changes from a controller to that of a peer service provider. In this new and flat democracy, power in the network shifts from the center to the edge. Devices and the cloud become equal citizens.

      In a blockchain IoT the power in the network shifts from the center to the edge.

4. Challenge five: Broken business models. Most IoT business models also hinge on the use of analytics to sell user data or targeted advertising. These expectations are also unrealistic. Both advertising and marketing data are affected by the unique quality of markets in information: the marginal cost of additional capacity (advertising) or incremental supply (user data) is zero. So wherever there is competition, market-clearing prices trend toward zero, with the real revenue opportunity going to aggregators and integrators. A further impediment to extracting value from user data is that while consumers may be open to sharing data, enterprises are not. Another problem is overly optimistic forecasts about revenue from apps. Products like toasters and door locks worked without apps and service contracts before the digital era. Unlike PCs or smartphones, they are not substantially interactive, which makes such revenue expectations unrealistic. Finally, many smart device manufacturers have improbable expectations of ecosystem opportunities. While it makes interesting conversation for a smart TV to speak to the toaster, such solutions get cumbersome quickly and nobody has emerged successful in controlling and monetizing the entire IoT ecosystem. So while technology propels the IoT forward, the lack of compelling and sustainably profitable business models is, at the same time, holding it back. If the business models of the future don’t follow the current business of hardware and software platforms, what will they resemble?

      Challenge 5 for IoT: Broken business models

Conventional IoT business models relied on selling user data and targeted advertising. This won't work: the marginal cost of extra ad capacity or incremental user data is zero, so competitive prices trend toward zero, and enterprises aren't willing to share their data in the first place.

Door locks and toasters worked without apps and service contracts before the digital era, and whatever smartness is added, they won't be very interactive. Capturing sufficient app or subscription revenue from them will be difficult.

      Having your toaster talk to your fridge sounds interesting, but it doesn't improve the user's life.

5. Challenge four: A lack of functional value. Many IoT solutions today suffer from a lack of meaningful value creation. The value proposition of many connected devices has been that they are connected – but simply enabling connectivity does not make a device smarter or better. Connectivity and intelligence are a means to a better product and experience, not an end. It is wishful thinking for manufacturers that some features they value, such as warranty tracking, are worth the extra cost and complexity from a user’s perspective. A smart, connected toaster is of no value unless it produces better toast. The few successes in the market have kept the value proposition compelling and simple. They improve the core functionality and user experience, and do not require subscriptions or apps.

      Challenge 4 for IoT: A lack of functional value

Making a device smart doesn't necessarily improve the experience. A smart toaster is of no value unless it produces better toast.

6. Challenge three: Not future-proof. While many companies are quick to enter the market for smart, connected devices, they have yet to discover that it is very hard to exit. While consumers replace smartphones and PCs every 18 to 36 months, the expectation is for door locks, LED bulbs and other basic pieces of infrastructure to last for years, even decades, without needing replacement. An average car, for example, stays on the road for 10 years, the average U.S. home is 39 years old and the expected lifecycles of road, rail and air transport systems is over 50 years. A door lock with a security bug would be a catastrophe for a warehousing company and the reputation of the manufacturer. In the IoT world, the cost of software updates and fixes in products long obsolete and discontinued will weigh on the balance sheets of corporations for decades, often even beyond manufacturer obsolescence.

      Challenge 3 for IoT: Not future proof

(1) Consumers have different expectations for the longevity of smartphones and PCs (1.5-3 years) than they do for door locks, LED bulbs, etc., which are expected to last years or decades.

      (2) A door lock might have a security bug, requiring an update, and impacting the manufacturer's reputation.

(3) Software updates might need to be shipped for discontinued, obsolete products.

7. Challenge two: The Internet after trust. The Internet was originally built on trust. In the post-Snowden era, it is evident that trust in the Internet is over. The notion of IoT solutions built as centralized systems with trusted partners is now something of a fantasy. Most solutions today provide the ability for centralized authorities, whether governments, manufacturers or service providers to gain unauthorized access to and control devices by collecting and analyzing user data. In a network of the scale of the IoT, trust can be very hard to engineer and expensive, if not impossible, to guarantee. For widespread adoption of the ever-expanding IoT, however, privacy and anonymity must be integrated into its design by giving users control of their own privacy. Current security models based on closed source approaches (often described as “security through obscurity”) are obsolete and must be replaced by a newer approach – security through transparency. For this, a shift to open source is required. And while open source systems may still be vulnerable to accidents and exploitable weaknesses, they are less susceptible to government and other targeted intrusion, for which home automation, connected cars and the plethora of other connected devices present plenty of opportunities.

      Challenge 2 of IoT: The internet after trust

      In the post-Snowden era, it is not realistic or wise to expect the world of IoT to be based on a centralized trust model.

Most solutions today give centralized authorities, whether governments, manufacturers or service providers, the ability to gain unauthorized access to and control of devices.

      Because of the scale of IoT, a centralized trust architecture would not be scalable or affordable.

Privacy and anonymity must be integrated into the design by giving users control over their own privacy.

      A shift from closed source to open source is required. Open source systems are less susceptible to targeted intrusion.

8. Challenge one: The cost of connectivity. Even as revenues fail to meet expectations, costs are prohibitively high. Many existing IoT solutions are expensive because of the high infrastructure and maintenance costs associated with centralized clouds and large server farms, in addition to the service costs of middlemen. There is also a mismatch in supplier and customer expectations. Historically, costs and revenues in the IT industry have been nicely aligned. Though mainframes lasted for many years, they were sold with enterprise support agreements. PCs and smartphones have not

      Challenge 1 of IoT: The cost of connectivity

      There are high infrastructure and maintenance costs associated with running IoT communication through centralized clouds and the service costs of middlemen.

    1. One way the blockchain could change online security dynamics is the opportunity to replace the flawed “shared-secret model” for protecting information with a new “device identity model.” Under the existing paradigm, a service provider and a customer agree on a secret password and perhaps certain mnemonics—“your pet’s name”—to manage access. But that still leaves all the vital data, potentially worth billions of dollars, sitting in a hackable repository on the company’s servers. With the right design, a blockchain-based system would leave control over the data with customers, which means the point of vulnerability would lie with their devices. The onus is now on the customer to protect that device, so we must, of course, develop far more sophisticated methods for storing, managing, and using our own private encryption keys. But the more important point is that the potential payoff for the hacker is so much smaller for each attack. Rather than accessing millions of accounts at once, he or she has to pick off each device one by one for comparatively tiny amounts. Think of it as an incentives-weighted concept of security.

      Using blockchain we could shift from a shared-secret model to a device identity model.

      This would mean that the customer's data is stored with the customer, not on a central database.

      The onus is then on the customer to protect that data and the device it's on.

The important point is that you're replacing a single attractive attack vector with many distributed ones, reducing the potential payoff of each attack.

      You achieve security through incentives.
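A minimal sketch of the device identity model, assuming the third-party `cryptography` package for Ed25519 signatures (the article names no particular implementation): the service enrolls only the device's public key and authenticates with a challenge-response, so a breach of the service's database yields no reusable secret.

```python
# Device identity model: the private key never leaves the device; the
# service stores only a public key, which is worthless to a thief.
# Assumes `pip install cryptography`.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# On the device: generate a keypair. The private key stays here.
device_key = Ed25519PrivateKey.generate()

# On the service: enroll the device by storing only its public key.
enrolled_public_key = device_key.public_key()

# Authentication by challenge-response instead of a shared password:
challenge = os.urandom(32)              # service sends a fresh random nonce
signature = device_key.sign(challenge)  # device signs it locally
enrolled_public_key.verify(signature, challenge)  # raises if forged
print("device authenticated")
```

Contrast with the shared-secret model, where the password (or its hash) sits in one server-side table: here each device must be attacked individually, for a far smaller payoff per attack.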

    2. So much of what’s foreseen won’t be viable without distributed trust, whether it’s smart parking systems transacting with driverless cars, decentralized solar microgrids that let neighbors automatically pay each other for power, or public Wi-Fi networks accessed with digital-money micropayments. If those peer-to-peer applications were steered through a centralized institution, it would have to “KYC” each device and its owner—to use an acronym commonly used to describe banks’ regulatory obligation to conduct “know your customer” due diligence. Those same gatekeepers could also curtail competitors, quashing innovation. Processing costs and security risks would rise. In short, a “permissioned” system like this would suck all the seamless, creative fluidity out of our brave new IoT world.

      Permissioned vs. Permissionless

      Many solutions will not be viable without distributed trust because routing all transactions through a central authority comes with too much friction.

      (1) KYC requirements for each node (2) Processing costs rise

      At the same time centralizing these transactions has other adverse effects:

(1) It gives the centralized entity gatekeeper power, which it can use to curtail competitors, quashing innovation. (2) Security risks rise because data passes through a centralized location.

      Permissioned systems stifle innovation.

    3. Bitcoin has survived because it leaves hackers nothing to hack. The public ledger contains no personal identifying information about the system’s users, at least none of any value to a thief. And since no one controls it, there’s no central vector of attack. If one node on the bitcoin network is compromised and someone tries to undo or rewrite transactions, the breach will be immediately contradicted by the hundreds of other accepted versions of the ledger.

      Bitcoin has not been hacked, in part, because it leaves hackers nothing to hack.

(1) The public ledger contains no personal identifying information (2) No one controls it, so there's no central vector of attack
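A toy hash chain illustrates why a rewritten transaction is "immediately contradicted": changing one entry changes every subsequent hash, so the tampered copy diverges visibly from all honest copies. This sketches only the tamper-evidence property; Bitcoin layers proof of work and peer-to-peer gossip on top.

```python
# Toy hash chain: rewriting one entry breaks every later link.
import hashlib

def chain(entries):
    """Return the running hash after each entry."""
    h, hashes = b"", []
    for entry in entries:
        h = hashlib.sha256(h + entry.encode()).digest()
        hashes.append(h)
    return hashes

honest   = ["alice->bob:5", "bob->carol:2", "carol->dan:1"]
tampered = ["alice->bob:5", "bob->carol:200", "carol->dan:1"]

# The first mismatch appears at the rewritten entry and never recovers.
print([a == b for a, b in zip(chain(honest), chain(tampered))])
# [True, False, False]
```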

    4. Ever since its launch in 2009, there has been no successful cyberattack on bitcoin’s core ledger—despite the tempting bounty that the digital currency’s $9 billion market cap offers to hackers.

      There has been no successful cyberattack on Bitcoin despite the tempting bounty.

    5. Thirty years later, we finally have the conceptual framework for such a system, one in which trust need no longer be invested in a third-party intermediary but managed in a distributed manner across a community of users incentivized to protect a public good. Blockchain technology and the spinoff ideas it has spawned provide us with the best chance yet to solve the age-old problem of the Tragedy of the Commons.

      Blockchain technology allows us to distribute trust across a community of users incentivized to protect a public good.

6. The problem was that in those early years, Silicon Valley had no distributed trust management system to match the new distributed communications architecture.

Just as we initially lacked the network architecture to support peer-to-peer communication, once we had it we still lacked a trust architecture to support distributed trust management.

7. The single most important driver of decentralization has been the fact that human communication—without which societies, let alone economies, can’t exist—now happens over an entirely distributed system: the Internet. The packet-switching technology that paved the way for the all-important TCP/IP protocol pair meant that data could travel to its destination via the least congested route, obviating the need for the centralized switchboard hubs that had dominated telecommunications. Thus, the Internet gave human beings the freedom to talk to each other directly, to publish information to anyone, anywhere. And because communication was no longer handled via a hub-and-spokes model, commerce changed, too. People could submit orders to an online store or enter into a peer-to-peer negotiation over eBay.

      Human communication and economic transactions are by their very nature peer-to-peer. Early telecommunication technology was able to scale these interactions over larger groups of participants and over larger distances, but they did so through a hub-and-spoke model relying on centralized switchboards.

The internet, by routing data along the least congested route between nodes, removed the need for centralized switchboards; its distributed architecture is a much better match for the distributed nature of human communication and commerce.

8. For IT-savvy thieves, it’s the best of both worlds: more and more locations from which to launch surreptitious attacks and a set of ever-growing, centralized pools of valuable information to go after.

      Cybercriminals have the best of both worlds:

(1) More points from which to launch attacks, due to increasing decentralization (e.g. IoT) (2) More and larger centralized pools of valuable information to go after

9. Decentralization, meanwhile, is pushing the power to execute contracts and manage assets to the edges of the network, creating a proliferation of new access points.

      Decentralization, like the proliferation of IoT, is pushing the power to execute contracts and manage assets to the individual nodes of the network.