545 Matching Annotations
  1. Dec 2021
    1. Most of the descriptions I’ve seen focus on mechanisms - block chains, smart contracts, tokens, etc - but I would argue those are implementation details and some are much more likely to succeed than others. (E.g. I think using private keys for authentication/authorization is obviously better if you can get over the UX hump - SSH has shown us that for decades.)

      Most descriptions of Web3 focus on mechanisms — blockchains, smart contracts, etc — but those are implementation details.

  2. Nov 2021
    1. It remains unclear whether the reduction in the neutralization sensitivity of the N501Y.V2 strain to vaccine-induced antibodies is enough to seriously reduce vaccine efficacy. First, mRNA vaccines also induce virus-specific helper T cells and cytotoxic T cells, both of which might be involved in protection against challenge. Also, the mRNA vaccines, in particular, induce such a strong NAb response that there could be enough “spare capacity” to deal with reductions in the sensitivity of the variant to NAbs. In other words, N501Y.V2 (and the related virus from Brazil) may be less sensitive to NAbs, but not to an extent that will cause widespread vaccine failure.

      Variants that show reduced sensitivity to NAbs don't necessarily mean mRNA vaccine failure

      New variants may emerge that show reduced sensitivity to NAbs.

      This may not result in vaccine failure because:

      1. The mRNA vaccines induce such a strong NAb response, there will be enough spare capacity to deal with the virus.
      2. The mRNA vaccines also induce other virus-specific protection such as helper T cells and cytotoxic T cells, which may not be affected by the reduction in NAb sensitivity.
    1. The study demonstrated the capacity of a third dose to broaden antibody-based immunity and boost protection against circulating variants of concern. However, it is interesting that neutralizing responses against the Beta variant, known to markedly escape vaccine-elicited antibody responses4, were only fractionally better in those receiving a Beta-specific booster immunization.

      Choi et al. showed that a Beta-targeted booster shot broadened antibody-based immunity and boosted protection against circulating variants, but the neutralizing response against the Beta variant itself was only slightly better.

      @gerdosi thinks this points to Original Antigenic Sin.

    1. Interestingly, all four vaccine breakthrough infection subjects who had previous COVID-19 were seropositive for anti-membrane IgG during acute infection, while no breakthrough subjects without prior COVID-19 had detectable anti-membrane antibodies in the acute infection period (Figure 1I).

      Vaccinated individuals without prior COVID-19 who experience a breakthrough infection do not develop antibodies to the parts of the virus that are not encoded by the vaccine (such as the membrane protein).

    1. As that tagline suggests, an assumption runs quietly through Needle Points that Covid vaccines are by and large safe, necessary, and generally beneficial for personal and public health. Therefore, opposition to them must be explained in psychological or sociological terms, because we all know that, scientifically speaking, opposition is baseless.

      This assumption runs through all appeals to the unvaccinated.

    1. The survey was vague -- the only product-specific query asked about a “Discord-native crypto wallet” -- but it showed that Discord was aware of the web3 community’s growing usage of its product and at least exploring how it might play in the space. 

      Discord might be mulling a native wallet.

    2. Discord’s bot ecosystem extends into crypto. In a recent piece on DAOs, The Generalist outlined a few integrations that have caught on with the web3 world. In particular, products like Collab.Land — which allows holders of unique tokens or NFTs to access private channels — have become essential. Other players in this subspace include Tip (accept crypto tips!) and Piggy (an RPG with crypto rewards).

      Discord integrates with web3. One example of this is channels that are only accessible to people holding a specific NFT.

    3. Discord allows for intra-group socialization, but also adds a social layer on top of this structure.

      Discord allows for intra-group socialization (like Slack), but also allows socialization across groups.

    4. Whereas Slack was clearly designed to be the home for one company and its employees -- each time you get invited to a new Slack workspace, you need to re-enter your email and go through the signup flow -- Discord was built for promiscuity. Discord users are expected to jump from server to server, and to slide into any other Discord user’s DMs. 

      Slack was designed for a monogamous relationship between a user and their company; Discord was designed for promiscuity.

    1. Readministration of influenza vaccine has become an annual event for much of the population, in response to both waning immunity and the appearance of variants, termed antigenic drift, necessitating updated vaccines. Even when there is no substantial drift, revaccination is recommended because of waning immunity. But antigenic drift is a constant issue and is monitored globally, with vaccine composition updated globally twice a year on the basis of recommendations from a World Health Organization consultation.

      Influenza vaccines need to be updated yearly to counter (1) waning immunity and (2) antigenic drift.

      Antigenic drift is monitored globally and the WHO makes recommendations for the updates.

    2. Thus, the value of influenza vaccines, now given to as many as 70% of people in some age groups, lies not in eliminating outbreaks but in reducing them and preventing severe complications.

      The goal of influenza vaccines is to prevent severe complications and to reduce outbreaks, not to eliminate them.

      As many as 70% of some age groups get influenza vaccines.

    3. Vaccine effectiveness against laboratory-confirmed symptomatic infection is never higher than 50 to 60%, and in some years it is much lower.

      Influenza vaccine effectiveness against laboratory-confirmed symptomatic infection is never higher than 50-60%, and in some years it is much lower.

    4. Eliminating Covid-19 seemed theoretically possible, because the original 2002 SARS virus ultimately disappeared.

      Eliminating SARS-CoV-2 was deemed plausible, because SARS-CoV-1 had been eliminated.

    5. The effect on asymptomatic infections was a welcome surprise, because it has been thought that most vaccines for respiratory illnesses, including influenza, are “leaky” — that is, they allow some degree of asymptomatic infection and are better at preventing symptomatic infection.

      Most vaccines for respiratory illnesses are leaky.

      The efficacy the mRNA vaccines showed in preventing asymptomatic infection was therefore a welcome surprise.

    1. The spike protein was a target of human SARS-CoV-2 CD8+ T cell responses, but it is not dominant. SARS-CoV-2 M was just as strongly recognized, and significant reactivity was noted for other antigens, mostly nsp6, ORF3a, and N, which comprised nearly 50% of the total CD8+ T cell response, on average. Thus, these data indicate that candidate COVID-19 vaccines endeavoring to elicit CD8+ T cell responses against the spike protein will be eliciting a relatively narrow CD8+ T cell response compared to the natural CD8+ T cell response observed in mild to moderate COVID-19 disease.

      When looking at CD8+ T cell responses, the spike protein was not immunodominant. M was just as strongly recognized, and significant reactivity was observed for other antigens.

    2. In the case of CD4+ T cell responses, data for other coronaviruses found that spike accounted for nearly two-thirds of reported CD4+ T cell reactivity, with N and M accounting for limited reactivity, and no reactivity in one large study of human SARS-CoV-1 responses (Li et al., 2008). Our SARS-CoV-2 data reveal that the pattern of immunodominance in COVID-19 is different. In particular, M, spike, and N proteins were clearly co-dominant, each recognized by 100% of COVID-19 cases studied here. Significant CD4+ T cell responses were also directed against nsp3, nsp4, ORF3s, ORF7a, nsp12, and ORF8. These data suggest that a candidate COVID-19 vaccine consisting only of SARS-CoV-2 spike would be capable of eliciting SARS-CoV-2-specific CD4+ T cell responses of similar representation to that of natural COVID-19 disease, but the data also indicate that there are many potential CD4+ T cell targets in SARS-CoV-2, and inclusion of additional SARS-CoV-2 structural antigens such as M and N would better mimic the natural SARS-CoV-2-specific CD4+ T cell response observed in mild to moderate COVID-19 disease.

      When looking at CD4+ T cell responses (CD4+ T cells coordinate the immune response by activating other immune cells), other proteins besides the spike protein, such as M and N, were co-dominant with it. Significant responses were also detected against other parts of SARS-CoV-2.

    1. 4) More viral replication produces more particles that stimulate stronger immune responses inside and outside of cells. The immune system is able to recognize and differentiate active viral replication inside cells compared to replication of self-DNA and transcription into mRNA. As viruses infect neighboring cells and spread, this results in a strong signal to local immune cells that help activate T and B cells. Although an mRNA vaccine mimics this signal, spike proteins can’t replicate beyond the spike-encoding mRNA contained in the vaccine, and as a result the signal isn’t as strong and doesn’t affect as many cells, limiting the strength and durability of downstream immunity. This is overcome to some extent with a second dose and with a booster vaccination, which will improve the quality of antibody binding in some indviduals, but not others.

      Viral replication triggers a stronger immune response because many more cells are involved in triggering it than the limited number of cells that express spike after vaccination.

    2. 3) Most SARS-CoV-2 vaccines only stimulate immunity against the spike protein. The spike protein of coronaviruses allows for virus attachment to and invasion of host cells. A strong immune response to the spike protein will result in the production of antibodies that prevent the virus from binding the viral receptor (ACE2) on human cells, thus preventing or slowing viral spread. The vaccine consists of mRNA that only codes for the SARS-CoV-2 spike protein, and is packaged to allow cells to uptake spike mRNA and translate the message into protein. That makes those muscle cells look like they’ve been infected to the immune system, which responds with activation and multiplication of spike-recognizing T and B cells. In contrast to this limited scope of immunity in response to vaccination, T and B cells are activated in response to infection that recognize all parts of the virus, including the nucleocapsid and other viral proteins. Although antibodies to these proteins are less likely to block viral entry of host cells, more T cells will recognize these antigens and will be able to kill infected cells due to a broader activation of the immune repertoire. However, this also increases the opportunity for autoimmune pathology (as does any strong immune response), which is an important contributor to severe SARS-CoV-2 infection. In other words, stronger protective immunity comes with a tradeoff of a higher potential for immune destruction and long-term effects.

      Spike-based vaccines only induce an immune response to spike epitopes, and not other parts of the virus such as the nucleocapsid.

    3. 2) Viral antigen may persist after infection, but is less likely to persist after vaccination. This is an important difference between influenza vaccine-induced and infection-induced immunity. Even after symptoms have resolved and live virus has been cleared, the lungs still harbor a reservoir of influenza proteins and nucleic acids that continuously stimulate the development of immunity for extended periods of time. That doesn’t happen in response to vaccine injection, where inactivated virus stimulates an immune response that is cleared quickly and efficiently. Scientists are working on ways to develop vaccines that mimic this antigen persistence to stimulate longer-lasting immunity to influenza vaccination, with some proposing viral antigen packaged in slow-degrading nanoparticles. It is very likely that antigen persistence also occurs during SARS-CoV-2 infection, as viral mRNA and antigens have been detected for months in the small intestines of previously infected individuals. It is unknown how viral nucleic acids and proteins persist after clearance of infection, but it appears to be an important factor in the development of durable antiviral immune memory. In contrast, spike proteins produced by mRNA vaccination may only persist for a few days, thus limiting the time for stimulation and subsequent memory development.

      Viral antigens in influenza are more likely to persist and stimulate continued maturation of the immune response after a natural infection than after vaccination.

    4. In response to a vaccine, the immune response starts in the deltoid muscle of the arm. The spike protein of the virus is produced in muscle cells, and spike-recognizing T and B cells in the arm-draining lymph nodes (in the armpit) are activated. The T cells that are activated do not express lung-homing molecules, and neither do the memory T cells that develop later. Activated B cells secrete virus-neutralizing antibodies, but little mucosal IgA is produced. If an infection occurs, memory cells from vaccination will respond quickly, but there won’t be many located in or immediately targeted to the lung, and viral-binding IgA won’t immediately block airway cell-invading viruses.

      In response to the vaccine, the immune response starts in the deltoid muscle of the arm. The spike protein is produced primarily in muscle cells, which activates spike-recognizing T and B cells in the arm-draining lymph nodes.

      Unlike the T cells that are activated through a respiratory infection, these T cells do not express lung-homing molecules, nor do the memory T cells that develop later.

      If an infection occurs, there won't be many memory cells in the lung, and little mucosal IgA is produced to fend off the infection.

    5. In response to a respiratory viral infection, an immune response begins after viruses infect and spread among cells in the airways. This results in the activation of many airway and mucosal-specific immune responses. In the lungs, the lymphatic system drains to lung-associated lymph nodes, where T cells and B cells become activated after recognizing their specific antigen, which consists of pieces of viral proteins that can bind to the T or B cell surface receptors. In lung-associated lymph nodes, these cells are “imprinted” by activation of specific molecules that help them migrate to lung tissues. B cells get specific signals to make antibodies, including a specific type called IgA that is secreted into airways. When an individual recovers from infection, some of these immune cells become long lasting lung-resident and memory cells that can be activated and targeted much more quickly during a reinfection and thus limit spread in the lungs and disease severity.

      The immune response to a respiratory viral infection starts with mucosal-specific immune responses.

      The lymphatic system that takes care of the lungs drains to specific lymph nodes where B and T cells can become activated when they recognize specific viral antigens. There they can become "imprinted", which helps them migrate to the lungs effectively.

      B-cells produce IgA — a type of antibody associated with mucosal immunity — and secrete them into the airways.

      Some of these immune cells become long-lasting lung residents and memory cells that, due in part to their new residence, can be activated quickly and easily upon a reinfection.

  3. Sep 2021
    1. Neither the vaccinated nor unvaccinated individuals are to be blamed.

      Neither the vaccinated nor the unvaccinated should be blamed.

    2. As I’ve been explaining in one of my previous articles (https://trialsitenews.com/why-is-the-ongoing-mass-vaccination-experiment-driving-a-rapid-evolutionary-response-of-sars-cov-2/), this selection was most likely due to overcrowding (e.g., in favelas or slums in certain cities in Brazil or South-Africa) or possibly even due to prolonged infection-prevention measures in other regions (as prolonged infection-prevention measures lead to suppression of innate immunity and could now, indeed, provide a competitive advantage to more infectious variants).

      The selection that led to the emergence of these variants was most likely due to either overcrowding in certain cities in Brazil or South Africa, or possibly due to prolonged infection-prevention measures in other regions. Prolonged infection-prevention measures lead to a suppression of innate immunity, which confers a competitive advantage to more infectious variants.

    3. Boosters and/ or extending mass vaccination campaigns to younger age groups will only expedite the occurrence of viral resistance to the vaccines and cause substantial harm to both the unvaccinated and vaccinated.

      Boosters and vaccination of younger age groups will expedite the emergence of variants that are resistant to vaccine-induced immunity.

    4. The ‘more humane’ response, therefore, is to treat people at an early stage of the disease instead of preventing herd immunity from getting established.

      A better strategy for bringing the pandemic under control is treating people at an early stage of the disease, instead of pursuing a mass vaccination campaign which prevents herd immunity from being established.

    5. It should suffice to ask him how mass vaccination is going to tame the dramatic expansion of increasingly infectious viral variants as it is now generally acknowledged that mass vaccination will not enable herd immunity and as it is too well understood that no pandemic can be tamed without achieving herd immunity.

      Goldman doesn't provide an answer to the question of how mass vaccination is going to bring under control the expansion of increasingly infectious variants.

      It is now generally acknowledged that mass vaccination will not enable herd immunity.

      It is well understood that no pandemic can be brought under control without achieving herd immunity.

    6. On the contrary, the unvaccinated are the only hope for the human population to build herd immunity, either by virtue of their innate immunity (if asymptomatically infected) or by virtue of their naturally acquired immunity (if symptomatically infected).

      The unvaccinated are the only hope for the population to build herd immunity through either innate immunity or through naturally acquired immunity.

    7. Deaths under the unvaccinated will not lead to diminished viral infectivity as the unvaccinated are not a breeding ground for more infectious variants.

      [Refuting Goldman's point about the unvaccinated being the breeding ground for variants]

      Deaths of the unvaccinated will not prevent the emergence of variants that escape vaccine-induced immunity, because the unvaccinated are not a breeding ground for more infectious variants.

    8. Why would the unvaccinated even survive if – according to Goldman – they’re not vaccinated and hence, not protected? It’s, of course, thanks to their innate immunity which they should try to boost and, more importantly, preserve by avoiding repeated exposure to the circulating (more infectious) variants.

      [Refuting this point]

      The unvaccinated survive by virtue of their innate immunity, which Goldman doesn't consider.

      Innate immunity can be preserved by avoiding repeated exposure to the circulating more-infectious variants.

      The unvaccinated should try to boost and preserve their innate immunity to avoid disease.

    9. Darwinian selection may also yet solve the problem with a much crueler calculus. The unvaccinated will either get sick and survive, and therefore be the equivalent of vaccinated, or they will die and therefore be removed as breeding grounds for the virus.

      Goldman claims that the problem of the emergence of a variant which escapes vaccine-induced immunity might solve itself in a crueler way. The unvaccinated will either survive and be equivalent to being vaccinated, or they will die and no longer be a breeding ground for the virus.

    10. For lack of any fundamental knowledge in immunology, Goldman doesn’t understand that exactly the opposite applies!

      [Refuting this point]

      The emergence of a variant which evades vaccine-induced immunity can be avoided by abandoning universal vaccination.

    11. This dire prediction need not occur if universal vaccination is adopted, or mandated, to protect everyone, including those who are already vaccinated.

      Goldman claims that the emergence of a variant which escapes vaccine-induced immunity — which would put the vaccinated at risk once again — would not occur if universal vaccination is adopted or mandated.

    12. Again, there is only one single culprit: MASS vaccination across all age groups during a pandemic of more infectious variants.

      [Agreeing with the premise, but identifying a different cause]

      If a variant emerges which escapes vaccine-induced immunity, the cause will have been mass vaccination across all age groups during a pandemic of more infectious variants.

    13. Progress we have made in overcoming the pandemic will be lost. New vaccines will have to be developed. Lockdowns and masks will once again be required. Many more who are currently protected, especially among the vulnerable, will die.

      Goldman claims that if this happens, our progress in overcoming the pandemic will be lost: new vaccines will have to be developed, and lockdowns and masks will once again be required. Many who are currently protected against severe disease and death will die.

    14. A variant could arise that is resistant to current vaccines, rendering those already vaccinated susceptible again.

      Goldman claims a variant could arise that is resistant to the current vaccines, rendering the vaccinated susceptible to severe disease once again.

    15. Because of mass vaccination, there is now a large part of the population that exerts increasing S-directed immune selection pressure that provides more infectious variants to gain a strong competitive advantage and reproduce more effectively on a background of highly S-specific neutralizing antibodies.

      [GVD disputes this claim]

      Mass vaccination has created a situation where a large part of the population is exerting spike-protein-directed immune selection pressure on the virus, which confers a competitive advantage to more-infectious variants.

    16. The real danger is a future variant, which will be the legacy of those people who are not getting vaccinated providing a breeding ground for the virus to continue to generate variants.

      Goldman claims a future variant is the primary risk to be considered.

      Goldman claims that if such a variant emerges, it will have been the result of the unvaccinated, because they provide breeding grounds for the virus to continue to generate variants.

    17. Nevertheless, the Delta variant is exhibiting increased frequency of breakthrough infections among the vaccinated (4).

      Goldman claims we're seeing an increased frequency of breakthrough infections among the vaccinated.

    18. The more infectious variants that started circulating before mass vaccination had already been subject to S-directed immune selection pressure! How could one otherwise explain that all these variants developed mutations that were converging towards immunodominant domains in the S protein?

      [Refuting Goldman's point]

      [Definition] Immunodominant The ability of a specific antigen or epitope to induce a measurable or clinically meaningful immune response when other structurally related antigens do not.

      The more-infectious variants that started circulating before mass vaccination had already been subjected to S-directed immune pressure.

      This is the most plausible explanation for these variants developing mutations that converged towards immunodominant domains in the spike protein.

    19. So far, we have been lucky that the variants that have emerged can still be somewhat controlled by current vaccines, probably because these variants evolved in mostly unvaccinated populations and were not subject to selective pressure of having to grow in vaccinated hosts.

      Goldman claims we're lucky that the vaccines are still effective against the variants that emerged.

      He claims this is most likely due to these variants having evolved in unvaccinated populations, not subject to the selective pressure of vaccinated hosts.

    20. Yes, natural selection of more infectious variants happens within the vaccinated population, but not in the non-vaccinated population. This already explains why there was a fall in cases when the lockdown measures in the UK were abandoned and society opened up again. Opening-up society resulted in absorption of more infectious variants (i.e., the Delta variant) by non-vaccinated people. In this population, the Delta variant had no longer a competitive advantage (as unvaccinated individuals can effectively deal with ALL Sars-CoV-2 lineages).

      [Partially agreeing with Goldman]

      Natural selection of more-infectious variants (such as escape variants) happens in the vaccinated population, but not in the unvaccinated.

      When the UK re-opened it resulted in an absorption of more infectious variants by non-vaccinated people.

      Due to this absorption, and the non-specific response mounted by the unvaccinated, the Delta variant no longer had a competitive advantage. The result was that cases fell.

    21. When this occurs within a background of a largely vaccinated population, natural selection will favor a variant that is resistant to the vaccine.

      Goldman claims that against a background of a largely vaccinated population, natural selection will favor variants that are resistant to the vaccine.

    22. Goldman’s interpretation does not take into account that unvaccinated people do have protective immunity, either due to innate or naturally acquired immunity.


      Goldman's interpretation does not take into account that unvaccinated people do have protective immunity, either due to innate or naturally acquired immunity.

    23. The unvaccinated part of the population is, therefore, anything but a reservoir for the virus! On the contrary, their capacity to eliminate the virus in a non-selective manner will lead to a diminished concentration of more infectious immune escape variants in the unvaccinated population, and even in the overall population provided the unvaccinated part of the population represents a significant part of the overall population!(which is now increasingly becoming problematic).

      Because the unvaccinated mount a non-specific response, they get rid of any advantage that more-infectious variants might have had.

      If the unvaccinated constitute a significant portion of the population, this could lead to diminished concentrations of escape variants in the general population.

    24. In contrast, the unvaccinated do not provide such competitive advantage to more infectious variants as they eliminate Sars-CoV-2 lineages without exerting immune selection pressure on viral infectiousness (i.e., on spike protein). This is because unvaccinated either get asymptomatically infected, i.e., they overcome the infection thanks to their innate immunity, which is known to be multi-specific ( i.e., NOT variant-specific) or they contract symptomatic infection, which equally results in multi-variant-specific acquired immunity. In none of these cases does an unvaccinated person exert any immune selection pressure on viral infectiousness, i.e., on spike protein.

      There are essentially two pathways that befall the unvaccinated:

      1. They get asymptomatically infected, where they overcome the virus through their innate immunity, which is multi-specific.

      2. They get symptomatically infected, which results in an acquired immune response which is multi-variant-specific.

      In both of these cases the response is broad rather than spike-focused, so infection of an unvaccinated person does not exert targeted selection pressure on viral infectiousness (i.e., on the spike protein).

      Thus, the unvaccinated do not give more infectious variants a competitive advantage, because they do not exert immune selection pressure on viral infectiousness.

    25. When people get jabbed in large numbers with S(pike)-based vaccines, this undoubtedly leads to massive S-directed immune selection pressure in the vaccinated part of the population.

      Large numbers of people getting vaccinated with a spike-based vaccine leads to significant spike-directed immune selection pressure among the vaccinated portion of the population.

    26. It seems logical that more infectious variants can only enjoy a competitive advantage on a background that exerts selective immune pressure on viral infectiousness, i.e. on spike protein (as the latter is responsible for viral infectiousness).

      More infectious variants will only experience a competitive advantage in an environment where there is selective pressure on infectivity.

      The infectivity of SARS-CoV-2 is mostly determined by the spike protein.

    27. As Goldman has no clue about immunology, he does not understand that the overall (i.e., population-level) immune status of the population constitutes the barrier that is critical to Darwin’s selection and survival of the fittest (as virus replication and transmission critically depends on the ‘resistance’ mounted by the host immune system).

      Variants emerge as a result of natural selection, which is governed by barriers the virus experiences in replication and transmission.

      At the population level these barriers are determined by the individual hosts' immune status - the ability of the host to demonstrate an immune response or to defend itself against disease or foreign substances.

      Thus, by leaving this information out, Goldman incorrectly concludes that a population of unvaccinated individuals is somehow sufficient for variants to emerge.

    28. SARS-CoV-2 has shown that it can mutate into many variants of the original agent (3). An unvaccinated pool of individuals provides a reservoir for the virus to continue to grow and multiply, and therefore more opportunities for such variants to emerge.

      Goldman claims that :

      SARS-CoV-2 has shown that it can mutate into many variants.

      An unvaccinated population provides opportunities for the virus to replicate and transmit and thus opportunities for such variants to emerge.

    29. Goldman doesn’t seem to realize that protection against disease has nothing to do with Darwin’s principles of natural selection and survival of the fittest. In case of viruses, the latter have to do with replication and transmission. So, what viruses care about is barriers that prevent them from replicating / transmitting, not from external influences that prevent them from being more or less pathogenic. This is to say that natural selection of viruses in the presence of neutralizing antibodies does not occur as a result of vaccine-mediated pressure on viral pathogenicity.

      Natural selection and survival of the fittest in viruses is governed by the barriers viruses experience to their ability to replicate and transmit, not by barriers to their ability to make us more or less sick (pathogenicity).

      Therefore natural selection of SARS-CoV-2 is not driven by the fact that the vaccinated are more protected against severe illness while the unvaccinated are not.

      If natural selection does indeed occur, it will necessarily be the result of barriers the virus experiences in its ability to replicate and transmit.

    30. In addition, Goldman doesn’t seem to realize that more infectious variants were already circulating before mass vaccination started.

      More infectious variants were already circulating before mass vaccination started.

    31. In 1859, Charles Darwin published On the Origin of Species (2), in which he outlined the principles of natural selection and survival of the fittest. The world presently has the unwelcome opportunity to see the principles of evolution as enumerated by Darwin play out in real time, in the interactions of the human population with SARS-CoV-2. The world could have easily skipped this unpleasant lesson, had there not been such large numbers of the human population unwilling to be vaccinated against this disease.

      Goldman claims that the world — thanks to the unvaccinated — will now witness Darwin's principles of natural selection and survival of the fittest play out in the interactions between the human population and SARS-CoV-2.

    32. Imai et al. (1) have characterized yet another variant of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus responsible for COVID-19, this one originating in Brazil. The good news is that it appears that vaccines currently available are still expected to provide protection against this variant. However, what about the next variant, one we have not seen yet? Will we still be protected?

      With new variants emerging, Goldman questions whether or not the vaccinated will remain protected.

    1. It might strike you as odd that a technical decision about vaccine booster shots would be a “blow” to a government, and frankly it is odd. But that is where we now are: politicians are forming their own strong views about vaccines, quite apart from their expert committees. They have a clear bias towards showing they are taking action and moving fast, especially since they stand accused of doing neither at the start of the pandemic; they also watch opinion polls, which in this case showed a 76% majority of people in favour of booster shots.

      Why is a technical decision about vaccine boosters framed as a blow to governments?

    1. And vaccines sit in an awkward spot at the intersection of science, medicine and public health, which do not mix. Science is about examining things as carefully as possible with no agenda. Public health is ALL agenda, identifying one single course of action and trying to make people follow it.

      Tension between science and public health.

      Science is about examining all evidence without an agenda. Public health is all about agenda, identifying one course of action and trying to make people follow it.

    1. So, the mutation rate tells us (technically, this can be defined in context, but usually for a virus…) how many single nucleotide polymorphisms (SNPs, like "snips") we expect to see from one viral generation to the next. But the mutation frequency measures the abundance of SNPs relative to the virions in a generational pool.

      The Mutation Rate tells us how many Single Nucleotide Polymorphisms are introduced from one generation to the next.

      The Mutation Frequency tells us how many Single Nucleotide Polymorphisms already exist relative to a generational pool.
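The distinction can be made concrete with a toy example. This sketch is mine, not from the source: the reference genome, sequences, and helper names are all invented for illustration, and pairing parents to children by index is a simplifying lineage assumption.

```python
# Toy illustration of mutation RATE (new SNPs per generation) versus
# mutation FREQUENCY (abundance of SNPs within one generational pool).

REF = "ACGTACGTAC"  # hypothetical reference genome

def snps(seq, ref=REF):
    """Positions where a sequence differs from the reference."""
    return {i for i, (a, b) in enumerate(zip(seq, ref)) if a != b}

def mutation_frequency(pool):
    """Mean number of SNPs per virion in a generational pool."""
    return sum(len(snps(s)) for s in pool) / len(pool)

def mutation_rate(parent_pool, child_pool):
    """Mean number of NEW SNPs gained from one generation to the next
    (assumes child i descends from parent i -- a toy simplification)."""
    gained = [len(snps(c) - snps(p)) for p, c in zip(parent_pool, child_pool)]
    return sum(gained) / len(gained)

gen1 = ["ACGTACGTAC", "ACGAACGTAC"]   # one virion already carries 1 SNP
gen2 = ["ACGTACGTAG", "ACGAACGTAG"]   # each child gains 1 new SNP

print(mutation_frequency(gen2))   # 1.5 SNPs per virion in the pool
print(mutation_rate(gen1, gen2))  # 1.0 new SNPs per generation
```

A pool can thus have a high mutation frequency (much standing diversity) while its mutation rate stays constant, which is exactly the distinction the source accuses the paper's readers of missing.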

    2. Reading the Competing Interests Statement gave me a rash. AP, PJL, ES, MJN, JC, AJV, and VS are employees of nference and have financial interests in the company. nference is collaborating with Moderna, Pfizer, Janssen, and other bio-pharmaceutical companies on data science initiatives unrelated to this study. These collaborations had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. JCO receives personal fees from Elsevier and Bates College, and receives small grants from nference, Inc, outside the submitted work. ADB is supported by grants from NIAID (grants AI110173 and AI120698), Amfar (#109593), and Mayo Clinic (HH Shieck Khalifa Bib Zayed Al-Nahyan Named Professorship of Infectious Diseases). ADB is a paid consultant for Abbvie, Gilead, Freedom Tunnel, Pinetree therapeutics Primmune, Immunome and Flambeau Diagnostics, is a paid member of the DSMB for Corvus Pharmaceuticals, Equilium, and Excision Biotherapeutics, has received fees for speaking for Reach MD and Medscape, owns equity for scientific advisory work in Zentalis and nference, and is founder and President of Splissen Therapeutics. MDS received grant funding from Pfizer via Duke University for a vaccine side effect registry. JH, JCO, AV, MDS and ADB are employees of the Mayo Clinic. The Mayo Clinic may stand to gain financially from the successful outcome of the research. This research has been reviewed by the Mayo Clinic Conflict of Interest Review Board and is being conducted in compliance with Mayo Clinic Conflict of Interest policies.

      The Puranik et al. paper contains a competing interests statement which mentions that some of the authors work for a company that collaborates with Moderna and Pfizer.

    3. The paper found a [much] lower efficacy reduction than is being seen in Israel for Pfizer's vaccine, or in the UK where both mRNA vaccines are in use. But I'm not going to go much further than that because I doubt many serious people will take this particularly seriously, except as biased and conflicted secondary evidence of efficacy against variants that is contradicted by that from other nations.

      The paper seems biased because of the stated competing interests, the needless inclusion of "highly-effective mRNA vaccines" in their title and their results showing a much lower efficacy reduction than seen in Israel and the UK.

    4. The other side of the argument seems to have a steep uphill climb, and I've so far not talked to anyone in genetics who feels otherwise.

      The argument that the unvaccinated are the source of the variants is much more difficult to support, and Mathew has so far not spoken to anyone in genetics who supports it.

    5. Multiple recent papers have emerged that relate to the debate over whether vaccinated or unvaccinated people drive the emergence of SARS-CoV-2 variants since I wrote my first article on the topic. Let's take a look at what they tell us, and how vaccine partisans are misinterpreting them as publicly as possible.

      Multiple papers have come out which relate to the question of whether variants are driven by the unvaccinated population or the vaccinated population. The results of these papers are being misinterpreted in public by some.

    6. Sadly, this is how the public winds up being misinformed at about every turn during the pandemic.

      These incorrect interpretations of the data, biased research and further amplification on Twitter is unfortunately how the public winds up frequently being misinformed during the pandemic.

    7. Predictably, Edward Nirenberg seems to have jumped on the incorrect interpretation as well. Since he is young, with ample opportunity to shed the notion that he understands more than he does (trained to that point no doubt by the educational institutions that fail us all), he has plenty of time to establish a locus of reality. However, his influential pandemic-era writing reads like a peacock-display of understanding much more than he does while cross-troping Reality Show political phrases like "deplatform [disease]" and "[vaccine] nationalism". Sigh. I wonder what evolutionary theory he would cite if he cited any.

      Edward Nirenberg, an influential pandemic-era writer — whose writing reads like he understands more than he actually does — further amplified Eric Topol's erroneous take on Twitter.

    8. Now, here he is, declaring a myth debunked while overgeneralizing an incorrect interpretation of data that says exactly the opposite of what he seems to think it does. And his many thousands of followers have no sense of how little he understands statistics, much less statistical genetics.

      Eric Topol declares a myth debunked to his many thousands of followers, based on the erroneous assumption that Yeh and Contreras' interpretation generalizes, even though their interpretation is wrong.

      Meanwhile his followers don't realize how little he understands about statistics or statistical genetics.

    9. The problems get substantially compounded by the viral variant of abused reputation. Specifically, Scripps Research Institute founder Eric Topol, a man whose great achievement in genetics was an undergraduate paper opining about prospects for genetic therapy, got ahold of the paper and tweeted out what looks to be an even worse interpretation than that of the authors: that the result, if true, even as misinterpreted, necessarily generalizes:

      One implication of Yeh and Contreras' misinterpretation is Eric Topol amplifying it by tweeting out an even worse interpretation, assuming that the erroneous conclusion also generalizes.

    10. The correct interpretation is that vaccination campaigns channeled mutations through the bottleneck toward their moment of immune escape.

      The correct interpretation of the Tajima's D values going negative, given that the variants emerged in geographies where vaccine trials were taking place, is that the vaccine campaigns created a bottleneck which selected for mutations that could escape immunity.

    11. This is the second graph in the paper, and it shows values of Tajima's D that go negative in India and the UK in particular---just prior to Delta variant breakouts! While I hate to quote Wikipedia, there is a simple table that explains what I noted above about the genomic "resets":

      Yeh and Contreras' Tajima's D graph shows values going negative in India and the UK just prior to Delta variant breakouts.

      A negative Tajima's D value indicates a recent selective sweep: a population expansion after a recent bottleneck.

    12. Still, as I said, I am glad to know the information in the paper. While their confusion over the meaning of the information seems consistent, their computations and graphs give us important confirmation about what is really going on. This includes their Tajima's D computations.

      While the authors' [Yeh and Contreras] interpretation of their results is incorrect, the information presented gives us important information about what is going on.

    13. Looking back at the introduction, I see a hint that these authors are not particularly deep in the statistical genetics field.

      There are hints in the Yeh and Contreras paper which point to the authors not being well versed in the statistical genetics field.

    14. This graph does not tell us that the rate of mutation (SNPs per generation) changes in any way. In fact, that rate likely does not change to any appreciable degree, though I suspect that many readers confirming their media-seeded "unvaccinated are variant factories" biases interpret it that way.

      The graph does not show that the rate of mutation changes in any way, and it probably doesn't.

    15. The obvious conclusion is that the self-similar viral pools are more likely to be vaccine resistant.

      Because self-similarity is a result of selection pressure, we can conclude the vaccinated viral pool is under selection pressure.

      Jesse: Mathew assumes this selection pressure is exerted by vaccine-induced immunity, which makes that viral pool more likely to be vaccine resistant.

    16. So, which virions do you imagine are being selected for in a highly vaccinated pool? I'll give you one guess, and it's not the virions most easily neutralized by vaccination.

      In a highly vaccinated population, the virions that are selected for are the ones which most readily escape vaccine-induced immunity.

    17. What we see in the graph above is that greater vaccination results in greater selection pressure to eliminate those virions least like the others.

      We can infer from the graph that high vaccination coverage results in greater selection pressure to eliminate those virions least like the others.

      Jesse: I'm not sure if we can directly infer selection pressure from such a graph, but it seems like a plausible argument. But it would seem that the selection pressure is not simply directed at removing non-self-similarity, but rather directed towards avoiding vaccine-induced immunity, and the self-similarity of the virion population is a consequence of that.

    18. But vaccine trials involve thousands of individuals in a population of many millions. There is no conceivable way for a few thousand vaccinated to drive the evolution of new variants. But this takes no conditional into account. While geography is certainly not the most restrictive conditional (after all, we're talking about...vaccine-resistant variants as per antibody escape, not T cell or PK cell escape), this response disrespects the coincidence factor. Take the individual probability coincidence factor to the fourth power and we get something like a black swan.

      Morris' argument is incorrect because he does not take into account the coincidence of multiple factors such as:

      1. Individuals under selection pressure (e.g. those undergoing the trials) are more likely to be sources of variants
      2. The odds of a variant emerging in exactly the geographies the vaccine trials occurred, 4 times in a row, is vanishingly small.
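Mathew's "fourth power" point can be put in back-of-envelope form. The 10% figure below is my invented placeholder, not his number: the point is only that four independent geographic coincidences compound multiplicatively.

```python
# Hypothetical arithmetic: if a single variant of interest had some modest
# chance p of emerging near a vaccine trial site by pure coincidence, then
# four independent such coincidences have probability p**4.
p_single = 0.10            # assumed chance of one geographic coincidence
p_all_four = p_single ** 4
print(round(p_all_four, 6))   # roughly a 1-in-10,000 event
```

Whether the real per-variant coincidence probability is 10% is of course the contested empirical question; the sketch only shows why repeated coincidences carry evidential weight.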
    19. The closest thing to that was an email exchange with Biostatistics Professor Jeffrey Morris, who began with the position, "All of these variants were around before vaccination started so, if vaccination produces any variants, they haven’t appeared yet." After I explained his mistake, he shifted to, "But vaccine trials involve thousands of individuals in a population of many millions. There is no conceivable way for a few thousand vaccinated to drive the evolution of new variants."

      Biostatistics Professor Jeffrey Morris argued that there is no way a few thousand vaccinated can drive the evolution of new variants, therefore the variants could not have been caused by the vaccine trials.

    20. And given conditions such as when and where the variants emerged...I'll throw my wager on those receiving COVID-19 vaccines being the source...of...variants that fall into the category of vaccine-resistant strains.

      Given the temporal and geographical vicinity of the emergence of the variants to the vaccine trials, it's more likely that the vaccinated are the source of the variants.

    21. On July 30, Rella et al reported in their paper Rates of SARS-CoV-2 transmission and vaccination impact the fate of vaccine-resistant strains on the results of computer simulations testing for variant emergence during the ups-and-downs of seasonal infection waves. From the abstract: As expected, we found that a fast rate of vaccination decreases the probability of emergence of a resistant strain. Counterintuitively, when a relaxation of non-pharmaceutical interventions happened at a time when most individuals of the population have already been vaccinated the probability of emergence of a resistant strain was greatly increased. Consequently, we show that a period of transmission reduction close to the end of the vaccination campaign can substantially reduce the probability of resistant strain establishment.

      Rella et al. reported on the results of computer simulations testing for variant emergence during the ups-and-downs of seasonal infection waves.

      They found that fast vaccination decreases the probability of emergence of a resistant strain.

      They also found that relaxing non-pharmaceutical interventions when most of the population has already been vaccinated greatly increases the likelihood of a resistant strain emerging.

      Lastly, Rella et al. claim that reducing transmission close to the end of the vaccination campaign can substantially reduce the likelihood of a resistant strain establishing.
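The intuition behind both findings can be sketched with a minimal deterministic model. This is my own toy, not Rella et al.'s simulation; all parameters (beta, gamma, population size, vaccination rates) are invented for illustration.

```python
# Toy SIR with vaccination: faster vaccination shrinks cumulative wild-type
# infections (fewer chances for a resistant mutant to arise), while relaxing
# NPIs once most people are vaccinated leaves R_eff > 1 only for a strain
# that can infect the vaccinated.

def epidemic(vacc_per_day, npi_factor, days=365, pop=1_000_000,
             beta=0.25, gamma=0.1, i0=100.0):
    """Discrete-time SIR with vaccination of susceptibles.
    Returns cumulative wild-type infections (mutation opportunities)."""
    s, i, cum = pop - i0, i0, i0
    for _ in range(days):
        new_inf = npi_factor * beta * s * i / pop
        new_vacc = min(vacc_per_day, max(s - new_inf, 0.0))
        s -= new_inf + new_vacc
        i += new_inf - gamma * i
        cum += new_inf
    return cum

def r_eff(susceptible_share, npi_factor, beta=0.25, gamma=0.1):
    """Effective reproduction number seen by a strain."""
    return npi_factor * beta / gamma * susceptible_share

slow = epidemic(vacc_per_day=1_000, npi_factor=0.8)
fast = epidemic(vacc_per_day=10_000, npi_factor=0.8)
print(fast < slow)      # fast vaccination -> fewer mutation opportunities

# With ~90% vaccinated and NPIs dropped, the wild type (which can only
# infect the remaining unvaccinated) fizzles, while a resistant strain
# (to which the vaccinated are still susceptible) spreads:
print(r_eff(0.1, 1.0))  # ~0.25 < 1: wild type dies out
print(r_eff(1.0, 1.0))  # ~2.5 > 1: resistant strain can establish
```

The second comparison is the "counterintuitive" result in compressed form: dropping NPIs late in the campaign selectively hands transmission to whatever resistant lineage exists.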

    22. For those not familiar, Tajima's D is a statistical test conjured by Japanese researcher Fumihiro Tajima to test the "neutral mutation hypothesis". Like my wife sometimes does, Tajima studied mutations in fruit flies (Drosophila).

      Tajima's D is a statistical test of the neutral mutation hypothesis.
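For readers who want the mechanics, here is the standard textbook computation of Tajima's D (this sketch and its toy sequences are mine, not from the source): it compares the mean pairwise difference (pi) with Watterson's estimator (S/a1), and an excess of rare variants pushes D negative, the sweep signature discussed above.

```python
from itertools import combinations
from math import sqrt

def tajimas_d(seqs):
    """Tajima's D for a sample of aligned sequences (standard formula)."""
    n = len(seqs)
    pairs = list(combinations(seqs, 2))
    # pi: mean number of pairwise differences
    pi = sum(sum(a != b for a, b in zip(x, y)) for x, y in pairs) / len(pairs)
    # S: number of segregating (polymorphic) sites
    s = sum(len(set(col)) > 1 for col in zip(*seqs))
    a1 = sum(1 / i for i in range(1, n))
    a2 = sum(1 / i**2 for i in range(1, n))
    theta_w = s / a1                      # Watterson's estimator
    b1 = (n + 1) / (3 * (n - 1))
    b2 = 2 * (n**2 + n + 3) / (9 * n * (n - 1))
    c1 = b1 - 1 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1 = c1 / a1
    e2 = c2 / (a1**2 + a2)
    return (pi - theta_w) / sqrt(e1 * s + e2 * s * (s - 1))

# An excess of rare (low-frequency) variants -- the signature of a recent
# selective sweep or population expansion -- pushes D negative:
sample = ["AAAA", "AAAT", "AATA", "ATAA"]   # three singleton mutations
print(tajimas_d(sample))                    # negative (about -0.75)
```

With every polymorphism a singleton, pi (1.5) falls below theta_w (about 1.64), so D goes negative, which is the pattern the annotations above attribute to a post-bottleneck expansion.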

    23. Mutations just happen. They are then selected according to contextual fitness in their environments. Left to random chance (sans vaccine), the locus of genetic diversity can get large, and so in the rare instances a single virion piles up enough SNPs to affect a functional domain (one functional specifically in escaping immunity), it nearly always gets outcompeted locally before it can establish itself in another host. Even worse---en route to piling up multiple SNPs, such virions become increasingly unstable (less fit) as per Muller's ratchet.

      Mutations are random and happen all the time. They are then selected locally according to the contextual fitness of their environment.

      Without selection pressure the locus of genetic diversity can get large. In this setting if a virion piles up enough mutations to affect a functional domain, it nearly always gets outcompeted locally before it can establish itself in another host.

      Additionally, as a virion piles up mutations, it's increasingly likely to become unstable (less fit) as it is subjected to Muller's ratchet.

    24. They leap---quite incorrectly---to the notion that the suppression of mutation frequency equates to suppression of emergent mutations.

      Restating an earlier argument:

      The authors misinterpret their own results by equating a suppression of mutation frequency with a suppression of the mutation rate.

    25. Note that when selection [for environment] occurs, the genetic variance of the genome "resets" relative to a new baseline because the other branches are "forgotten" in the sense that they are not present in the remaining population. Thus, we get low mutation frequency.

      When selection for environment has occurred, the mutation frequency drops (because a certain chunk of the viral pool doesn't pass through the sieve).

    26. This graph tells us that the SARS-CoV-2 samples from highly vaccinated nations are more similar to one another than are those from less vaccinated nations.

      A repeat from the argument above.

      The graph tells us SARS-CoV-2 samples from highly vaccinated countries are more self-similar than those from less vaccinated countries.

    27. Before we move any further, we need to discuss one term defined in the paper, and another one which is not. For many readers unfamiliar with statistical genetics, the distinction will help to disambiguate between a correct understanding of the results of this graph, and an intuitive but false one. Mutation frequency. Simply put, mutation frequency is the measured frequency of mutations (as they exist) in a population. A low mutation frequency represents a population with sequences that are highly similar, while a high mutation frequency represents a population with sequences that are less similar to each other. Mutation rate. The mutation rate is the measured frequency of mutations over time. The higher the mutation rate, the more likely that the "offspring" of a virus differ (at any particular location or base pair) from the immediate progenitor (or per time/ancestral distance from any progenitor assuming nothing like a speciation event).

      A distinction in jargon needs to be made in order to disambiguate the results of the Yeh and Contreras paper.

      Mutation frequency is the measured occurrence of mutations in a population (as they exist). It is a measure of how self-similar a gene pool is.

      Mutation rate is the measured frequency of mutations over time. It is a measure of how quickly a gene pool is mutating.

    28. Three days ago Yeh and Contreras posted this preprint on medRxiv entitled Full vaccination suppresses SARS-CoV-2 delta variant mutation frequency. The abstract goes a step further, claiming this is the "first evidence that full vaccination against COVID-19 suppresses emergent mutations of SARS-CoV-2 delta variants". The problem is that this is a plainly incorrect interpretation of the results. While this paper isn't Lyu and Wehby absurd, it almost appears (to me) designed to mislead.

      The Yeh and Contreras delta variant mutation frequency paper claims vaccines suppress the emergence of SARS-CoV-2 delta variants, but they are incorrectly interpreting their own results.

    29. And while I haven't performed those calculations, my belief is strongly that either there is >99.9% chance the variants are driven primarily by the vaccinated or a >99.9% chance that the variants are driven primarily by the unvaccinated.

      It's either very likely the variants are driven primarily by the vaccinated or very likely the variants are driven by the unvaccinated.

    30. Also, the Variants of Interest strongly display the quality of escape from antibody classes:

      The Variants of Interest show escape from 2 out of 3 classes of antibodies.

    31. Variants of Interest never seemed to emerge until vaccine trials were held, then emerged in the vicinities of where those trials were held.

      Variants of interest did not emerge until vaccine trials started, and then only emerged in those locations.

    32. Those who claim the sloganesque "Unvaccinated are Variant Factories" would claim that vaccine-resistant strains are a subset of Variants of Interest. While they're not wrong, technically, let us remind ourselves that the Variants of Interest never seemed to emerge until vaccine trials were held, then emerged in the vicinities of where those trials were held.

      Those who claim the unvaccinated are variant factories see vaccine-resistant strains as a subset of Variants of Interest.

    1. More importantly, it makes no sense.

      The "unvaccinated are variant factories" hypothesis makes no sense.

    2. Sadly, either those running the mass vaccination program don't get it, or they just don't care to be honest about it. Just in time for Independence Day, CNN interviewed a single professor and doctor specializing in infectious diseases who declared that the unvaccinated are "variant factories".

      Those running the mass vaccination program don't get it or don't care to be honest about it.

    3. According to Muller's ratchet, we should expect the ordinary process of evolutionary mutation to also lead to the virus tripping over itself. As random mutations that do not immediately harm the ability of a virus to survive pile up, the probability that further mutation results in an organism that can no longer survive piles up. This further puts weakening evolutionary pressure on a highly virulent asexual organism. This tendency works to our advantage---so long as we don't screw it up.

      Muller's ratchet holds that the number of mutations carried by progeny is at least as large as that of the parent. As mutations pile up, it becomes less likely that they result in a combination that can survive. This works to our advantage so long as we don't screw it up.
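The one-way nature of the ratchet can be demonstrated with a toy asexual population. This simulation is my own sketch with invented parameters (population size, mutation probability, selection coefficient), not from the source, and it allows at most one new mutation per generation for simplicity.

```python
import random

# Toy Muller's ratchet: without recombination or back-mutation, offspring
# carry at least as many deleterious mutations as their parent, so once the
# least-loaded class drifts to extinction it is lost for good.

def ratchet(generations=200, pop=50, mu=0.3, s=0.05, seed=1):
    rng = random.Random(seed)
    loads = [0] * pop                  # deleterious mutations per lineage
    min_load_history = []
    for _ in range(generations):
        fitness = [(1 - s) ** k for k in loads]
        parents = rng.choices(loads, weights=fitness, k=pop)
        # each child inherits its parent's mutations, plus possibly one more
        loads = [k + (1 if rng.random() < mu else 0) for k in parents]
        min_load_history.append(min(loads))
    return min_load_history

hist = ratchet()
# the ratchet only turns one way: the least-loaded class never recovers
print(all(b >= a for a, b in zip(hist, hist[1:])))  # True
```

Since no child can carry fewer mutations than its parent, the minimum load is provably non-decreasing; drift then determines how fast the floor climbs, which is the "tripping over itself" dynamic the quote describes.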

    4. The Alpha variant emerged in the UK in October, which was when Oxford-AstraZeneca was holding vaccine trials there. The Beta variant emerged in South Africa, and was first detected in December, 2020, at the tail end of trial periods for both Oxford-AstraZeneca and Pfizer vaccines. This variant carries three mutations in the spike protein. The Gamma variant was first detected in Japan, but soon after in Brazil, making the origin a little harder to determine. But since Japan has had far lower viral spread than Brazil, it makes the most sense that Brazil was the source. Both Oxford-AstraZeneca and Pfizer trialed their vaccines in Brazil. The Delta variant was first detected in India in October, 2020. India hosted numerous vaccine trials including one for Oxford-AstraZeneca and one for Covishield.

      The Variants of Interest (Alpha, Beta, Gamma, Delta) all seem to have emerged in temporal and geographical vicinity to vaccine trials.

    5. The Delta variant was first detected in India in October, 2020. India hosted numerous vaccine trials including one for Oxford-AstraZeneca and one for Covishield.

      The Delta variant was first detected in India in October 2020. India hosted vaccine trials for Oxford-AstraZeneca and Covishield.

    6. The Gamma variant was first detected in Japan, but soon after in Brazil, making the origin a little harder to determine. But since Japan has had far lower viral spread than Brazil, it makes the most sense that Brazil was the source. Both Oxford-AstraZeneca and Pfizer trialed their vaccines in Brazil.

      The Gamma variant was first detected in Japan, and soon after in Brazil. Brazil was probably the source because it saw high viral spread. Oxford-AstraZeneca and Pfizer held trials in Brazil.

    7. The Beta variant emerged in South Africa, and was first detected in December, 2020, at the tail end of trial periods for both Oxford-AstraZeneca and Pfizer vaccines. This variant carries three mutations in the spike protein.

      The Beta variant was first detected in December 2020 in South Africa, which coincided with the tail end of trial periods for both Oxford-AstraZeneca and Pfizer.

    8. The Alpha variant emerged in the UK in October, which was when Oxford-AstraZeneca was holding vaccine trials there.

      The Alpha variant emerged in the UK in October at the same time AstraZeneca was holding vaccine trials there.

    9. It seems more likely that the sudden emergence of this "variant factory" story is a coordinated response to warnings about leaky vaccines put forth by vaccine expert Geert Vander Bossche, echoed in basic principle by evolutionary biologists Bret Weinstein and Heather Heying, along with evidence of dwindling vaccine efficacy.

      The emergence of the variant factory narrative is likely a coordinated response to the warnings put forward by Geert Vanden Bossche and echoed by Bret Weinstein and Heather Heying.

    10. Were this the case, wouldn't it make sense that the "experts" (a plurality of pure illusion manufactured by the multitude of media parrots) would have warned us about this story last year while standing next to both Dr. Anthony Fauci and President "Operation Warp Speed" Donald Trump? Doesn't it seem odd that this story would suddenly make the headlines in July, after seven months of mass vaccination?

      If experts indeed consider the unvaccinated variant factories, it's odd this narrative only started to make headlines after 7 months of mass vaccination.

    11. And while COVID-19 cases and SARS-CoV-2 infections do not necessarily go hand-in-hand, it has certainly been true that CFR has generally declined almost everywhere in the world as the pandemic has moved on, regardless of health care practices.

      This is supported by the fact that the CFR for COVID-19 has generally declined almost everywhere in the world.

    12. As a pandemic wages on, the default expectation is for surviving strains of a virus to be those that find a way to push the boundaries of infectivity in order to keep the infection rate, R, above 1, while lowering the infection fatality rate (IFR) substantially.

      The default expectation for the current viral pandemic is that surviving viral strains will be those that are able to find ways to keep the infection rate (R) above 1 while lowering the infection fatality rate (IFR).

    13. One of the most dreadful propositions of mass vaccination is that large scale vaccination during a pandemic or epidemic promotes the selection of mutations that escape immunity. Generally speaking, most viruses tend to evolve toward greater ability to survive and thrive in a host, but with lessened ability to harm that host. After all, a harmed host is more likely to perish, generally along with all the many living things hitching a ride inside. For these and other reasons, it has been understood by the scientific community that imperfect vaccination can enhance the transmission of viruses (Read et al). This is sometimes referred to as the imperfect vaccine hypothesis or the "leaky" vaccine hypothesis.

      Large scale vaccination during a pandemic or epidemic may promote the selection of mutations that escape immunity by virtue of the vaccine or the campaign being imperfect/leaky.

      This has been understood by the scientific community and is known as the imperfect vaccine hypothesis.

    14. These examples both call into question the strategy of targeting the spike protein, but also give us a hint at the potential for disaster. What would happen if one of these variants included an additional mutation that makes COVID-19 explosively more deadly, as happened with a leaky vaccine targeting Marek's disease in chickens? The result was an explosively more deadly viral variant that has caused $2 billion in damage to the poultry industry because the escape variants get so hot that they kill every infected bird within just 10 days.

      These results call into question the strategy of targeting the spike protein. What if another mutation makes the virus more deadly like in Marek's disease?

    15. These are most likely not just variants. These appear to be escape variants.

      These variants aren't random genetic drift, they appear to be escape variants.

    16. In another paper (McCallum et al) the Epsilon variant (B.1.427/B.1.429) showed substantial escape from immunity: Plasma from individuals vaccinated with a Wuhan-1 isolate-based mRNA vaccine or convalescent individuals exhibited neutralizing titers, which were reduced 2-3.5 fold against the B.1.427/B.1.429 variant relative to wildtype pseudoviruses. The L452R mutation reduced neutralizing activity of 14 out of 34 RBD-specific monoclonal antibodies (mAbs). The S13I and W152C mutations resulted in total loss of neutralization for 10 out of 10 NTD-specific mAbs since the NTD antigenic supersite was remodeled by a shift of the signal peptide cleavage site and formation of a new disulphide bond, as revealed by mass spectrometry and structural studies.

      A paper by McCallum et al. found that the Epsilon variant showed substantial escape from immunity.

    17. Now, let us consider the specific scientific literature examining some of these variants. Virologist Delphine Planas of the Institut Pasteur, along with colleagues, have found that antibodies of vaccinated patients have greatly diminished efficacy in fighting off the Delta strain (emphasis added): Sera from convalescent patients collected up to 12 months post symptoms were 4 fold less potent against variant Delta, relative to variant Alpha (B.1.1.7). Sera from individuals having received one dose of Pfizer or AstraZeneca vaccines barely inhibited variant Delta. Administration of two doses generated a neutralizing response in 95% of individuals, with titers 3 to 5 fold lower against Delta than Alpha. Thus, variant Delta spread is associated with an escape to antibodies targeting non-RBD and RBD Spike epitopes. That seems indicative of vaccine-specific escape.

      Researchers at the Institut Pasteur showed that sera from convalescent and vaccinated individuals were multiple fold less effective at neutralizing Delta than Alpha.

      They posit that the spread of Delta is associated with escape from antibodies targeting non-RBD and RBD Spike epitopes.

    18. But those that have emerged did so in geographies where vaccine trials were held---that is several variants from a far smaller genetic pool.

      The variants that did emerge did so where vaccine trials were held.

    19. It is noteworthy that variants of interest did not emerge during the early stages of the pandemic, despite mass spread of SARS-CoV-2 around the globe. That's a pretty huge sample size of unvaccinated people.

      Matthew claims that

      Variants of interest did not emerge during the early stages of the pandemic, despite SARS-CoV-2 spreading around the globe.

    20. The reason public health authorities did not talk about evolutionary escape of variants six or ten or fifteen months ago is that, generally speaking, that conversation does not favor the logic of a mass vaccination program in the middle of a pandemic.

      Public health authorities did not mention the possibility of evolutionary escape of variants because it does not favor the logic of a mass vaccination program in the middle of a pandemic.

    21. To be clear: every host is an evolutionary factory for viruses. What should concern us is the nature of streamlining of the process.

      All hosts are variant factories, but what should concern us is the degree to which a selective filter is introduced that streamlines the process, favoring certain variants over others.

    22. In a highly vaccinated population, mutations occur at random, but the genetic spread among versions of the virus is narrowed to those that can evade immunity, which has now been made more uniform among the vaccinated population. This further encourages such lineages even when they would not have won out within individual hosts in competition among its cousins. Such evasion increases chances of reinfection.

      In highly vaccinated populations, mutations still occur at random, but the genetic spread is narrowed to variants that can evade immunity. Lineages that would not have outcompeted their cousins in unvaccinated hosts can win out in vaccinated ones.

    23. In an unvaccinated population, mutations occur at random producing a wide genetic spread with very few progeny resulting in long lasting lineages (Muller's ratchet), with a selection pressure that favors those variants that can (a) win the competition of replication among its cousins within a host, and (b) not kill the host so that it can thrive in new hosts.

      In an unvaccinated population, one would expect random mutations, a wide genetic spread, and selection pressure favoring variants that can (a) win the within-host competition of replication and (b) not kill their host.

    1. Published clinical data on the safety of mRNA-LNP vaccines are scarce, in comparison with siRNA, and are limited to local administration (ID and IM).

      Safety of mRNA vaccines: published clinical data are scarce compared with siRNA, and are limited to local (ID and IM) administration.

    2. Although LNPs are promising delivery systems, safety issues need to be addressed to enable proper clinical development of LNP-formulated mRNA vaccines. LNPs’ potential toxicity could be complex and might manifest in systemic effects due to innate immune activation (induction of pro-inflammatory cytokine production), and/or in local, cellular toxicity due to accumulation of lipids in tissues (Hassett et al. 2019; Semple et al. 2010; Sabnis et al. 2018). Toxicity could potentially be abrogated, or reduced, by the administration of prophylactic anti-inflammatory steroids or other molecules and/or using biodegradable lipids (Hassett et al. 2019; Abrams et al. 2010; Tabernero et al. 2013; Tao et al. 2011). LNPs can also activate the complement system and might potentially elicit a hypersensitivity reaction known as complement activation-related pseudoallergy (CARPA) (Dezsi et al. 2014; Mohamed et al. 2019; Szebeni 2005, 2014), which can be alleviated using different strategies such as steroid and anti-allergic premedication (i.e., dexamethasone, acetaminophen, and antihistaminic drugs) or the use of low infusion rates during intravenous administration (Mohamed et al. 2019; Szebeni et al. 2018). Alternatively, co-delivery of regulatory cytokines (i.e., IL-10) using LNPs might be a viable strategy to reduce potential LNP-associated adverse events.

      Safety of mRNA lipid nanoparticles (LNPs): potential toxicity includes innate immune activation (pro-inflammatory cytokines), lipid accumulation in tissues, and complement activation-related pseudoallergy (CARPA). Mitigation strategies include steroid and antihistamine premedication, biodegradable lipids, and co-delivery of regulatory cytokines.

  4. Jul 2021
    1. Powerful suppliers, including suppliers of labor, can squeeze profi tability out of an industry that is unable to pass on cost increases in its own prices.

      Suppliers with bargaining power can squeeze the profitability out of an industry by raising prices on industry participants that cannot pass on cost increases in their own prices.

    2. It is the threat of entry, not whether entry actually occurs, that holds down profi tability.
    3. The threat of entry in an industry depends on the height of entry barriers that are present and on the reaction en-trants can expect from incumbents. If entry barriers are low and newcomers expect little retaliation from the entrenched competitors, the threat of entry is high and industry profi t-ability is moderated.

      The threat of entry depends on the barriers (i.e. moat) that are present and the reaction entrants can expect from incumbents. If both are low, the threat of new entrants is high.

    4. Particularly when new entrants are diversifying from other markets, they can leverage exist-ing capabilities and cash fl ows to shake up competition, as Pepsi did when it entered the bottled water industry, Micro-soft did when it began to offer internet browsers, and Apple did when it entered the music distribution business.

      When new entrants enter a market, they can often leverage existing cash flows and capabilities e.g. Apple when it entered the music distribution business.

    5. Industry structure drives competition and profi tability, not whether an industry produces a product or service, is emerging or mature, high tech or low tech, regulated or unregulated.

      Profitability is not driven by market maturation but by industry structure carved out by the five forces.

    1. One of the fundamental goals of a blockchain is resolving the “double spend” problem. In a nutshell, this means preventing someone from sending the same coin to two people. However, beyond just simple spend transactions, it applies any time two transactions want to update the same state. This could be someone trying to duplicate Bitcoin, or two people trying to buy the same CryptoKitty. For the sake of generality, we’ll call it the “double update” problem. Fundamentally it’s about ordering: when we see two things, how do we decide which is first, and what happens to the second one?

      The double spend problem is a subset of what can be called the double update problem. How do we order two updates to the same state?
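
      The "double update" problem can be sketched as a toy versioned ledger (a minimal illustration, not how any real blockchain is implemented; all names are hypothetical). Once transactions are ordered, the first update to a piece of state wins and the conflicting second one is rejected as stale:

```python
class ToyLedger:
    """Toy single-node ledger that resolves 'double updates' by ordering.

    Each key carries a version number; a transaction must name the version
    it expects to update. Whichever conflicting transaction is ordered
    first wins, and the one ordered second is rejected.
    """

    def __init__(self):
        self.state = {}  # key -> (version, value)

    def apply(self, key, expected_version, value):
        version, _ = self.state.get(key, (0, None))
        if expected_version != version:
            return False  # lost the ordering race: a "double update"
        self.state[key] = (version + 1, value)
        return True


ledger = ToyLedger()
ledger.apply("kitty-42", 0, "owner: alice")

# Two parties both observed version 1 and race to update the same state:
first = ledger.apply("kitty-42", 1, "owner: bob")     # ordered first, wins
second = ledger.apply("kitty-42", 1, "owner: carol")  # ordered second, rejected
```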

  5. Jun 2021
    1. Furthermore, multiple coexisting or alternate mechanisms of action likely explain the clinical effects observed, such as the competitive binding of ivermectin with the host receptor-binding region of SARS-CoV-2 spike protein, as proposed in 6 molecular modeling studies.21–26

      The mechanism through which ivermectin works on SARS-CoV-2 may be competitive binding with the host receptor-binding region of the SARS-CoV-2 spike protein, as proposed in 6 molecular modelling studies.

    1. DID infrastructure can be thought of as a global key-value database in which the database is all DID-compatible blockchains, distributed ledgers, or decentralized networks. In this virtual database, the key is a DID, and the value is a DID document. The purpose of the DID document is to describe the public keys, authentication protocols, and service endpoints necessary to bootstrap cryptographically-verifiable interactions with the identified entity.

      DID infrastructure can be thought of as a key-value database.

      The database is a virtual database consisting of various different blockchains.

      The key is the DID and the value is the DID document.

      The purpose of the DID document is to hold public keys, authentication protocols and service endpoints necessary to bootstrap cryptographically-verifiable interactions with the identified entity.
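
      The key-value structure can be sketched in a few lines (a toy, assuming a plain dict standing in for the virtual database that spans DID-compatible ledgers; the DID, key material and endpoint below are illustrative placeholders):

```python
# Toy sketch of DID infrastructure as a key-value lookup: the key is a DID,
# the value is a DID document. All identifiers are illustrative.
did_registry = {
    "did:example:123456789abcdefghi": {
        "id": "did:example:123456789abcdefghi",
        # Public keys used to verify signatures from the identified entity
        "verificationMethod": [{
            "id": "did:example:123456789abcdefghi#keys-1",
            "type": "Ed25519VerificationKey2020",
            "publicKeyMultibase": "z6Mk...",  # placeholder key material
        }],
        # Service endpoints used to bootstrap interactions with the entity
        "service": [{
            "id": "did:example:123456789abcdefghi#agent",
            "type": "DIDCommMessaging",
            "serviceEndpoint": "https://agent.example.com/1234",
        }],
    }
}

def resolve(did):
    """Return the DID document for a DID, or None if it is not registered."""
    return did_registry.get(did)

doc = resolve("did:example:123456789abcdefghi")
```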

    1. DigiNotar was a Dutch certificate authority owned by VASCO Data Security International, Inc.[1][2] On September 3, 2011, after it had become clear that a security breach had resulted in the fraudulent issuing of certificates, the Dutch government took over operational management of DigiNotar's systems.[3]

      Dutch Certificate Authority gets hacked.

    1. New Trusted Third Parties Can Be Tempting Many are the reasons why organizations may come to favor costly TTP based security over more efficient and effective security that minimizes the use of TTPs: Limitations of imagination, effort, knowledge, or time amongst protocol designers – it is far easier to design security protocols that rely on TTPs than those that do not (i.e. to fob off the problem rather than solve it). Naturally design costs are an important factor limiting progress towards minimizing TTPs in security protocols. A bigger factor is lack of awareness of the importance of the problem among many security architects, especially the corporate architects who draft Internet and wireless security standards. The temptation to claim the "high ground" as a TTP of choice are great. The ambition to become the next Visa or Verisign is a power trip that's hard to refuse. The barriers to actually building a successful TTP business are, however, often severe – the startup costs are substantial, ongoing costs remain high, liability risks are great, and unless there is a substantial "first mover" advantage barriers to entry for competitors are few. Still, if nobody solves the TTP problems in the protocol this can be a lucrative business, and it's easy to envy big winners like Verisign rather than remembering all the now obscure companies that tried but lost. It's also easy to imagine oneself as the successful TTP, and come to advocate the security protocol that requires the TTP, rather than trying harder to actually solve the security problem. Entrenched interests. Large numbers of articulate professionals make their living using the skills necessary in TTP organizations. For example, the legions of auditors and lawyers who create and operate traditional control structures and legal protections. They naturally favor security models that assume they must step in and implement the real security. 
In new areas like e-commerce they favor new business models based on TTPs (e.g. Application Service Providers) rather than taking the time to learn new practices that may threaten their old skills. Mental transaction costs. Trust, like taste, is a subjective judgment. Making such judgement requires mental effort. A third party with a good reputation, and that is actually trustworthy, can save its customers from having to do so much research or bear other costs associated with making these judgments. However, entities that claim to be trusted but end up not being trustworthy impose costs not only of a direct nature, when they breach the trust, but increase the general cost of trying to choose between trustworthy and treacherous trusted third parties.

      There are strong incentives to stick with trusted third parties

      1. It's more difficult to design protocols that work without a TTP
      2. It's tempting to imagine oneself as a successful TTP
      3. Entrenched interests — many professions depend on the TTP status quo (e.g. lawyers, auditors)
      4. Mental transaction costs — It can be mentally easier to trust a third party, rather than figuring out who to trust.
    2. The high costs of implementing a TTP come about mainly because traditional security solutions, which must be invoked where the protocol itself leaves off, involve high personnel costs. For more information on the necessity and security benefits of these traditional security solutions, especially personnel controls, when implementing TTP organizations, see this author's essay on group controls. The risks and costs borne by protocol users also come to be dominated by the unreliability of the TTP – the DNS and certificate authorities being two quite commom sources of unreliability and frustration with the Internet and PKIs respectively.

      The high costs of TTPs have to do with the high personnel costs that are involved in the centralized solutions.

    3. The certificate authority has proved to be by far the most expensive component of this centralized public key infrastructure (PKI). This is exacerbated when the necessity for a TTP deemed by protocol designers is translated, in PKI standards such as SSL and S/MIME, into a requirement for a TTP. A TTP that must be trusted by all users of a protocol becomes an arbiter of who may and may not use the protocol. So that, for example, to run a secure SSL web server, or to participate in S/MIME, one must obtain a certifcate from a mutually trusted certificate authority. The earliest and most popular of these has been Verisign. It has been able to charge several hundred dollars for end user certificates – far outstripping the few dollars charged (implicitly in the cost of end user software) for the security protocol code itself. The bureaucratic process of applying for and renewing certificates takes up far more time than configuring the SSL options, and the CA's identification process is subject to far greater exposure than the SSL protocol itself. Verisign amassed a stock market valuation in the 10's of billions of U.S. dollars (even before it went into another TTP business, the Internet Domain Name System(DNS) by acquiring Network Solutions). How? By coming up with a solution – any solution, almost, as its security is quite crude and costly compared to the cryptographic components of a PKI – to the seemingly innocuous assumption of a "trusted third party" made by the designers of public key protocols for e-mail and the Web.

      The most expensive (and wasteful) part of centralized public key infrastructure (PKI) is the Certificate Authority (the Trusted Third Party).

      Verisign became a company valued in the tens of billions of dollars by charging hundreds of dollars for end-user certificates, even though its security was quite crude compared to the cryptographic components of a PKI. The bureaucratic process of applying for and renewing certificates also takes far longer than configuring the protocol for actual use.

      Meanwhile, the cost paid for the protocol code itself, captured implicitly in the software's price, is a few dollars.

    4. Personal Property Has Not and Should Not Depend On TTPs For most of human history the dominant form of property has been personal property. The functionality of personal property has not under normal conditions ever depended on trusted third parties. Security properties of simple goods could be verified at sale or first use, and there was no need for continued interaction with the manufacturer or other third parties (other than on occasion repair personel after exceptional use and on a voluntary and temporary basis). Property rights for many kinds of chattel (portable property) were only minimally dependent on third parties – the only problem where TTPs were neededwas to defend against the depredations of other third parties. The main security property of personal chattel was often not other TTPs as protectors but rather its portability and intimacy. Here are some examples of the ubiquity of personal property in which there was a reality or at least a strong desire on the part of owners to be free of dependence on TTPs for functionality or security: Jewelry (far more often used for money in traditional cultures than coins, e.g. Northern Europe up to 1000 AD, and worn on the body for better property protection as well as decoration) Automobiles operated by and house doors opened by personal keys. Personal computers – in the original visions of many personal computing pioneers (e.g. many members of the Homebrew Computer Club), the PC was intended as personal property – the owner would have total control (and understanding) of the software running on the PC, including the ability to copy bits on the PC at will. Software complexity, Internet connectivity, and unresolved incentive mismatches between software publishers and users (PC owners) have substantially eroded the reality of the personal computer as personal property. This desire is instinctive and remains today. 
It manifests in consumer resistance when they discover unexpected dependence on and vulnerability to third parties in the devices they use. Suggestions that the functionality of personal property be dependent on third parties, even agreed to ones under strict conditions such as creditors until a chattel loan is paid off (a smart lien) are met with strong resistance. Making personal property functionality dependent on trusted third parties (i.e. trusted rather than forced by the protocol to keep to the agreement governing the security protocol and property) is in most cases quite unacceptable.

      Personal property did not depend on trusted third parties

      For most of human history, personal property did not depend on Trusted Third Parties (TTPs). To the extent TTPs were needed at all, it was to defend property from the depredations of other third parties.

      Jewelry, automobile keys, house keys — these all show that humans had a preference for having sovereign access to their property, without relying on third parties.

      This preference remains with us today and you can see it manifest itself in people's anger when they discover that part of their product is not owned by them.

    5. The main security property of personal chattel was often not other TTPs as protectors but rather its portability and intimacy.

      The main security property of personal chattel was not a Trusted Third Party (TTP), but its portability and intimacy.

    1. So, what problem is blockchain solving for identity if PII is not being stored on the ledger? The short answer is that blockchain provides a transparent, immutable, reliable and auditable way to address the seamless and secure exchange of cryptographic keys. To better understand this position, let us explore some foundational concepts.

      What problem is blockchain solving in the SSI stack?

      It is an immutable (often permissionless) and auditable way to address the seamless and secure exchange of cryptographic keys.

    1. But, as I have said many times here at AVC, I believe that business model innovation is more disruptive than technological innovation. Incumbents can adapt to and adopt new technological changes (web to mobile) way easier than they can adapt to and adopt new business models (selling software to free ad-supported software). So this new protocol-based business model feels like one of these “changes of venue” as my partner Brad likes to call them. And that smells like a big investable macro trend to me.

      Business model innovation is more disruptive than technological innovation.

    2. This is super important because the more open protocols we have, the more open systems we will have.

      Societal benefits of cryptocurrencies

      The more open protocols we have, the more open systems we have.

    1. From a comment by Muneeb Ali:

      The original Internet protocols defined how data is delivered, but not how it's stored. This led to centralization of data.

      The original Internet protocols also didn't provide end-to-end security. This led to massive security breaches. (There are other reasons for security breaches as well, but everything was built on a very weak security model to begin with.)

    2. Because we didn’t know how to maintain state in a decentralized fashion it was the data layer that was driving the centralization of the web that we have observed.

      We didn't know how to maintain state in a decentralized fashion, and this is what drove centralization.

    3. I can’t emphasize enough how radical a change this is to the past. Historically the only way to make money from a protocol was to create software that implemented it and then try to sell this software (or more recently to host it). Since the creation of this software (e.g. web server/browser) is a separate act many of the researchers who have created some of the most successful protocols in use today have had little direct financial gain. With tokens, however, the creators of a protocol can “monetize” it directly and will in fact benefit more as others build businesses on top of that protocol.

      Tokens allow protocol creators to profit from their creation, whereas in the past they would need to create an app that implemented the protocol to do so.

    4. Organizationally decentralized but logically centralized state will allow for the creation of protocols that can undermine the power of the centralized incumbents.

      Organizationally decentralized but logically centralized

    1. The important innovation provided by the blockchain is that it makes the top right quadrant possible. We already had the top left. Paypal for instance maintains a logically centralized database for its payments infrastructure. When I pay someone on Paypal their account is credited and mine is debited. But up until now all such systems had to be controlled by a single organization.

      The top right quadrant is the innovation that blockchain represents.

    2.                             organizationally       organizationally
                                   centralized            decentralized
       logically centralized       e.g. Paypal            *new* e.g. Bitcoin
       logically decentralized     e.g. Excel             e.g. e-mail

      Organizationally decentralized, logically centralized

      Organizationally centralized are systems that are controlled by a single organization. Organizationally decentralized are systems that are not under control of any one entity.

      Logically decentralized are systems that have multiple databases, where participants control their own database entirely. Excel is logically decentralized. Logically centralized are systems that appear as if they have a single global database (irrespective of how it's implemented).
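
      The "organizationally decentralized, logically centralized" quadrant can be sketched as state-machine replication: independent nodes that deterministically replay the same ordered transaction log arrive at identical state, so the system behaves like a single global database. A minimal toy (names illustrative):

```python
def apply_log(log):
    """Deterministically replay an ordered log of (account, delta) entries."""
    balances = {}
    for account, delta in log:
        balances[account] = balances.get(account, 0) + delta
    return balances

# One agreed-upon transaction ordering, shared by all participants:
shared_log = [("alice", 100), ("bob", 50), ("alice", -30)]

# Two organizationally independent nodes replay the same log...
node_a = apply_log(shared_log)
node_b = apply_log(shared_log)
# ...and end up presenting one logical database.
```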

  6. May 2021
    1. It’s estimated there will be over 20 billion connected devices by 2020, all of which will require management, storage, and retrieval of data. However, today’s blockchains are ineffective data receptacles, because every node on a typical network must process every transaction and maintain a copy of the entire state. The result is that the number of transactions cannot exceed the limit of any single node. And blockchains get less responsive as more nodes are added, due to latency issues.

      There's a limit on how much data a blockchain can handle, because every node must process every transaction and maintain a copy of the entire state; throughput cannot exceed the limit of any single node.

    2. Another concern was the requirement for a dedicated network. The logic of blockchain is that information is shared, which requires cooperation between companies and heavy lifting to standardize data and systems. The coopetition paradox applied; few companies had the appetite to lead development of a utility that would benefit the entire industry. In addition, many banks have been distracted by broader IT transformations, leaving little headspace to champion a blockchain revolution.

      The coopetition paradox occurred in blockchain development. Companies didn't want to lead investment in technology that would benefit the entire industry.

    3. By late 2017, many people working at financial companies felt blockchain technology was either too immature, not ready for enterprise level application, or was unnecessary. Many POCs added little benefit, for example beyond cloud solutions, and in some cases led to more questions than answers. There were also doubts about commercial viability, with little sign of material cost savings or incremental revenues.

      By late 2017 many blockchain proof of concepts did not add much and the technology seemed unnecessary or too immature.

    1. The Internet was built without a way to know who and what you are connecting to. This limits what we can do with it and exposes us to growing dangers. If we do nothing, we will face rapidly proliferating episodes of theft and deception that will cumulatively erode public trust in the Internet.

      Kim Cameron posits that the internet was built without an identity layer. You have no way of knowing who and what you are connecting to.

    1. Today, the sector of the economy with the lowest IT intensity is farming, where IT accounts for just 1 percent of all capital spending. Here, the potential impact of the IoT is enormous. Farming is capital- and technology-intensive, but it is not yet information-intensive. Advanced harvesting technology, genetically modified seeds, pesticide combinations, and global storage and distribution show how complex modern agriculture has become, even without applying IT to that mix

      The sector with the lowest IT intensity is farming, where IT accounts for just 1 percent of all capital spending.

    2. The IoT creates the ability to digitize, sell and deliver physical assets as easily as with virtual goods today. Using everything from Bluetooth beacons to Wi-Fi-connected door locks, physical assets stuck in an analog era will become digital services. In a device driven democracy, conference rooms, hotel rooms, cars and warehouse bays can themselves report capacity, utilization and availability in real-time. By taking raw capacity and making it easy to be utilized commercially, the IoT can remove barriers to fractionalization of industries that would otherwise be impossible. Assets that were simply too complex to monitor and manage will present business opportunities in the new digital economy.

      IoT ushers in a device driven democracy where conference rooms, hotel rooms and cars can self-report capacity, utilization and availability in real-time.

      IoT can make it easier to fractionalize industries that would otherwise be impossible.

    3. In this model, users control their own privacy and rather than being controlled by a centralized authority, devices are the master. The role of the cloud changes from a controller to that of a peer service provider. In this new and flat democracy, power in the network shifts from the center to the edge. Devices and the cloud become equal citizens.

      In a blockchain-based IoT, power in the network shifts from the center to the edge.

    4. Challenge five: Broken business modelsMost IoT business models also hinge on the use of analytics to sell user data or targeted advertising. These expectations are also unrealistic. Both advertising and marketing data are affected by the unique quality of markets in information: the marginal cost of additional capacity (advertising) or incremental supply (user data) is zero. So wherever there is competition, market-clearing prices trend toward zero, with the real revenue opportunity going to aggregators and integrators. A further impediment to extracting value from user data is that while consumers may be open to sharing data, enterprises are not.Another problem is overly optimistic forecasts about revenue from apps. Products like toasters and door locks worked without apps and service contracts before the digital era. Unlike PCs or smartphones, they are not substantially interactive, which makes such revenue expectations unrealistic. Finally, many smart device manufacturers have improbable expectations of ecosystem opportunities. While it makes interesting conversation for a smart TV to speak to the toaster, such solutions get cumbersome quickly and nobody has emerged successful in controlling and monetizing the entire IoT ecosystem.So while technology propels the IoT forward, the lack of compelling and sustainably profitable business models is, at the same time, holding it back. If the business models of the future don’t follow the current business of hardware and software platforms, what will they resemble?

      Challenge 5 for IoT: Broken business models

      Conventional IoT business models relied on using and selling user data with targeted advertising. This won't work. Enterprises aren't willing to share data.

      Door locks and toasters worked without apps before, and whatever smartness is added, they won't be very interactive. Capturing sufficient value from apps and subscriptions will be difficult.

      Having your toaster talk to your fridge sounds interesting, but it doesn't improve the user's life.

    5. Challenge four: A lack of functional valueMany IoT solutions today suffer from a lack of meaningful value creation. The value proposition of many connected devices has been that they are connected – but simply enabling connectivity does not make a device smarter or better. Connectivity and intelligence are a means to a better product and experience, not an end. It is wishful thinking for manufacturers that some features they value, such as warranty tracking, are worth the extra cost and complexity from a user’s perspective. A smart, connected toaster is of no value unless it produces better toast. The few successes in the market have kept the value proposition compelling and simple. They improve the core functionality and user experience, and do not require subscriptions or apps.

      Challenge 4 for IoT: A lack of functional value

      Making a device smart doesn't necessarily improve the experience. A smart toaster is of no value unless it produces better toast.

    6. Challenge three: Not future-proofWhile many companies are quick to enter the market for smart, connected devices, they have yet to discover that it is very hard to exit. While consumers replace smartphones and PCs every 18 to 36 months, the expectation is for door locks, LED bulbs and other basic pieces of infrastructure to last for years, even decades, without needing replacement. An average car, for example, stays on the road for 10 years, the average U.S. home is 39 years old and the expected lifecycles of road, rail and air transport systems is over 50 years.10 A door lock with a security bug would be a catastrophe for a warehousing company and the reputation of the manufacturer. In the IoT world, the cost of software updates and fixes in products long obsolete and discontinued will weigh on the balance sheets of corporations for decades, often even beyond manufacturer obsolescence.

      Challenge 3 for IoT: Not future proof

      (1) Consumers replace smartphones and PCs every 1.5-3 years, but expect door locks, LED bulbs and other basic infrastructure to last for years, even decades.

      (2) A door lock might have a security bug, requiring an update and damaging the manufacturer's reputation.

      (3) Software updates might need to be shipped for discontinued, obsolete products.

    7. Challenge two: The Internet after trustThe Internet was originally built on trust. In the post-Snowden era, it is evident that trust in the Internet is over. The notion of IoT solutions built as centralized systems with trusted partners is now something of a fantasy. Most solutions today provide the ability for centralized authorities, whether governments, manufacturers or service providers to gain unauthorized access to and control devices by collecting and analyzing user data. In a network of the scale of the IoT, trust can be very hard to engineer and expensive, if not impossible, to guarantee. For widespread adoption of the ever-expanding IoT, however, privacy and anonymity must be integrated into its design by giving users control of their own privacy. Current security models based on closed source approaches (often described as “security through obscurity”) are obsolete and must be replaced by a newer approach – security through transparency. For this, a shift to open source is required. And while open source systems may still be vulnerable to accidents and exploitable weaknesses, they are less susceptible to government and other targeted intrusion, for which home automation, connected cars and the plethora of other connected devices present plenty of opportunities.

      Challenge 2 of IoT: The internet after trust

      In the post-Snowden era, it is not realistic or wise to expect the world of IoT to be based on a centralized trust model.

      Most solutions today provide the ability for centralized authorities, whether governments, manufacturers or service providers, to gain unauthorized access to and control of devices.

      Because of the scale of IoT, a centralized trust architecture would not be scalable or affordable.

      Privacy and anonymity must be integrated into its design by giving users control over their own privacy.

      A shift from closed source to open source is required. Open source systems are less susceptible to targeted intrusion.

    8. Challenge one: The cost of connectivity

Even as revenues fail to meet expectations, costs are prohibitively high. Many existing IoT solutions are expensive because of the high infrastructure and maintenance costs associated with centralized clouds and large server farms, in addition to the service costs of middlemen. There is also a mismatch in supplier and customer expectations. Historically, costs and revenues in the IT industry have been nicely aligned. Though mainframes lasted for many years, they were sold with enterprise support agreements. PCs and smartphones have not

      Challenge 1 of IoT: The cost of connectivity

      There are high infrastructure and maintenance costs associated with running IoT communication through centralized clouds and the service costs of middlemen.

    1. One way the blockchain could change online security dynamics is the opportunity to replace the flawed “shared-secret model” for protecting information with a new “device identity model.” Under the existing paradigm, a service provider and a customer agree on a secret password and perhaps certain mnemonics—“your pet’s name”—to manage access. But that still leaves all the vital data, potentially worth billions of dollars, sitting in a hackable repository on the company’s servers. With the right design, a blockchain-based system would leave control over the data with customers, which means the point of vulnerability would lie with their devices. The onus is now on the customer to protect that device, so we must, of course, develop far more sophisticated methods for storing, managing, and using our own private encryption keys. But the more important point is that the potential payoff for the hacker is so much smaller for each attack. Rather than accessing millions of accounts at once, he or she has to pick off each device one by one for comparatively tiny amounts. Think of it as an incentives-weighted concept of security.

      Using blockchain we could shift from a shared-secret model to a device identity model.

      This would mean that the customer's data is stored with the customer, not on a central database.

      The onus is then on the customer to protect that data and the device it's on.

      The important point is that you're replacing a single, attractive attack vector with many distributed vectors, reducing the potential payoff of each.

      You achieve security through incentives.
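
      The contrast between the two models can be sketched in code. This is a minimal illustration, not real cryptography: the "device key" is textbook RSA with tiny primes, and all names (`server_db`, `login`, `sign`, `verify`) are hypothetical.

```python
import hashlib

# --- Shared-secret model: the server stores material derived from the
# secret itself, so one server breach exposes every account at once.
server_db = {"alice": hashlib.sha256(b"alices-password").hexdigest()}

def login(user, password):
    return server_db[user] == hashlib.sha256(password.encode()).hexdigest()

# --- Device-identity model: the private key never leaves the device; the
# server stores only the public verification key (N, E), which is useless
# to a thief.  Toy RSA keypair: N = 61 * 53, e = 17, d = 2753.
N, E, D = 3233, 17, 2753

def sign(challenge):
    # Performed on the device, with the private exponent D.
    m = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(m, D, N)

def verify(challenge, sig):
    # Performed by the server, using only the public key (N, E).
    m = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(sig, E, N) == m

assert login("alice", "alices-password")
assert verify(b"nonce-42", sign(b"nonce-42"))
```

      Breaching `server_db` in the first model yields crackable material for all users at once; breaching the server in the second yields only public keys, so an attacker must compromise devices one by one.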

    2. So much of what’s foreseen won’t be viable without distributed trust, whether it’s smart parking systems transacting with driverless cars, decentralized solar microgrids that let neighbors automatically pay each other for power, or public Wi-Fi networks accessed with digital-money micropayments. If those peer-to-peer applications were steered through a centralized institution, it would have to “KYC” each device and its owner—to use an acronym commonly used to describe banks’ regulatory obligation to conduct “know your customer” due diligence. Those same gatekeepers could also curtail competitors, quashing innovation. Processing costs and security risks would rise. In short, a “permissioned” system like this would suck all the seamless, creative fluidity out of our brave new IoT world.

      Permissioned vs. Permissionless

      Many solutions will not be viable without distributed trust because routing all transactions through a central authority comes with too much friction.

      (1) KYC requirements for each node (2) Processing costs rise

      At the same time centralizing these transactions has other adverse effects:

      (1) It gives the centralized entity gatekeeper power, giving them the ability to curtail competitors, thereby quashing innovation. (2) Security risks rise, because data passes through a centralized location.

      Permissioned systems stifle innovation.

    3. Bitcoin has survived because it leaves hackers nothing to hack. The public ledger contains no personal identifying information about the system’s users, at least none of any value to a thief. And since no one controls it, there’s no central vector of attack. If one node on the bitcoin network is compromised and someone tries to undo or rewrite transactions, the breach will be immediately contradicted by the hundreds of other accepted versions of the ledger.

      Bitcoin has not been hacked, in part, because it leaves hackers nothing to hack.

      (1) The public ledger contains no personal identifying information (2) No one controls it, so there's no central vector of attack
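
      The "breach is contradicted by the other copies" idea can be sketched as a toy majority vote over ledger hashes. This is an illustration only, not Bitcoin's actual consensus protocol; all names here are hypothetical.

```python
import hashlib
from collections import Counter

def ledger_hash(ledger):
    # Fingerprint of one node's copy of the transaction history.
    return hashlib.sha256("|".join(ledger).encode()).hexdigest()

honest = ["alice->bob:5", "bob->carol:2"]
tampered = ["alice->bob:5", "bob->mallory:2"]  # one node rewrites history

# Three honest copies versus one tampered copy: the network accepts the
# version held by the majority, so the lone rewrite is simply outvoted.
nodes = [honest, honest, honest, tampered]
consensus_hash, votes = Counter(
    ledger_hash(ledger) for ledger in nodes
).most_common(1)[0]

assert consensus_hash == ledger_hash(honest)
assert votes == 3
```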

    4. Ever since its launch in 2009, there has been no successful cyberattack on bitcoin’s core ledger—despite the tempting bounty that the digital currency’s $9 billion market cap offers to hackers.

      There has been no successful cyberattack on Bitcoin despite the tempting bounty.

    5. Thirty years later, we finally have the conceptual framework for such a system, one in which trust need no longer be invested in a third-party intermediary but managed in a distributed manner across a community of users incentivized to protect a public good. Blockchain technology and the spinoff ideas it has spawned provide us with the best chance yet to solve the age-old problem of the Tragedy of the Commons.

      Blockchain technology allows us to distribute trust across a community of users incentivized to protect a public good.

    6. Thirty years later, we finally have the conceptual framework for such a system, one in which trust need no longer be invested in a third-party intermediary but managed in a distributed manner across a community of users incentivized to protect a public good.

      Blockchain heralds the nascent system for managing trust in a distributed manner across a community of users incentivized to protect a public good.

    7. The problem was that in those early years, Silicon Valley had no distributed trust management system to match the new distributed communications architecture.

      Just as we initially lacked the network architecture to support peer-to-peer communication, once we had it, we still lacked the trust architecture to support distributed trust management.

    8. The single most important driver of decentralization has been the fact that human communication—without which societies, let alone economies, can’t exist—now happens over an entirely distributed system: the Internet. The packet-switching technology that paved the way for the all-important TCP/IP protocol pair meant that data could travel to its destination via the least congested route, obviating the need for the centralized switchboard hubs that had dominated telecommunications. Thus, the Internet gave human beings the freedom to talk to each other directly, to publish information to anyone, anywhere. And because communication was no longer handled via a hub-and-spokes model, commerce changed, too. People could submit orders to an online store or enter into a peer-to-peer negotiation over eBay.

      Human communication and economic transactions are by their very nature peer-to-peer. Early telecommunication technology was able to scale these interactions over larger groups of participants and larger distances, but it did so through a hub-and-spoke model relying on centralized switchboards.

      The internet offered a way for distributed nodes to communicate across the least congested route between them, a distributed architecture that is a better match for the distributed nature of human communication and commerce.

    9. For IT-savvy thieves, it’s the best of both worlds: more and more locations from which to launch surreptitious attacks and a set of ever-growing, centralized pools of valuable information to go after.

      Cybercriminals have the best of both worlds:

      (1) More access points to launch attacks from, due to increased decentralization (e.g. IoT) (2) More and larger centralized pools of valuable information to go after

    10. Decentralization, meanwhile, is pushing the power to execute contracts and manage assets to the edges of the network, creating a proliferation of new access points.

      Decentralization, like the proliferation of IoT, is pushing the power to execute contracts and manage assets to the individual nodes of the network.

    11. Yet, we continue to depend upon something we might call the centralized trust paradigm, by which middlemen entities coordinate our monetary transactions and other exchanges of value. We trust banks to track and verify everyone’s account balances so that strangers can make payments to each other. We entrust our most sensitive health records to insurance firms, hospitals, and laboratories. We rely on public utilities to read our electricity meters, monitor our usage, and invoice us accordingly. Even our new, Internet-driven industries are led by a handful of centralized behemoths to which we’ve entrusted our most valuable personal data: Google, Facebook, Amazon, Uber, etc.

      Despite aggregators driving more decentralized economic exchanges, we continue to rely on a centralized trust paradigm.

      We trust our banks to verify everyone's account balances, we trust our health records to insurance firms, we rely on public utilities to read our electricity meters.

    12. Startups of all kinds are constantly pitching ideas for e-marketplaces and online platforms that would unlock new network effects by bypassing incumbent middlemen and letting people interact directly with each other. Although these companies are themselves centralized entities, the services they provide satisfy an increasing demand for more decentralized exchanges. This shift underpins social media, ride-sharing, crowdfunding, Wikipedia, localized solar microgrids, personal health monitoring, and everything else in the Internet of Things (IoT).

      Aggregators (ride-sharing, social media) have been driving an increase in decentralized economic exchanges, while being built on top of centralized network infrastructure.

      This has allowed them to capture a sizeable portion of the value that is generated by their platforms, but it has also burdened them with custody over large amounts of user data.

    13. At the heart of this failure lies the fact that the ongoing decentralization of our communication and business exchanges is in direct contradiction with the outdated centralized systems we use to secure them. Given that the decentralization trend is fueled by the distributed communications system of the Internet—one in which no central hub acts as information gatekeeper—what’s needed is a new approach to security that’s also based on a distributed network architecture.

      Michael Casey posits that at the heart of the colossal failure of securing the world's online commerce is the contradiction between two things:

      (1) The ongoing decentralization of our communication and business exchanges driven by the distributed communications system of the internet. (2) The centralized systems we use to secure them.

      Communication and economic exchanges are becoming increasingly decentralized, fueled by the distributed infrastructure of the internet. This requires a similar approach to security that is based on a distributed network architecture.

    14. Lloyd’s of London knows a thing or two about business losses—for three centuries, the world’s oldest insurance market has been paying out damages triggered by wars, natural disasters, and countless acts of human error and fraud. So, it’s worth paying attention when Lloyds estimates that cybercrime caused businesses to lose $400 billion between stolen funds and disruption to their operations in 2015. If that number seems weighty—and it ought to—try this one for size: $2.1 trillion. That’s Juniper Research’s total cybercrime loss forecast for the even more digitally interconnected world projected for 2019. To put that figure in perspective, at current economic growth rates, it would represent more than 2% of total world GDP. We are witnessing a colossal failure to protect the world’s online commerce.

      Estimates of cybercrime losses run from $400 billion in 2015 (Lloyd's) to a forecast $2.1 trillion for 2019 (Juniper Research). This points to a "colossal failure to protect the world's online commerce".

    1. Storing any type of PII on a public blockchain, even encrypted or hashed, is dangerous for two reasons: 1) the encrypted or hashed data is a global correlation point when the data is shared with multiple parties, and 2) if the encryption is eventually broken (e.g., quantum computing), the data will be forever accessible on an immutable public ledger. So the best practice is to store all private data off-chain and exchange it only over encrypted, private, peer-to-peer connections.

      Storing sensitive information on a blockchain, whether encrypted or hashed, is risky: the data is there forever, it forms a global correlation point when shared with multiple parties, and the encryption might eventually be broken.
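
      Both failure modes can be shown in a few lines. This is an illustrative sketch; the SSN value and the `crack` helper are hypothetical, and the brute-force search is deliberately restricted to a tiny sub-range.

```python
import hashlib

# 1) Correlation point: the same PII hashed by two unrelated parties
#    yields the same digest, so the digest itself links their records.
ssn = "078-05-1120"  # hypothetical example SSN
h_party_a = hashlib.sha256(ssn.encode()).hexdigest()
h_party_b = hashlib.sha256(ssn.encode()).hexdigest()
assert h_party_a == h_party_b  # anyone holding both ledgers can correlate

# 2) Low-entropy PII is also recoverable by brute force: there are only
#    about 10^9 possible SSNs, so one can enumerate and compare digests.
def crack(digest):
    for area in range(1000):  # toy search over a tiny sub-range
        candidate = f"{area:03d}-05-1120"
        if hashlib.sha256(candidate.encode()).hexdigest() == digest:
            return candidate
    return None

assert crack(h_party_a) == ssn
```

      On an immutable public ledger, neither problem can be fixed after the fact, which is why the article recommends keeping private data off-chain entirely.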

    2. For self-sovereign identity, which can be defined as a lifetime portable digital identity that does not depend on any centralized authority, we need a new class of identifier that fulfills all four requirements: persistence, global resolvability, cryptographic verifiability, and decentralization.

      The four requirements that constitute self-sovereign identity:

      1. Persistence
      2. Global Resolvability
      3. Cryptographic Verifiability
      4. Decentralization
    1. The models for online identity have advanced through four broad stages since the advent of the Internet: centralized identity, federated identity, user-centric identity, and self-sovereign identity.

      Online identity advanced through 4 stages:

      1. Centralized identity
      2. Federated identity
      3. User-centric identity
      4. Self-sovereign identity

    2. Identity in the digital world is even trickier. It suffers from the same problem of centralized control, but it’s simultaneously very balkanized: identities are piecemeal, differing from one Internet domain to another.

      Identity in the digital world gets muddied in the same way, but in addition it is balkanized: different internet domains have different identities.

    3. However, modern society has muddled this concept of identity. Today, nations and corporations conflate driver’s licenses, social security cards, and other state-issued credentials with identity; this is problematic because it suggests a person can lose his very identity if a state revokes his credentials or even if he just crosses state borders. I think, but I am not.

      Christopher Allen posits that modern society has muddled the concept of identity by equating it to a driver's license or national ID card, thereby implying that it is something that can be taken away.

      I would say that it is not society but the modern state that has not merely muddled, but corrupted, the concept of identity.

      This also reminds me of the idea of how to draw the line of definition for a component with which greater complexity is built up.

  7. Apr 2021
    1. Acquiring viral drift sufficient to produce new influenza strains capable of escaping population immunity is believed to take years of global circulation, not weeks of local circulation.

      Experiencing enough viral drift to produce an influenza variant capable of escaping population immunity is believed to take years of global circulation (not weeks of local circulation).

  8. Mar 2021
    1. Private property rights are not absolute. The rule against the "dead hand" or the rule against perpetuities is an example. I cannot specify how resources that I own will be used in the indefinitely distant future. Under our legal system, I can only specify the use for a limited number of years after my death or the deaths of currently living people.

      Property rights are not absolute; our legal system does not support the ability to specify how resources should be used indefinitely into the future.

    2. Similarly, the set of resources over which property rights may be held is not well defined and demarcated. Ideas, melodies, and procedures, for example, are almost costless to replicate explicitly (near-zero cost of production) and implicitly (no forsaken other uses of the inputs). As a result, they typically are not protected as private property except for a fixed term of years under a patent or copyright.

      It is not well demarcated over which resources property rights may be held. Melodies and ideas, for instance, are virtually costless to replicate, and such resources tend not to be protected as private property.

    3. Depending upon circumstances certain actions may be considered invasions of privacy, trespass, or torts. If I seek refuge and safety for my boat at your dock during a sudden severe storm on a lake, have I invaded "your" property rights, or do your rights not include the right to prevent that use? The complexities and varieties of circumstances render impossible a bright-line definition of a person's set of property rights with respect to resources.

      In real-life property rights there are many gray areas. In programmatic property rights, there are none.

    4. The cost of establishing private property rights—so that I could pay you a mutually agreeable price to pollute your air—may be too expensive. Air, underground water, and electromagnetic radiations, for example, are expensive to monitor and control. Therefore, a person does not effectively have enforceable private property rights to the quality and condition of some parcel of air. The inability to cost-effectively monitor and police uses of your resources means "your" property rights over "your" land are not as extensive and strong as they are over some other resources, like furniture, shoes, or automobiles. When private property rights are unavailable or too costly to establish and enforce, substitute means of control are sought. Government authority, expressed by government agents, is one very common such means. Hence the creation of environmental laws.

      For some types of property, and/or some uses of that property, the costs of monitoring are too high. It is too expensive to monitor parcels of air for pollution or radiation.

      When a resource cannot be cost-effectively monitored and/or policed, your property rights over this resource become less strong.

      When property rights become too weak, alternative means of control are sought, e.g. government agents and environmental laws.

    5. The two extremes in weakened private property rights are socialism and "commonly owned" resources. Under socialism, government agents—those whom the government assigns—exercise control over resources. The rights of these agents to make decisions about the property they control are highly restricted. People who think they can put the resources to more valuable uses cannot do so by purchasing the rights because the rights are not for sale at any price. Because socialist managers do not gain when the values of the resources they manage increase, and do not lose when the values fall, they have little incentive to heed changes in market-revealed values. The uses of resources are therefore more influenced by the personal characteristics and features of the officials who control them. Consider, in this case, the socialist manager of a collective farm. By working every night for one week, he could make 1 million rubles of additional profit for the farm by arranging to transport the farm's wheat to Moscow before it rots. But if neither the manager nor those who work on the farm are entitled to keep even a portion of this additional profit, the manager is more likely than the manager of a capitalist farm to go home early and let the crops rot.

      Weakened property rights come in two forms (1) socialism and (2) the commons.

      If a socialist manager of a farm isn't entitled to any extra profit for working extra hours to transport the farm's wheat to Moscow before it rots, he likely won't put in those hours.

    6. The fundamental purpose of property rights, and their fundamental accomplishment, is that they eliminate destructive competition for control of economic resources. Well-defined and well-protected property rights replace competition by violence with competition by peaceful means.

      Well-defined property rights replace competition by violence with competition by peaceful means.

    7. Finally, a private property right includes the right to delegate, rent, or sell any portion of the rights by exchange or gift at whatever price the owner determines (provided someone is willing to pay that price). If I am not allowed to buy some rights from you and you therefore are not allowed to sell rights to me, private property rights are reduced. Thus, the three basic elements of private property are (1) exclusivity of rights to the choice of use of a resource, (2) exclusivity of rights to the services of a resource, and (3) rights to exchange the resource at mutually agreeable terms.

      There are three elements that constitute private property:

      (1) Exclusive rights to determine how the property gets used (2) Exclusive rights to the services of a resource (3) Rights to sell the resource

    1. A normal household has to pay rent or make mortgage payments. To arbitrarily exclude the biggest expense to consumers from CPI is pretty misleading.When you create new money prices don't rise evenly. At the moment we have new money being created by central banks and given to privileged institutions who get access to free money. They use that to buy investments: real estate, stocks, etc. These are precisely the things getting really expensive. The last things to get more expensive during big cycles of inflation are employee wages.The world used gold/silver for its currency for most of human history until 1970 when we entered this period of worldwide fiat currencies. Our current situation is pretty remarkable.The whole argument for printing money being OK is dumb. If it's OK to print money to pay for some things why are you not doing it more? Why not make everyone a millionaire?I think that another deception is that we should ordinarily be experiencing price deflation. Every day our society is getting more efficient at making things. If prices for goods are staying the same then it may not be that their value has not changed, they may be less valuable goods, but they cost the same because you're also buying them with less valuable currency.If you have gone through years of moving everything to China to make it cheaper to manufacture, improved technology to make processes more efficient, etc. and I'm still paying the same amount for all of the stuff in my life, then again, maybe all these things are cheaper, but I'm also buying them with currency that's less valuable.Ultimately, printing money doesn't make anyone more productive or produce anything. All it does is redistribute wealth from those that were first to get the new free money away from those that were last to contact it.

      Solid HN comment on inflation

    1. For the evolutionary psychologists an explanation that humans do something for "the sheer enjoyment of it" is not an explanation at all – but the posing of a problem. Why do so many people find the collection and wearing of jewelry enjoyable? For the evolutionary psychologist, this question becomes – what caused this pleasure to evolve?

      For evolutionary psychologists an explanation that humans do something for their enjoyment is not an explanation at all. The question becomes: Why did this pleasure evolve?

    2. Collecting and making necklaces must have had an important selection benefit, since it was costly – manufacture of these shells took a great deal of both skill and time during an era when humans lived constantly on the brink of starvation[C94].

      Because evolution is ruthlessly energy preserving and because our African ancestors would have continuously lived on the brink of starvation, the costly manufacture of ornamental shells must have incurred a large selection benefit to those doing it.

    3. It continued to be used as a medium of exchange, in some cases into the 20th century – but its value had been inflated one hundred fold by Western harvesting and manufacturing techniques, and it gradually went the route that gold and silver jewelry had gone in the West after the invention of coinage – from well crafted money to decoration.

      The value of wampum became inflated more than one hundred fold by Western harvesting and manufacturing techniques.

    4. The beginning of the end of wampum came when the British started shipping more coin to the Americas, and Europeans started applying their mass-manufacturing techniques. By 1661, British authorities had thrown in the towel, and decided it would pay in coin of the realm – which being real gold and silver, and its minting audited and branded by the Crown, had even better monetary qualities than shells. In that year wampum ceased to be legal tender in New England.

      Wampum stopped being considered legal tender by the British in 1661, when they started shipping gold and silver coins from Europe.

    5. Once they got over their hangup about what constitutes real money, the colonists went wild trading for and with wampum. Clams entered the American vernacular as another way to say "money". The Dutch governor of New Amsterdam (now New York) took out a large loan from an English-American bank – in wampum. After a while the British authorities were forced to go along. So between 1637 and 1661, wampum became legal tender in New England. Colonists now had a liquid medium of exchange, and trade in the colonies flourished.[D94]

      The colonists of New England started trading in wampum and using it as money. It was accepted as legal tender from 1637 to 1661.

    6. Clams were found only at the ocean, but wampum traded far inland. Sea-shell money of a variety of types could be found in tribes across the American continent. The Iriquois managed to collect the largest wampum treasure of any tribe, without venturing anywhere near the clam's habitat.[D94] Only a handful of tribes, such as the Narragansetts, specialized in manufacturing wampum, while hundreds of other tribes, many of them hunter-gatherers, used it. Wampum pendants came in a variety of lengths, with the number of beads proportional to the length. Pendants could be cut or joined to form a pendant of length equal to the price paid.

      Wampum was traded by hundreds of tribes, but it was only "mined" by a handful living close to the shore.

    7. The colonists' solution was at hand, but it took a few years for them to recognize it. The natives had money, but it was very different from the money Europeans were used to. American Indians had been using money for millenia, and quite useful money it turned out to be for the newly arrived Europeans – despite the prejudice among some that only metal with the faces of their political leaders stamped on it constituted real money. Worse, the New England natives used neither silver nor gold. Instead, they used the most appropriate money to be found in their environment – durable skeleton parts of their prey. Specifically, they used wampum, shells of the clam venus mercenaria and its relatives, strung onto pendants.

      Native American Indians used the shells of clams as money, strung onto pendants. It was the best form of money in their environment.

    1. So when I’m searching for information in this space, I’m much less interested in asking “what is this thing?” than I am in asking “what do the people who know a lot about this thing think about it?” I want to read what Vitalik Buterin has recently proposed regarding Ethereum scalability, not rote definitions of Layer 2 scaling solutions. Google is extraordinarily good at answering the “what is this thing?” question. It’s less good at answering the “what do the people who know about the thing think about it?” question. Why? 

      According to Devin, Google is good at answering a question such as "what is this thing?", but not good at answering a question like "what do people who know a lot about this thing say about it?"

      This reminds me of social search

  9. Feb 2021
    1. But in credibly neutral mechanism design, the goal is that these desired outcomes are not written into the mechanism; instead, they are emergently discovered from the participants’ actions. In a free market, the fact that Charlie’s widgets are not useful but David’s widgets are useful is emergently discovered through the price mechanism: eventually, people stop buying Charlie’s widgets, so he goes bankrupt, while David earns a profit and can expand and make even more widgets. Most bits of information in the output should come from the participants’ inputs, not from hard-coded rules inside of the mechanism itself.

      This reminds me of Hayek worrying that the components/primitives of capitalism (e.g. property rights) were being corrupted by socialists.

      You could view the proper "pure" component of capitalism being credibly neutral property rights, and it becomes corrupted if you make it non-credibly neutral, e.g. you introduce preferences in terms of the outcomes.

    2. This is why private property is as effective as it is: not because it is a god-given right, but because it’s a credibly neutral mechanism that solves a lot of problems in society - far from all problems, but still a lot.

      Property rights are credibly neutral

    3. We are entering a hyper-networked, hyper-intermediated and rapidly evolving information age, in which centralized institutions are losing public trust and people are searching for alternatives. As such, different forms of mechanisms – as a way of intelligently aggregating the wisdom of the crowds (and sifting it apart from the also ever-present non-wisdom of the crowds) – are likely to only grow more and more relevant to how we interact.

      This is Jordan Hall's blue church vs. red church.

      Losing trust in institutions perhaps has more emphasis here.

    1. Finding clientsFinally, we were at the moment of truth. Luckily, from our user interviews we knew that companies were posting on forums like Reddit and Craigslist to find participants. So for 3 weeks we scoured the “Volunteers” and “Gigs” sections of Craigslist and emailed people who were looking for participants saying we could do it for them.Success!We were able to find 4 paying clients! 

      UserInterviews found their first clients by replying to ads on Craigslist and Reddit for user interview volunteers with the pitch that they could help the companies find them.

  10. Jan 2021
    1. Cognitive fusion isn’t necessarily a bad thing. If you suddenly notice a car driving towards you at a high speed, you don’t want to get stuck pondering about how the feeling of danger is actually a mental construct produced by your brain. You want to get out of the way as fast as possible, with minimal mental clutter interfering with your actions. Likewise, if you are doing programming or math, you want to become at least partially fused together with your understanding of the domain, taking its axioms as objective facts so that you can focus on figuring out how to work with those axioms and get your desired results.

      Cognitive Fusion serves an important role

      When a car is speeding towards you, you want to stay fused with the feeling of danger, because that lets you respond to the threat as quickly as possible. (I'm not sure if this is the same thing though)

    2. Cognitive fusion is a term from Acceptance and Commitment Therapy (ACT), which refers to a person “fusing together” with the content of a thought or emotion, so that the content is experienced as an objective fact about the world rather than as a mental construct. The most obvious example of this might be if you get really upset with someone else and become convinced that something was all their fault (even if you had actually done something blameworthy too). In this example, your anger isn’t letting you see clearly, and you can’t step back from your anger to question it, because you have become “fused together” with it and experience everything in terms of the anger’s internal logic. Another emotional example might be feelings of shame, where it’s easy to experience yourself as a horrible person and feel that this is the literal truth, rather than being just an emotional interpretation.

      Cognitive Fusion

      Cognitive Fusion is a term that comes from Acceptance and Commitment Therapy (ACT).

      CF happens when you identify so strongly with a thought or an emotion that its content is experienced as the objective way the world is.

      "She is the one", for example, is a cognitive fusion.

      The cognitive fusion prevents you from stepping back and examining the construct.

      You experience everything in terms of the belief's internal logic.

    1. This brings me to the fourth pattern of oscillating tension: Shadow values. The pattern goes something like this: We have two values that (without proper planning) tend to be in tension with each other. One of them we acknowledge as right and good and ok. One of them we repress, because we think it's bad or weak or evil. Safety vs. Adventure. Independence vs. Love. Revenge vs. Acceptance. All common examples of value tensions, where one of the values is often in shadow (which one depends on the person). So we end up optimizing for the value we acknowledge. We see adventure as "good", so we optimize for it, hiding from ourselves the fact we care about safety. And man, do we get a lot of adventure. Our adventure meter goes up to 11. But all the while, there's that little safety voice, the one we try to ignore. Telling us that there's something we value that we're ignoring. And the more we ignore it, the louder it gets. And meanwhile, because we've gotten so much of it, our adventure voice is getting quieter. It's already up to 11, not a worry right now. Until suddenly, things shift. And where we were going on many adventures, now we just want to stay home, safe. Oscillating tension.

      Shadow Values

      Shadow Values are a pattern of Oscillating Tension.

      When we have two values, one which we make explicit and acknowledge, one which we don't, we might optimize for the one we made explicit.

      This results in our behavior pursuing the maximization of that value, all the while ignoring the implicit one (the shadow value).

      Because this value is getting trampled on, the voice that corresponds to it will start to speak up. The more it gets ignored, the more it speaks up.

      At the same time, the voice corresponding to the value being maximized becomes quiet. It's satisfied where it is.

      We find ourselves in a place where all we want to do is tend to the value that is not being met.

    1. Volkswagen, the world’s largest car maker, has outspent all rivals in a global bid by auto incumbents to beat Tesla. For years, industry leaders and analysts pointed to the German company as evidence that, once unleashed, the old guard’s raw financial power paired with decades of engineering excellence would make short work of Elon Musk’s scrappy startup. What they didn’t consider: Electric vehicles are more about software than hardware. And producing exquisitely engineered gas-powered cars doesn’t translate into coding savvy.

      Many thought Volkswagen would crush Tesla as soon as they put their weight behind an electric car initiative. What they didn't consider was that an electric car is more about software than it is about hardware.

    1. Note that I have defined privacy in terms of the condition of others' lack of access to you. Some philosophers, for example Charles Fried, have claimed that it is your control over who has access to you that is essential to privacy. According to Fried, it would be ironic to say that a person alone on an island had privacy. I don't find this ironic at all. But more importantly, including control as part of privacy leads to anomalies. For example, Fried writes that "in our culture the excretory functions are shielded by more or less absolute privacy, so much so that situations in which this privacy is violated are experienced as extremely distressing." But, in our culture one does not have control over who gets to observe one's performance of the excretory functions, since it is generally prohibited to execute them in public. Since prying on someone in the privy is surely a violation of

      Reiman argues that in his definition privacy is defined in terms of others' lack of access to you, and not, as Charles Fried does for instance, in terms of your control over who has access to you.

      He argues this point by noting that watching someone go to the toilet is certainly a violation of privacy, yet in our culture one has no control over who observes the excretory functions, since performing them in public is prohibited; so privacy cannot be a matter of control.

      I think this argument is redundant: full control would imply that you can deny anyone access to you at your discretion.

    2. It might seem unfair to IVHS to consider it in light of all this other accumulated information - but I think, on the contrary, that it is the only way to see the threat accurately. The reason is this: We have privacy when we can keep personal things out of the public view. Information-gathering in any particular realm may not seem to pose a very grave threat precisely because it is generally possible to preserve one's privacy by escaping into other realms. Consequently, as we look at each kind of information-gathering in isolation from the others, each may seem relatively benign. However, as each is put into practice, its effect is to close off yet another escape route from public access, so that when the whole complex is in place, its overall effect on privacy will be greater than the sum of the effects of the parts. What we need to know is IVHS's role in bringing about this overall effect, and it plays that role by contributing to the establishment of the whole complex of information-gathering modalities.

      Reiman argues that we can typically achieve privacy by escaping into a different realm. We can avoid public eyes by retreating into our private houses. It seems we could avoid Facebook by, well, avoiding Facebook.

      If we treat information-gathering in each realm as separate from the others, each may seem relatively benign.

      When these realms are connected, they close off our escape routes and the effect on privacy becomes greater than the sum of its parts.

    3. But notice here that the sort of privacy we want in the bedroom presupposes the sort we want in the bathroom. We cannot have discretion over who has access to us in the bedroom unless others lack access at their discretion. In the bathroom, that is all we want. In the bedroom, we want additionally the power to decide at our discretion who does have access. What is common to both sorts of privacy interests, then, is that others not have access to you at their discretion. If we are to find the value of privacy generally, then it will have to be the value of this restriction on others.

      The sort of privacy we want in the bedroom (control over who accesses us) presupposes the sort of privacy we want in the bathroom (others lack access to us at their discretion).

    4. In our bedrooms, we want to have power over who has access to us; in our bathrooms, we just want others deprived of that access.

      Reiman highlights two types of privacy.

      The privacy we want in the bathroom, which is simply that others are deprived of access to us.

      And the privacy we want in the bedroom, which is additionally the power to decide who does have access to us.

    5. By privacy, I understand the condition in which other people are deprived of access to either some information about you or some experience of you. For the sake of economy, I will shorten this and say that privacy is the condition in which others are deprived of access to you.

      Reiman defines privacy as the condition in which others are deprived of access to you (information (e.g. location) or experience (e.g. watching you shower))

    6. No doubt privacy is valuable to people who have mischief to hide, but that is not enough to make it generally worth protecting. However, it is enough to remind us that whatever value privacy has, it also has costs. The more privacy we have, the more difficult it is to get the information that

      Privacy is valuable to people who have mischief to hide. This is not enough to make it worth protecting, but it tells us that there is also a cost.

    7. As Bentham realized and Foucault emphasized, the system works even if there is no one in the guard house. The very fact of general visibility - being seeable more than being seen - will be enough to produce effective social control. Indeed, awareness of being visible makes people the agents of their own subjection. Writes Foucault, He who is subjected to a field of visibility, and who knows it, assumes responsibility for the constraints of power; he makes them play spontaneously upon himself; he inscribes in himself the power relation in which he simultaneously plays both roles; he becomes the principle of his own subjection.

      The panopticon works as a system of social control even without someone in the guardhouse. It is being seeable, rather than being seen, which makes it effective.

      I don't understand what Foucault says here.