  1. Jun 2022
    1. The nature-of-work factor generally focuses on the degree of expressiveness of the plaintiff's work. Artistic and fanciful works tend to be highly expressive, so it is generally more difficult to win fair use defenses involving such works. Fact-intensive and highly functional works tend, by contrast, to have a lesser quantum of expressive content. Hence, fair use may be easier to establish in cases involving such works.

      Nature-of-work factor is more favorable for fact-intensive and highly functional works

    1. Stoller acknowledged that there might be a “genuine leap of technical capacity” brought about by crypto, but he hasn’t seen it for himself. The rampant and accruing amount of fraud involving crypto is also an indicator of intent to Stoller. “If blockchain proponents want to advance their technology, they would eagerly seek to get rid of the fraud, but I don't see that happening,” he said. “That signals to me the fraud is the point.” Or more specifically, as Kelsey Hightower argues, money is the point: “It’s not like someone looked at blockchain and said, ‘Oh, my God, we finally have a better database for storing transactions!’” Instead, what made blockchain’s big promises so compelling were stories of people turning a small amount of money into a lot. “And in our society, we equate morality to money,” Hightower said.

      “The fraud is the point”

    1. To me, the problem isn’t that blockchain systems can be made slightly less awful than they are today. The problem is that they don’t do anything their proponents claim they do. In some very important ways, they’re not secure. They don’t replace trust with code; in fact, in many ways they are far less trustworthy than non-blockchain systems. They’re not decentralized, and their inevitable centralization is harmful because it’s largely emergent and ill-defined. They still have trusted intermediaries, often with more power and less oversight than non-blockchain systems. They still require governance. They still require regulation. (These things are what I wrote about here.) The problem with blockchain is that it’s not an improvement to any system—and often makes things worse.

      Blockchain does not improve monetary systems

      With cryptocurrencies—built on blockchain—we still need: centralization, trust, regulation, governance, and a whole host of other things that are already in TradFi.

    1. In other words, transaction reversibility is not about the ledger, but rather about the transaction rules that a currency uses. A reversible currency requires that someone anoint this trusted party (or trusted parties) and that they use their powers to freeze/burn/transact currency in ways that are at odds with the recorded owners’ intentions. And indeed, this is a capability that many tokens now possess, thanks to the development of sophisticated smart contract systems like Ethereum, that allow parties to design currencies with basically any set of transaction rules they want.

      Transaction reversibility requires trusted party

      In order to mimic TradFi's ability to make business decisions to reverse transactions, cryptocurrencies rely on smart contract systems and an anointed trusted party.
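
      A minimal sketch of what such transaction rules could look like: a toy Python ledger where an anointed "arbiter" account can freeze addresses and reverse recorded transfers. The class and method names are illustrative assumptions, not any real smart-contract API.

      ```python
      # Toy ledger with an anointed trusted party ("arbiter") that can
      # freeze accounts and reverse transfers, mirroring the reversible
      # transaction rules described above. Illustrative only.

      class ReversibleToken:
          def __init__(self, arbiter: str):
              self.arbiter = arbiter                      # the anointed trusted party
              self.balances: dict[str, int] = {}
              self.frozen: set[str] = set()
              self.log: list[tuple[str, str, int]] = []   # (sender, receiver, amount)

          def transfer(self, sender: str, receiver: str, amount: int) -> None:
              if sender in self.frozen:
                  raise PermissionError(f"{sender} is frozen")
              if self.balances.get(sender, 0) < amount:
                  raise ValueError("insufficient balance")
              self.balances[sender] -= amount
              self.balances[receiver] = self.balances.get(receiver, 0) + amount
              self.log.append((sender, receiver, amount))

          def freeze(self, caller: str, account: str) -> None:
              if caller != self.arbiter:
                  raise PermissionError("only the arbiter can freeze")
              self.frozen.add(account)

          def reverse(self, caller: str, tx_index: int) -> None:
              # Moves funds back regardless of the recorded owner's intent,
              # which is exactly the power that requires trusting the arbiter.
              if caller != self.arbiter:
                  raise PermissionError("only the arbiter can reverse")
              sender, receiver, amount = self.log[tx_index]
              self.balances[receiver] -= amount
              self.balances[sender] += amount

      ledger = ReversibleToken(arbiter="arbiter")
      ledger.balances["alice"] = 100
      ledger.transfer("alice", "bob", 40)
      ledger.reverse("arbiter", tx_index=0)   # claws the 40 back from bob
      ```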

    2. Beyond proof-of-stake, there are other technologies in deployment, such as the proof-of-time-and-space construction used by Chia, or more centralized proof-of-authority systems.

      Ah, yes...that thing that was driving up hard drive prices a few years back.

    3. Proof-of-work is not the only technology we have on which to build consensus protocols. Today, many forward-looking networks are deploying proof-of-stake (PoS) for their consensus. In these systems, your “voting power” in the network is determined by your ownership stake in some valuable on-chain asset, such as a new or existing electronic token. Since cryptocurrency has coincidentally spent a lot of time distributing tokens, this means that new protocols can essentially “cut out the middleman” and simply use coin ownership directly as a proxy for voting power, rather than requiring operators to sell their coins to buy electricity and mining hardware. Proof-of-stake systems are not perfect: they still lead to some centralization of power, since in this paradigm the rich tend to get richer. However it’s hard to claim that the result will be worse than the semi-centralized mess that proof-of-work mining has turned into.

      Proof-of-Stake can replace Proof-of-Work

      This is a big caveat here...the nature of the tech leads to a centralization of power that means "the rich tend to get richer." For the sake of removing the environmental consequences of consensus building, does it seem worthwhile to anoint a subset of users to get richer from the use of the tech?
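
      As a concrete illustration of coin ownership acting as voting power (and of the rich-get-richer concern), here is a minimal stake-weighted proposer selection sketch in Python. The accounts and stakes are made up; real proof-of-stake protocols layer randomness beacons, slashing, and committee rotation on top of this basic idea.

      ```python
      import random
      from collections import Counter

      # Toy stake-weighted selection: each account's chance of proposing
      # the next block is proportional to its stake. Made-up accounts.
      stakes = {"alice": 500, "bob": 300, "carol": 150, "dave": 50}

      def pick_proposer(stakes: dict[str, int]) -> str:
          accounts = list(stakes)
          weights = [stakes[a] for a in accounts]
          return random.choices(accounts, weights=weights, k=1)[0]

      # Over many rounds the largest stakeholders propose (and collect
      # block rewards) most often, compounding their stake over time.
      tally = Counter(pick_proposer(stakes) for _ in range(10_000))
      print(tally.most_common())
      ```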

    1. When the digital music industry was getting started, they invented a new form of quantum indeterminacy. When a customer paid $0.99 for an Itunes track, they were engaged in a license. When that transaction was recorded on the artist's royalty statement, it was a sale. Like Schroedinger's alive/dead cat, digital music was in superposition, caught in a zone between a sale and a license.

      Bought from artist, licensed to listener

    2. sex workers are the vanguard of every technological revolution. What gives? Well, think about the other groups that make up that vanguard – who else is an habitual early adopter? At least four other groups also take the lead on new tech: political radicals, kids, drug users, and terrorists. There's some overlap among members of these groups, but their most salient shared trait isn't personnel, it's exclusion. Kids, drug users, political radicals, sex workers and terrorists are all unwelcome in mainstream society. They struggle to use its money, its communications tools, and its media channels.

      Exclusion from communication tools drives some to adopt tech

      Early adopters of new tech are there because they have been excluded from other communication mediums.

    3. The kids who left Facebook for Instagram weren't looking for the Next Big Thing; they were looking for a social media service that their parents and teachers didn't use.

      Example with kids leaving Facebook for Instagram

    1. the creation of an ebook from a print book falls under the author’s exclusive right to create derivative works. Moreover, print books and ebooks have very different characteristics – ebooks can be reproduced and distributed instantaneously and at minimal cost. Internet Archive has no right to take those benefits for itself without compensating the rightsholders. While Internet Archive claims in a recent letter to the Court to be “improving the efficiency of delivering content” – as if it is the only entity capable of delivering ebooks to library patrons – Plaintiffs have invested heavily to create now-thriving markets for library ebooks.

      Benefits of ebook derivatives belong to the publishers

      Where does the court land on reformatting as a "derivative work"? The IA-supplied ebook is a page image duplication with limited dirty OCR search. The publisher, with the source format, has so much more opportunity to create derivative services for the ebook.

    2. We write on behalf of plaintiffs Hachette Book Group, Inc., HarperCollins Publishers LLC, John Wiley & Sons, Inc. and Penguin Random House LLC (the “Plaintiffs”) to request a pre-motion summary judgment conference pursuant to Individual Practice 2(B).

      Purpose of Letter

    3. Hachette Book Group, Inc. et al. v. Internet Archive, Case No. 1:20-CV-04160-JGK

      RECAP's archive of the docket from PACER

    1. I write on behalf of Defendant Internet Archive pursuant to Paragraph 2-B of Your Honor’s Individual Practices to request a pre-motion conference on a motion for summary judgment in the above matter.

      A letter from the law firm representing the Internet Archive that summarizes the four-point fair use argument and details the extraordinary circumstances behind the IA's National Emergency Library.

      Hachette Book Group, Inc. et al. v. Internet Archive, Case No. 1:20-CV-04160-JGK

      RECAP's archive of the docket from PACER

    2. As to the second factor, “the nature of the copyrighted work,” this factor “has rarely played a significant role in the determination of a fair use dispute,” Authors Guild v. Google, Inc., 804 F.3d 202, 220 (2d Cir. 2015), and this case is no exception.

      Nature of Copyrighted Work is rarely a fair use determining factor

      I hadn't thought about this before, but it does seem that the "nature of copyrighted work" is not often discussed. I wonder what the origins of this factor were.

    3. may a nonprofit library that owns a lawfully made and acquired print copy of a book loan a digital copy of that book to a library patron, if the library (1) loans the book to only one patron at a time for each non-circulating print copy it owns (thus maintaining a one-to-one “owned-to-loaned” ratio); (2) implements technical protections that prevent access to the book by anyone other than the current borrower; and (3) limits its digital lending to books published in the past five or more years? This describes Internet Archive’s implementation of a practice known as “Controlled Digital Lending,” or “CDL.”

      Internet Archive's legal definition of CDL
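
      Read as system requirements, the three conditions amount to an invariant the lending platform must enforce. Below is a minimal sketch of such a checkout test in Python; the function name and fields are my own assumptions, not anything from the filing.

      ```python
      from datetime import date

      # Hypothetical checkout test mirroring the brief's three conditions:
      # (1) never loan more digital copies than owned print copies,
      # (2) one patron at a time per copy (the DRM side, assumed elsewhere),
      # (3) only lend books published five or more years ago.

      def can_lend(owned_print_copies: int, active_digital_loans: int,
                   publication_year: int, today: date | None = None) -> bool:
          today = today or date.today()
          if active_digital_loans >= owned_print_copies:
              return False   # would break the one-to-one owned-to-loaned ratio
          if today.year - publication_year < 5:
              return False   # published too recently to circulate digitally
          return True

      print(can_lend(owned_print_copies=2, active_digital_loans=1,
                     publication_year=2010))   # True
      ```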

    1. All wireless devices have small manufacturing imperfections in the hardware that are unique to each device. These fingerprints are an accidental byproduct of the manufacturing process. These imperfections in Bluetooth hardware result in unique distortions, which can be used as a fingerprint to track a specific device. For Bluetooth, this would allow an attacker to circumvent anti-tracking techniques such as constantly changing the address a mobile device uses to connect to Internet networks. 

      Tracking that evades address changes

      An operating system can change the hardware address it broadcasts to avoid tracking. But subtle differences in the signal itself can still be identified and tracked.

    1. Dall-E delivers ten images for each request, and when you see results that contain sensitive or biased content, you can flag them to OpenAI for review. The question then becomes whether OpenAI wants Dall-E's results to reflect society's approximate reality or some idealized version. If an occupation is majority male or female, for instance, and you ask Dall-E to illustrate someone doing that job, the results can either reflect the actual proportion in society, or some even split between genders. They can also account for race, weight, and other factors. So far, OpenAI is still researching how exactly to structure these results. But as it learns, it knows it has choices to make.

      Philosophical questions for AI-generated artwork

      As if we needed more technology dissolving our shared, cohesive view of reality: we now have to consider how the AI parameters can be tuned to reflect some version of what is versus some version of how we want it to be.

    1. an employee of OpenAI — which created it — asked Dall-E to draw a "Rabbit prison warden, digital art," and, within twenty seconds, it produced ten new illustrations.

      20 seconds to create 10 illustrations. I'm trying to guess what the computing power is behind this, but I can't. Quite possibly because I don't grasp the technique they are using to do this.

    1. Tech isn't a bunch of toys. It's tools to create a world. We all know this is true, and we need to start acting like it.

      Technology in the Public Interest

      Schneier's call-to-action: Build something new. Distribute power. Inhabit government.

    2. distributing power creates a thicker level of civil society. And that's essential to resisting old power.

      Distributing Power Creates a Thicker Layer of Civil Society

    3. Tim Berners-Lee solid initiative

      Solid Initiative

      Solid creates interoperable ecosystems of applications and data

      Data stored in Solid Pods can power ecosystems of interoperable applications where individuals are free to use their data seamlessly across different applications and services.

      Solid Project

    4. Private tech companies have greater power to influence, censor and control the lives of ordinary people than any government on earth
    5. And when corporations started to dominate the Internet, they became de-facto governments. Slowly but surely, the tech companies began to act like old power. They use the magic of tech to consolidate their own power, using money to increase their influence, blocking the redistribution of power from the entrenched elites to ordinary people.

      "Money is its own kind of power"

      The corporations built by white, male, American, and vaguely libertarian people became a focal point of power because of the money they had to influence governments and society. They started looking like "old power."

      Later:

      Facebook took advantage of tech's tradition of openness [importing content from MySpace], but as soon as it got what it wanted, it closed its platform off.

    6. We did not think the threat would come from the inside, that it would look and sound like us. And when I say look and sound like us, I mean exactly that.

      The threat to the internet becomes white, male, American, and vaguely libertarian

    7. Nothing could have confirmed the righteousness of our faith or the rightness of our cause more than the Arab Spring.

      Arab Spring as validation of the hacker ideals

      The use of technology to drive the Arab Spring—internet power versus old power—was seen as the embodiment of white, male, American, and vaguely libertarian values.

    8. By moving and changing faster than government could keep up.

      An early defense against "old power"

    9. the real enemy would be government, the bastion of traditional power. Old power. Dark suits. Heavy badges.

      The "real enemy" of the internet

      Governments would want to use the internet to "invade our privacy." Starting in the 1990s with the Clipper Chip.

    10. We were distributing power and not hoarding it. So one of the things we liked about the new tech, our new tech was that it was a powerful tool that traditional power didn't understand. And in many cases, they didn't even know about it. So we were free to do with it what we wanted

      Early hacker ethos: powerful tools that traditional power didn't understand

    11. And that identity, like most of us, was white, male, American, and vaguely libertarian. That's how the Internet got personified in those early days. Again, this wasn't everyone. If you gathered all of us to talk about those early days, the women, the people of color, they would tell different stories. But it was most of us.

      Early culture on the internet: "white, male, American, and vaguely libertarian"

    1. what humor does with advocacy is it softens the edges

      Humor in Advocacy

      If you have a point to get across and a call to action, use humor as your medium—there is a better chance of your message getting through.

    2. now we have to maintain a level of professionalism because we have a very unique role in society. People... we have their lives, their eyes, in our hands. And so we do have a level of professionalism that we need to maintain

      Being real on social media while maintaining professionalism

      ...but it should not come at the expense of being able to show who we are—express ourselves—on social media.

    3. joking helps us acknowledge and integrate painful absurdities

      "Joking helps us acknowledge and integrate painful absurdities"

      Quote by Ted Cohen (philosopher)

    4. what humor does is you can take that, rearrange it, add humor to it, deal with it the way you

      "Joking serves the function of overcoming internal and external obstacles"

      Quote by Sigmund Freud. Will Flanary goes on to say:

      When we're faced with something in life—whether you get sick, a family member gets sick, an accident, something unforeseen—we feel like control over our own lives is taken away from us. And what humor does is you can take that, rearrange it, add humor to it, deal with it the way you want to deal with it, present it to others and have them laugh with you about it.

    1. A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.

      Gall's Law

      Gall's Law is a rule of thumb for systems design from Gall's book Systemantics: How Systems Really Work and How They Fail.

      It reminds me of the TCP/IP versus OSI network stack wars.

    1. the one thing that you have to keep conveying to people about the consequences of surveillance is that it's all very well to say that you have nothing to hide, but when you're spied upon, everybody that's connected to you gets spied upon. And if we don't push back, the most vulnerable people in society, the people that actually keep really massive violations of human rights and illegality in check, they're the people who get most affected.

      "I Have Nothing To Hide" counter-argument

      Even if you have nothing to hide, that doesn't mean that those you are connected with aren't also being surveilled and are part of targeted communities.

  2. May 2022
    1. over the past decade and change a dynamic ecosystem has developed around cryptocurrencies and blockchains. And it’s constantly getting more complicated. We’ve now got non-fungible tokens, or NFTs, unique digital bits purchased with crypto that have mostly been associated with weird pieces of digital art and are an arena that looks very much like a bubble. There are stablecoins, cryptocurrencies that are supposed to be less volatile, pegged to something like the US dollar. There’s also the burgeoning world of decentralized finance, or DeFi, which tries to replicate a lot of the financial system but without intermediaries, and there are decentralized autonomous organizations, or DAOs, essentially internet collectives. Now, much of this is falling under the still-nascent umbrella of Web3, a relatively fuzzy reimagining of the internet on blockchains.

      Putting it all together.

    2. What emerged was a picture that was simultaneously murky and clarifying, in that there’s not one good answer. Some of what it does is promising; a lot of what it does — even boosters admit — is trash, and trash that’s costing some people a lot of money. This probably isn’t the death knell for crypto — it’s gone through plenty of boom and bust cycles in the past. It would be unwise to definitively say that crypto has no chance of being a game changer; it would also be disingenuous to claim it is now.

      Article summary: cryptocurrency isn't all that useful now, but that doesn't necessarily mean it won't be useful in the future.

    1. The effect or “value” of the EJC platform has certainly shifted over time. While it was initially created to solve an access delivery problem, in 2022, the value still lies with the ownership of the content. Because of savvy contract negotiations, if OhioLINK should cancel any packages, members will retain access to the locally stored backfiles. Members would never have to negotiate with publishers/vendors for post-cancellation “access fees” to those resources.

      This is very true...taking possession of the content—as a matter of contract and as a matter of locally storing the articles themselves—was very key to the EJC strategy.

    2. OhioLINK also had the foresight to add a set of core online research databases (which at this time were only indexes).

      The earliest of these research databases used the same public catalog interface as the local library catalog and the central catalog. I think this had a significant usability advantage to promoting these resources even as compromises on sophistication of indexing were made.

    3. Over the next four years, Ohio’s public 4-year institutions, plus University of Dayton and Case Western Reserve University collaborated to develop the statewide consortial lending system.

      The inclusion of U-Dayton and CWRU set a key precedent for OhioLINK—the activities of the statewide consortium were going to lift all boats: public 4-year, private 4-year, and soon the community and technical colleges. All would have near identical access to the same resources, regardless of the student's or researcher's school.

    4. the Ohio Board of Regents (1987) (BOR, now known as the Ohio Department of Higher Education)

      Opinion: the politicization of higher education in Ohio was an ill-conceived and ultimately detrimental decision. Where there was once insulation—the governor appointed members of the board of regents and the regents appointed the chancellor—the ability of the governor to directly appoint the head of higher education in the state injected politics into higher ed, and the higher ed system as a whole lost as a result.

    5. create a book depository system for off-site storage of library materials

      I think the formation of the off-site storage buildings is often forgotten in the history of higher education libraries in the early 1990s. One was built just off the Ohio State campus, one was built originally in a former car dealership in Kent, Ohio (I think), and the third was the one I was involved with: the Southwest Ohio Regional Depository, or S.W.O.R.D. I registered "sword.org" and had an email and website system run off of a Mac Server.

      Ah, good days.

    1. This work is still being done on platforms like Facebook and Reddit. But unlike the sysops who enabled the flourishing of early online communities, the volunteer moderators on today’s platforms do not own the infrastructures they oversee. They do not share in the profits generated by their labor. They cannot alter the underlying software or implement new technical interventions or social reforms. Instead of growing in social status, the sysop seems to have been curtailed by the providers of platforms. If there is a future after Facebook, it will be led by a revival of the sysop, a reclamation of the social and economic value of community maintenance and moderation.

      Being a moderator on a large private social network is different from being a sysop or moderator in "Modem World"—mainly because of the lack of control over the underlying technical infrastructure and the engagement rules baked into the technical infrastructure.

    2. The modem world shows us that other business models are possible. BBS sysops loved to boast about “paying their own bills.” For some, the BBS was an expensive hobby, a money pit not unlike a vintage car. But many sysops sought to make their BBSs self-sustaining. Absent angel investors or government contracts, BBSs became sites of commercial experimentation. Many charged a fee for access—experimenting with tiered rates and per-minute or per-byte payment schemes. There were also BBSs organized like a social club. Members paid “dues” to keep the hard drive spinning. Others formed nonprofit corporations, soliciting tax-exempt donations from their users. Even on the hobby boards, sysops sometimes passed the virtual hat, asking everybody for a few bucks to buy a new modem or knock out a big telephone bill.

      Funding "Modem World"

      A mixture of a personal expensive hobby to small businesses experimenting with tiered rates to donation-driven to non-profit corporations.

    3. In the days of Usenet and BBSs and Minitel, cyberspace was defined by the interconnection of thousands of small-scale local systems, each with its own idiosyncratic culture and technical design, a dynamic assemblage of overlapping communication systems held together by digital duct tape and a handshake. It looked and felt different depending on where you plugged in your modem.

      This is a picturesque description of the loosely linked BBS/Usenet world, where a person's view of the federation was different "depending on where you plugged in your modem."

    4. Forgetting has high stakes. As wireless broadband approaches ubiquity in many parts of North America, the stories we tell about the origins of the internet are more important than ever. Faced with crises such as censorship and surveillance, policy makers and technologists call on a mythic past for guidance. In times of uncertainty, the most prominent historical figures—the “forefathers” and the “innovators”—are granted a special authority to make normative claims about the future of telecommunications. As long as the modem world is excluded from the internet’s origin story, the everyday amateur will have no representation in debates over policy and technology, no opportunity to advocate for a different future.

      "Modem world"

      In addition to being a useful argument for the inclusion of the social aspects of BBS networks, the "modem world" phrase is an interesting shorthand for describing what was happening in the public sphere while NSFnet was growing in the academic and computing research world.

    5. Instead of emphasizing the role of popular innovation and amateur invention, the dominant myths in internet history focus on the trajectory of a single military-funded experiment in computer networking: the Arpanet. Though fascinating, the Arpanet story excludes the everyday culture of personal computing and grassroots internetworking. In truth, the histories of Arpanet and BBS networks were interwoven—socially and materially—as ideas, technologies, and people flowed between them

      Interwoven history between Arpanet and BBS networks

      There is some truth to this statement. The necessary protocol underpinnings were from the Arpanet part of the pair, but the social pieces were derived from BBS interconnections via dialup protocols like UUCP. Is there an evolutionary link between UUCP and NNTP?

      In the calls for loosely linked independent social networks to replace the large, global private social networks, there are echoes of loosely connected BBS networks.

    1. The more customers that a cable company served, the stronger their negotiating position with content providers; the more studios and types of content that a content provider controlled the stronger their negotiating position with cable providers. The end result were a few dominant cable providers (Comcast, Charter, Cox, Altice, Mediacom) and a few dominant content companies (Disney, Viacom, NBC Universal, Time Warner, Fox), tussling back-and-forth over a very profitable pie.

      Ratcheting power of cable and content companies

    2. Within these snippets is everything that makes the cable business so compelling: Cable is in high demand because it provides the means to get what customers most highly value. Cable works best both technologically and financially when it has a geographic monopoly. Cable creates demand for new supply; technological advances enable more supply, which creates more demand.

      Cable TV service drivers

    3. The aforementioned satellite, though, led to the creation of national TV stations, first HBO, and then WTCG, an independent television station in Atlanta, Georgia, owned by Ted Turner. Turner realized he could buy programming at local rates, but sell advertising at national rates via cable operators eager to feed their customers’ hunger for more stations. Turner soon launched a cable only channel devoted to nothing but news; he called it the Cable News Network — CNN for short (WTCG would later be renamed TBS).

      Origins of national programming

      "buy programming at local rates but sell advertising at national rates"

    4. Jerrold Electronics, meanwhile, spun off one of the cable systems it built in Tupelo, Mississippi to an entrepreneur named Ralph Roberts; Roberts proceeded to systematically buy up community cable systems across the country, moving the company’s headquarters to Philadelphia and renaming it to Comcast Corporation (Roberts would eventually hand the business off to his son, Brian).

      Origin of Comcast

    5. what if Tarlton could place an antenna further up the mountain in Summit Hill and run a cable to his shop?

      Origin story of cable television

      An electronics store needed a way to demonstrate the capabilities of television sets, but was in a valley that prevented line-of-sight access to a transmitter.

    1. VIN locks began in car-engines. Auto manufacturers started to put cheap microcontrollers into engine components and subcomponents. A mechanic could swap in a new part, but the engine wouldn’t recognize it — and the car wouldn’t drive — until an authorized technician entered an unlock code into a special tool connected to the car’s internal network.

      VIN Locks and Right-to-Repair
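
      A minimal sketch of the pairing mechanism as described, in Python; the code-derivation scheme and all names are hypothetical, not any manufacturer's actual protocol.

      ```python
      import hashlib

      # Illustrative "VIN lock": a swapped-in part functions only after an
      # authorized tool supplies an unlock code tying the part's serial
      # number to the vehicle's VIN. Hypothetical scheme.

      def expected_code(vin: str, part_serial: str) -> str:
          # Stand-in for the manufacturer's secret code derivation.
          return hashlib.sha256(f"{vin}:{part_serial}".encode()).hexdigest()[:8]

      class Engine:
          def __init__(self, vin: str):
              self.vin = vin
              self.paired: set[str] = set()

          def authorize_part(self, part_serial: str, unlock_code: str) -> bool:
              # Only the manufacturer's authorized tool can compute the code.
              if unlock_code == expected_code(self.vin, part_serial):
                  self.paired.add(part_serial)
                  return True
              return False

          def will_run_with(self, part_serial: str) -> bool:
              # A physically installed but unpaired part keeps the car parked.
              return part_serial in self.paired
      ```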

    2. The next time someone tells you “If you’re not paying for the product, you’re the product,” remember this. These farmers weren’t getting free, ad-supported tractors. Deere charges six figures for a tractor. But the farmers were still the product. The thing that determines whether you’re the product isn’t whether you’re paying for the product: it’s whether market power and regulatory forbearance allow the company to get away with selling you.

      Nuanced "If you're not paying for the product, you're the product"

      Is your demographic and/or activity data being sold? Then you are still the product even if you are paying for something.

      I worry about things like Google Workspace sometimes. Am I paying enough for the product to cover the cost of supplying the product to me, or is Google having to raise additional revenue to cover the cost of serving me? Is Google raising additional revenue even though they don't have to in order to cover my cost?

    1. In a rush to show growth, Bolt often overstated its technological capability and misrepresented the number of merchants using its service, some of the people said. In presentations to investors, it included the names of customers before verifying whether those merchants were able to use its technology. For a time, a fraud detection product it was pitching to merchants was more dependent on manual review than Mr. Breslow implied, according to a former employee.

      "Fake it until you make it" meets reality

      Overpromising and underdelivering is a common "tech bro" problem. Theranos, while not led by a "bro", suffered from much of the same problem.

    1. In July 2021, Wyoming became the first state in the country to explicitly codify rules around DAOs wishing to become domiciled in that jurisdiction. This rule change means that DAOs in Wyoming are considered a distinct form of limited liability company (LLC), which grants them a legal personality and confers a wide range of rights, such as limited liability for members. Without this protection, a DAO could be viewed as a general partnership, exposing its members to personal liability for any of the DAO’s obligations or actions. Each DAO must have a registered agent in Wyoming, and the agent must establish a physical address and maintain a register of names and addresses of the entity’s directors or individuals serving in a similar capacity.

      Distributed Autonomous Organizations as legal entities

      Wyoming grants a [[distributed autonomous organization|DAO]] a legal form of existence akin to a limited liability company. Without this legal structure, a DAO could be considered a "general partnership" that subjects its participants to personal liability for the action of the DAO's smart contracts.

    1. Blockchains are immutable, which means once data is recorded, it can’t be removed. The idea that blockchains will be used to store user-generated data for services like social networks has enormous implications for user safety. If someone uses these platforms to harass and abuse others, such as by doxing, posting revenge pornography, uploading child sexual abuse material, or doing any number of other very serious things that platforms normally try to thwart with content-moderation teams, the protections that can be offered to users are extremely limited. The same goes for users who plagiarize artwork, spam, or share sensitive material like trade secrets. Even a user who themself posts something and then later decides they’d rather not have it online is stuck with it remaining on-chain indefinitely.

      Nothing is forgotten on the blockchain

      Once something is recorded in the blockchain ledger, it is almost impossible to remove (except, say, for a community-agreed-upon hard fork of the ledger). All of the ills of social media become even more permanent when recorded directly in a blockchain.

    1. “The crisis for the Church is a crisis of discernment,” he said over lunch. “Discernment”—one’s basic ability to separate truth from untruth—“is a core biblical discipline. And many Christians are not practicing it.”

      Discernment as a cause for division

    1. Prasad says that early tests of Nextdoor’s “constructive conversations” reminders have already been positive, though it has led to some decrease in overall engagement on the platform. “We think that it's still the right thing to do.”

      Quality Over Engagement

      A [[private social space]] that says it is prioritizing quality conversation over engagement, according to its chief product officer

    1. Our goal: to encourage neighbors to conduct more mindful conversations. What if we can be proactive and intervene before the conversations spark more abusive responses? Oftentimes unkind comments beget more unkind comments. 90% of abusive comments appear in a thread with another abusive comment, and 50% of abusive comments appear in a thread with 13+ other abusive comments.* By preventing some of these comments before they happen, we can avoid the resulting negative feedback loops.

      Proactive Approach to Handling Abusive Comments

      Interesting that they took a more nuanced approach to this problem. Something more heavy-handed would have added a time delay or limit on the number of comments by a particular user. Instead, they chose to model the conversations and have the app offer pop-ups based on that. Another alternative would be something like [[social credit score]].

    2. This model was tasked with predicting whether a future comment on a thread will be abusive. This is a difficult task without any features provided on the target comment. Despite the challenges of this task, the model had a relatively high AUC over 0.83, and was able to achieve double digit precision and recall at certain thresholds.

      Predicting Abusive Conversation Without Target Comment

      This is fascinating. The model is predicting if the next, new comment will be abusive by examining the existing conversation, and doing this without knowing what the next comment will be.

    3. AUC

      "Area Under Curve"

    4. For some cases such as misinformation and discrimination, these reports are sent directly to our trained Neighborhood Operation Staff to review.

      Ah, interesting...I didn't know these went to Nextdoor staff.

    5. The multiple dimensions of this conversation created some complexity around how we define each comment’s parent node and traverse along the parent nodes to recreate the conversation thread.

      I would argue that Nextdoor's threaded mode is broken because there are only ever two levels: the parent comment and any replies. I've seen confusion in Nextdoor posts when one person's reply is read as a reply to the parent comment rather than a reply to a reply. I wonder what the implementation decision behind this was.

    1. When fed information about a target individual’s mobile phone interactions, as well as their contacts’ interactions, AI can correctly pick the target out of more than 40,000 anonymous mobile phone service subscribers more than half the time, researchers report January 25 in Nature Communications.

      Citation to research: A.-M. Creţu et al. Interaction data are identifiable even across long periods of time. Nature Communications. Published online January 25, 2022. doi: 10.1038/s41467-021-27714-6.

    2. For one test, the researchers trained the neural network with data from an unidentified mobile phone service that detailed 43,606 subscribers’ interactions over 14 weeks. This data included each interaction’s date, time, duration, type (call or text), the pseudonyms of the involved parties and who initiated the communication.

      Graph of Phone Calls/Texts

      One of the tests involved the researchers creating a directed graph of user calls/texts including timestamp, type of interaction (call versus text), and duration. Just based on the pattern of interaction, the AI could be fed the graph of a known individual and spot that person in the anonymized dataset about 15% of the time. Adding the second-derivative interactions into the search graph increased the positive result to just over 50%.
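
      A minimal sketch of the interaction graph itself, assuming networkx and made-up records; the paper's neural-network matcher built on top of such graphs is far more elaborate.

      ```python
      import networkx as nx

      # Directed multigraph of calls/texts: nodes are pseudonymous
      # subscribers, edges carry timestamp, type, and duration.
      # Records are made up for illustration.
      records = [
          ("user_17", "user_42", {"ts": "2022-01-03T09:14", "kind": "call", "secs": 124}),
          ("user_42", "user_17", {"ts": "2022-01-03T21:02", "kind": "text", "secs": 0}),
          ("user_42", "user_88", {"ts": "2022-01-04T08:45", "kind": "call", "secs": 37}),
      ]

      G = nx.MultiDiGraph()
      for src, dst, attrs in records:
          G.add_edge(src, dst, **attrs)

      # Widening a target's profile from their own interactions (1-hop)
      # to their contacts' interactions (2-hop) is what pushed
      # re-identification from roughly 15% to just over 50%.
      def k_hop_profile(G: nx.MultiDiGraph, target: str, k: int = 2) -> nx.MultiDiGraph:
          reachable = nx.single_source_shortest_path_length(
              G.to_undirected(as_view=True), target, cutoff=k)
          return G.subgraph(reachable)

      profile = k_hop_profile(G, "user_17", k=2)
      print(profile.number_of_nodes(), profile.number_of_edges())
      ```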

    1. those community driven organizations were where a lot of us learned how to do small “D” democracy. We learned how to run a meeting. We learned how to manage a budget. We learned how to host a group discussion. We learned how to get people to work together to do things. We've lost some of that. I think one of the best ways that people are learning those small d democratic skills are doing things like being moderators on Reddit, are trying to figure out how to run virtual spaces that they actually have control over.

      Small Digital Social Spaces Mimic Community Organizations

      How do people learn to interact with each other in productive ways? In the past it was with community-driven organizations. How do we bring that teaching tool to digital spaces?

    2. I don't think you can responsibly run a social network these days, without some way of dealing with things like child sexual abuse imagery. Whichever lines you want to draw, maybe it's around terrorism. Certainly it's around child abuse imagery. There have to be some central resources where you can put up fingerprints of your images, instead of say, "I need to take this down." Even in the circumstances that I'm talking about, there's no way to deal with a community that decides that child porn's a great thing and we're going to trade it back and forth, without having some of the central resources that you can work against. If you really were working for this decentralized world, some combination of the mandatory interop without too high level of it, and some sort of those collective resources that we in a field all work together on maintaining and feeding. With auditability, I understand that all those resources need to be audited and checked, so you don't end up being an irresponsible blacklist.

      Small Social Spaces need Central Resources

      The example brought up is child sexual abuse material (CSAM). Small communities will need to police this. Pretty sure that is a universal understanding.

      What if one person's terrorism is another's political speech? Under whose laws do small communities fall when the participants can cross political boundaries?
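
      A minimal sketch of the shared fingerprint-list idea, using exact SHA-256 hashes for simplicity; real systems use perceptual hashes (e.g., PhotoDNA) so that re-encoding an image doesn't defeat the match, and the list itself would need the auditing mentioned above.

      ```python
      import hashlib

      # A small shared service keeps fingerprints of known-abusive images;
      # platforms query it on upload instead of sharing images themselves.
      # Exact hashing is illustrative only.

      shared_blocklist: set[str] = set()   # maintained collectively, auditable

      def fingerprint(image_bytes: bytes) -> str:
          return hashlib.sha256(image_bytes).hexdigest()

      def report(image_bytes: bytes) -> None:
          # A platform contributes a fingerprint, not the image itself.
          shared_blocklist.add(fingerprint(image_bytes))

      def should_block(image_bytes: bytes) -> bool:
          return fingerprint(image_bytes) in shared_blocklist
      ```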

    3. The only problem with it is that when you leave Facebook, your data's really only useful if you're going to another Facebook. Leaving is not actually the interesting thing. It's being able to interoperate with each of the small blocks of content.

      Interoperate versus Leaving

      Private social media spaces have tools that let you "export" your data, but that data is only good on the social media space it came from. What we need instead is a level of interoperability between social media spaces.

    4. digital public infrastructure, this idea that maybe our public spaces should actually be paid for with public dollars

      Digital Public Infrastructure

      As an answer to private social spaces.

    5. If we, as human beings are allowed to change and evolve, we have to find some way to be able to outgrow our data doppelgängers. It's not just that these things are creepy. It's that they're literally holding us to our worst selves, even when we try to change and work our way through the future.

      Data Doppelgängers phrase

      This is an aspect of the right to be forgotten. Should this be just about behavioral advertising? What about the person running for office who has old pictures and old writings coming back to haunt them?

    6. I don't think that hateful speech disappears in the model that I'm talking about.  Will people build horrible hateful spaces within the architecture that I'm trying to design? Yeah, absolutely. My hope is that most of us will avoid them. My hope is that those spaces will be less successful in recruiting and pulling people into those spaces.

      Small Spaces and Hateful Speech

      Hateful speech doesn't go away, but is segregated into smaller spaces where "my hope is that most of us will avoid." This will depend on the software tools that communities have to enforce rules about hateful speech. I want to think this will work, but it is so easy to have bots spin up small communities, and I imagine those bots infiltrating spaces.

    7. We've had three things happen simultaneously: we've moved from an open web where people start lots of small projects to one where it really feels like if you're not on a Facebook or a YouTube, you're not going to reach a billion users, and at that point, why is it worth doing this? Second, we've developed a financial model of surveillance capitalism, where the default model for all of these tools is we're going to collect as much information as we can about you and monetize your attention. Then we've developed a model for financing these, which is venture capital, where we basically say it is your job to grow as quickly as possible, to get to the point where you have a near monopoly on a space and you can charge monopoly rents. Get rid of two aspects of that equation and things are quite different.

      How We Got Here: Concentration of Reach, Surveillance Capitalism, and Venture Capital

      These three things combined drove the internet's trajectory. Without these three components, we wouldn't have seen the concentration of private social spaces and the problems that came with them.

    8. we both want things that are smaller in terms of the manageability of the size of it. But the other thing you said was really important is we want to be able to skip between them. We don't want to all be locked in small little walled gardens where we can't go from one place to another.

      Interoperability of Spaces

      This echoes Cory Doctorow's interoperability vision.

    9. Scale's hard. Having a set of speech rules that work for 10 people around a dinner table, that can be hard in and of itself. Everyone can think of a Christmas meal or a Thanksgiving meal with someone who's really politically out of line with everyone else and the Thanksgiving rules of no politics around the table. But that's 10 people. Once you start trying to scale that to India/Kashmir, or Palestinians/Israelis, or Rohingya/Bamar, you're really wrestling with some questions that frankly, most of the people involved with these companies are not qualified to address. The only solution I've been able to come to out of that is a whole lot of small spaces. All of us moving between them and some of us taking responsibility for governing, at least some of the spaces that we interact in.

      Devolve the rule-making for a space to smaller-sized groups. One set of global rules is not manageable.

      Interesting, though: one set of technical global rules—protocols and other standards—is required for global communication, but the social/interpersonal aspects of global rules defies codification.

    10. She's built a whole company around adding features to Twitter that Twitter, frankly, should have.

      Tracy Chou's company is Block Party.

    11. Reimagining looks at this and says, "Wait a second, why are we trying to have a conversation in a space that's linking 300 million people? Maybe this isn't where I want to have my conversation. And you know what? I don't actually want my conversation moderated by poorly paid people in the Philippines who are flipping through a three ring binder to figure out if speech is acceptable. What if we built social media around communities of people who want to interact with one another and want to take responsibility for governing those spaces?"

      Reimagining Social Media

      This is what Ethan Zuckerman proposes as re-imagined social media spaces...communities of people owning the rules for the space they are in, and then having loosely connected spaces interact.

    12. So we've gone from worrying about government censoring the net, to worrying about platform censoring the net, to now in some cases, worrying about platforms not doing enough to censor the net, this is not how we should be running a digital public sphere. 

      Progression of Concerns about the Private Social Space

      The private social spaces don't make themselves available to research analysis, so we have this vague feeling that something is wrong without empirical evidence that we can really test.

    13. I got sober about four years ago, but the internet knows me as an alcoholic and there is in those many records out there, the fact that I have clicked on alcohol ads. I have bought alcohol online. The internet in a very real way doesn't want me to stop drinking. The fact that they know that I like to drink is actually very lucrative for them. When you think about this, this creates a really interesting ethical conundrum. It's not just that these things are creepy. It's that they're literally holding us to our worst selves, even when we try to change and work our way through the future.

      Effects of Behavioral Advertising when the Behavior Changes

      It is said that the internet doesn't forget. This could be really true for behavioral advertisers whose business it is to sell to your behaviors, whether you've wanted to change them or not.

    1. The software trains on 100 hours of footage so each camera can learn “normal” behavior, and then it flags anything deemed out of the ordinary. Each camera can also be configured with additional hard-coded rules. For example, it can be programmed with barriers that people should never cross and zones where cars should never stop.

      AI learns what is "normal" and escalates non-normal things (or hard-coded condition violations) to human operators.
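
      To make the hard-coded rule layer concrete, here is a minimal sketch of a "never cross" barrier check using a ray-casting point-in-polygon test; the zone and track coordinates are hypothetical, and the learned "normal behavior" model would run alongside rules like this.

      ```python
      # Geofenced "never cross" barrier check: flag any tracked position
      # that falls inside a restricted polygon. Coordinates are made up.

      Point = tuple[float, float]

      def inside(polygon: list[Point], p: Point) -> bool:
          """Ray-casting point-in-polygon test."""
          x, y = p
          hit = False
          for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
              if (y1 > y) != (y2 > y):
                  if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                      hit = not hit
          return hit

      restricted_zone = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (0.0, 5.0)]
      track = [(12.0, 3.0), (11.0, 4.0), (9.5, 4.2)]   # positions over time

      for frame, pos in enumerate(track):
          if inside(restricted_zone, pos):
              print(f"frame {frame}: barrier crossed at {pos}, escalate to operator")
      ```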

    2. The bulk of Vumacam’s subscribers have thus far been private security companies like AI Surveillance, which supply anything from armed guards to monitoring for a wide range of clients, including schools, businesses, and residential neighborhoods. This was always the plan: Vumacam CEO Croock started AI Surveillance with Nichol shortly after founding Vumacam and then stepped away to avoid conflicts with other Vumacam customers.

      AI-driven Surveillance-as-a-Service

      Vumacam provides the platform, AI-driven target selection, and human review. Others subscribe to that service and add their own layers of services to customers.

    1. We still have scientific papers; we still send them off to peer reviewers; we still have editors who give the ultimate thumbs up or down as to whether a paper is published in their journal.

      To which we should also add, of course, the exorbitant fees and resulting profits for the corporate entities doing the publishing.

    2. Having been printed on paper since the very first scientific journal was inaugurated in 1665

      There is some history here. The first scientific journal was one that published the proceedings of one of the first scholarly society meetings (The (mostly true) origins of the scientific journal - Scientific American Blog Network) and resulting letters.

    1. A platform can use all available signals to judge who should have promotion and privileges on the platform, including giving more access and amplification to those who have shown a consistent history of positively engaging with others. And those who misbehave on the platform would be managed with a community management strategy that's informed by the principles of restorative justice, incentivizing good behaviors while also taking into account a person's history of community contributions on other platforms as well.

      Using Outside Signals to Judge Promotion/Privileges

      The linked "community management strategy" post is about using signals outside our system to help adjudicate the actions on our own network. This seems strikingly close to the "social credit score" that is being tried in some cities in China. And that is pretty uncomfortable.

    2. We're not currently seeing a debate about "free speech". What we're actually witnessing is just a debate about who controls the norms of a social network, and who gets free promotion from that network.

      Private Social Space

      The First Amendment ("free speech") guides what speech the government can control. The social networks are private companies, so the control is over who gets to say what in that private, social space. Is there an analog about who gets to say what in a bar...is it the bar owner? (The bar being an example of another public, social space.)

  3. Apr 2022
    1. Given the difficulty of regulating every online post, especially in a country that protects most forms of speech, it seems far more prudent to focus most of our efforts on building an educated and resilient public that can spot and then ignore disinformation campaigns

      On the need for disinformation education

      ...but what is the difference "between what’s a purposeful attempt to mislead the public and what’s being called disinformation because of a genuine difference of opinion"

    1. This practice leaves patrons’ search histories, search results, and full content of browsed pages open to covert surveillance via packet-sniffing applications such as the free cross-platform Wireshark.

      Problems with Unencrypted Web Traffic

      The author limits the extent of privacy issues to illicit scanning of the local area network. A bigger problem—particularly from residential networks where IP addresses can be closely connected to users—is ISPs gathering behavioral data from DNS and deep-packet inspection of HTTP transactions.

    2. This list should be preferred to the lists most browser ad-blocking plugins use, as it is limited only to trackers and does not include advertisers and other organizations that do not employ tracking techniques.

      Trackers versus Advertisers

      This was a distinction I didn't anticipate. Almost all advertising—at least on the general web—is using past browsing history or clues from the browser IP/context to select advertisements to show. Trackers would capture this information for purposes other than advertising. So I'm not sure why removing advertisers from this analysis is warranted.

    3. the privacy of an e‑resource may be considered physical-equivalent only when a patron using an information-equivalent physical resource would enjoy no more privacy than the same patron using the e‑resource

      Definition of "Physical-equivalent privacy"

      The formulation of the definition assumes patron privacy when using the physical carrier is always better than when using the digital carrier. On its face, I think that assumption holds true. We could construct a counter-scenario—say, visiting a Tor .onion site while using Tails is more privacy-protecting than pulling a reference book from a shelf while being observed—but those seem so far outside the probable that they can be ignored.

    4. A systematic decrease in the privacy of e‑resource use relative to use of physical materials provides neither equitable service nor equitable information access to patrons with little or no choice but to use information in electronic form. These patrons include:

      Equity of Privacy Protection for Patrons in Conditions Where Electronic Access is All that is Available

      Article text has category descriptions of patrons who can only use electronic versions of materials.

    5. Dorothea Salo (2021) Physical-Equivalent Privacy, The Serials Librarian, DOI: 10.1080/0361526X.2021.1875962

      Permanent Link: http://digital.library.wisc.edu/1793/81297

      Abstract

      This article introduces and applies the concept of “physical-equivalent privacy” to evaluate the appropriateness of data collection about library patrons’ use of library-provided e‑resources. It posits that as a matter of service equity, any data collection practice that causes e‑resource users to enjoy less information privacy than users of an information-equivalent print resource is to be avoided. Analysis is grounded in real-world e‑resource-related phenomena: secure (HTTPS) library websites and catalogs, the Adobe Digital Editions data-leak incident of 2014, and use of web trackers on e‑resource websites. Implications of physical-equivalent privacy for the SeamlessAccess single-sign-on proposal will be discussed.

    1. Redesigning democracy for the digital age is far beyond my abilities, but I can suggest three categories of reforms––three goals that must be achieved if democracy is to remain viable in the post-Babel era. We must harden democratic institutions so that they can withstand chronic anger and mistrust, reform social media so that it becomes less socially corrosive, and better prepare the next generation for democratic citizenship in this new age.

      Reforms for a Post-Babel Era

      • harden democratic institutions
      • reform social media
      • prepare the next generation
    2. This, I believe, is what happened to many of America’s key institutions in the mid-to-late 2010s. They got stupider en masse because social media instilled in their members a chronic fear of getting darted. The shift was most pronounced in universities, scholarly associations, creative industries, and political organizations at every level (national, state, and local), and it was so pervasive that it established new behavioral norms backed by new policies seemingly overnight. The new omnipresence of enhanced-virality social media meant that a single word uttered by a professor, leader, or journalist, even if spoken with positive intent, could lead to a social-media firestorm, triggering an immediate dismissal or a drawn-out investigation by the institution. Participants in our key institutions began self-censoring to an unhealthy degree, holding back critiques of policies and ideas—even those presented in class by their students—that they believed to be ill-supported or wrong.

      Key American Institutions Lose the Ability to Think Critically

    3. Across eight studies, Bor and Petersen found that being online did not make most people more aggressive or hostile; rather, it allowed a small number of aggressive people to attack a much larger set of victims.

      Social Media Assholes

      It isn't that social media made most people more aggressive or hostile; it over-amplified the few that had that nature.

    4. The many analysts, including me, who had argued that Trump could not win the general election were relying on pre-Babel intuitions, which said that scandals such as the Access Hollywood tape (in which Trump boasted about committing sexual assault) are fatal to a presidential campaign. But after Babel, nothing really means anything anymore––at least not in a way that is durable and on which people widely agree.

      Trump Won the 2016 Election Because We Thought We Could Rely on Pre-Babel Institutions

    5. But that essay continues on to a less quoted yet equally important insight, about democracy’s vulnerability to triviality. Madison notes that people are so prone to factionalism that “where no substantial occasion presents itself, the most frivolous and fanciful distinctions have been sufficient to kindle their unfriendly passions and excite their most violent conflicts.”

      Democracy Vulnerability to Triviality

      Although social media is called out here, I wonder if there is an aspect of 24-hour news networks' need to fill time that also drives "frivolous and fanciful distinctions."

    6. This new game encouraged dishonesty and mob dynamics: Users were guided not just by their true preferences but by their past experiences of reward and punishment, and their prediction of how others would react to each new action. One of the engineers at Twitter who had worked on the “Retweet” button later revealed that he regretted his contribution because it had made Twitter a nastier place. As he watched Twitter mobs forming through the use of the new tool, he thought to himself, “We might have just handed a 4-year-old a loaded weapon.”

      Twitter Engineer Regrets Retweet Feature

      Companies and services are made up of people making decisions. Often decisions that we can't see the impact of...or are unwilling to listen to the wisdom of those that are predicting the impact.

    7. Before 2009, Facebook had given users a simple timeline––a never-ending stream of content generated by their friends and connections, with the newest posts at the top and the oldest ones at the bottom. This was often overwhelming in its volume, but it was an accurate reflection of what others were posting. That began to change in 2009, when Facebook offered users a way to publicly “like” posts with the click of a button. That same year, Twitter introduced something even more powerful: the “Retweet” button, which allowed users to publicly endorse a post while also sharing it with all of their followers. Facebook soon copied that innovation with its own “Share” button, which became available to smartphone users in 2012. “Like” and “Share” buttons quickly became standard features of most other platforms. Shortly after its “Like” button began to produce data about what best “engaged” its users, Facebook developed algorithms to bring each user the content most likely to generate a “like” or some other interaction, eventually including the “share” as well. Later research showed that posts that trigger emotions––especially anger at out-groups––are the most likely to be shared.

      The Firehose versus the Algorithmic Feed

      See related from The Internet Is Not What You Think It Is: A History, A Philosophy, A Warning, except with more depth here.
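      As a sketch of the mechanics being described: before 2009 the feed was a sort by time; afterwards it became a sort by a model's engagement prediction. A toy illustration in Python (the names and scoring are mine, not Facebook's actual system):

      ```python
      from dataclasses import dataclass

      @dataclass
      class Post:
          author: str
          text: str
          posted_at: float             # epoch seconds
          predicted_engagement: float  # model's estimate of like/share probability

      def chronological_feed(posts):
          # Pre-2009: newest first, an accurate reflection of what was posted.
          return sorted(posts, key=lambda p: p.posted_at, reverse=True)

      def algorithmic_feed(posts):
          # Post-2009: whatever the model predicts will draw the most engagement.
          return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
      ```

      The entire behavioral shift described in the article is the difference between those two sort keys.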

    8. Once social-media platforms had trained users to spend more time performing and less time connecting, the stage was set for the major transformation, which began in 2009: the intensification of viral dynamics.

      Perform rather than Connect

      Social media rewards engagement—performance for "likes"—versus conversation.

    9. Social scientists have identified at least three major forces that collectively bind together successful democracies: social capital (extensive social networks with high levels of trust), strong institutions, and shared stories. Social media has weakened all three.

      Thesis: social media has weakened the three forces that social scientists have identified as essential for successful democracies.

    10. Babel is a metaphor for what some forms of social media have done to nearly all of the groups and institutions most important to the country’s future—and to us as a people.

      Algorithms creating the divide

    11. It’s been clear for quite a while now that red America and blue America are becoming like two different countries claiming the same territory, with two different versions of the Constitution, economics, and American history. But Babel is not a story about tribalism; it’s a story about the fragmentation of everything. It’s about the shattering of all that had seemed solid, the scattering of people who had been a community. It’s a metaphor for what is happening not only between red and blue, but within the left and within the right, as well as within universities, companies, professional associations, museums, and even families.

      Babel as an Allegory for present-day America

      Babel is a reference to the story of the Tower of Babel (Genesis chapter 11, verses 1-9). It does indeed seem like the American people are speaking different languages and have been scattered.

    1. The fire hose was like in early Facebook days, when you saw the posts of everyone who was in your network in the order in which they were posted and that was the end of the story. I remember first starting to become clued in to the fact that that wasn’t the way it was working anymore, circa 2011, when I was seeing a lot more stupid stuff, and a lot more stuff that was clearly nudging me in one direction or another, rather than giving me an autonomous view of the landscape of information out there.

      The Firehose versus the Algorithmic Feed

      Social media used to be reverse-chronological. When it switched to algorithm-driven feeds, what we saw was determined by the algorithm, and the algorithm was programmed to promote engagement to increase the profits of the social media companies. See related Why the Past 10 Years of American Life Have Been Uniquely Stupid.

    2. And therefore, to accept the dictates of algorithms in deciding what, for example, the next song we should listen to on Spotify is, accepting that it will be an algorithm that dictates this because we no longer recognize our non-algorithmic nature and we take ourselves to be the same sort of beings that don’t make spontaneous irreducible decisions about what song to listen to next, but simply outsource the duty for this sort of thing, once governed by inspiration now to a machine that is not capable of inspiration.

      Outsourcing decisions to algorithms

    3. Algorithms in themselves are neither good nor bad. And they can be implemented even where you don’t have any technology to implement them. That is to say, you can run an algorithm on paper, and people have been doing this for many centuries. It can be an effective way of solving problems. So the “crisis moment” comes when the intrinsically neither-good-nor-bad algorithm comes to be applied for the resolution of problems, for logistical solutions, and so on in many new domains of human social life, and jumps the fence that contained it as focusing on relatively narrow questions to now structuring our social life together as a whole. That’s when the crisis starts.

      Algorithms are agnostic

      As we know them now, algorithms—and [[machine learning]] in general—do well when confined to the domains in which they started. They come apart when dealing with unbounded domains.

    1. a child had gone missing in our town and the FBI came to town to investigate immediately and had gone to the library. They had a tip and wanted to seize and search the library’s public computers. And the librarians told the FBI that they needed to get a warrant. The town was grief stricken and was enraged that the library would, at a time like that, demand that the FBI get a warrant. Like everyone in town was like, are you kidding me? A child is missing and you’re– and what? This town meeting afterwards, the library budget, of course, is up for discussion as it is every year, and the people were still really angry with the library, but a patron and I think trustee of the library – again, a volunteer, someone living in town – an elderly woman stood up and gave the most passionate defense of the Fourth Amendment and civil liberties to the people on the floor that I have ever witnessed.

      An example of how a library in Vermont stood up to a warrantless request from the FBI to seize and search public library computers. This could have impacted the library's budget when the issue was brought to a town meeting, but a library patron was a passionate advocate for the 4th amendment.

    1. The Internet owes its strength and success to a foundation of critical properties that, when combined, represent the Internet Way of Networking (IWN). This includes: an accessible Infrastructure with a common protocol, a layered architecture of interoperable building blocks, decentralized management and distributed routing, a common global identifier system, and a technology neutral, general-purpose network.

      Definition of the Internet Way of Networking

    1. it’s important to note that this study specifically looked at political speech (the area that people are most concerned about, even though the reality is that this is a tiny fraction of what most content moderation efforts deal with), and it did find that a noticeably larger number of Republicans had their accounts banned than Democrats in their study (with a decently large sample size). However, that did not mean that it showed bias. Indeed, the study is quite clever, in that it corrected for generally agreed upon false information sharers — and the conclusion is that Twitter’s content moderation is biased against agreed-upon misinformation rather than political bias. It’s just that Republicans were shown to be much, much, much more willing to share such misinformation.

      This is arguably a good thing in society, even if social media companies take it on the chin in lost revenue.

    1. The situation would be better for IPv6 under two conditions. First, if IPv6 could offer some popular new services that IPv4 cannot offer—that would provide the former with additional products (and value) that the latter does not have. Second, IPv6 should avoid competition with IPv4, at least until it has been widely deployed. That would be the case if IPv6 was presented, not as a replacement to IPv4, but as “the second network layer protocol” that is required to support the previous new services.

      On IPv6 replacing IPv4

      This could be interesting to watch. In the early days of IPv6 that I was tracking, it seemed like there were many new features built into it that made the protocol better than IPv4. Perhaps those competitive features were abandoned. In a footnote to this article, the authors state:

      The original proposals for IPv6 included several novel services, such as mobility, improved auto-configuration and IP-layer security, but eventually IPv6 became mostly an IPv4-like protocol with many more addresses.

      In order to be adopted, IPv6 had to become IPv4 with more address space (mostly to fulfill the needs of the mobile computing marketplace). But having simplified itself so that mobile carriers could easily understand and adopt it, does the resulting feature parity with IPv4 mean that IPv4 never goes away?

    2. EvoArch suggests an additional reason that IPv4 has been so stable over the last three decades. Recall that a large birth rate at the layer above the waist can cause a lethal drop in the normalized value of the kernel, if the latter is not chosen as substrate by the new nodes. In the current Internet architecture, the waist is the network layer but the next higher layer (transport) is also very narrow and stable. So, the transport layer acts as an evolutionary shield for IPv4 because any new protocols at the transport layer are unlikely to survive the competition with TCP and UDP. On the other hand, a large number of births at the layer above TCP or UDP (application protocols or specific applications) is unlikely to significantly affect the value of those two transport protocols because they already have many products. In summary, the stability of the two transport protocols adds to the stability of IPv4, by eliminating any potential new transport protocols that could select a new network layer protocol instead of IPv4.

      Network Layer protected by Transport Layer

      In the case of IPv4 at the network layer, it is protected by the small number of protocols at the transport layer. Even the cannibalization of TCP by QUIC is happening at the transport layer: [QUIC] does this by establishing a number of multiplexed connections between two endpoints using User Datagram Protocol (UDP), and is designed to obsolete TCP at the transport layer for many applications, thus earning the protocol the occasional nickname “TCP/2”.

    1. To ensure more diversity in the middle layers, EvoArch suggests designing protocols that are largely non-overlapping in terms of services and functionality so that they do not compete with each other. The model suggests that protocols overlapping more than 70 percent of their functions start competing with each other.

      When new protocols compete

      I think one way of reading this would be to say that HTTP replaced FTP because it did at least 70% of what FTP did. And in order to compete with or replace HTTP, something is going to need to do at least 70% of what HTTP does—and presumably in some better fashion—before HTTP too will be replaced.

      It would be interesting to think of this in an HTTP/1.1, HTTP/2.0, HTTP-over-QUIC framing. Will HTTP/1.1 eventually be replaced?
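      A toy rendering of that 70 percent rule, treating each protocol as a set of functions (my simplification for illustration; the EvoArch paper defines overlap more carefully):

      ```python
      def overlap(a: set, b: set) -> float:
          # Fraction of the smaller protocol's function set shared with the other.
          return len(a & b) / min(len(a), len(b))

      http = {"transfer", "naming", "caching", "auth", "metadata"}
      ftp = {"transfer", "naming", "auth", "listing"}

      print(overlap(http, ftp))  # 0.75 -> above the 0.70 threshold, so they compete
      ```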

    2. The EvoArch model predicts the emergence of few powerful and old protocols in the middle layers, referred to as evolutionary kernels. The evolutionary kernels of the Internet architecture include IPv4 in the network layer, and TCP and the User Datagram Protocol (UDP) in the transport layer. These protocols provide a stable framework through which an always-expanding set of physical and data-link layer protocols, as well as new applications and services at the higher layers, can interoperate and grow. At the same time, however, those three kernel protocols have been difficult to replace, or even modify significantly.

      Defining the "EvoArch" (Evolutionary Architecture) hour-glass model

      The hour-glass model is the way it is because these middle core protocols provide a stable foundation for experimentation and advancement in upper and lower level protocols. That also makes these middle protocols harder to change, as we have seen with the slow adoption of IPv6.

    1. All the evidence indicates that at the edge of the Internet lies an endless frontier of new potential applications and that new transmission technologies are eagerly absorbed as we have seen with the arrival of smartphones, 4G and 5G. The Internet continues to evolve as new ideas for its use and implementation bubble to the surface in the minds of inventors everywhere.

      Will the future of the internet always be open?

      This paragraph has an embedded assumption that open standards of encapsulated protocols will continue to be the norm on the internet. Is there so much momentum in that direction that we can assume this to be true? What would it look like if this started to change?

    2. A higher layer protocol is encapsulated as payload in lower layers which provides a well-defined boundary between layers.  This boundary isolates a higher layer from lower layer implementation.

      Excellent summary of encapsulated protocol layers. From someone who was there...Vinton Cerf.
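      A toy illustration of that boundary, with fake string headers standing in for the real binary ones:

      ```python
      def encapsulate(payload: bytes) -> bytes:
          # Each lower layer treats everything above it as an opaque payload.
          segment = b"TCP-HDR|" + payload         # transport layer
          packet = b"IP-HDR|" + segment           # network layer
          frame = b"ETH-HDR|" + packet + b"|FCS"  # link layer
          return frame

      print(encapsulate(b"GET / HTTP/1.1"))
      # b'ETH-HDR|IP-HDR|TCP-HDR|GET / HTTP/1.1|FCS'
      # The link layer never parses the HTTP request; it only sees bytes.
      ```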

    1. Save around $11.30 for every 100 miles driven in an EV instead of a gasoline fueled vehicle.

      A later tweet provides the math. 4 gallons for 100 miles = $16.80. 34.6kWh for 100 miles = $5.50.
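      Back-solving the unit prices implied by those figures (my assumptions: roughly $4.20 per gallon and $0.159 per kWh), the arithmetic checks out:

      $$4\ \text{gal} \times \$4.20/\text{gal} = \$16.80 \qquad 34.6\ \text{kWh} \times \$0.159/\text{kWh} \approx \$5.50 \qquad \$16.80 - \$5.50 = \$11.30$$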

    1. In the information age, filtering systems, driven by algorithms and artificial intelligence (AI), have become increasingly prominent, to such an extent that most of the information you encounter on the internet is now rearranged, ranked and filtered in some way. The trend towards a more customised information landscape is a result of multiple factors. But advances in technology and the fact that the body of information available online grows exponentially are important contributors.

      And, in fact, the filtering systems are driven by signals of the searcher, not signals of the content. Past behavior (and user profiling), current location (IP address recognition), device type (signal of user intent and/or social-economic status), and other user-specific attributes are being used to attempt to offer users the information that the provider thinks the user is looking for.
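      In ranking terms, the score becomes a function of the searcher as much as of the content. A hypothetical sketch (the features and weights are mine, not any search engine's documented signals):

      ```python
      from dataclasses import dataclass

      @dataclass
      class User:
          click_history_topics: set
          country: str

      @dataclass
      class Doc:
          text_relevance: float
          topics: set
          country: str

      def personalized_score(doc: Doc, user: User) -> float:
          # Content relevance is only one term; the rest describe the searcher.
          score = doc.text_relevance
          if doc.topics & user.click_history_topics:  # past behavior
              score += 2.0
          if doc.country == user.country:             # inferred location
              score += 1.0
          return score
      ```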

    1. As a woman in America in 2022, I will also observe the sexist hostility implicit in this viewpoint is unsurprising as it is insidious. Library work is one of a  small number of professions that have not been (completely) dominated by white men. Libraries are easy targets for this style of prescriptive opinion piece, and I challenge the desire by powerful men to tell others how to do their jobs because it reeks of a desire to dominate which is wholly inappropriate to the collective challenges we face.

      One white male librarian technologist viewpoint.

      I was nodding in agreement with Lindsay's writing until this point. And while I acknowledge the seen-as-feminine-profession problem and the issue of white-man-blinders, I think the argument in this article is more powerful without this paragraph. Mr. Kurtz's op-ed is about librarians on a political spectrum, not librarians on a gender spectrum. Adding this paragraph conflates "woke librarian" with "female librarian".

    1. Hold on...this is like search-engine-optimization for speech? Figure out what the algorithm wants—or doesn't—and adjust what you say to match the effect you seek. Does this strike anyone as a really, really bad idea? https://t.co/nEZlN0bANr

      — Peter Murray (@DataG) April 12, 2022
    2. Algospeak refers to code words or turns of phrase users have adopted in an effort to create a brand-safe lexicon that will avoid getting their posts removed or down-ranked by content moderation systems. For instance, in many online videos, it’s common to say “unalive” rather than “dead,” “SA” instead of “sexual assault,” or “spicy eggplant” instead of “vibrator.”

      Definition of "Algospeak"

      In order to get around algorithms that demote content in social media feeds, communities have coined new words or new meanings to existing words to communicate their sentiment.

      This is affecting TikTok in particular because its algorithm is more heavy-handed in what users see. This is also causing people who want to be seen to tailor their content—their speech—to meet the algorithm's needs. It is like search engine optimization for speech.

      Article discovered via Cory Doctorow at The "algospeak" dialect
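      Mechanically, algospeak is just a community-maintained substitution table. A toy sketch using the examples quoted above (the mapping is from the article; the code is mine):

      ```python
      ALGOSPEAK = {
          "dead": "unalive",
          "sexual assault": "SA",
          "vibrator": "spicy eggplant",
      }

      def to_algospeak(text: str) -> str:
          # Swap flagged terms for euphemisms the moderation model won't down-rank.
          for plain, coded in ALGOSPEAK.items():
              text = text.replace(plain, coded)
          return text

      print(to_algospeak("the character is dead"))  # "the character is unalive"
      ```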

    1. Much of the time, the blurred automation/enforcement distinction doesn’t matter. If you and I trust one another, and you send me a disappearing message in the mistaken belief that the thing preventing me from leaking it is the disappearing message bit and not my trustworthiness, that’s okay. The data still doesn’t leak, so we’re good.But eventually, the distinction turns into a fracture line.

      Automation versus enforcement

      As a message sender, I'm trusting the automation to delete the message in the same manner as a pair-wise agreement to manually delete a conversation. But, as the essay notes at the start, there is no active enforcement of that deletion that survives the fact that the recipient has full control over their own computer (and messaging app).

      When that automation is all in one closed platform, it is somewhat straightforward to assume that the automation will occur as anticipated. Once a platform is opened up and the automation rules are encoded into APIs, enforcement becomes much harder. The recipient can receive a message containing the automation parameters for deletion, but choose whether or not to honor that in a way that the sender doesn't understand or know.
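      A sketch of that fracture line; the field and class names here are hypothetical:

      ```python
      import time
      from dataclasses import dataclass

      @dataclass
      class Message:
          body: str
          delete_after: float  # epoch seconds; a request, not an enforcement

      class HonestClient:
          def __init__(self):
              self.inbox = []

          def receive(self, msg: Message):
              self.inbox.append(msg)

          def purge_expired(self):
              # Automation: this client volunteers to honor the sender's wishes.
              self.inbox = [m for m in self.inbox if time.time() < m.delete_after]

      class DefectingClient:
          def receive(self, msg: Message):
              # Enforcement is impossible: the API can carry the deletion
              # parameter, but nothing stops a client from keeping a copy.
              self.archive = msg.body
      ```

      The sender's wire format is identical in both cases; only the recipient's goodwill differs.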

    2. But beyond this danger is a subtler — and more profound — one. We should not normalize the idea that our computers are there to control us, rather than to empower us.

      The general case against uncontrollable automation

      As Doctorow says a paragraph earlier, the danger lies in the implementation of the automation; a computer that can be told not to take action can also be coerced by another party to take an action we didn't intend.

      At a fundamental level, is the computer a tool that empowers us or controls us? Does the computer implement commands from us, or are we at the mercy of commands from other users? This is the key question of digital rights.

    3. Disappearing message apps take something humans are bad at (remembering to do a specific task at a specific time) and hand that job to a computer, which is really good at that.

      Disappearing message apps automate the agreement

      The people in the message thread turn the responsibility for deleting the thread over to a machine. One person doesn't need to rely on the memory of another person to ensure the contents are deleted. People might forget; the machine just runs its rules.

    4. I thought that the point of disappearing messages was to eat your cake and have it too, by allowing you to send a message to your adversary and then somehow deprive them of its contents. This is obviously a stupid idea.But the threat that Snapchat — and its disappearing message successors —was really addressing wasn’t communication between untrusted parties, it was automating data-retention agreements between trusted parties.

      Why use a disappearing message service

      The point of a disappearing message service is to have the parties to the message agree on the data-retention provisions of a message. The service automates that agreement by deleting the message at the specified time. The point isn't to send a message to an adversary and then delete it so they can't prove that it has been sent. There are too many ways of capturing the contents of a message—as simple as taking a picture of the message with another device.

    1. On the flip side, getting more than 700 sign-ups for two weeks in a row made operators eligible for an additional Orb. “Just tell people it’s free money,” one operator said a Worldcoin representative advised them.

      Worldcoin pyramid scheme

      This is sounding like a pyramid scheme with the goal of getting all of the world's population involved. Is there a relationship to what commentators are calling the Bitcoin pyramid scheme?

    2. Biometrics play an important role in colonial history: British administrators began experimenting with them in the 1850s as a way to control and intimidate their subjects in colonial India. Worldcoin’s activities in India, as well as other former British colonies such as Zimbabwe, where banks are banned from processing crypto transactions, and Kenya, where a new law forbids the transfer of biometrics data beyond the country’s borders, evoke Silicon Valley’s history of ignoring sensitive cultural issues and skirting regulations.

      Colonial history of biometrics

      Article text links to The Origin of Finger-Printing . Nature 98, 268 (1916). https://doi.org/10.1038/098268a0.

    1. In Hypothes.is who are you annotating with?

      So far, "Public". I think it is cool that Hypothes.is supports groups, and I get how classrooms or research teams would be good groups. For me, though, not enough of my peers are using Hypothes.is to have it make sense to form a group.

      That said, I have gotten into a couple interesting conversations with public Hypothes.is annotations. I followed a couple more people on Lindy Annotations because of it.

    2. Do you annotate differently in public view, self censoring or self editing?

      So far, no. It might be useful to add a disclaimer footer to the bottom of any Hypothes.is annotation to say that the contents of the annotation might only make sense to me, but so far I haven't found the need to change what is included in an annotation.

    1. Weinberg’s tweet announcing the change generated thousands of comments, many of them from conservative-leaning users who were furious that the company they turned to in order to get away from perceived Big Tech censorship was now the one doing the censoring. It didn’t help that the content DuckDuckGo was demoting and calling disinformation was Russian state media, whose side some in the right-wing contingent of DuckDuckGo’s users were firmly on.

      There is an odd sort of self-selected information bubble here. DuckDuckGo promoted itself as privacy-aware, not unfiltered. On their Sources page, they talk about where they get content and how they don't sacrifice privacy to gather search results. Demoting disinformation sources in their algorithms would seem to be a good thing. Except if what you expect to see is disinformation, and then suddenly the search results don't match your expectations.

    1. even if it is necessary to adjust the policy over time as new risks and considerationsemerge.

      Wondering now if there is a sort of "agile" editorial approach to policy-making. Policy seems like something that is concrete and shouldn't change very often. The development of a policy could happen in focused sprint cycles (perhaps alongside the technology implementation), but certainly the publication of policies is something that should be more intentional.

      Also, this is a test annotation.

    2. Organizationsmust consider these threats before introducing new technologies, rather than the other way around

      Later in the article, the author says: "It is always better to start with a policy than to make one up as one goes along, even if it is necessary to adjust the policy over time as new risks and considerations emerge."

      Also, this is a test annotation.

  4. Mar 2022
    1. Students’ perspectives on their data may shift, however, when they are given opportunities to learn about the risks (Bowler et al., 2017), and there is a strong argument that such activities are a requirement for ethical practice in the use of data (Braunack-Mayer et al., 2020)

      On the value of teaching students about the risks of overly verbose and unnecessary data trails.

    2. graduate attribute statements

      Many universities, in recent years, have published formal statements of what they believe graduates of their programmes should be capable, in terms of skills and abilities beyond specific subject knowledge. Or, perhaps more correctly, what graduates could potentially be capable of, if successful in their studies and taking all the opportunities available to them whilst they complete their degree programme (including, typically, engaging fully in the wider student experience with clubs, societies, volunteering, placements, etc.). Focus on Graduate Attribute Statements | Crannóg Project: collaborative knowledge exchange

    3. This in turn means that data ownership, privacy, ethics and transparency are becoming issues that are dealt with by corporate players, based in the global North, rather than negotiated through local policies and their application.

      Ah, of course! The assumptions on which these SaaS offerings are made are primarily in the well developed nations, and are likely inappropriate for other countries.

    4. educators and administrators need to be wary about potential discriminations and asymmetries resulting from continually categorising and normalising people as they work and study.

      Notable source of systemic inequalities that the adoption of data-driven decision-making is bringing into being.

    5. Surveillance technologies, especially those backed by significant amounts of venture capital, are often underpinned by the same precarious labour and outsourcing practices that are critiqued from within the academy

      Ah, vulture capitalism.

    6. As teachers’ roles become less coherent and satisfying, they also become more stratified, with staff who perform the lower-valued (typically more caring, student-oriented and “feminised”) aspects of the role being increasingly casualised, monitored, and subjected to “efficiency” measures.

      Ah! See previous annotation on the problems that qualitatively productive activities pose for analytics programs for instructor evaluation.

    7. Practices of monitoring and tracking students’ online behaviour also entrench the belief that meaningful learning activity is that which can be measured minutely and monitored closely, ignoring activities such as thinking, reading, imagining, creating, challenging, and unstructured discussion

      Employing data collection and analytics on student activity has the effect of valuing only that which can be quantitatively measured. Qualitative scholarly activities—arguably, especially activities that are seen as "idle" or "nonsense"—cannot be measured and so provide no valuable input into the models that predict student success or instructor performance.

    8. However, perversely, the more these tools are employed, the more adversarial teaching relationships with students become, fueling both the risk of cheating and the arguments against a trust model of higher education

      While plagiarism detection systems and remote test proctoring systems were put in place on the assumption that all students are inclined to cheat, the net effect of the introduction of these tools is to erode the trust between students and instructors that would have been a natural barrier to such activity.

    9. trust that research processes generate valid, useful knowledge and evidence that can inform practice and decision-making both within the HE context and in society more broadly.

      There is also an intersection here with the long-standing problems with for-profit corporate interests in the scholarly communication chain that are probably not addressed in this article.

    10. the normalization of vendor-university relationships (which tend to privilege vendor profit-making)

      A mismatch between the values/goals of the university and the values/goals of the for-profit corporation.

    11. The unilateral claiming of private human experience as free raw material for translation into behavioral data constitutes, for Zuboff, a new economic order

      Holy crap!

    12. Thus, information about people and their behaviour is made visible to other people, systems and companies.

      "Data trails"—active information and passive telemetry—provide a web of details about a person's daily life, and the analysis of that data is a form of knowledge about a person.

    13. panopticon

      The panopticon is a disciplinary concept brought to life in the form of a central observation tower placed within a circle of prison cells.

      From the tower, a guard can see every cell and inmate but the inmates can’t see into the tower. Prisoners will never know whether or not they are being watched. Ethics Explainer: The Panopticon - What is the panopticon effect?

    14. algorithmic embedding and enhancement of biases that reinforce racism, sexism, and structural inequality

      Of note.

    15. carceral

      In the Merriam-Webster dictionary, “carceral” is defined as “of, relating to, or suggesting a jail or prison” (Webster). However, the carceral system has been extended outside of physical prison walls and into minoritized communities in the form of predictive policing. Glossary: Carcerality - Critical Data Studies - Purdue University

    16. Data-driven decision making in education settings is becoming an established practice to optimize institutional functioning and structures (e.g., knowledge management, and strategic planning), to support institutional decision-making (e.g., decision support systems and academic analytics), to meet institutional or programmatic accreditation and quality assurance, to facilitate participatory models of decision-making, and to make curricular and/or instructional improvements

      Kinds of data-driven decision making in higher education.

    17. DIGITAL CULTURE & EDUCATION, 14(1) 2022, ISSN 1836-8301. Surveillance Practices, Risks and Responses in the Post Pandemic University
    1. Over vast distances, the sonic exhaust of our digital lives reverberates: the minute vibrations of hard disks, the rumbling of air chillers, the cranking of diesel generators, the mechanical spinning of fans. Data centers emit acoustic waste, what environmentalists call “noise pollution.”

      This is a byproduct of data centers that I hadn't considered. In the Chicago case, the data center is in an 8-story downtown building adjacent to residential housing. One of the few reasons I can think of to put a data center there is physical proximity to something else—perhaps a Chicago stock exchange, where shaving microseconds of latency means big money?

    2. In some cases, only 6 to 12 percent of energy consumed is devoted to active computational processes.

      The cited article is from 2012, ten years before this article was written. Especially at the "hyperscale" facilities, this number is nowhere near 6-12% today.

      I'm surprised this article never mentioned the Power Usage Effectiveness ratio—a way of measuring how efficient a data center is. Companies try to drive this number down to 1, which would mean that there is no overhead energy use when compared to the IT infrastructure. (For instance, no air conditioning or fans.) Google's and Facebook's data centers have PUEs of somewhere around 1.2.
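      For reference, the ratio is defined as total facility energy over the energy delivered to IT equipment:

      $$\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}}$$

      So a PUE of 1.2 means 0.2 units of cooling, power-distribution, and other overhead for every unit of energy that actually reaches the servers.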

    3. The flotsam and jetsam of our digital queries and transactions, the flurry of electrons flitting about, warm the medium of air. Heat is the waste product of computation, and if left unchecked, it becomes a foil to the workings of digital civilization. Heat must therefore be relentlessly abated to keep the engine of the digital thrumming in a constant state, 24 hours a day, every day.

      "Cloud Computing" has a waste stream, and one of the waste streams is heat exhaust from servers. This is a poetic description of that waste stream.

    1. computers might therefore easily outperform humans at facial recognition and do so in a much less biased way than humans. And at this point, government agencies will be morally obliged to use facial recognition software since it will make fewer mistakes than humans do.

      Banning it now because it isn't as good as humans leaves little room for a time when the technology is better than humans. A time when the algorithm's calculations are less biased than human perception and interpretation. So we need rigorous methodologies for testing and documenting algorithmic machine models as well as psychological studies to know when the boundary of machine-better-than-human is crossed.

    2. In June 2020, in the first known case of its type, a man in Detroit was arrested in front of his family for burglary because he was mistakenly identified by facial recognition software. It may come as no surprise the man was Black
    3. Researchers including MIT's Joy Buolamwini have demonstrated the technology often works better on men than women, better on white people than Black people, and worst of all on Black women.
    1. Our own commercial tools, as well as open-source software tools and many datasets that populate public databases, are available with no oversight.

      What regulatory possibilities are there? Will this become the type of job where psyc evals are required? Does that even matter with open source tools and open datasets?

    2. Discussion of societal impacts of AI has principally focused on aspects such as safety, privacy, discrimination and potential criminal misuse10, but not on national and international security.

      Add one more facet of concern for the misapplication of AI techniques: national security.

    3. Importantly, we had a human in the loop with a firm moral and ethical ‘don’t-go-there’ voice to intervene.

      The human-in-the-loop was a key breakpoint between the model's findings as concepts and the physical instantiation of the model's findings. As the article goes on to say, unwanted outcomes come from both taking the human out of the loop and replacing the human in the loop with someone with a different moral or ethical driver.

    4. the better we can predict toxicity, the better we can steer our generative model to design new molecules in a region of chemical space populated by predominantly lethal molecules.

      In its normal operation, the model would screen out toxic molecules as the desired effect. But the model can be changed to select for that capability.

    5. In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible.

      Although the model was driven "towards compounds such as the nerve agent VX", it found VX, many other known chemical warfare agents, and many new molecules "that looked equally plausible".

      AI is the tool. The parameters by which it is set up make something "good" or "bad".

    6. This generative model normally penalizes predicted toxicity and rewards predicted target activity. We simply proposed to invert this logic by using the same approach to design molecules de novo, but now guiding the model to reward both toxicity and bioactivity instead.

      By changing the parameters of the AI, the output of the AI changed dramatically.
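      In spirit, the change is a sign flip on one term of the objective. A schematic sketch (my rendering of the idea, not the authors' code; the predict_* functions are stand-ins for their trained models):

      ```python
      def predict_activity(molecule) -> float:
          return 0.0  # stand-in for the trained bioactivity model

      def predict_toxicity(molecule) -> float:
          return 0.0  # stand-in for the trained toxicity model

      def score(molecule, w_toxicity: float = -1.0, w_activity: float = 1.0) -> float:
          # Normal objective: reward predicted activity, penalize predicted toxicity.
          return (w_activity * predict_activity(molecule)
                  + w_toxicity * predict_toxicity(molecule))

      def harmful_score(molecule) -> float:
          # The inversion the paper describes: reward toxicity instead.
          return score(molecule, w_toxicity=1.0)
      ```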

    7. de novo
    8. Dual use of artificial-intelligence-powered drug discovery

      Citation: Urbina, F., Lentzos, F., Invernizzi, C. et al. Dual use of artificial-intelligence-powered drug discovery. Nat Mach Intell (2022). https://doi.org/10.1038/s42256-022-00465-9

    1. The growing prevalence of AI systems, as well as their growing impact on every aspect of our daily life create a great need to [ensure] that AI systems are "responsible" and incorporate important social values such as fairness, accountability and privacy.

      An AI is the sum of its programming along with its training data. Its "perspective" on social values such as fairness, accountability, and privacy is a function of the data used to create it.

    2. inherent precision-recall trade-off

      Ah, back to my library science degree classes...
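      For reference, with TP, FP, and FN counting true positives, false positives, and false negatives:

      $$\text{precision} = \frac{TP}{TP + FP} \qquad \text{recall} = \frac{TP}{TP + FN}$$

      Loosening a filter to return more items tends to raise recall at the cost of precision, and tightening it does the reverse; hence the inherent trade-off.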

    1. Newton arranged an experiment in which one person — a “tapper” — was asked to tap out the melody of a popular song, while another person — the “listener” — was asked to identify it. The tappers assumed that their listeners would correctly identify about 50% of their melodies; they were amazed to learn that the listeners only got about one out of 40 songs correct. To the tappers, their melodies sounded perfectly clear and obvious, but the listeners heard no music, no instrumentation in their heads — only the muffled noise of a finger tapping on a table.

      An example of the curse of knowledge effect.

    1. Trust is paramount to the way networks self-organize and interoperate with other networks.

      This is the key sentence. The "internet" is a network of interconnected networks. The independent operator of a network agrees to peering arrangements and interoperability with other networks. The internet—through organizations like ICANN and RIPE—only works because the network operators voluntarily follow the decisions of these organizations. Trust is a key component.

    2. But now the government of Ukraine has called on ICANN to disconnect Russia from the internet by revoking its Top Level domain names

      What is striking about this request and EFF's argument against is how this goes against "common carrier" principles—although this phrase isn't specifically used. In the net neutrality wars, "common carrier" status means that the network pipes are dumb...they neither understand nor promote/demote particular kinds of traffic. Their utility is in passing bits from one location another in the service of broader connectivity. "Common carrier" is a useful phrase for net neutrality in the United States...as a phrase, it may not translate well to other languages.

  5. Feb 2022
    1. Each application will therefore provide users with one or more trust lists, which are lists of certification authorities that issue credentials for that application.

      Trust lists of certification authorities will be provided by the C2PA-compliant application.

      TODO: How will these lists of CAs be formed and distributed?
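      However the lists end up being distributed, the check itself reduces to membership plus signature verification. A hypothetical sketch (types and names are mine, not the C2PA specification's):

      ```python
      from dataclasses import dataclass

      @dataclass
      class Credential:
          issuer: str       # the certification authority that issued it
          signature: bytes

      # A trust list shipped with (or fetched by) the application.
      TRUST_LIST = {"CA:Example News Consortium", "CA:Example Camera Maker"}

      def verify_signature(cred: Credential) -> bool:
          return True  # placeholder; real code validates the certificate chain

      def is_trusted(cred: Credential) -> bool:
          # 1) Issued by a CA on this application's trust list?
          if cred.issuer not in TRUST_LIST:
              return False
          # 2) Does the signature actually verify?
          return verify_signature(cred)
      ```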

    2. This is accomplished through the use of a certification authority (CA). CAs perform real-world due diligence to ensure credentials are only issued to actors who are whom they claim to be.

      TODO: See if this is a top-down certificate authority, as in the HTTPS domain where browsers embed lists of trusted CAs.

    3. Provenance generally refers to the facts about the history of a piece of digital content assets (image, video, audio recording, document).

      Definition 2 from Society of American Archivists: "information regarding the origins, custody, and ownership of an item or collection" (source)

    1. “Socialism with Chinese Characteristics” has rapidly transformed China into one of the most economically unequal societies on earth. It now boasts a Gini Coefficient of, officially, around 0.47, worse than the U.S.’s 0.41. The wealthiest 1% of the population now holds around 31% of the country’s wealth (not far behind the 35% in the U.S.). But most people in China remain relatively poor: some 600 million still subsist on a monthly income of less than 1,000 yuan ($155) a month.

      These are statistics about societal inequities in China that I was not aware of.

    2. From the smug point of view of millions who now inhabit the Chinese internet, Wang’s dark vision of American dissolution was nothing less than prophetic

      Which came first…Wang’s dark vision (perhaps leaking out in Chinese government propaganda) or the smug point-of-view (organically realized in the population)?

    3. he marvels at homeless encampments in the streets of Washington DC, out-of-control drug crime in poor black neighborhoods in New York and San Francisco, and corporations that seemed to have fused themselves to and taken over responsibilities of government. Eventually, he concludes that America faces an “unstoppable undercurrent of crisis” produced by its societal contradictions, including between rich and poor, white and black, democratic and oligarchic power, egalitarianism and class privilege, individual rights and collective responsibilities, cultural traditions and the solvent of liquid modernity.

      Reading this description of 1988 seems so quaint now with what the country is experiencing today. Of the three things mentioned, I think statistics would show that only the petty crime situation is better now than it was in 1988. The other areas—homelessness, corporate fusion with government, economic inequity, collective responsibility, racial equality—have all gotten worse in America.

    4. he believed that the modernization of “Socialism with Chinese characteristics” was effectively leaving China without any real cultural direction at all. “There are no core values in China’s most recent structure,” he warned. This could serve only to dissolve societal and political cohesion.

      TODO: look at the Pew Foundation polling about the decline of Christian church attendance—especially outside of the highly-individualized evangelical movements—for parallelisms of “no core values” in American culture. Have a sense that the widening political divide and the lack of common truths causing U.S. to “dissolve societal and political cohesion” maybe have kinship with what the author is describing about Chinese culture.

    5. Wang perceived a country “in a state of transformation” from “an economy of production to an economy of consumption,” while evolving “from a spiritually oriented culture to a materially oriented culture,” and “from a collectivist culture to an individualistic culture.”

      This is key in understanding what comes later in the article. Near the end, the author is going to point to where Wang apparently inspires political policies that seek to bring a top-down imposition of collectivist culture. To transform the Chinese society into something that it was before.

      Unmentioned in this article is the societal crackdown on and reëducation of the Uyghur people. Is that a roadmap for Wang-inspired policies in the rest of China? Would that happen? Could that happen?

  6. Dec 2021
    1. Since the start of the pandemic, Gloo’s Mr. Beck said, the company has been focusing on how to help churches get more attention on Google search. The company has a program for churches to pool their funds and buy search keywords—something a single church couldn’t afford on its own, he said.

      Going beyond creating community profiles—becoming a co-op to raise common funds for digital advertising.

    2. Clients can integrate their internal databases with Gloo, adding to its data trove. The company offers technology that churches can put on their websites to collect data, and has questionnaires churches can give their congregants.

      The members of the congregation become part of the product through actions of the church. One wonders what the [[data sharing disclosure]]s look like in this case. One also wonders what GDPR regulators would think of this activity.

    3. Gloo said third-party data has always been anonymized to users—it said it doesn’t reveal people’s names or exact locations to them. In response to questions from the Journal, the company said it also began de-identifying data within its own databases last year.

      Important [[privacy]] considerations, including processing of [[pseudoanonymous]] data. No mention in the article about the risk of [[re-identification]] of user—particularly in the context of geolocated data within a radius of a church. ("Gloo offers to provide churches with snapshots of data to better understand their communities and focus their ministries on relevant issues" from earlier in the article.)

    1. Second, knowledge may be contested, where it has been constructed within particular power relations and dominant perspectives.

      This sentence has me thinking about how Google Maps has to have different names for different physical features or have boundaries in different locations depending on the cultural background of the person using the map.

    2. the presentation on which it is based

      See https://www.lorcandempsey.net/presentation-two-metadata-directions/ for the presentation at the Eurasian Academic Libraries Conference - 2021, organized by The Nazarbayev University Library and the Association of University Libraries in the Republic of Kazakhstan.

    3. Metadata is about both value and values

      Oh, excellent formulation here. Embedded in the metadata that is created are the values infused in the people and processes creating it (stretching back to the values of the people writing the software generating the programmatic metadata).

    4. data which relieves a potential user (whether human or machine) of having to have full advance knowledge of the existence or characteristics of a resource of potential interest in the environment.

      The "schematized assertions about a resource of interest" definition is clearly answers a "what" question. This definition answers a "why" question? I'm left unsatisfied by this definition, and I can't quite put my finger on it. It is good to have the end-user's purpose in mind when creating and curating metadata. Maybe it is the open-ended nature of the challenge of creating a description that "relieves a potential user of having to have full advance knowledge".

    5. I have spoken about four sources of metadata in the past.

      Somewhere between "Professional" and "Community" is another source. The "Professional" definition is geared towards librarians and other information professionals. "Community" is "crowdsourced". Professionals other than information professionals have their own metadata schemes, though, that can be just as formal as the ones created by librarians, albeit more specialized towards the needs of a particular community. These are neither "crowdsourced"—which has an ad hoc and/or educated amateur connotation—nor the specialized formats from libraries and archives. Think Darwin Core or PBCore.

    6. the network environment

      I'm hoping I can find a definition for the networked environment. I can't tell if this is a statement about the internet in general (or the subset that is the web), or of something more formal like [[linked data]]. The way this notion is used in the first couple of paragraphs makes me think it is something with a somewhat concrete definition.

    1. Controlled Digital Lending: Unlocking the Library’s Full Potential

      This document was in an HTML frame at https://www.libraryfutures.net/policy-document-2021 — I needed to bust it out of the frame in order to comment on it.

      Although not explicitly stated, this document seems to be an information document for those seeking legislative sanctioning of CDL activity. Only at the fourth paragraph is the phrase "Congress should support their communities" included. The remainder of the document also includes several calls for legislative cover for CDL.

    2. libraries generally lend digitized versions of print materials from their collections, strictly limiting them to a single digital copy per physical copy owned—a one-to-one “owned-to-loaned” ratio. If a library owns two physical copies of The Giving Tree, it only loans out two copies at any time, whether physically or digitally.

      Concise definition of [[controlled digital lending]]. It maintains the same circulation model with similar points of friction as my library users experience now.
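      The invariant is simple enough to state in code. A toy sketch of the owned-to-loaned rule (my illustration, not any actual circulation system's logic):

      ```python
      class Title:
          def __init__(self, owned_copies: int):
              self.owned = owned_copies
              self.physical_out = 0
              self.digital_out = 0

          def can_lend(self) -> bool:
              # CDL invariant: physical + digital loans never exceed copies owned.
              return self.physical_out + self.digital_out < self.owned

      giving_tree = Title(owned_copies=2)
      giving_tree.physical_out = 1
      giving_tree.digital_out = 1
      print(giving_tree.can_lend())  # False: both copies are out, in any mix
      ```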

    1. Develop a process to ensure privacy is a consideration in student analytics and institutional research. Unfortunately, concerns about student data privacy have been minimal since the earliest days of the student success movement

      Yes—this needs to feed back into the discussion in the EDUCAUSE Top 10 Higher Education IT Issues for 2022, specifically in the Needed Technologies and IT Capabilities section of Issue #3: Digital Faculty for a Digital Future.

    2. Comprehensive and sustained privacy awareness campaigns

      Got to page 5 of the document, and I'm now wondering "who". The "what" and "why" are more self-evident, but as of yet this document hasn't described a framework of who should be doing this work.

    3. Efforts to clarify and disseminate the differences between “privacy as advocacy” (e.g., privacy is a fundamental right; privacy is an ethical norm) and “privacy as compliance” (e.g., ensuring privacy policies and laws are followed; privacy programs train, monitor, and measure adherence to rules) help frame conversations and set expectations.

      This is an interesting distinction... privacy-because-it-is-the-right-thing-to-do versus privacy-because-you-must. I think the latter is where most institutions are today. It will take a lot more education to get institutions to the former.

    4. These enhanced capabilities at the institution will no doubt necessitate investments in privacy staffing and infrastructure, resulting in fully staffed and resourced privacy units within the institution.

      Is there an enhanced role for Institutional Review Boards in assessing the data privacy aspects of research? To what extent does a privacy staffing/infrastructure component take in assisting researchers and shepherding data collection/analysis from a more central (either university-wide or department-centered) perspective?

    5. As informed and engaged stakeholders, students understand how and why their institutions use academic and personal data.

      Interesting that there is a focus here on advocacy from an active student body. Is it the expectation that change from some of the more stubborn areas of the campus would be driven by informed student push-back? This section on "Students, Faculty, and Staff" doesn't have the same advocacy role from the other portions of the campus community.

    1. By extracting the center frame of every shot, the user could view all of the frames simultaneously on the contact sheet to review all visual content in the video.

      Interesting choice to pick out the middle frame of each shot.
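      Once shot boundaries are known, pulling the middle frame is mechanical. A sketch with OpenCV, assuming the shot list comes from an upstream shot-detection step (not shown):

      ```python
      import cv2

      def extract_center_frames(video_path: str, shots, out_prefix: str = "shot"):
          """shots: list of (start_frame, end_frame) pairs from shot detection."""
          cap = cv2.VideoCapture(video_path)
          for i, (start, end) in enumerate(shots):
              mid = (start + end) // 2
              cap.set(cv2.CAP_PROP_POS_FRAMES, mid)  # seek to the shot's midpoint
              ok, frame = cap.read()
              if ok:
                  cv2.imwrite(f"{out_prefix}_{i:04d}.jpg", frame)
          cap.release()
      ```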

    2. Leveraging the metadata from AMP, for example, users are already able to conduct searches (with varying levels of results) such as:

      • Take me to every point in a video interview with Herman B Wells where Herman B Wells mentions Eleanor Roosevelt on the subjects of Presidents’ spouses and 20th-century leaders.
      • Show me every video interview with Herman B Wells in the 1970s where the interviewer is Thomas D. Clark, and it was produced at WTIU Bloomington.
      • Take me to every point in a video interview with Herman B Wells where Herman B Wells is on camera and talking about Midwest universities where there is no music present.

      Thinking about these searches—and the kinds of metadata needed to answer them—I wonder how much of this metadata can be transmitted to DPLA. I wouldn't expect these same kinds of searches to be possible in a multi-collection search tool like DPLA, but what would it look like to crosswalk this metadata into something DPLA can consume? (I'm less familiar with the metadata characteristics of Europeana.)

    3. Based on work and results so far, the project team has concluded that the approach taken in AMP is effective and scalable for generation of metadata for certain types of AV collections, particularly those that involve significant amounts of spoken word content. This includes lectures, events, and documentaries, along with oral history interviews and other ethnographic content.

      Pilot project results. Seemingly good for "scholarly" kinds of material—perhaps not so much for consumer content? The spoken word content also makes me wonder if there are similar machine-generation tools for musical performance content. YouTube's ContentID system certainly generates hits for some musical content. Also remembering the origins of Pandora to classify music characteristics.

    4. as of February 2021, Europeana comprises 59% images and 38% text objects, but only 1% sound objects and 2% video objects. DPLA is composed of 25% images and 54% text, with only 0.3% sound objects, and 0.6% video objects. Another reason, beyond cost, that audiovisual recordings are not widely accessible is the lack of sufficiently granular metadata to support identification, discovery, and use, or to support informed rights determination and access control and permissions decisions on the part of collections staff and users.

      Despite concerted efforts, there is a minimal amount of A/V material in Europeana and DPLA. This report details a pilot project to use a variety of machine-generated-metadata mechanisms to augment the human description efforts. Although this paragraph mentions rights determination, it isn't clear from the problem statement whether the machine-generated description includes anything that will help with rights. I would expect that unclear rights—especially for moving image content—would be a significant barrier to the open publication of A/V material.

    1. The goal of data brokers is to allow consumers to decide which information may be shared with advertisers, then share in some of the revenue generated by its use. These services ask users to sign up on the Web or via an application, connect their social media and Web accounts, then ask them to answer specific questions about their interests. Based on the data provided and collected initially and over time, the brokers will place users into segments, and advertisers can purchase access to data from one or more segments for use in personalized advertising. Each time their data is shared, or advertisers purchase access to a segment in which the user's data has been placed, the user can earn points, rewards, or cash. All the data brokers note that they store their user data on the cloud using a variety of encryption and security protocols, and that the end users with whom they work can opt out of having specific data shared if they so choose.

      The thought being: if a private file is going to be created about me, at least I should be able to cash in on that. How can we know if we are getting a good “price” for selling our behavior data and interests? Is there a divide between those that can afford not to be tracked versus those that need to be tracked as a source of income?

    2. Data broker Invisibly (www.invisibly.com) provides a listing of various types of data available for sale on the dark web, ranging from a Social Security number (valued at just $0.53) to a complete healthcare record ($250).

      Social security numbers, often thought of as important personally identifying keys, are relatively inexpensive according to this website.

    3. data on demographics that are in limited supply (such as data on Middle Eastern male consumers) is more valuable than demographic data on white millennial women. Similarly, the browsing data of individuals seeking to purchase a Tesla or Ferrari automobile within the next month would be valued more highly by data brokers and advertisers than the data of someone browsing for the best deals on a used Chrysler minivan.

      Demographic data gathered from behavioral advertising systems is not equally valuable. Value can vary by the attributes of the person and by attributes of what that person was doing.

    1. Catala, a programming language developed by Protzenko's graduate student Denis Merigoux, who is working at the National Institute for Research in Digital Science and Technology (INRIA) in Paris, France. It is not often lawyers and programmers find themselves working together, but Catala was designed to capture and execute legal algorithms and to be understood by lawyers and programmers alike in a language "that lets you follow the very specific legal train of thought," Protzenko says.

      A domain-specific language for encoding legal interpretations.
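      Catala pairs each paragraph of statute text with its formalization. As a rough illustration of the underlying idea only (this is Python, not Catala syntax; the statute and figures are a hypothetical example), a legal rule becomes an executable, testable function:

      ```python
      def standard_deduction(filing_status: str, tax_year: int) -> int:
          # Hypothetical statute: "For tax year 2022, the deduction shall be
          # $12,950 for a single filer and $25,900 for joint filers."
          if tax_year != 2022:
              raise ValueError("this rule only covers tax year 2022")
          return {"single": 12_950, "joint": 25_900}[filing_status]

      assert standard_deduction("single", 2022) == 12_950
      ```

      The appeal of a dedicated DSL over a sketch like this is that lawyers can read the encoded rule side by side with the legal text it implements.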

  7. Nov 2021
    1. Raspberry Pi Trading

      At the moment, this company is wholly owned by the Raspberry Pi Foundation.

      For clarity, tell us the distinction between the foundation, which you run, and the trading company that Eben Upton presides over, and how they work in conjunction with each other. The Raspberry Pi Foundation is a UK-registered charity with an educational mission and Raspberry Pi Trading Limited is a wholly owned subsidiary of the Foundation. That means that the Foundation is the shareholder of the trading company, which is an independent, commercial business. That distinction is really important because there are limits on what charities can do commercially. For example, a charity couldn't sell computers that are used in industry, which is a huge part of the Raspberry Pi computer business now. I lead the foundation and I also serve as a director on the board of the trading company. As you said, Eben Upton leads the trading company. [source]

      So it will be interesting to see how much control the Foundation has if/when the trading company goes public.

    1. The ultimate solution probably requires incentives that provide enough deterrence to eliminate such misconduct proactively rather than treating it reactively.

      There seems to be a lack of consequences when these deeds are done. Reputations are tarnished in the moment, but then forgotten. There is a new NISO work item on handling corrections. If those retractions and corrections are tied to ORCID identifiers, could this data be aggregated into actionable information in review workflows?