1,081 Matching Annotations
  1. Apr 2022
    1. a child had gone missing in our town and the FBI came to town to investigate immediately and had gone to the library. They had a tip and wanted to seize and search the library’s public computers. And the librarians told the FBI that they needed to get a warrant. The town was grief stricken and was enraged that the library would, at a time like that, demand that the FBI get a warrant. Like everyone in town was like, are you kidding me? A child is missing and you’re– and what? This town meeting afterwards, the library budget, of course, is up for discussion as it is every year, and the people were still really angry with the library, but a patron and I think trustee of the library – again, a volunteer, someone living in town – an elderly woman stood up and gave the most passionate defense of the Fourth Amendment and civil liberties to the people on the floor that I have ever witnessed.

      An example of how a library in Vermont stood up to a warrantless request from the FBI to seize and search public library computers. This could have impacted the library's budget when the issue was brought to a town meeting, but a library patron was a passionate advocate for the 4th amendment.

    1. The Internet owes its strength and success to a foundation of critical properties that, when combined, represent the Internet Way of Networking (IWN). This includes: an accessible Infrastructure with a common protocol, a layered architecture of interoperable building blocks, decentralized management and distributed routing, a common global identifier system, and a technology neutral, general-purpose network.

      Definition of the Internet Way of Networking

    1. it’s important to note that this study specifically looked at political speech (the area that people are most concerned about, even though the reality is that this is a tiny fraction of what most content moderation efforts deal with), and it did find that a noticeably larger number of Republicans had their accounts banned than Democrats in their study (with a decently large sample size). However, that did not mean that it showed bias. Indeed, the study is quite clever, in that it corrected for generally agreed upon false information sharers — and the conclusion is that Twitter’s content moderation is biased against agreed-upon misinformation rather than political bias. It’s just that Republicans were shown to be much, much, much more willing to share such misinformation.

      This is arguably a good thing in society, even if social media companies take it on the chin in lost revenue.

    1. The situation would be better for IPv6 under two conditions. First, if IPv6 could offer some popular new services that IPv4 cannot offer—that would provide the former with additional products (and value) that the latter does not have. Second, IPv6 should avoid competition with IPv4, at least until it has been widely deployed. That would be the case if IPv6 was presented, not as a replacement to IPv4, but as “the second network layer protocol” that is required to support the previous new services.

      On IPv6 replacing IPv4

      This could be interesting to watch. In the early days of IPv6 that I was tracking, it seemed like there were many new features built into it that made the protocol better than IPv4. Perhaps those competitive features were abandoned. In a footnote to this article, the authors state:

      The original proposals for IPv6 included several novel services, such as mobility, improved auto-configuration and IP-layer security, but eventually IPv6 became mostly an IPv4-like protocol with many more addresses.

      In order to be adopted, IPv6 had to be IPv4 with more address space (mostly to fulfill the needs of the mobile computing marketplace). But if IPv6 simplified itself down to feature parity with IPv4 so that mobile carriers could easily understand and adopt it, does that mean IPv4 never goes away?

    2. EvoArch suggests an additional reason that IPv4 has been so stable over the last three decades. Recall that a large birth rate at the layer above the waist can cause a lethal drop in the normalized value of the kernel, if the latter is not chosen as substrate by the new nodes. In the current Internet architecture, the waist is the network layer but the next higher layer (transport) is also very narrow and stable. So, the transport layer acts as an evolutionary shield for IPv4 because any new protocols at the transport layer are unlikely to survive the competition with TCP and UDP. On the other hand, a large number of births at the layer above TCP or UDP (application protocols or specific applications) is unlikely to significantly affect the value of those two transport protocols because they already have many products. In summary, the stability of the two transport protocols adds to the stability of IPv4, by eliminating any potential new transport protocols that could select a new network layer protocol instead of IPv4.

      Network Layer protected by Transport Layer

      In the case of IPv4 at the network layer, it is protected by the small number of protocols at the Transport Layer. Even the cannibalization of TCP by QUIC is still happening at the Transport layer: [QUIC] does this by establishing a number of multiplexed connections between two endpoints using User Datagram Protocol (UDP), and is designed to obsolete TCP at the transport layer for many applications, thus earning the protocol the occasional nickname "TCP/2".

    1. To ensure more diversity in the middle layers, EvoArch suggests designing protocols that are largely non-overlapping in terms of services and functionality so that they do not compete with each other. The model suggests that protocols overlapping more than 70 percent of their functions start competing with each other.

      When new protocols compete

      I think one way of reading this would be to say that HTTP replaced FTP because it did at least 70% of what FTP did. And in order to compete/replace HTTP, something is going to need to do at least 70% of it—and presumably in some better fashion before it too will be replaced.

      It would be interesting to think of this in an HTTP/1.1, HTTP/2.0, HTTP-over-QUIC framing. Will HTTP/1.1 eventually be replaced?
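
      One way to make that 70 percent threshold concrete is to score overlap as the fraction of one protocol's functions that a competitor also provides. The function sets below are invented for illustration, not taken from the EvoArch paper:

```python
# Hedged sketch: "overlap" measured as the share of protocol A's
# functions that protocol B also offers. Function lists are examples.
def overlap(funcs_a: set, funcs_b: set) -> float:
    """Fraction of A's functions also provided by B."""
    return len(funcs_a & funcs_b) / len(funcs_a)

ftp = {"file transfer", "directory listing", "authentication", "resume"}
http = {"file transfer", "directory listing", "authentication",
        "caching", "content negotiation"}

print(overlap(ftp, http))  # 0.75 -> above 0.7, so the model predicts competition
```

      By this reading, HTTP crossed the competition threshold with FTP, and any would-be HTTP replacement has to do the same to HTTP.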

    2. The EvoArch model predicts the emergence of few powerful and old protocols in the middle layers, referred to as evolutionary kernels. The evolutionary kernels of the Internet architecture include IPv4 in the network layer, and TCP and the User Datagram Protocol (UDP) in the transport layer. These protocols provide a stable framework through which an always-expanding set of physical and data-link layer protocols, as well as new applications and services at the higher layers, can interoperate and grow. At the same time, however, those three kernel protocols have been difficult to replace, or even modify significantly.

      Defining the "EvoArch" (Evolutionary Architecture) hour-glass model

      The hour-glass model is the way it is because these middle core protocols provide a stable foundation for experimentation and advancement in upper- and lower-level protocols. That also makes these middle protocols harder to change, as we have seen with the slow adoption of IPv6.

    1. All the evidence indicates that at the edge of the Internet lies an endless frontier of new potential applications and that new transmission technologies are eagerly absorbed as we have seen with the arrival of smartphones, 4G and 5G. The Internet continues to evolve as new ideas for its use and implementation bubble to the surface in the minds of inventors everywhere.

      Will the future of the internet always be open?

      This paragraph has an embedded assumption that open standards of encapsulated protocols will continue to be the norm on the internet. Is there so much momentum in that direction that we can assume this to be true? What would it look like if this started to change?

    2. A higher layer protocol is encapsulated as payload in lower layers which provides a well-defined boundary between layers.  This boundary isolates a higher layer from lower layer implementation.

      Excellent summary of encapsulated protocol layers. From someone who was there...Vinton Cerf.
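
      A toy sketch of that boundary: each layer treats everything handed down from above as an opaque payload and prepends its own header. The layer names are stand-ins, not real wire formats.

```python
# Toy model of protocol encapsulation: lower layers never parse the
# payload from the layer above; they only add their own header.
def encapsulate(payload: bytes, header: bytes) -> bytes:
    return header + payload

app_data = b"GET / HTTP/1.1\r\n"            # application layer
segment = encapsulate(app_data, b"[TCP]")   # transport layer
packet = encapsulate(segment, b"[IP]")      # network layer
frame = encapsulate(packet, b"[ETH]")       # link layer

print(frame)  # b'[ETH][IP][TCP]GET / HTTP/1.1\r\n'
```

      Because each layer only sees bytes, a layer's implementation can change without disturbing the layers above or below it.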

    1. Save around $11.30 for every 100 miles driven in an EV instead of a gasoline fueled vehicle.

      A later tweet provides the math. 4 gallons for 100 miles = $16.80. 34.6kWh for 100 miles = $5.50.
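
      That math checks out in a few lines (the per-gallon and per-kWh prices here are back-calculated from the tweet's totals, not quoted figures):

```python
# Savings per 100 miles, using the figures implied by the tweet:
# 4 gal/100 mi at an implied $4.20/gal, 34.6 kWh/100 mi totaling $5.50.
gas_cost = 4.0 * 4.20           # -> $16.80 per 100 miles
ev_cost = 34.6 * (5.50 / 34.6)  # -> $5.50 per 100 miles (~ $0.159/kWh)
savings = gas_cost - ev_cost
print(f"${savings:.2f} saved per 100 miles")  # $11.30 saved per 100 miles
```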

    1. In the information age, filtering systems, driven by algorithms and artificial intelligence (AI), have become increasingly prominent, to such an extent that most of the information you encounter on the internet is now rearranged, ranked and filtered in some way. The trend towards a more customised information landscape is a result of multiple factors. But advances in technology and the fact that the body of information available online grows exponentially are important contributors.

      And, in fact, the filtering systems are driven by signals of the searcher, not signals of the content. Past behavior (and user profiling), current location (IP address recognition), device type (signal of user intent and/or social-economic status), and other user-specific attributes are being used to attempt to offer users the information that the provider thinks the user is looking for.

    1. As a woman in America in 2022, I will also observe the sexist hostility implicit in this viewpoint is unsurprising as it is insidious. Library work is one of a  small number of professions that have not been (completely) dominated by white men. Libraries are easy targets for this style of prescriptive opinion piece, and I challenge the desire by powerful men to tell others how to do their jobs because it reeks of a desire to dominate which is wholly inappropriate to the collective challenges we face.

      One white male librarian technologist viewpoint.

      I was nodding in agreement with Lindsay's writing until this point. And while I acknowledge the seen-as-feminine-profession problem and the issue of white-man-blinders, I think the argument in this article is more powerful without this paragraph. Mr. Kurtz's op-ed is about librarians on a political spectrum, not librarians on a gender spectrum. Adding this paragraph conflates "woke librarian" with "female librarian".

    1. Hold on...this is like search-engine-optimization for speech? Figure out what the algorithm wants—or doesn't—and adjust what you say to match the effect you seek. Does this strike anyone as a really, really bad idea? https://t.co/nEZlN0bANr

      — Peter Murray (@DataG) April 12, 2022
    2. Algospeak refers to code words or turns of phrase users have adopted in an effort to create a brand-safe lexicon that will avoid getting their posts removed or down-ranked by content moderation systems. For instance, in many online videos, it’s common to say “unalive” rather than “dead,” “SA” instead of “sexual assault,” or “spicy eggplant” instead of “vibrator.”

      Definition of "Algospeak"

      In order to get around algorithms that demote content in social media feeds, communities have coined new words or new meanings to existing words to communicate their sentiment.

      This is affecting TikTok in particular because its algorithm is more heavy-handed in what users see. This is also causing people who want to be seen to tailor their content—their speech—to meet the algorithm's needs. It is like search engine optimization for speech.

      Article discovered via Cory Doctorow at The "algospeak" dialect

    1. Much of the time, the blurred automation/enforcement distinction doesn’t matter. If you and I trust one another, and you send me a disappearing message in the mistaken belief that the thing preventing me from leaking it is the disappearing message bit and not my trustworthiness, that’s okay. The data still doesn’t leak, so we’re good. But eventually, the distinction turns into a fracture line.

      Automation versus enforcement

      As a message sender, I'm trusting the automation to delete the message in the same manner as a pair-wise agreement to manually delete a conversation. But, as the essay started with, there isn't an active enforcement of that deletion that survives the fact that the recipient has full control over their own computer (and messaging app).

      When that automation is all in one closed platform, it is somewhat straightforward to assume that the automation will occur as anticipated. Once a platform is opened up and the automation rules are encoded into APIs, enforcement becomes much harder. The recipient can receive a message containing the automation parameters for deletion, but choose whether or not to honor that in a way that the sender doesn't understand or know.
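
      A minimal sketch of that fracture line (all names here are invented for illustration): the retention rule travels with the message as a parameter, but each recipient's client decides whether to honor it.

```python
# Hypothetical sketch: deletion is an automation *parameter* carried
# with the message, not an enforced property of the system.
from dataclasses import dataclass

@dataclass
class Message:
    body: str
    sent_at: float       # seconds since epoch
    delete_after: float  # requested retention window, in seconds

class HonestClient:
    """Honors the sender's retention request."""
    def visible(self, m: Message, now: float) -> bool:
        return now - m.sent_at < m.delete_after

class DefectingClient:
    """Controls its own computer; silently ignores the request."""
    def visible(self, m: Message, now: float) -> bool:
        return True

m = Message("meet at noon", sent_at=0.0, delete_after=60.0)
print(HonestClient().visible(m, now=90.0))     # False -- message expired
print(DefectingClient().visible(m, now=90.0))  # True -- and the sender never knows
```

      Nothing in the protocol distinguishes the two clients from the sender's side, which is exactly the enforcement gap the essay describes.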

    2. But beyond this danger is a subtler — and more profound — one. We should not normalize the idea that our computers are there to control us, rather than to empower us.

      The general case against uncontrollable automation

      As Doctorow says a paragraph earlier, the danger lies in the implementation of the automation; a computer that can be told not to take action can also be coerced by another party to take an action we didn't intend.

      At a fundamental level, is the computer a tool that empowers us or controls us? Does the computer implement commands from us, or are we at the mercy of commands from other users? This is the key question of digital rights.

    3. Disappearing message apps take something humans are bad at (remembering to do a specific task at a specific time) and hand that job to a computer, which is really good at that.

      Disappearing message apps automate the agreement

      The people in the message thread turn the responsibility for deleting the thread over to a machine. One person doesn't need to rely on the memory of another person to ensure the contents are deleted. People might forget; the machine just runs its rules.

    4. I thought that the point of disappearing messages was to eat your cake and have it too, by allowing you to send a message to your adversary and then somehow deprive them of its contents. This is obviously a stupid idea. But the threat that Snapchat — and its disappearing message successors — was really addressing wasn’t communication between untrusted parties, it was automating data-retention agreements between trusted parties.

      Why use a disappearing message service

      The point of a disappearing message service is to have the parties to the message agree on the data-retention provisions of a message. The service automates that agreement by deleting the message at the specified time. The point isn't to send a message to an adversary and then delete it so they can't prove that it has been sent. There are too many ways of capturing the contents of a message—as simple as taking a picture of the message with another device.

    1. On the flip side, getting more than 700 sign-ups for two weeks in a row made operators eligible for an additional Orb. “Just tell people it’s free money,” one operator said a Worldcoin representative advised them.

      Worldcoin pyramid scheme

      This is sounding like a pyramid scheme with the goal of getting all of the world's population involved. Is there a relationship to what commentators are calling the Bitcoin pyramid scheme?

    2. Biometrics play an important role in colonial history: British administrators began experimenting with them in the 1850s as a way to control and intimidate their subjects in colonial India. Worldcoin’s activities in India, as well as other former British colonies such as Zimbabwe, where banks are banned from processing crypto transactions, and Kenya, where a new law forbids the transfer of biometrics data beyond the country’s borders, evoke Silicon Valley’s history of ignoring sensitive cultural issues and skirting regulations.

      Colonial history of biometrics

      Article text links to The Origin of Finger-Printing . Nature 98, 268 (1916). https://doi.org/10.1038/098268a0.

    1. In Hypothes.is who are you annotating with?

      So far, "Public". I think it is cool that Hypothes.is supports groups, and I get how classrooms or research teams would be good groups. For me, though, not enough of my peers are using Hypothes.is to have it make sense to form a group.

      That said, I have gotten into a couple interesting conversations with public Hypothes.is annotations. I followed a couple more people on Lindy Annotations because of it.

    2. Do you annotate differently in public view, self censoring or self editing?

      So far, no. It might be useful to add a disclaimer footer to the bottom of any Hypothes.is annotation to say that the contents of the annotation might only make sense to me, but so far I haven't found the need to change what is included in an annotation.

    1. Weinberg’s tweet announcing the change generated thousands of comments, many of them from conservative-leaning users who were furious that the company they turned to in order to get away from perceived Big Tech censorship was now the one doing the censoring. It didn’t help that the content DuckDuckGo was demoting and calling disinformation was Russian state media, whose side some in the right-wing contingent of DuckDuckGo’s users were firmly on.

      There is an odd sort of self-selected information bubble here. DuckDuckGo promoted itself as privacy-aware, not unfiltered. On their Sources page, they talk about where they get content and how they don't sacrifice privacy to gather search results. Demoting disinformation sources in their algorithms would seem to be a good thing. Except if what you expect to see is disinformation, and then suddenly the search results don't match your expectations.

    1. even if it is necessary to adjust the policy over time as new risks and considerations emerge.

      Wondering now if there is a sort of "agile" editorial approach to policy-making. Policy seems like something that is concrete and shouldn't change very often. The development of a policy could happen in focused sprint cycles (perhaps alongside the technology implementation), but certainly the publication of policies is something that should be more intentional.

      Also, this is a test annotation.

    2. Organizations must consider these threats before introducing new technologies, rather than the other way around

      Later in the article, the author says: "It is always better to start with a policy than to make one up as one goes along, even if it is necessary to adjust the policy over time as new risks and considerations emerge."

      Also, this is a test annotation.

  2. Mar 2022
    1. Students’ perspectives on their data may shift, however, when they are given opportunities to learn about the risks (Bowler et al., 2017), and there is a strong argument that such activities are a requirement for ethical practice in the use of data (Braunack-Mayer et al., 2020)

      On the value of teaching students about the risks of overly verbose and unnecessary data trails.

    2. graduate attribute statements

      Many universities, in recent years, have published formal statements of what they believe graduates of their programmes should be capable, in terms of skills and abilities beyond specific subject knowledge. Or, perhaps more correctly, what graduates could potentially be capable of, if successful in their studies and taking all the opportunities available to them whilst they complete their degree programme (including, typically, engaging fully in the wider student experience with clubs, societies, volunteering, placements, etc.). Focus on Graduate Attribute Statements | Crannóg Project: collaborative knowledge exchange

    3. This in turn means that data ownership, privacy, ethics and transparency are becoming issues that are dealt with by corporate players, based in the global North, rather than negotiated through local policies and their application.

      Ah, of course! The assumptions on which these SaaS offerings are made are primarily in the well developed nations, and are likely inappropriate for other countries.

    4. educators and administrators need to be wary about potential discriminations and asymmetries resulting from continually categorising and normalising people as they work and study.

      Notable source of systemic inequalities that the adoption of data-driven decision-making is bringing into being.

    5. Surveillance technologies, especially those backed by significant amounts of venture capital, are often underpinned by the same precarious labour and outsourcing practices that are critiqued from within the academy

      Ah, vulture capitalism.

    6. As teachers’ roles become less coherent and satisfying, they also become more stratified, with staff who perform the lower-valued (typically more caring, student-oriented and “feminised”) aspects of the role being increasingly casualised, monitored, and subjected to “efficiency” measures.

      Ah! See previous annotation on the problems that qualitatively productive activities pose for analytics programs for instructor evaluation.

    7. Practices of monitoring and tracking students’ online behaviour also entrench the belief that meaningful learning activity is that which can be measured minutely and monitored closely, ignoring activities such as thinking, reading, imagining, creating, challenging, and unstructured discussion

      Employing data collection and analytics on student activity has the effect of valuing only that which can be quantitatively measured. Qualitative scholarly activities—arguably, especially activities that are seen as "idle" or "nonsense"—cannot be measured and so provide no valuable input into the models that predict student success or instructor performance.

    8. However, perversely, the more these tools are employed, the more adversarial teaching relationships with students become, fueling both the risk of cheating and the arguments against a trust model of higher education

      While plagiarism detection systems and remote test proctoring systems were put in place on the assumption that all students are inclined to cheat, the net effect of the introduction of these tools is to erode the trust between students and instructors that would have been a natural barrier to such activity.

    9. trust that research processes generate valid, useful knowledge and evidence that can inform practice and decision-making both within the HE context and in society more broadly.

      There is also an intersection here with the long-standing problems with for-profit corporate interests in the scholarly communication chain that are probably not addressed in this article.

    10. the normalization of vendor-university relationships (which tend to privilege vendor profit-making)

      A mismatch between the values/goals of the university and the values/goals of the for-profit corporation.

    11. The unilateral claiming of private human experience as free raw material for translation into behavioral data constitutes, for Zuboff, a new economic order

      Holy crap!

    12. Thus, information about people and their behaviour is made visible to other people, systems and companies.

      "Data trails"—active information and passive telemetry—provide a web of details about a person's daily life, and the analysis of that data is a form of knowledge about a person.

    13. panopticon

      The panopticon is a disciplinary concept brought to life in the form of a central observation tower placed within a circle of prison cells.

      From the tower, a guard can see every cell and inmate but the inmates can’t see into the tower. Prisoners will never know whether or not they are being watched. Ethics Explainer: The Panopticon - What is the panopticon effect?

    14. algorithmic embedding and enhancement of biases that reinforceracism, sexism, and structural inequality

      Of note.

    15. carceral

      In the Merriam-Webster dictionary, “carceral” is defined as “of, relating to, or suggesting a jail or prison” (Webster). However, the carceral system has been extended outside of physical prison walls and into minoritized communities in the form of predictive policing. Glossary: Carcerality - Critical Data Studies - Purdue University

    16. Data-driven decision making in education settings is becoming an established practice to optimize institutional functioning and structures (e.g., knowledge management, and strategic planning), to support institutional decision-making (e.g., decision support systems and academic analytics), to meet institutional or programmatic accreditation and quality assurance, to facilitate participatory models of decision-making, and to make curricular and/or instructional improvements

      Kinds of data-driven decision making in higher education.

    17. DIGITAL CULTURE & EDUCATION, 14(1) 2022, ISSN 1836-8301. Surveillance Practices, Risks and Responses in the Post Pandemic University
    1. Over vast distances, the sonic exhaust of our digital lives reverberates: the minute vibrations of hard disks, the rumbling of air chillers, the cranking of diesel generators, the mechanical spinning of fans. Data centers emit acoustic waste, what environmentalists call “noise pollution.”

      This is a byproduct of data centers that I hadn't considered. In the Chicago case, the data center is in an 8-story downtown building adjacent to residential housing. One of the few reasons I can think of to put a data center there is physical proximity to something else—perhaps a Chicago stock exchange, where shaving microseconds of latency means big money?

    2. In some cases, only 6 to 12 percent of energy consumed is devoted to active computational processes.

      The cited article is from 2012, ten years before this annotation was written. Especially at the "hyperscale" facilities, this number is nowhere near 6-12%.

      I'm surprised this article never mentioned the Power Usage Effectiveness ratio—a way of measuring how efficient a data center is. Companies try to drive this number down to 1, which would mean that there is no overhead energy use when compared to the IT infrastructure. (For instance, no air conditioning or fans.) Google's and Facebook's data centers have PUEs of somewhere around 1.2.
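
      For reference, the ratio is simply total facility energy over the energy delivered to IT equipment. The numbers below are illustrative, not measured data:

```python
# Power Usage Effectiveness: 1.0 would mean zero overhead --
# every watt entering the facility reaches the IT equipment.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# Illustrative hyperscale-style figures: 200 kWh of cooling,
# lighting, and power-distribution overhead per 1000 kWh of IT load.
print(pue(1200.0, 1000.0))  # 1.2 -- the rough figure cited for Google/Facebook
```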

    3. The flotsam and jetsam of our digital queries and transactions, the flurry of electrons flitting about, warm the medium of air. Heat is the waste product of computation, and if left unchecked, it becomes a foil to the workings of digital civilization. Heat must therefore be relentlessly abated to keep the engine of the digital thrumming in a constant state, 24 hours a day, every day.

      "Cloud Computing" has a waste stream, and one of the waste streams is heat exhaust from servers. This is a poetic description of that waste stream.

    1. computers might therefore easily outperform humans at facial recognition and do so in a much less biased way than humans. And at this point, government agencies will be morally obliged to use facial recognition software since it will make fewer mistakes than humans do.

      Banning it now because it isn't as good as humans leaves little room for a time when the technology is better than humans. A time when the algorithm's calculations are less biased than human perception and interpretation. So we need rigorous methodologies for testing and documenting algorithmic machine models as well as psychological studies to know when the boundary of machine-better-than-human is crossed.

    2. In June 2020, in the first known case of its type, a man in Detroit was arrested in front of his family for burglary because he was mistakenly identified by facial recognition software. It may come as no surprise the man was Black
    3. Researchers including MIT's Joy Buolamwini have demonstrated the technology often works better on men than women, better on white people than Black people, and worst of all on Black women.
    1. Our own commercial tools, as well as open-source software tools and many datasets that populate public databases, are available with no oversight.

      What regulatory possibilities are there? Will this become the type of job where psych evals are required? Does that even matter with open source tools and open datasets?

    2. Discussion of societal impacts of AI has principally focused on aspects such as safety, privacy, discrimination and potential criminal misuse10, but not on national and international security.

      Add one more facet of concern for the misapplication of AI techniques: national security.

    3. Importantly, we had a human in the loop with a firm moral and ethical ‘don’t-go-there’ voice to intervene.

      The human-in-the-loop was a key breakpoint between the model's findings as concepts and the physical instantiation of the model's findings. As the article goes on to say, unwanted outcomes come from both taking the human out of the loop and replacing the human in the loop with someone with a different moral or ethical driver.

    4. the better we can predict toxicity, the better we can steer our generative model to design new molecules in a region of chemical space populated by predominantly lethal molecules.

      In its normal operation, the model would screen out toxic molecules as the desired effect. But the model can be changed to select for that capability.

    5. In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible.

      Although the model was driven "towards compounds such as the nerve agent VX", it found VX, many other known chemical warfare agents, and many new molecules "that looked equally plausible."

      AI is the tool. The parameters by which it is set up makes something "good" or "bad".

    6. This generative model normally penalizes predicted toxicity and rewards predicted target activity. We simply proposed to invert this logic by using the same approach to design molecules de novo, but now guiding the model to reward both toxicity and bioactivity instead.

      By changing the parameters of the AI, the output of the AI changed dramatically.

    7. de novo
    8. Dual use of artificial-intelligence-powered drug discovery

      Citation: Urbina, F., Lentzos, F., Invernizzi, C. et al. Dual use of artificial-intelligence-powered drug discovery. Nat Mach Intell (2022). https://doi.org/10.1038/s42256-022-00465-9

    1. The growing prevalence of AI systems, as well as their growing impact on every aspect of our daily life create a great need to ensure that AI systems are "responsible" and incorporate important social values such as fairness, accountability and privacy.

      An AI is the sum of its programming along with its training data. Its "perspective" on social values such as fairness, accountability, and privacy is a function of the data used to create it.

    2. inherent precision-recall trade-off

      Ah, back to my library science degree classes...
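
      The trade-off in two functions, with toy numbers just to show the directions the two metrics move:

```python
# Precision: of what was retrieved, how much was relevant.
# Recall: of what was relevant, how much was retrieved.
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

# A strict filter: few false positives, but it misses a lot.
print(precision(tp=40, fp=10), recall(tp=40, fn=60))  # 0.8 0.4
# A loose filter: catches nearly everything, lets junk through.
print(precision(tp=90, fp=60), recall(tp=90, fn=10))  # 0.6 0.9
```

      Tuning a retrieval (or moderation) system generally trades one for the other, which is why the paper calls the trade-off inherent.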

    1. Newton arranged an experiment in which one person — a “tapper” — was asked to tap out the melody of a popular song, while another person — the “listener” — was asked to identify it. The tappers assumed that their listeners would correctly identify about 50% of their melodies; they were amazed to learn that the listeners only got about one out of 40 songs correct. To the tappers, their melodies sounded perfectly clear and obvious, but the listeners heard no music, no instrumentation in their heads — only the muffled noise of a finger tapping on a table.

      An example of the curse of knowledge effect.

    1. Trust is paramount to the way networks self-organize and interoperate with other networks.

      This is the key sentence. The "internet" is a network of interconnected networks. The independent operator of a network agrees to peering arrangements and interoperability with other networks. The internet—through organizations like ICANN and RIPE—only works because the network operators voluntarily follow the decisions of these organizations. Trust is a key component.

    2. But now the government of Ukraine has called on ICANN to disconnect Russia from the internet by revoking its Top Level domain names

      What is striking about this request and EFF's argument against it is how it goes against "common carrier" principles—although this phrase isn't specifically used. In the net neutrality wars, "common carrier" status means that the network pipes are dumb...they neither understand nor promote/demote particular kinds of traffic. Their utility is in passing bits from one location to another in the service of broader connectivity. "Common carrier" is a useful phrase for net neutrality in the United States...as a phrase, it may not translate well to other languages.

  3. Feb 2022
    1. Each application will therefore provide users with one or more trust lists, which are lists of certification authorities that issue credentials for that application.

      Certificate authorities will be provided by the C2PA-compliant software.

      TODO: Find out how these lists of CAs will be formed and distributed.

    2. This is accomplished through the use of a certification authority (CA). CAs perform real-world due diligence to ensure credentials are only issued to actors who are whom they claim to be.

      TODO: See if this is a top-down certificate authority, as in the HTTPS domain where browsers embed lists of trusted CAs.

    3. Provenance generally refers to the facts about the history of a piece of digital content assets (image, video, audio recording, document).

      Definition 2 from Society of American Archivists: "information regarding the origins, custody, and ownership of an item or collection" (source)

    1. “Socialism with Chinese Characteristics” has rapidly transformed China into one of the most economically unequal societies on earth. It now boasts a Gini Coefficient of, officially, around 0.47, worse than the U.S.’s 0.41. The wealthiest 1% of the population now holds around 31% of the country’s wealth (not far behind the 35% in the U.S.). But most people in China remain relatively poor: some 600 million still subsist on a monthly income of less than 1,000 yuan ($155) a month.

      This is statistics about societal inequities in China that I was not aware of.
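
The Gini coefficients quoted above (0.47 for China, 0.41 for the U.S.) can be computed from an income distribution; a common closed form over a sorted sample is sketched below (the four-person incomes are invented for illustration).

```python
def gini(incomes):
    """Gini coefficient: 0 = perfect equality, approaching 1 = maximal inequality.

    Uses the sorted-sample formula:
        G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n,  i = 1..n ascending.
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # 0.0 — everyone earns the same
print(gini([1, 1, 1, 97]))     # 0.72 — one person holds nearly everything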

    2. From the smug point of view of millions who now inhabit the Chinese internet, Wang’s dark vision of American dissolution was nothing less than prophetic

      Which came first…Wang’s dark vision (perhaps leaking out in Chinese government propaganda) or the smug point-of-view (organically realized in the population)?

    3. he marvels at homeless encampments in the streets of Washington DC, out-of-control drug crime in poor black neighborhoods in New York and San Francisco, and corporations that seemed to have fused themselves to and taken over responsibilities of government. Eventually, he concludes that America faces an “unstoppable undercurrent of crisis” produced by its societal contradictions, including between rich and poor, white and black, democratic and oligarchic power, egalitarianism and class privilege, individual rights and collective responsibilities, cultural traditions and the solvent of liquid modernity.

      Reading this description of 1988 seems so quaint now with what the country is experiencing today. Of the three things mentioned, I think statistics would show that only the petty crime situation is better now than it was in 1988. The other areas—homelessness, corporate fusion with government, economic inequity, collective responsibility, racial equality—have all gotten worse in America.

    4. he believed that the modernization of “Socialism with Chinese characteristics” was effectively leaving China without any real cultural direction at all. “There are no core values in China’s most recent structure,” he warned. This could serve only to dissolve societal and political cohesion.

      TODO: look at the Pew Foundation polling about the decline of Christian church attendance—especially outside of the highly-individualized evangelical movements—for parallelisms of “no core values” in American culture. Have a sense that the widening political divide and the lack of common truths causing U.S. to “dissolve societal and political cohesion” maybe have kinship with what the author is describing about Chinese culture.

    5. Wang perceived a country “in a state of transformation” from “an economy of production to an economy of consumption,” while evolving “from a spiritually oriented culture to a materially oriented culture,” and “from a collectivist culture to an individualistic culture.”

      This is key in understanding what comes later in the article. Near the end, the author is going to point to where Wang apparently inspires political policies that seek to bring a top-down imposition of collectivist culture. To transform the Chinese society into something that it was before.

      Unmentioned in this article is the societal crackdown and reëducation of the Uyghur people. Is that a roadmap for Wang-inspired policies in the rest of China? Would that happen? Could that happen?

  4. Dec 2021
    1. Since the start of the pandemic, Gloo’s Mr. Beck said, the company has been focusing on how to help churches get more attention on Google search. The company has a program for churches to pool their funds and buy search keywords—something a single church couldn’t afford on its own, he said.

      Going beyond creating community profiles—becoming a co-op to raise common funds for digital advertising.

    2. Clients can integrate their internal databases with Gloo, adding to its data trove. The company offers technology that churches can put on their websites to collect data, and has questionnaires churches can give their congregants.

      The members of the congregation become part of the product through actions of the church. One wonders what the [[data sharing disclosure]]s look like in this case. One also wonders what GDPR regulators would think of this activity.

    3. Gloo said third-party data has always been anonymized to users—it said it doesn’t reveal people’s names or exact locations to them. In response to questions from the Journal, the company said it also began de-identifying data within its own databases last year.

      Important [[privacy]] considerations, including processing of [[pseudoanonymous]] data. No mention in the article about the risk of [[re-identification]] of user—particularly in the context of geolocated data within a radius of a church. ("Gloo offers to provide churches with snapshots of data to better understand their communities and focus their ministries on relevant issues" from earlier in the article.)

    1. Second, knowledge may be contested, where it has been constructed within particular power relations and dominant perspectives.

      This sentence has me thinking about how Google Maps has to have different names for different physical features or have boundaries in different locations depending on the cultural background of the person using the map.

    2. the presentation on which it is based

      See https://www.lorcandempsey.net/presentation-two-metadata-directions/ for the presentation at the Eurasian Academic Libraries Conference - 2021, organized by The Nazarbayev University Library and the Association of University Libraries in the Republic of Kazakhstan.

    3. Metadata is about both value and values

      Oh, excellent formulation here. Embedded in the metadata that is created are the values infused in the people and processes creating it (stretching back to the values of the people writing the software generating the programmatic metadata).

    4. data which relieves a potential user (whether human or machine) of having to have full advance knowledge of the existence or characteristics of a resource of potential interest in the environment.

      The "schematized assertions about a resource of interest" definition clearly answers a "what" question; this definition answers a "why" question. I'm left unsatisfied by this definition, and I can't quite put my finger on it. It is good to have the end-user's purpose in mind when creating and curating metadata. Maybe it is the open-ended nature of the challenge of creating a description that "relieves a potential user of having to have full advance knowledge".

    5. I have spoken about four sources of metadata in the past.

      Somewhere between "Professional" and "Community" is another source. The "Professional" definition is geared towards librarians and other information professionals. "Community" is "crowdsourced". Professionals other than information professionals have their own metadata schemes, though, that can be just as formal as the ones created by librarians, albeit more specialized towards the needs of a particular community. These are neither "crowdsourced"—which has an ad hoc and/or educated amateur connotation—nor the specialized formats from libraries and archives. Think Darwin Core or PBCore.

    6. the network environment

      I'm hoping I can find a definition for the networked environment. I can't tell if this is a statement about the internet in general (or the subset that is the web), or of something more formal like [[linked data]]. The way this notion is used in the first couple of paragraphs makes me think it is something with a somewhat concrete definition.

    1. Controlled Digital Lending: Unlocking the Library’s Full Potential

      This document was in an HTML frame at https://www.libraryfutures.net/policy-document-2021 — I needed to bust it out of the frame in order to comment on it.

      Although not explicitly stated, this document seems to be an informational document for those seeking legislative sanctioning of CDL activity. Only at the fourth paragraph is the phrase "Congress should support their communities" included. The remainder of the document also includes several calls for legislative cover for CDL.

    2. libraries generally lend digitized versions of print materials from their collections, strictly limiting them to a single digital copy per physical copy owned—a one-to-one “owned-to-loaned” ratio. If a library owns two physical copies of The Giving Tree, it only loans out two copies at any time, whether physically or digitally.

      Concise definition of [[controlled digital lending]]. It maintains the same circulation model with similar points of friction as my library users experience now.
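
The owned-to-loaned rule is simple enough to sketch in code (a hypothetical model for illustration, not any actual library system's API): across formats, a title's circulating copies may never exceed the physical copies owned.

```python
class Title:
    """Minimal sketch of controlled digital lending's one-to-one owned-to-loaned ratio."""

    def __init__(self, owned_copies):
        self.owned = owned_copies
        self.physical_out = 0
        self.digital_out = 0

    def checkout(self, kind):
        # Total circulating copies, in ANY format, may never exceed copies owned.
        if self.physical_out + self.digital_out >= self.owned:
            return False  # the patron joins a hold queue instead
        if kind == "physical":
            self.physical_out += 1
        else:
            self.digital_out += 1
        return True

giving_tree = Title(owned_copies=2)
assert giving_tree.checkout("digital")           # 1 of 2 out
assert giving_tree.checkout("physical")          # 2 of 2 out
assert not giving_tree.checkout("digital")       # third loan blocked
```

The invariant is exactly the friction point noted above: the digital loan inherits the scarcity of the physical copy, holds queue and all.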

    1. Develop a process to ensure privacy is a consideration in student analytics and institutional research. Unfortunately, concerns about student data privacy have been minimal since the earliest days of the student success movement

      Yes—this needs to feed back into the discussion in the EDUCAUSE Top 10 Higher Education IT Issues for 2022, specifically in the Needed Technologies and IT Capabilities section of Issue #3: Digital Faculty for a Digital Future.

    2. Comprehensive and sustained privacy awareness campaigns

      Got to page 5 of the document, and I'm now wondering "who". The "what" and "why" are more self-evident, but as of yet this document hasn't described a framework of who should be doing this work.

    3. Efforts to clarify and disseminate the differences between “privacy as advocacy” (e.g., privacy is a fundamental right; privacy is an ethical norm) and “privacy as compliance” (e.g., ensuring privacy policies and laws are followed; privacy programs train, monitor, and measure adherence to rules) help frame conversations and set expectations.

      This is an interesting distinction... privacy-because-it-is-the-right-thing-to-do versus privacy-because-you-must. I think the latter is where most institutions are today. It will take a lot more education to get institutions to the former.

    4. These enhanced capabilities at the institution will no doubt necessitate investments in privacy staffing and infrastructure, resulting in fully staffed and resourced privacy units within the institution.

      Is there an enhanced role for Institutional Review Boards in assessing the data privacy aspects of research? To what extent does a privacy staffing/infrastructure component take in assisting researchers and shepherding data collection/analysis from a more central (either university-wide or department-centered) perspective?

    5. As informed and engaged stakeholders, students understand how and why their institutions use academic and personal data.

      Interesting that there is a focus here on advocacy from an active student body. Is it the expectation that change from some of the more stubborn areas of the campus would be driven by informed student push-back? This section on "Students, Faculty, and Staff" doesn't have the same advocacy role from the other portions of the campus community.

    1. By extracting the center frame of every shot, the user could view all of the frames simultaneously on the contact sheet to review all visual content in the video.

      Interesting choice to pick out the middle frame of each shot.
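
Once shot boundaries are known, picking the middle frame is a one-liner; a minimal sketch (the boundary detection itself—the hard part—is assumed done upstream, and the frame ranges are invented):

```python
def center_frames(shot_boundaries):
    """Given (start, end) frame index ranges for each detected shot, return the
    middle frame of each shot — the frame a contact sheet would sample."""
    return [start + (end - start) // 2 for start, end in shot_boundaries]

# Three shots in a hypothetical video
print(center_frames([(0, 120), (121, 300), (301, 330)]))  # [60, 210, 315]
```

The middle frame is a cheap heuristic: it avoids the dissolve/cut artifacts that cluster at shot edges, though for shots with camera movement it can still miss the visually representative moment.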

    2. Leveraging the metadata from AMP, for example, users are already able to conduct searches (with varying levels of results) such as: ● Take me to every point in a video interview with Herman B Wells where Herman B Wells mentions Eleanor Roosevelt on the subjects of Presidents’ spouses and 20th-century leaders. ● Show me every video interview with Herman B Wells in the 1970s where the interviewer is Thomas D. Clark, and it was produced at WTIU Bloomington. ● Take me to every point in a video interview with Herman B Wells where Herman B Wells is on camera and talking about Midwest universities where there is no music present.

      Thinking about these searches—and the kinds of metadata needed to answer them—I wonder how much of this metadata can be transmitted to DPLA. I wouldn't expect these same kinds of searches to be possible in a multi-collection search tool like DPLA, but what would it look like to crosswalk this metadata into something DPLA can consume? (I'm less familiar with the metadata characteristics of Europeana.)

    3. Based on work and results so far, the project team has concluded that the approach taken in AMP is effective and scalable for generation of metadata for certain types of AV collections, particularly those that involve significant amounts of spoken word content. This includes lectures, events, and documentaries, along with oral history interviews and other ethnographic content.

      Pilot project results. Seemingly good for "scholarly" kinds of material—perhaps not so much for consumer content? The spoken word content also makes me wonder if there are similar machine-generation tools for musical performance content. YouTube's ContentID system certainly generates hits for some musical content. Also remembering the origins of Pandora to classify music characteristics.

    4. as of February 2021, Europeana comprises 59% images and 38% text objects, but only 1% sound objects and 2% video objects.3 DPLA is composed of 25% images and 54% text, with only 0.3% sound objects, and 0.6% video objects.4 Another reason, beyond cost, that audiovisual recordings are not widely accessible is the lack of sufficiently granular metadata to support identification, discovery, and use, or to support informed rights determination and access control and permissions decisions on the part of collections staff and users.

      Despite concerted efforts, there is a minimal amount of A/V material in Europeana and DPLA. This report details a pilot project to use a variety of machine-generated-metadata mechanisms to augment the human description efforts. Although this paragraph mentions rights determination, it isn't clear from the problem statement whether the machine-generated description includes anything that will help with rights. I would expect that unclear rights—especially for moving image content—would be a significant barrier to the open publication of A/V material.

    1. The goal of data brokers is to allow consumers to decide which information may be shared with advertisers, then share in some of the revenue generated by its use. These services ask users to sign up on the Web or via an application, connect their social media and Web accounts, then ask them to answer specific questions about their interests. Based on the data provided and collected initially and over time, the brokers will place users into segments, and advertisers can purchase access to data from one or more segments for use in personalized advertising. Each time their data is shared, or advertisers purchase access to a segment in which the user's data has been placed, the user can earn points, rewards, or cash. All the data brokers note that they store their user data on the cloud using a variety of encryption and security protocols, and that the end users with whom they work can opt out of having specific data shared if they so choose.

      The thought being: if a private file is going to be created about me, at least I should be able to cash in on that. How can we know if we are getting a good “price” for selling our behavior data and interests? Is there a divide between those that can afford not to be tracked versus those that need to be tracked as a source of income?

    2. Data broker Invisibly (www.invisibly.com) provides a listing of various types of data available for sale on the dark web, ranging from a Social Security number (valued at just $0.53) to a complete healthcare record ($250).

      Social security numbers, often thought of as important personally identifying keys, are relatively inexpensive according to this website.

    3. data on demographics that are in limited supply (such as data on Middle Eastern male consumers) is more valuable than demographic data on white millennial women. Similarly, the browsing data of individuals seeking to purchase a Tesla or Ferrari automobile within the next month would be valued more highly by data brokers and advertisers than the data of someone browsing for the best deals on a used Chrysler minivan.

      Demographic data gathered from behavioral advertising systems is not equally valuable. Value can vary by the attributes of the person and by attributes of what that person was doing.

    1. Catala, a programming language developed by Protzenko's graduate student Denis Merigoux, who is working at the National Institute for Research in Digital Science and Technology (INRIA) in Paris, France. It is not often lawyers and programmers find themselves working together, but Catala was designed to capture and execute legal algorithms and to be understood by lawyers and programmers alike in a language "that lets you follow the very specific legal train of thought," Protzenko says.

      A domain-specific language for encoding legal interpretations.
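
To get a flavor of what "encoding a legal algorithm" means—hedged heavily: Catala's actual syntax is a literate, lawyer-readable DSL, and the statute below is entirely invented—one could sketch the idea in Python:

```python
# A toy statute encoded as an executable rule. Hypothetical example only:
# neither the article nor the benefit amounts come from any real law or
# from Catala's own syntax.
def household_benefit(income, dependents):
    """Article X (invented): households with annual income under $30,000
    receive a base benefit of $200 plus $50 per dependent; otherwise nothing."""
    INCOME_THRESHOLD = 30_000
    if income >= INCOME_THRESHOLD:
        return 0
    return 200 + 50 * dependents

print(household_benefit(income=25_000, dependents=2))  # 300
print(household_benefit(income=40_000, dependents=3))  # 0
```

The value of a dedicated DSL over ad-hoc code like this is that the legal text and the executable logic stay side by side, so a lawyer can audit that the "train of thought" of the statute—general rule, exceptions, exceptions to exceptions—is faithfully captured.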

  5. Nov 2021
    1. Raspberry Pi Trading

      At the moment, this company is wholly owned by the Raspberry Pi Foundation.

      For clarity, tell us the distinction between the foundation, which you run, and the trading company that Eben Upton presides over, and how they work in conjunction with each other. The Raspberry Pi Foundation is a UK-registered charity with an educational mission and Raspberry Pi Trading Limited is a wholly owned subsidiary of the Foundation. That means that the Foundation is the shareholder of the trading company, which is an independent, commercial business. That distinction is really important because there are limits on what charities can do commercially. For example, a charity couldn't sell computers that are used in industry, which is a huge part of the Raspberry Pi computer business now. I lead the foundation and I also serve as a director on the board of the trading company. As you said, Eben Upton leads the trading company. [source]

      So it will be interesting to see how much control the Foundation has if/when the trading company goes public.

    1. The ultimate solution probably requires incentives that provide enough deterrence to eliminate such misconduct proactively rather than treating it reactively.

      There seems to be a lack of consequences when these deeds are done. Reputations are tarnished in the moment, but then forgotten. There is a new NISO work item on handling corrections. If those retractions and corrections are tied to ORCID identifiers, could this data be aggregated into actionable information in review workflows?

    2. Certainly, the pandemic has brought home not only enormous challenges in public communication about science but also the serious consequences of our failures in this respect.

      At least in the US, there was a pandemic playbook at the national level that was ignored by politicians. To what extent were there other playbooks that were ignored? Should each field of study have a shared responsibility for having these playbooks set up? (Should the field of entomology scholars have a communications playbook in place for when a plague of locusts sweeps the country?)

    3. journal editors and publishers — are becoming responsible not only for facilitating peer to peer communications but also for public access. As such, they are grappling with the upstream exploitations and downstream public communications and misinformation which were previously squarely outside their remit.

      If they did this well, would it be reason enough to justify the high prices that publishers charge? They would, of course, need to channel profits into people and tools to manage the public discourse—much like the social media companies have had (and failed?) to do. It is an interesting problem because the misinformation and mischaracterization is happening off the publisher’s platform.

    4. tack towards discovery, towards truth

      This sailing metaphor is a useful one. Buffeted on all sides by distraction. Constantly shifting pressures, but generally going in the same direction. There maybe an ideal path forward, but the variables are too numerous to know where the ideal path lies.

    1. When I worked at Google, it [was] still VERY normal until 2009/2010-ish to ask IT for a machine, put it under your desk, and run a CI like Jenkins on it.

      Does Google still have developers run servers under their desks?

    1. The only issue that did not appear on the Top 10 list of any type of institution was Creating a Culture of Care. Its average importance for the institutional groupings ranged from 6.37 to 7.02 on a scale from 1 to 10. These relatively low ratings may not indicate that mental health is not an important issue that technology can help address. They may instead suggest that the contributions of technology to improving mental health are nascent. The Early Adopter institutions rated Creating a Culture of Care most highly, whereas the lowest average rating for this issue came from Late Adopter institutions.

      ...which is too bad because there are a lot of overworked and over-stressed people mentioned in this article. Some thought and care is going to be needed to bring about desired transformations.

    2. #12. Where Have All the Applicants Gone?: Using technology to streamline administrative processes and leveraging artificial intelligence to assist the enrollment pipeline

      Artificial intelligence to assist in the enrollment pipeline. Now that is a scary thought towards more institutional homogeneity!

    3. Collaboration spaces equipped with big screens and state-of-the-art simulcast and videoconferencing capability can support both face-to-face and virtual collaborations. Maker spaces can enable students to work on creative projects, both academic and personal. Radical creativity might inspire institutional staff to design and develop a maker space 2.0 that builds on the first generation of maker spaces.

      Speaking of "go big or go home" as this section was talking about just a few paragraphs earlier, this sounds like a lot of risky "Go Big". Also, are these capabilities going to be equitably distributed?

    4. Leaders need to continue that communication and develop those relationships in remote and hybrid working environments, whether those environments become an ongoing fixture of the institution or are only part of a business continuity plan.

      This is the tough bit. Working fully remote for two organizations now, I find it is important to have in-person time with colleagues—meetings, meals, relaxation—to build relationships that can weather rough patches when people aren't face-to-face. In addition to communication plans for remote and hybrid, leaders need to recognize that humans are social creatures and almost all will benefit from having relationship-building time.

    5. They are overwhelmed by continuing to live with the pandemic, and many of them long for equilibrium instead of having to constantly adapt as the public health situation morphs.

      More descriptions of being overwhelmed.

      Still, one wonders whether this is a good time for retrospectives that codify what was learned.

    6. In all cases, institutions need to have a security and privacy strategy. Endpoint protection platforms, two-factor authentication, and cloud monitoring tools are some of the technologies that IT staff use to protect institutional data and individuals' identities.

      How to ingrain this into an organization without being dictatorial? I imagine: public pronouncements from high levels about the importance of cloud service governance, lots of education for decision-makers and implementers, clearinghouses of common information, open/blameless reports of problems.

    7. Achieving this requires a strong partnership among those in the procurement, cybersecurity, and legal departments, business process owners, and other key stakeholders to ensure that there are clear and equitable terms and conditions in the cloud contract.

      For service providers, this can be a differentiating factor— especially at the high-end of providers. High-touch versus mass-produced.

    8. Department staff may purchase cloud-based software to address business needs, only later learning that it may not meet institutional security requirements or be easily integrated with institutional applications and infrastructure.

      The institutional governance around this must be awful. Use of procurement cards probably means that these expenditures are hidden from purchasing departments. Is there any centralization that can happen—either for pricing or to bring institutional weight to bear on functional needs?

    9. The biggest transformation that institutional stakeholders are seeing now is a much broader collaboration between teaching faculty and the staff who support the curriculum, including academic technologists, instructional designers, and librarians. The most equitable level of access for all happens when faculty and staff are working together to improve the learning experience for students. As a result of this collaboration, faculty understand what other staff at the institution bring to the table, and staff become more involved in the classroom experience, physical or remote, and better understand what that experience is like for students and for faculty.

      This is the lead paragraph!

    10. some college and university leaders in Germany are considering adopting an on-premises cloud architecture to preserve digital sovereignty, avoid an over-reliance on proprietary systems, mitigate financial risks, and adhere to the emphasis by the General Data Protection Regulation (GDPR) on controlling one's digital destiny.

      Yes...Index Data is seeing this with FOLIO discussions in German territories. That is an interesting advantage for open source—the ability for institutions to host a multi-tenant cloud implementation on their own infrastructure and get support for implementation and customization from a service provider.

    11. As has long been the practice of solution providers, pricing is a black box that varies from contract to contract.

      At first I thought this was true only of service providers specific to higher education. Almost all of the mainstream SaaS suppliers have their pricing tiers listed on their websites with discounts for yearly subscriptions. But in reality, many of these sites have an "enterprise" tier that says "call us"—and that is where the pricing transparency breaks down.

    12. The need to maintain business continuity prevailed over the need to plan and to mitigate risk through appropriate cloud contract terms. The consequences may need to be addressed in 2022.

      Good point. Many new SaaS implementations were brought to bear in the heat of the moment just to keep things going. They may not be the right solutions, or there may be better solutions now.

    13. Beyond that, employers have been clamoring for clearer information about what graduates know and can do with their education (e.g., competency-based certifications). An associate's, bachelor's, or master's degree is not specific enough to enable employers to evaluate talent for the needs of their companies.

      Reflective of the portfolio discussions from the late 1990s and early 2000s: a set of artifacts that is student-centric (not class-centric as with learning management systems), and evaluated by instructors and peers.

    14. For those who simply want to return to the way things were, the talk of permanent changes is frightening and exhausting. Faculty and staff may interpret plans for dual modes of working and teaching as plans to double their workloads.

      "May interpret"?

    15. Both faculty and staff will need to become more flexible and adaptive in order to respond rapidly to changing circumstances and students' needs. Faculty will need to become adept at remote teaching, learning, collaboration, and advising so that they can confidently revise and improvise in the moment.

      ...and staff will need to understand the context of how the delivery of their services fit into the student experience as well as the training on how to create the supportive, equitable environment.

    16. They must be well supported by IT staff who understand not just the technology but also the concepts behind its application to teaching and learning

      Speaks to the need to raise the skill level of technology staff.

    17. heutagogical
    18. Institutions will need IT staff who are able to engage with students to provide them with the technology training and skills that they'll need to be successful.

      This requirement isn't new to many library staff, but I can see where library staff might be pressed into service to meet these instructional needs. Especially where library staff have existing liaison relationships with faculty.

    19. Similarly, many faculty and academic leaders are entrenched around the idea that certain modalities of teaching and learning are intrinsically better or more effective than others. That must change to serve the "everywhere" learner (and the "anywhere" faculty).

      What kind of data will be needed to change (or confirm) this position? Will libraries be asked to gather info?

    20. The biggest challenge may be finding ways of successfully working and learning in a hybrid mode. Meetings, teaching, and other synchronous group activities work best when everyone is online or when everyone is in the same room. Technologists are investing in various technologies that support "dual mode" instruction or meetings; these technologies include additional cameras, screens, audio, and collaboration technologies. Not every effort will work, so technologists often frame the technologies as experiments or pilots and encourage faculty and staff to test various options. Yet the solution is not only a technical one; equally important is re-engineering academic and work processes to enable people to conduct their work in a seamless way regardless of the modality.

      Yup—hybrid is more than in-person plus online. We're seeing this with efforts to hold conferences online and in-person. Hybrid will be different.

    21. Many institutional leaders are considering whether to make big bets on technology to change the game at their campuses. Those big bets will have major impacts on institutional culture and the very nature of how constituents get work done.

      This is another case where schools can differentiate themselves. Also see previous discussion about how "hybrid" is more than the addition of costs of in-person and online.

    22. As a result of the pandemic, students want and expect more opportunities outside of the normal, traditional hours that institutions typically offer. They want weekend, evening, and holiday hours for everything from classes to student services to the library.

      Students want weekend, evening, and holiday hours for ... the library.

      Much of what the library offers is self-service already, but I'm trying to imagine what this means for all library services. Not just the building space, but the staff services as well.

    23. Options for reimagining the campus include (1) redesigning campus physical spaces to support hybrid learning and work, (2) addressing space crunches by encouraging administrative groups to work remotely and then converting administrative spaces to academic spaces that support learning, research, and scholarship that is better conducted on campus, and/or (3) lowering costs by reducing the physical campus space.

      I'm reminded of what has happened to space management in the Dublin Rec Center over the 15 years we've lived here. At first there was a respectable amount of space in the building for administrator offices. Over time, activities such as toddler care, the teen room, the computer lab, and various classroom spaces moved around the first floor as more spaces were converted from offices to programming space. At one point Ethan's guitar class was in a room that was being converted from a staff conference room to a classroom. Now almost all of the staff space has moved to the old city hall building down the road.

    24. The two models will coexist at many institutions, forcing leaders to consider what can be done only on campus and what can be done virtually.

      I wonder how this will factor into the earlier observation about institutions needing to specialize?

      Arguably, "hybrid" is not a mix of "on campus" and "virtually"—it is a separate thing all its own. See the related efforts to have hybrid conferences.

    25. Instructional support and IT staff must provide more training for faculty and staff, to keep them up-to-date and to ensure that they have the skills needed to teach and work securely and effectively beyond the traditional campus.

      Reinforcing that there isn't an end goal in sight—except to be more nimble in the face of the change that is coming. "The only constant is change?" Everyone needs training not only in the technology being deployed now, but also in how to learn the technology that will come after.

      Layer that onto how tired everyone is. I can hear: "I just want to learn what I need to know now...the rest of what's coming doesn't concern me."

    26. Yet faculty, staff, and students are tired, stressed, and overwhelmed.

      THIS is a continuing theme—everyone is tired, stressed, and overwhelmed.

    27. Learning analytics can help faculty adapt their teaching to identify and support students quickly and efficiently. Assessment technologies, although often controversial, are maturing, and with the help of learning and assessment advocates, these technologies can become more valid and better safeguard privacy.

      Too bad there are no citations here. There may be advances in safeguarding student privacy in assessment technologies, but it isn't clear.

    28. Using collaborative technologies like Slack or Microsoft Teams can foster dialogue and community around how faculty are using technology in their teaching, how they are teaching, and how they are changing the curriculum.

      It's not the technologies, though—it is the community management of the dialog that will be key. Introducing Slack or Microsoft Teams for its own sake is not going to foster the desired discussion. But the folks that could be community managers for this discussion are the already overworked instructional designers.

      There is probably a need for an eat-your-own-dogfood approach here. Whatever technologies end up being used in the classroom need to form the foundation of this discussion space.

    29. Change decisions can't be made behind closed doors. They will require dialogue across staff groups and student groups to help all stakeholders understand and agree on goals and feel that they have a hand in choices and timing.

      Clear communication and inspired/inspiring leadership will be needed.

    30. They will provide a holistic view of students, alumni, employees, resources, and more in ways that can result in beneficial outcomes. New architectures increase access to data and resources, which can offer better insights about institutional products and services and enable faster, more accurate decisions.

      This should be contrasted with the comment in point 1 above about data privacy:

      Culture clashes between data preservationists and leaders managing institutional risk and legal exposure may intensify as higher education institutions introduce more conservative records-retention policies and processes.

      This isn't reconciled here, but there will clearly be a need to prioritize (and extremes on both sides are going to make the conversation hard).

    31. digital transformation (Dx)

      The "Dx" abbreviation is going to be used constantly throughout this document. Keep this expansion in mind as you read. As I read the document in several sittings, I had to keep coming back to this definition.

    32. The coming demographic cliff—a steep drop-off in potential first-time full-time freshmen projected to arrive in 2025 due to the decline in birth rate in the 2008 recession—may further erode enrollment income at US institutions.

      I had heard this before, but I wanted to see some data. On page 3 of National Vital Statistics Reports, Vol. 61, No. 1 (8/2012) - nvsr61_01.pdf from the CDC, there is quite a clear decline in birth rates starting in 2008 and extending further.

    33. Institutions that collaborate to manage cybersecurity can share costs and expertise, both reducing the burden on individual institutions and increasing the level and effectiveness of cybersecurity at institutions of all sizes.

      Is there a role for the Open Library Foundation here?

    34. Some cybersecurity tasks can be outsourced, however, to extend staff and/or acquire specialized skills.

      In what way can service providers have offerings that distinguish them from others?

    35. What were once clear distinctions among hardware, software, cloud, and services and between primary and secondary suppliers continue to blur and overlap to the point where they're no longer distinguishable as separate categories. That raises questions and challenges about identifying the perimeter—or, where institutionally owned and managed technology infrastructure ends. Additionally, the integration between technology run in-house and that run by an external supplier continues to blur boundaries between consumers' responsibilities and suppliers' responsibilities. That, in turn, creates the challenge of clarifying which technology and data components can and should be secured by institutions versus by suppliers versus by end users and, thus, where security risk factors and responsibilities reside. Sometimes suppliers' security and privacy controls may not be as tight as institutions require or realize.

      The overlap between these areas is going to require increased communication between service providers and campus personnel, and will probably mean helping students and faculty understand where issues need to be reported.

    1. Nyquist rate

      In signal processing, the Nyquist rate, named after Harry Nyquist, specifies a sampling rate (in units of samples per second or hertz, Hz) equal to twice the highest frequency (bandwidth) of a given function or signal. With an equal or higher sampling rate, the resulting discrete-time sequence is said to be free of the distortion known as aliasing. Conversely, for a given sample rate, the corresponding Nyquist frequency in Hz is the largest bandwidth that can be sampled without aliasing, and its value is one-half the sample-rate. Note that the Nyquist rate is a property of a continuous-time signal, whereas Nyquist frequency is a property of a discrete-time system. Nyquist rate - Wikipedia
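
      The quoted definition can be made concrete with a small sketch (hypothetical helper names, standard library only): sampling below the Nyquist rate folds a tone down to a lower apparent frequency.

```python
# A minimal sketch of the Nyquist criterion: a band-limited signal must be
# sampled at twice its highest frequency, and sampling below that folds the
# tone to a lower alias. Helper names here are hypothetical.

def nyquist_rate(max_freq_hz: float) -> float:
    """Minimum sampling rate that avoids aliasing for a given bandwidth."""
    return 2.0 * max_freq_hz

def aliased_frequency(signal_hz: float, sample_rate_hz: float) -> float:
    """Apparent frequency of a sampled sinusoid, folded into [0, fs/2]."""
    return abs(signal_hz - sample_rate_hz * round(signal_hz / sample_rate_hz))

print(nyquist_rate(3.0))            # 6.0 -- a 3 Hz tone needs >= 6 Hz sampling
print(aliased_frequency(3.0, 4.0))  # 1.0 -- undersampled, aliases to 1 Hz
print(aliased_frequency(3.0, 8.0))  # 3.0 -- above the Nyquist rate, preserved
```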

  6. Oct 2021
    1. Either you get solar power or you get trees. In California, they put their thumb on the scale of solar panels and basically said the trees have to come down.

      In a Sunnyvale, California lawsuit involving solar panels and redwood trees, the state courts picked solar panels. Two environmental efforts pitted against each other.

      The judge found that Trees Nos. 4, 5 and 6, which cast little shade when the solar panels were installed, were now collectively blocking more than 10 percent of the panels over the hot tub. Trees Nos. 1, 2 and 3 shaded the area when the panels were installed, so they were exempt, and Trees Nos. 7 and 8 did not violate the law, the judge ruled. Trees Block Solar Panels, and a Feud Ends in Court - The New York Times

    1. In 1622, the same year that Galileo was reiterating his defense of the heliocentric model of the solar system, Pope Gregory XV created the Sacred Congregation for the Propagation of the Faith—known in Latin as the Sacra Congregatio de Propaganda Fide, or the Propaganda Fide for short—a body tasked with coordinating and expanding the missionary activity of the Catholic Church

      Origins of propaganda

  7. Sep 2021
    1. Proper long-term, digital preservation involves curation: careful attention to ensure the content and associated metadata are safe and secure and are managed so that they remain usable despite changes in file formats and technologies in order to remain accessible.

      Easy to understand definition of curation in a digital preservation context.

    1. And yet, you don’t have to prove anything to get that email in the first place. I’ve had that Gmail address longer than I’ve had any one physical address in my adult life, or any phone number or any driver’s license number. The only identifier I’ve had for longer is my Social Security number. I got that from the federal government after my parents submitted proof of my identity and citizenship status. I just had to fill out a few prompts on a website to get my email address.

      On the longevity of email address assignments as identifiers.

    2. Ray Tomlinson is widely credited as the inventor of email, but the technology evolved in a piecemeal fashion, over time, with additions and improvements from a lot of people. Dave Crocker worked on an early effort to create email standards in 1977 and spent the rest of his career creating or contributing to internet mail standards, which he is still doing today. Crocker told me that email was the result of a “massive amount of increments,” most of which were reactive; each iteration was a solution to an existing problem, or someone just coming up with “a cool idea.”

      Includes a brief history of email and its weak security foundations.

    1. In traditional paper-based schemes, voters verify by visually inspecting that their ballot represents their intention.

      I remember seeing some research about how rarely voters examine the paper coming out of the ballot marker before casting it.

  8. Aug 2021
    1. This report will focus primarily on the hardware and software associated with cameras, location trackers, and sensors. These technologies are common components in broader “smart city” technologies and projects, and they have the high-risk ability to collect data that can directly, or in combination with other data, identify individuals.

      Technology that can be used to track individuals...the focus of this report. There are other "smart city" technologies that are not covered in this report.

    2. technology that is capable of collecting data that can identify individuals because that data can be used to target individuals, which in turn can erode the sense of safety and inclusivity requisite for public spaces to serve as commons for democratic functions.

      The why of this is important. Its existence is a threat to democratic activity. If one is being monitored in the public space, can one truly act as one feels? In some sense, this pushes people towards the median—no individualism, because that would stick out.

    3. This story, with its mission creep and mishaps, is representative of a broader set of “smart city” cautionary trends that took place in the last year. These cautionary trends call us to question if our public spaces become places where one fears punishment, how will that affect collective action and political movements?

      The anecdotes in the article come from San Diego, where police used streetlight cameras for surveillance of protesters. (The cameras were originally for traffic control and air quality monitoring.) After local activists found this, the city stopped receiving data from the cameras (they couldn't be turned off). It was later found that the police department also held back materials from a congressional inquiry on facial recognition technology.

    1. “Donor-advised funds have grown even more rapidly in number and assets and DAFs have NO LEGAL MANDATE to pay out anything each year. So donors take tax breaks immediately when they transfer their wealth to DAFs. But those DAFs are not actually legally required to actually distribute those funds to nonprofits.” So there's a lot of very wealthy people who use DAFs and private foundations for tax advantages.

      sigh — so this is a thing.

    2. And if you look at for instance, Freexian, which is an effort where Debian developers club together and get sponsorship money, so they can each spend a certain number of hours each month consulting on really important parts of Debian software and infrastructure.

      Example of an open source project that crowdsources funding to maintain the code.

    1. As shown in the previous section, accelerometers in mobile devices can allow serious invasions of user privacy. Even when other sensors, such as cameras, microphones and GPS are turned off, accelerometer data can be sufficient to obtain information about a device holder’s location, health condition, body features, age, gender, emotions and personality traits. Acceleration signals may even be used to uniquely identify a person based on biometric movement patterns and to reconstruct sequences of text entered into a device.

      Lead paragraph of the "Discussions and Implications" section, which is a high-level summary of the various ways accelerometers can track user characteristics. There are noted limitations, including controlled lab settings and position on the body where the accelerometer is located.

    2. Body movement patterns recorded by accelerometers in mobile devices have been demonstrated to be discriminative enough to differentiate between, or even uniquely identify, users.

      Might have to read further to understand this, but I wonder if these characteristics transfer from device to device... can body movement patterns detected on one device be ported to another device for tracking? Later in this section the authors say:

      Following an approach commonly referred to as device fingerprinting, users can further be told apart based on unique characteristics and features of their personal devices. Calibration errors in accelerometers, which are caused by imperfections in the manufacturing process, have been found sufficient to uniquely identify their encapsulating device.

      So maybe there are enough errors in the sensors of each device to prevent a pattern from being ported to another device?

    3. It has been shown that accelerometers in mobile devices can be exploited for user localization and reconstruction of travel trajectories, even when other localization systems, such as GPS, are disabled. In [38], Han et al. were able to geographically track a person who is driving a car based solely on accelerometer readings from the subject’s smartphone. In their approach, they first calculate the vehicle’s approximate motion trajectory using three-axis acceleration measurements from an iPhone located inside the vehicle, and then map the derived trajectory to the shape of existing routes on a map. An example application of the algorithm is displayed in Fig. 2. Han et al. describe their results as “comparable to the typical accuracy for handheld global positioning systems.”

      GPS off but accelerometer on? Your location can still be inferred from the movements.
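
      The basic physics behind that inference can be sketched with a toy dead-reckoning loop. This is only the double-integration step (hypothetical function name), not Han et al.'s actual method, which also matches the derived trajectory shape against existing routes on a map.

```python
# Toy illustration of dead reckoning: Euler-integrating acceleration samples
# twice recovers an approximate trajectory along one axis. This sketches the
# physics only, not Han et al.'s map-matching algorithm.

def integrate_trajectory(accels, dt):
    """Integrate one-axis accelerations (m/s^2), sampled every dt seconds,
    into a list of positions (m) after each sample."""
    velocity, position = 0.0, 0.0
    positions = []
    for a in accels:
        velocity += a * dt         # v <- v + a*dt
        position += velocity * dt  # x <- x + v*dt
        positions.append(position)
    return positions

# Constant 1 m/s^2 acceleration for three 1-second samples
print(integrate_trajectory([1.0, 1.0, 1.0], dt=1.0))  # [1.0, 3.0, 6.0]
```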

    1. The concept is based on fundamentally wrong assumptions.

      From section 7:

      The prerequisite for meaningful progress on the legal front is that the manifold limitations of privacy self-management are recognized and treated as such by legislators. This also means to admit that our privacy laws – including recent and much-anticipated ones, such as the GDPR and the California Consumer Privacy Act – are based on wrong assumptions and therefore not truly fit for purpose. While sticking with privacy self-management may be the path of least effort in the short run, this policy ignores the long-term consequences of uninformed and involuntary privacy choices, which can be severe not only for individuals but also for society at large.

    2. this article provides an overview and classification of the manifold obstacles that render privacy self-management largely useless in practice.

      From the article's introduction:

      To underpin the debate going forward and support knowledge transfer into politics, this article provides a structured overview of arguments that scholars have brought forth in opposition of privacy self-management. These arguments concern the informedness and rationality (Sect. 2), the voluntariness (Sect. 3) and the unaccounted-for externalities (Sect. 4) of individual privacy choices. Additionally, we point out loopholes in privacy law that undermine the effectiveness of privacy self-management (Sect. 5). While our legal analysis focuses primarily on the GDPR, which is presently regarded as the most comprehensive and influential privacy regulation worldwide (Miglicco 2018; Zarsky 2016), the essence of the arguments is generally applicable to privacy laws that embrace the notice-and-choice paradigm.

    3. People's privacy choices are typically irrational, involuntary and/or circumventable due to human limitations, corporate tricks, legal loopholes and the complexities of modern data processing. Moreover, the self-management approach ignores the consequences that individual privacy choices have on other people and society at large.

      Privacy self-management is a lot of little David-and-Goliath situations. Each person has to examine the privacy implications of each decision—if the user can understand the implications. One user's actions also impact a lot of connected people, and those connected people don't get a say in the privacy choices.

  9. Jul 2021
    1. But Apple isn’t going to do any of this if they don’t think they have to, and they won’t think they have to if people aren’t calling for their heads.

      Excellent point. I think they have to. Do they know they have to? Is there enough government pressure for them to act? Does the government have realistic options at this point (e.g. Android)?

    2. But companies like Apple and Google can raise both the cost and risk of exploitation — not just everywhere, but at least on specific channels like iMessage. This could make NSO’s scaling model much harder to maintain. A world where only a handful of very rich governments can launch exploits (under very careful vetting and controlled circumstances) isn’t a great world, but it’s better than a world where any tin-pot authoritarian can cut a check to NSO and surveil their political opposition or some random journalist.

      This is an interesting point. It isn’t a question of all or nothing. There is a gradation of effort to make it harder for companies to have a business that mass-markets these exploits.

    3. An entirely separate area is surveillance and detection: Apple already performs some remote telemetry to detect processes doing weird things. This kind of telemetry could be expanded as much as possible while not destroying user privacy. While this wouldn’t necessarily stop NSO, it would make the cost of throwing these exploits quite a bit higher — and make them think twice before pushing them out to every random authoritarian government.

      There is a security/privacy trade off here. More telemetry means the device manufacturer has more privacy-busting data, which makes the manufacturer a prime target for exploitation (human and mechanical). Should a user be given a spectrum of telemetry versus protection? What about corporate/government agencies…should there be an MDM option to receive their own stream of telemetry? Their own tap of all network traffic from a device? (Is that possible in the baseband?)

    4. A case against security nihilism

      Blog post with broad outlines of how the Pegasus exploits worked, how we can’t shrug this off, and what device manufacturers could do to raise the stakes for such services.

    1. correlated a unique mobile device to Burrill when it was used consistently from 2018 until at least 2020 from the USCCB staff residence and headquarters, from meetings at which Burrill was in attendance, and was also used on numerous occasions at Burrill’s family lake house, near the residences of Burrill’s family members, and at a Wisconsin apartment in Burrill’s hometown, at which Burrill himself has been listed as a resident.

      This reporting doesn’t say if it is an app identifier or if it was a mobile device identifier (IMSI or the OS-supplied advertising ID).

    2. Commercially available app signal data does not identify the names of app users, but instead correlates a unique numerical identifier to each mobile device using particular apps. Signal data, collected by apps after users consent to data collection, is aggregated and sold by data vendors. It can be analyzed to provide timestamped location data and usage information for each numbered device.

      No identification of the user in the signal data, and the user consents to the data gathering.

    3. the mobile device correlated to Burrill emitted hookup app signals at the USCCB staff residence, and from a street in a residential Washington neighborhood. He traveled to Las Vegas shortly thereafter, data records show. On June 22, the mobile device correlated to Burrill emitted signals from Entourage, which bills itself as Las Vegas’ “gay bathhouse.”

      Correlation with known residence and to Las Vegas on a business trip.

    4. But an analysis of app data signals correlated to Burrill’s mobile device shows the priest also visited gay bars and private residences while using a location-based hookup app in numerous cities from 2018 to 2020, even while traveling on assignment for the U.S. bishops’ conference

      High-level details of the correlation and re-identification.

    5. Pillar Investigates: USCCB gen sec Burrill resigns after sexual misconduct allegations

      Notable for the correlation of app location data with the physical presence of a device in office and home locations plus travel. Re-identification of anonymized data from a location-based hookup app (Grindr).

    1. Science really needs global governance.

      What would this governance look like? Does a professional ethics/licensing program need to come to bear?

    2. a systems problem—the system provides incentives to publish fraudulent research and does not have adequate regulatory processes. Researchers progress by publishing research, and because the publication system is built on trust and peer review is not designed to detect fraud it is easy to publish fraudulent research. The business model of journals and publishers depends on publishing, preferably lots of studies as cheaply as possible. They have little incentive to check for fraud and a positive disincentive to experience reputational damage—and possibly legal risk—from retracting studies.

      A systemic problem where the current publishing structure has the wrong incentives for authors, publishers, and others. If we are to solve the access problem (with some flavor of open access), how can the incentives be changed to account for this bad-research problem? Are changes to funding models a net positive or a net negative for the trustworthiness of the published research?

    1. High Social Media

      High social media usage is correlated with belief in "the steal" and with the use of violence

    2. FEAR OF “GREAT REPLACEMENT” MOST CONSISTENT FACTOR ACROSS STUDIES

      Bottom line of the studies. As the next slide on implications shows:

      1. Not just a segment of right-of-center organizations, but "a broader mass movement with violence at its core"
      2. Fundamentally a political movement of pro-Trump supporters
      3. A driver is the "Great Replacement" idea: the belief that minorities have more rights than whites
    3. We need a fine-grained understanding of who stormed the Capitol on January 6 and who currently believe the 2020 election was stolen and would participate in a violent protest in order to know who we are dealing with and create viable solutions for the future

      Key reasons for the study—who was arrested assaulting the Capitol, understanding the national scope of the insurrectionist movement, and gauging the extent to which conservatives as a political identity are involved.

    1. it's actually been kind of healing in a way, because you see that every generation of us has to confront this idea of what it's supposed to be and sort of say, but in a spirit, in a generous spirit, in a spirit of sharing.

      To the younger generation, stand on the shoulders of giants. To the older generation, see in the younger generation the struggles that you encountered, and support their voices.

    2. And I remember listening to or trying to watch one of the Sunday morning political affairs shows. And I had no idea what they were talking about. And I didn't like how that made me feel because I didn't - and then I realized, oh, it's not for me. It's for the insiders.

      This is part of being a welcoming place…a place where you didn’t feel like you had to know the inside language before being able to participate. Don’t have someone feel stupid when they start participating in the community.

    3. But what about the people who haven't had a chance to think about that yet? Could we think about them? And I hope that there will always be some room for those folks because, you know, why should everybody have already decided everything or know everything? I feel that there has to be some place where you can find things out without being made to feel stupid. And I'm hoping that we will continue to be that place.

      This has to be a tough line to walk. At first I thought that the average NPR listener is more curious than the average citizen, so this is a pointless path to take. But then I found myself interested in this discussion, and it clearly wasn’t something that I had thought a lot about. (See also the previous episode on the history of the Supreme Court and the next episode that was on the Stonewall uprising.)

    4. I'm not telling you what to think. I'm telling you what to think about.

      This is key…not telling you what to think. Rather, putting information and perspectives in front of you so you can mix them with your own experience.

    5. we were really interested in putting on people on the air who were what you would call, quote-unquote, like, "regular people" because we felt their lived experience told a truth that needed to be told.

      Another part of broadening story-tellers is authentic storytelling from “regular people”.

    6. who are the rising stars in music in Ghana? Like, it wasn't war. It wasn't, you know, war crimes. It wasn't people being - you know, recovering from war. But it was daily life there. It was something that was hugely important to the people living there. And that's partly what we were trying to achieve and, I think, did achieve.

      News coverage, done well, goes beyond “war” and “crimes”. It goes into the stories that are important for the people there…who are the rising music stars, for instance. This is an important part of broadening the story-tellers.

    7. And this is why a real rendering of history is important, because people think - just like social movements, they think they just sort of happen. You just arose fully formed. No. After months of discussion, we sort of arrived in the same place and, you know, launched Tell Me More.

      Social movements don’t arrive fully formed. Look to those that have done the work to get a segment of society to this moment. Later in the episode, Michel Martin talks about standing on the shoulders of giants, and how these podcast hosts are now standing on her shoulders.

    1. These recommendation systems are getting so good that if we aren't vigilant, we're just going to end up drifting toward whatever the machine tells us we like. CHILDS: This isn't just a problem of human psychology. It's also a computer science problem. Jingjing says it becomes a feedback loop. Those little drifts add up.

      The problem with recommendation engines: if people put too much faith in them because they have worked well in the past, then there is a "drift to whatever the machine tells us we like".

    2. to figure out if recommendation systems are changing us, Jingjing and her team created a series of experiments using college students, basically fiddling with recommendations and seeing how those recommendations affected the students' behavior.

      Gediminas Adomavicius, Jesse C. Bockstedt, Shawn P. Curley, Jingjing Zhang. Effects of Online Recommendations on Consumers’ Willingness to Pay. Information Systems Research, 29 (1), 84-102. https://doi.org/10.1287/isre.2017.0703

    3. Jingjing Zhang

      Associate Professor, Fettig/Whirlpool Faculty Fellow, Indiana University Kelley School of Business

      https://kelley.iu.edu/faculty-research/faculty-directory/profile.html?id=JJZHANG

    4. The original collaborative filtering required users to tell the algorithm, hey, I like what this guy likes, thumbs up, or thumbs down. Bob and his team took this idea further. They realized that machines could figure out people's tastes on their own by grouping movies together and teasing out what they had in common based on factors that you couldn't just see or guess.

      The innovation at Netflix: synthesizing characteristics of movies to group them together.
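
      As a point of contrast, the "original collaborative filtering" the quote mentions can be sketched in a few lines (hypothetical toy data and function names): find the user whose ratings most resemble yours and recommend what they liked. Latent-factor models, the Netflix-era innovation, replace this explicit similarity step with learned features.

```python
# A toy sketch of classic user-based collaborative filtering: recommend the
# item that the most similar user rated highest among the items the target
# user has not yet rated (rating 0 = unrated). Data and names are invented.

from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target, others, items):
    """Pick the highest-rated unrated item from the most similar user."""
    best_user = max(others, key=lambda o: cosine(target, o))
    unrated = [(best_user[i], items[i]) for i in range(len(items)) if target[i] == 0]
    return max(unrated)[1] if unrated else None

items = ["drama", "comedy", "sci-fi"]
alice = [5, 1, 0]                 # Alice hasn't rated sci-fi
others = [[5, 2, 4], [1, 5, 2]]   # the first user's tastes resemble Alice's
print(recommend(alice, others, items))  # sci-fi
```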

    5. Runaway Recommendation Engine

      On Bayesian algorithms for email filtering augmented by collaborative training data. Then the "Netflix Prize" to improve its recommendation engine.

    1. a 2020 report by the Chinese consulting firm Trivium argues that the social credit system seems more experimental and banal than Western critics describe.

      Perspective from a Chinese consulting firm. To be believed?

    2. On Tyranny, Timothy Snyder
    3. Rongcheng is a coastal town in China of 670,000 people where one thousand social credit points are given to each person as a default. Behavior the authorities want to deter, such as jaywalking, will cost you points, and praiseworthy behavior is rewarded with points. Fighting with neighbors detracts five points; failure to clean up after a dog detracts ten. Donating blood earns five. Punishment comes when one falls below a threshold: bank loans or high-speed train tickets become unattainable. Rewards come in such forms such as discounted utility bills, faster internet service, or improved health care services.

      Description of a real-world, apparently local, implementation of the social credit system. Includes examples of the demerits and merits one can receive.

    4. study

      Stoycheff, Elizabeth (2016). Under Surveillance: Examining Facebook’s Spiral of Silence Effects in the Wake of NSA Internet Monitoring. Journalism & Mass Communication Quarterly, 93(2), 296-311. https://doi.org/10.1177/1077699016630255.

    5. PEN America
    6. 2016 study

      Penney, Jonathon W. (2016). Chilling Effects: Online Surveillance and Wikipedia Use. Berkeley Technology Law Journal, 31(1), 117-182. http://dx.doi.org/10.15779/Z38SS13.

      Berkeley Law institutional repository

    7. research study

      Bateson, M., Nettle, D., & Roberts, G. (2006). Cues of being watched enhance cooperation in a real-world setting. Biology letters, 2(3), 412–414. https://doi.org/10.1098/rsbl.2006.0509.

      PubMed Central PMC1686213

    8. Heidi Boghosian on Self-Censorship and Expression

      Excerpt from the book “I Have Nothing to Hide”: And 20 Other Myths About Surveillance and Privacy — Beacon Press, copyright 2021. http://www.beacon.org/I-Have-Nothing-to-Hide-P1684.aspx

    1. Since race is a social construct it is difficult to measure. Yet data is critical to the development of standards; thus data collection must be assessed for unintended biases.

      Measuring a social construct is difficult. Yet in order to anticipate unintended biases, the attempt must be made.

    2. Gender-neutral language does not imply that the standards are developed without considering gender-specific needs and priorities.

      Gender-neutral does not mean gender isn't considered. Comment: in fact, striving for gender-neutrality may inspire a broader conversation about the nature of the standard and its applicability to a wider range of people.

    3. Inclusive, or bias-free, language uses expressions and terms that are likely to be perceived as neutral or welcoming by everyone, regardless of their gender, race, religion, age, etc. Using inclusive language can help people from diverse backgrounds feel more welcome and encourages precise, high quality work. As noted in a recent McKinsey survey, employees who feel more included are nearly three times more likely to feel excited by, and committed to, their organizations.

      Reason for using inclusive, bias-free language—it helps people feel welcome and creates an excited, committed community.

    1. Inside Facebook’s Data Wars

      My comment submitted to the NYTimes for consideration:

      If Facebook creates its own "Top 10" list, would it be believed? How far would Facebook go to devise an algorithm that shows a "Top 10" that they want the world to see based on whatever subset of internal signals they want to show? Would we know if a "Top 10" list was simply made up in the marketing department?

      Is it the responsibility of any regulator to check whether a "Top 10" list has any basis in reality? If it is no regulator's responsibility, how would well-informed citizens discover that they have been misled? What recourse would a regulator have to put Facebook on the straight-and-narrow? If Facebook doesn't have to publish internal metrics, how firm would a citizen's case for fraud in civil court be?

      If the controlling interest in Facebook is Mark Zuckerberg himself [1], how can the public hold him accountable for Facebook's actions/inactions? If Mr. Zuckerberg used his wealth to buy off members of Congress, the executive branch, and the courts, would we know?

      Correct answers only. Our democracy and perhaps the continued existence of humankind are at stake.

      [1] https://www.vox.com/recode/2019/5/30/18644755/facebook-stock-shareholder-meeting-mark-zuckerberg-vote

    2. Mr. Zuckerberg is right about one thing: Facebook is not a giant right-wing echo chamber. But it does contain a giant right-wing echo chamber — a kind of AM talk radio built into the heart of Facebook’s news ecosystem, with a hyper-engaged audience of loyal partisans who love liking, sharing and clicking on posts from right-wing pages, many of which have gotten good at serving up Facebook-optimized outrage bait at a consistent clip.

      Facebook is not a giant echo chamber, but there is an echo chamber inside of it. Would it still be Facebook without the echo chamber? What is the monetization factor of the echo chamber participants versus the non-participants?

    3. “And it doesn’t want to make the data available for others to do the hard work and hold them accountable.”

      Facebook is such a black box that researchers and journalists who want to hold it accountable must rely on Facebook itself for the very data they need.

    4. The company, blamed for everything from election interference to vaccine hesitancy, badly wants to rebuild trust with a skeptical public. But the more it shares about what happens on its platform, the more it risks exposing uncomfortable truths that could further damage its image.

      Blamed for these things, but is the research conclusive about Facebook's effect on these societal ills? Does "the more it shares about what happens on its platform" implicate it further in spreading misinformation about the election and anti-vax messaging?

    1. Some of these examples contain characters that are invalid, such as inline comments

      Sigh. Just another reminder of why JSON without comments stinks.
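      A common workaround is to strip the comments before handing the text to a strict JSON parser. A minimal sketch; note that the naive regex below would also mangle `//` occurring inside string values:

```python
import json
import re

text = """{
    // inline comments are not valid JSON
    "name": "example",
    "count": 3
}"""

# A strict parser rejects the comment outright.
try:
    json.loads(text)
except json.JSONDecodeError:
    pass  # raised, as expected

# Naive workaround: remove whole-line //-comments, then parse.
stripped = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)
data = json.loads(stripped)
```

      Alternatives in the wild include JSON5 and JSONC dialects, or the convention of embedding a `"_comment"` key, each with its own interoperability trade-offs.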

    2. person, group, organization, thing, or concept

      It may be answered later in the spec, but this list sounds like a list of abstractions. Can the subject be a concrete thing like a web resource?

  10. Jun 2021
    1. MATT: That the deeply traumatic act of coming into existence in the air breathing land world.  [SOUND CLIP, Baby: [Baby crying]] MATT: The severity of it and the harshness of it forces you to adapt. ANNIE: Right. MATT: In order to survive.

      Trauma is necessary for life. Part of a story about how a baby breathes its first Breath.

    1. she told me last fall that she expected this recognition of the 5 micron error and the kind of subsequent changes would take a generation or take 30 years, that she hoped she would live to see it. And so, for it to happen in a year, both because of the urgency of the crisis and because of the tenacious pushing that she and others did, the reckoning that this pandemic has led to will have positive consequences for public health that long outlive this pandemic.

      What is the spark that moves the acceptance of a new finding from a generation to a year? Can that be harnessed?

      I wonder about middle school and high school science teachers; they must constantly see what they learned challenged, as new science forces them to teach things they once thought weren't true.

    1. MARGARET ATWOOD Young people worry a lot more than older people. And the reason they worry a lot more than older people is that they don't know the plot of their own lives yet. They don't know how it's going to turn out for them. Will they meet their true love? Will they be successful? At my age, I kind of know how the story goes. So should I get hit by a truck tomorrow? The plot will pretty much have unfolded.

      Young people are more anxious because the course of their lives is unknown—there are so many doorways yet to be gone through. Older people have seen much of the arc of their life stories.

      Is that the root of anxiety—not knowing the outcome? Can one be at peace with that?

    1. Now, if you’re on a decentralized platform, data is distributed across many servers or computers. Those aren’t necessarily owned or operated by the creator of the platform you’re using. The power, the authority, the control is spread out. A decentralized system gives everyone more freedom, but it also means that because the data is distributed, there’s no authority who gets the final word, so it’s harder to find and remove illegal or objectionable content.

      Holy cow...listening to this brought back dormant memories. UUCP then Bitnet for point-to-point email exchange. IRC, of course. Then NNTP for newsgroups. Later in the podcast, they interview someone with Mastodon who talks about funding an instance for $500/month using Patreon. Could there be a resurgence of distributed communication tools run by dedicated hobbyists? Should there be?

    1. One thing Amazon doesn’t bring up is that athletes train for an event with a definite end date. Athletes aren’t competing day in and day out, and they have time to rest and recuperate in between.

      More on the [[Societal Cost of Advancing Technologies]] theme, along with a bit of [[Two-tier workers]].