212 Matching Annotations
  1. Dec 2021
    1. Since the start of the pandemic, Gloo’s Mr. Beck said, the company has been focusing on how to help churches get more attention on Google search. The company has a program for churches to pool their funds and buy search keywords—something a single church couldn’t afford on its own, he said.

      Going beyond creating community profiles—becoming a co-op to raise common funds for digital advertising.

    2. Clients can integrate their internal databases with Gloo, adding to its data trove. The company offers technology that churches can put on their websites to collect data, and has questionnaires churches can give their congregants.

      The members of the congregation become part of the product through actions of the church. One wonders what the [[data sharing disclosure]]s look like in this case. One also wonders what GDPR regulators would think of this activity.

    3. Gloo said third-party data has always been anonymized to users—it said it doesn’t reveal people’s names or exact locations to them. In response to questions from the Journal, the company said it also began de-identifying data within its own databases last year.

      Important [[privacy]] considerations, including processing of [[pseudonymous]] data. No mention in the article of the risk of [[re-identification]] of users—particularly in the context of geolocated data within a radius of a church. ("Gloo offers to provide churches with snapshots of data to better understand their communities and focus their ministries on relevant issues" from earlier in the article.)
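
      To make the re-identification concern concrete, here is a minimal sketch (all records, field names, and coordinates are invented) of how "anonymized" data scoped to a radius around a church can be intersected with a few known attributes to single someone out:

```python
import math

# Hypothetical, simplified records a church might receive: "anonymized"
# (no names), but tagged with approximate location and a few attributes.
records = [
    {"lat": 40.10, "lon": -83.15, "age_band": "30-39", "household": "single"},
    {"lat": 40.10, "lon": -83.15, "age_band": "60-69", "household": "couple"},
    {"lat": 40.31, "lon": -83.02, "age_band": "30-39", "household": "single"},
]

def within_radius(rec, center, radius_km):
    """Rough flat-earth distance check -- adequate at neighborhood scale."""
    dx = (rec["lon"] - center[1]) * 111.32 * math.cos(math.radians(center[0]))
    dy = (rec["lat"] - center[0]) * 110.57
    return math.hypot(dx, dy) <= radius_km

# An adversary who knows one congregant is a single 30-something living
# near the church can intersect those facts with the "anonymous" data:
church = (40.10, -83.15)
matches = [r for r in records
           if within_radius(r, church, 5)
           and r["age_band"] == "30-39"
           and r["household"] == "single"]

print(len(matches))  # a candidate set of size 1 is a re-identification
```

      The smaller the radius and the richer the attributes, the more often the candidate set collapses to one person.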

    1. Second, knowledge may be contested, where it has been constructed within particular power relations and dominant perspectives.

      This sentence has me thinking about how Google Maps has to have different names for different physical features or have boundaries in different locations depending on the cultural background of the person using the map.

    2. the presentation on which it is based

      See https://www.lorcandempsey.net/presentation-two-metadata-directions/ for the presentation at the Eurasian Academic Libraries Conference - 2021, organized by The Nazarbayev University Library and the Association of University Libraries in the Republic of Kazakhstan.

    3. Metadata is about both value and values

      Oh, excellent formulation here. Embedded in the metadata that is created are the values infused in the people and processes creating it (stretching back to the values of the people writing the software generating the programmatic metadata).

    4. data which relieves a potential user (whether human or machine) of having to have full advance knowledge of the existence or characteristics of a resource of potential interest in the environment.

      The "schematized assertions about a resource of interest" definition clearly answers a "what" question; this definition answers a "why" question. I'm left unsatisfied by this definition, and I can't quite put my finger on why. It is good to have the end-user's purpose in mind when creating and curating metadata. Maybe it is the open-ended nature of the challenge of creating a description that "relieves a potential user of having to have full advance knowledge".

    5. I have spoken about four sources of metadata in the past.

      Somewhere between "Professional" and "Community" is another source. The "Professional" definition is geared towards librarians and other information professionals. "Community" is "crowdsourced". Professionals other than information professionals have their own metadata schemes, though, that can be just as formal as the ones created by librarians, albeit more specialized towards the needs of a particular community. These are neither "crowdsourced"—which has an ad hoc and/or educated-amateur connotation—nor the specialized formats from libraries and archives. Think Darwin Core or PBCore.

    6. the network environment

      I'm hoping I can find a definition for the networked environment. I can't tell if this is a statement about the internet in general (or the subset that is the web), or of something more formal like [[linked data]]. The way this notion is used in the first couple of paragraphs makes me think it is something with a somewhat concrete definition.

    1. Controlled Digital Lending: Unlocking the Library’s Full Potential

      This document was in an HTML frame at https://www.libraryfutures.net/policy-document-2021 — I needed to bust it out of the frame in order to comment on it.

      Although not explicitly stated, this document seems to be an informational document for those seeking legislative sanctioning of CDL activity. Only at the fourth paragraph is the phrase "Congress should support their communities" included. The remainder of the document also includes several calls for legislative cover for CDL.

    2. libraries generally lend digitized versions of print materials from their collections, strictly limiting them to a single digital copy per physical copy owned—a one-to-one “owned-to-loaned” ratio. If a library owns two physical copies of The Giving Tree, it only loans out two copies at any time, whether physically or digitally.

      Concise definition of [[controlled digital lending]]. It maintains the same circulation model with similar points of friction as my library users experience now.
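
      The one-to-one ratio is simple enough to sketch in a few lines. This is an illustration of the invariant only, not any real ILS's circulation logic; the class and method names are made up:

```python
# A minimal sketch of the "owned-to-loaned" invariant described above.
class Title:
    def __init__(self, name, physical_copies):
        self.name = name
        self.owned = physical_copies   # physical copies the library holds
        self.loaned = 0                # physical and digital loans combined

    def checkout(self):
        """Lend one copy (physical or digital) only if the ratio allows it."""
        if self.loaned >= self.owned:
            raise RuntimeError("owned-to-loaned ratio reached")
        self.loaned += 1

    def checkin(self):
        self.loaned = max(0, self.loaned - 1)

giving_tree = Title("The Giving Tree", physical_copies=2)
giving_tree.checkout()      # copy 1 (say, physical)
giving_tree.checkout()      # copy 2 (say, digital)
try:
    giving_tree.checkout()  # a third simultaneous loan is refused
except RuntimeError as e:
    print(e)                # prints "owned-to-loaned ratio reached"
```

      The friction for a patron (waiting for a copy to be checked in) is exactly the same as with the physical collection.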

    1. Develop a process to ensure privacy is a consideration in student analytics and institutional research. Unfortunately, concerns about student data privacy have been minimal since the earliest days of the student success movement

      Yes—this needs to feed back into the discussion in the EDUCAUSE Top 10 Higher Education IT Issues for 2022, specifically in the Needed Technologies and IT Capabilities section of Issue #3: Digital Faculty for a Digital Future.

    2. Comprehensive and sustained privacy awareness campaigns

      Got to page 5 of the document, and I'm now wondering "who". The "what" and "why" are more self-evident, but as of yet this document hasn't described a framework of who should be doing this work.

    3. Efforts to clarify and disseminate the differences between “privacy as advocacy” (e.g., privacy is a fundamental right; privacy is an ethical norm) and “privacy as compliance” (e.g., ensuring privacy policies and laws are followed; privacy programs train, monitor, and measure adherence to rules) help frame conversations and set expectations.

      This is an interesting distinction... privacy-because-it-is-the-right-thing-to-do versus privacy-because-you-must. I think the latter is where most institutions are today. It will take a lot more education to get institutions to the former.

    4. These enhanced capabilities at the institution will no doubt necessitate investments in privacy staffing and infrastructure, resulting in fully staffed and resourced privacy units within the institution.

      Is there an enhanced role for Institutional Review Boards in assessing the data privacy aspects of research? To what extent does a privacy staffing/infrastructure component take in assisting researchers and shepherding data collection/analysis from a more central (either university-wide or department-centered) perspective?

    5. As informed and engaged stakeholders, students understand how and why their institutions use academic and personal data.

      Interesting that there is a focus here on advocacy from an active student body. Is it the expectation that change from some of the more stubborn areas of the campus would be driven by informed student push-back? This section on "Students, Faculty, and Staff" doesn't have the same advocacy role from the other portions of the campus community.

    1. By extracting the center frame of every shot, the user could view all of the frames simultaneously on the contact sheet to review all visual content in the video.

      Interesting choice to pick out the middle frame of each shot.
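
      The mechanics are straightforward once shot boundaries are known. A small sketch; the shot-boundary data is invented, and a real pipeline would get it from a shot-detection tool:

```python
# Given shot boundaries as (first_frame, last_frame) pairs, pick the
# middle frame of each shot for the contact sheet.
shots = [(0, 119), (120, 299), (300, 450)]  # hypothetical boundaries

def center_frames(shot_list):
    """Return the index of the center frame of each shot."""
    return [first + (last - first) // 2 for first, last in shot_list]

print(center_frames(shots))  # [59, 209, 375]
```

      The center frame is a cheap heuristic for a representative frame; shots with motion or transitions at their midpoint would be less well served by it.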

    2. Leveraging the metadata from AMP, for example, users are already able to conduct searches (with varying levels of results) such as:

       ● Take me to every point in a video interview with Herman B Wells where Herman B Wells mentions Eleanor Roosevelt on the subjects of Presidents’ spouses and 20th-century leaders.
       ● Show me every video interview with Herman B Wells in the 1970s where the interviewer is Thomas D. Clark, and it was produced at WTIU Bloomington.
       ● Take me to every point in a video interview with Herman B Wells where Herman B Wells is on camera and talking about Midwest universities where there is no music present.

      Thinking about these searches—and the kinds of metadata needed to answer them—I wonder how much of this metadata can be transmitted to DPLA. I wouldn't expect these same kinds of searches to be possible in a multi-collection search tool like DPLA, but what would it look like to crosswalk this metadata into something DPLA can consume? (I'm less familiar with the metadata characteristics of Europeana.)
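
      As a thought experiment, a crosswalk would mostly flatten the time-coded detail away. A hedged sketch; field names on both sides are invented, and DPLA's actual metadata application profile is considerably more involved:

```python
# Hypothetical rich AMP record vs. the coarser whole-object record an
# aggregator could index. Segment-level (time-coded) data is simply lost.
amp_record = {
    "title": "Interview with Herman B Wells",
    "entities": ["Herman B Wells", "Eleanor Roosevelt"],
    "subjects": ["Presidents' spouses", "20th-century leaders"],
    "segments": [{"start": "00:12:04", "speaker": "Herman B Wells"}],
    "producer": "WTIU Bloomington",
}

def to_aggregator(rec):
    """Flatten time-coded detail into whole-object fields; the
    segment-level data has no place to land and is dropped."""
    return {
        "title": rec["title"],
        "subject": rec["subjects"] + rec["entities"],
        "contributor": rec["producer"],
    }

flat = to_aggregator(amp_record)
print(sorted(flat))  # ['contributor', 'subject', 'title']
```

      So the "take me to every point" searches could not survive the crosswalk, but the entity and subject terms would still enrich whole-object discovery.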

    3. Based on work and results so far, the project team has concluded that the approach taken in AMP is effective and scalable for generation of metadata for certain types of AV collections, particularly those that involve significant amounts of spoken word content. This includes lectures, events, and documentaries, along with oral history interviews and other ethnographic content.

      Pilot project results. Seemingly good for "scholarly" kinds of material—perhaps not so much for consumer content? The spoken word content also makes me wonder if there are similar machine-generation tools for musical performance content. YouTube's ContentID system certainly generates hits for some musical content. Also remembering the origins of Pandora to classify music characteristics.

    4. as of February 2021, Europeana comprises 59% images and 38% text objects, but only 1% sound objects and 2% video objects.3 DPLA is composed of 25% images and 54% text, with only 0.3% sound objects, and 0.6% video objects.4 Another reason, beyond cost, that audiovisual recordings are not widely accessible is the lack of sufficiently granular metadata to support identification, discovery, and use, or to support informed rights determination and access control and permissions decisions on the part of collections staff and users.

      Despite concerted efforts, there is a minimal amount of A/V material in Europeana and DPLA. This report details a pilot project to use a variety of machine-generated-metadata mechanisms to augment the human description efforts. Although this paragraph mentions rights determination, it isn't clear from the problem statement whether the machine-generated description includes anything that will help with rights. I would expect that unclear rights—especially for moving image content—would be a significant barrier to the open publication of A/V material.

    1. The goal of data brokers is to allow consumers to decide which information may be shared with advertisers, then share in some of the revenue generated by its use. These services ask users to sign up on the Web or via an application, connect their social media and Web accounts, then ask them to answer specific questions about their interests. Based on the data provided and collected initially and over time, the brokers will place users into segments, and advertisers can purchase access to data from one or more segments for use in personalized advertising. Each time their data is shared, or advertisers purchase access to a segment in which the user's data has been placed, the user can earn points, rewards, or cash. All the data brokers note that they store their user data on the cloud using a variety of encryption and security protocols, and that the end users with whom they work can opt out of having specific data shared if they so choose.

      The thought being: if a private file is going to be created about me, at least I should be able to cash in on that. How can we know if we are getting a good “price” for selling our behavior data and interests? Is there a divide between those that can afford not to be tracked versus those that need to be tracked as a source of income?
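
      The flow described in the excerpt (answers feed segment assignment; each sale of segment access credits the members) can be sketched as a toy model. All names, interests, and the payout amount here are invented:

```python
from collections import defaultdict

# Hypothetical user profiles built from the broker's questionnaires.
answers = {
    "alice": {"interests": {"cars", "travel"}},
    "bob":   {"interests": {"cooking"}},
}

def build_segments(users):
    """Group users into segments by declared interest."""
    segments = defaultdict(set)
    for user, profile in users.items():
        for interest in profile["interests"]:
            segments[interest].add(user)
    return segments

def sell_segment(segments, name, payout_per_user, ledger):
    """An advertiser buys access to a segment; members share the revenue."""
    for user in segments.get(name, ()):
        ledger[user] += payout_per_user

segments = build_segments(answers)
ledger = defaultdict(float)
sell_segment(segments, "cars", 0.05, ledger)

print(dict(ledger))  # {'alice': 0.05}
```

      Note how the broker, not the user, sets the payout rate; that is exactly the "good price" question.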

    2. Data broker Invisibly (www.invisibly.com) provides a listing of various types of data available for sale on the dark web, ranging from a Social Security number (valued at just $0.53) to a complete healthcare record ($250).

      Social security numbers, often thought of as important personally identifying keys, are relatively inexpensive according to this website.

    3. data on demographics that are in limited supply (such as data on Middle Eastern male consumers) is more valuable than demographic data on white millennial women. Similarly, the browsing data of individuals seeking to purchase a Tesla or Ferrari automobile within the next month would be valued more highly by data brokers and advertisers than the data of someone browsing for the best deals on a used Chrysler minivan.

      Demographic data gathered from behavioral advertising systems is not equally valuable. Value can vary by the attributes of the person and by attributes of what that person was doing.

    1. Catala, a programming language developed by Protzenko's graduate student Denis Merigoux, who is working at the National Institute for Research in Digital Science and Technology (INRIA) in Paris, France. It is not often lawyers and programmers find themselves working together, but Catala was designed to capture and execute legal algorithms and to be understood by lawyers and programmers alike in a language "that lets you follow the very specific legal train of thought," Protzenko says.

      A domain-specific language for encoding legal interpretations.
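
      Catala has its own syntax designed for this, so the sketch below only illustrates the underlying idea (a general rule plus explicit exceptions, directly executable) in plain Python; the benefit rule and all numbers are entirely made up:

```python
def benefit(income, dependents, disabled=False):
    """General rule: a flat benefit under an income cap, scaled by
    dependents. Exception: the income cap is waived for disabled
    applicants. (Hypothetical statute, not a real program.)"""
    INCOME_CAP = 30_000
    BASE = 1_000
    if not disabled and income > INCOME_CAP:  # general eligibility rule
        return 0
    return BASE + 200 * dependents            # exception keeps eligibility

print(benefit(25_000, 2))        # 1400
print(benefit(40_000, 2))        # 0
print(benefit(40_000, 2, True))  # 1400
```

      The value of a DSL like Catala is that the general-case/exception structure mirrors how statutes are actually written, so a lawyer can audit the code against the legal text.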

  2. Nov 2021
    1. Raspberry Pi Trading

      At the moment, this company is wholly owned by the Raspberry Pi Foundation.

      For clarity, tell us the distinction between the foundation, which you run, and the trading company that Eben Upton presides over, and how they work in conjunction with each other. The Raspberry Pi Foundation is a UK-registered charity with an educational mission and Raspberry Pi Trading Limited is a wholly owned subsidiary of the Foundation. That means that the Foundation is the shareholder of the trading company, which is an independent, commercial business. That distinction is really important because there are limits on what charities can do commercially. For example, a charity couldn't sell computers that are used in industry, which is a huge part of the Raspberry Pi computer business now. I lead the foundation and I also serve as a director on the board of the trading company. As you said, Eben Upton leads the trading company. [source]

      So it will be interesting to see how much control the Foundation has if/when the trading company goes public.

    1. The ultimate solution probably requires incentives that provide enough deterrence to eliminate such misconduct proactively rather than treating it reactively.

      There seems to be a lack of consequences when these deeds are done. Reputations are tarnished in the moment, but then forgotten. There is a new NISO work item on handling corrections. If those retractions and corrections are tied to ORCID identifiers, could this data be aggregated into actionable information in review workflows?

    2. Certainly, the pandemic has brought home not only enormous challenges in public communication about science but also the serious consequences of our failures in this respect.

      At least in the US, there was a pandemic playbook at the national level that was ignored for political reasons. To what extent were there other playbooks that were ignored? Should each field of study have a shared responsibility for having these playbooks set up? (Should the field of entomology scholars have a communications playbook in place for when a plague of locusts sweeps the country?)

    3. journal editors and publishers — are becoming responsible not only for facilitating peer to peer communications but also for public access. As such, they are grappling with the upstream exploitations and downstream public communications and misinformation which were previously squarely outside their remit.

      If they did this well, would it be reason enough to justify the high prices that publishers charge? They would, of course, need to channel profits into people and tools to manage the public discourse—much like the social media companies have had (and failed?) to do. It is an interesting problem because the misinformation and mischaracterization is happening off the publisher’s platform.

    4. tack towards discovery, towards truth

      This sailing metaphor is a useful one. Buffeted on all sides by distraction. Constantly shifting pressures, but generally going in the same direction. There may be an ideal path forward, but the variables are too numerous to know where the ideal path lies.

    1. When I worked at Google, it will still VERY normal until 2009/2010-ish to ask IT for a machine, put it under your desk, and run a CI like Jenkins on it.

      Does Google still have developers run servers under their desks?

    1. The only issue that did not appear on the Top 10 list of any type of institution was Creating a Culture of Care. Its average importance for the institutional groupings ranged from 6.37 to 7.02 on a scale from 1 to 10. These relatively low ratings may not indicate that mental health is not an important issue that technology can help address. They may instead suggest that the contributions of technology to improving mental health are nascent. The Early Adopter institutions rated Creating a Culture of Care most highly, whereas the lowest average rating for this issue came from Late Adopter institutions.

      ...which is too bad because there are a lot of overworked and over-stressed people mentioned in this article. Some thought and care is going to be needed to bring about desired transformations.

    2. #12. Where Have All the Applicants Gone?: Using technology to streamline administrative processes and leveraging artificial intelligence to assist the enrollment pipeline

      Artificial intelligence to assist in the enrollment pipeline. Now that is a scary thought towards more institutional homogeneity!

    3. Collaboration spaces equipped with big screens and state-of-the-art simulcast and videoconferencing capability can support both face-to-face and virtual collaborations. Maker spaces can enable students to work on creative projects, both academic and personal. Radical creativity might inspire institutional staff to design and develop a maker space 2.0 that builds on the first generation of maker spaces.

      Speaking of "go big or go home" as this section was talking about just a few paragraphs earlier, this sounds like a lot of risky "Go Big". Also, are these capabilities going to be equitably distributed?

    4. Leaders need to continue that communication and develop those relationships in remote and hybrid working environments, whether those environments become an ongoing fixture of the institution or are only part of a business continuity plan.

      This is the tough bit. Working fully remote for two organizations now, I find it is important to have in-person time with colleagues—meetings, meals, relaxation—to build relationships that can weather rough patches when people aren't face-to-face. In addition to communication plans for remote and hybrid, leaders need to recognize that humans are social creatures and almost all will benefit from having relationship-building time.

    5. They are overwhelmed by continuing to live with the pandemic, and many of them long for equilibrium instead of having to constantly adapt as the public health situation morphs.

      More descriptions of being overwhelmed.

      Still one wonders if this is a good time for retrospectives that codify what was learned.

    6. In all cases, institutions need to have a security and privacy strategy. Endpoint protection platforms, two-factor authentication, and cloud monitoring tools are some of the technologies that IT staff use to protect institutional data and individuals' identities.

      How to ingrain this into an organization without being dictatorial? I imagine: public pronouncements from high levels about the importance of cloud service governance, lots of education for decision-makers and implementers, clearinghouses of common information, open/blameless reports of problems.

    7. Achieving this requires a strong partnership among those in the procurement, cybersecurity, and legal departments, business process owners, and other key stakeholders to ensure that there are clear and equitable terms and conditions in the cloud contract.

      For service providers, this can be a differentiating factor—especially at the high end of providers. High-touch versus mass-produced.

    8. Department staff may purchase cloud-based software to address business needs, only later learning that it may not meet institutional security requirements or be easily integrated with institutional applications and infrastructure.

      The institutional governance around this must be awful. Use of procurement cards probably means that these expenditures are hidden from purchasing departments. Is there any centralization that can happen—either for pricing or to bring institutional weight to bear on functional needs?

    9. The biggest transformation that institutional stakeholders are seeing now is a much broader collaboration between teaching faculty and the staff who support the curriculum, including academic technologists, instructional designers, and librarians. The most equitable level of access for all happens when faculty and staff are working together to improve the learning experience for students. As a result of this collaboration, faculty understand what other staff at the institution bring to the table, and staff become more involved in the classroom experience, physical or remote, and better understand what that experience is like for students and for faculty.

      This is the lead paragraph!

    10. some college and university leaders in Germany are considering adopting an on-premises cloud architecture to preserve digital sovereignty, avoid an over-reliance on proprietary systems, mitigate financial risks, and adhere to the emphasis by the General Data Protection Regulation (GDPR) on controlling one's digital destiny.

      Yes...Index Data is seeing this with FOLIO discussions in German territories. That is an interesting advantage for open source—the ability for institutions to host a multi-tenant cloud implementation on their own infrastructure and get support for implementation and customization from a service provider.

    11. As has long been the practice of solution providers, pricing is a black box that varies from contract to contract.

      At first I thought this was true only of service providers specific to higher education, since almost all of the mainstream SaaS suppliers list pricing tiers on their websites with discounts for yearly subscriptions. But in reality, many of these sites have an "enterprise" tier that says "call us"—and that is where the pricing transparency breaks down.

    12. The need to maintain business continuity prevailed over the need to plan and to mitigate risk through appropriate cloud contract terms. The consequences may need to be addressed in 2022.

      Good point. Many new SaaS implementations were brought to bear in the heat of the moment just to keep things going. They may not be the right solutions, or there may be better solutions now.

    13. Beyond that, employers have been clamoring for clearer information about what graduates know and can do with their education (e.g., competency-based certifications). An associate's, bachelor's, or master's degree is not specific enough to enable employers to evaluate talent for the needs of their companies.

      Reflective of the portfolio discussions from the late 1990s and early 2000s: a set of artifacts that is student-centric (not class-centric as with learning management systems), and evaluated by instructors and peers.

    14. For those who simply want to return to the way things were, the talk of permanent changes is frightening and exhausting. Faculty and staff may interpret plans for dual modes of working and teaching as plans to double their workloads.

      "May interpret"?

    15. Both faculty and staff will need to become more flexible and adaptive in order to respond rapidly to changing circumstances and students' needs. Faculty will need to become adept at remote teaching, learning, collaboration, and advising so that they can confidently revise and improvise in the moment.

      ...and staff will need to understand the context of how the delivery of their services fits into the student experience, as well as receive training on how to create a supportive, equitable environment.

    16. They must be well supported by IT staff who understand not just the technology but also the concepts behind its application to teaching and learning

      Speaks to the need to raise the skill level of technology staff.

    17. heutagogical
    18. Institutions will need IT staff who are able to engage with students to provide them with the technology training and skills that they'll need to be successful.

      This requirement isn't new to many library staff, but I can see where library staff might be pressed into service to meet these instructional needs. Especially where library staff have existing liaison relationships with faculty.

    19. Similarly, many faculty and academic leaders are entrenched around the idea that certain modalities of teaching and learning are intrinsically better or more effective than others. That must change to serve the "everywhere" learner (and the "anywhere" faculty).

      What kind of data will be needed to change (or confirm) this position? Will libraries be asked to gather info?

    20. The biggest challenge may be finding ways of successfully working and learning in a hybrid mode. Meetings, teaching, and other synchronous group activities work best when everyone is online or when everyone is in the same room. Technologists are investing in various technologies that support "dual mode" instruction or meetings; these technologies include additional cameras, screens, audio, and collaboration technologies. Not every effort will work, so technologists often frame the technologies as experiments or pilots and encourage faculty and staff to test various options. Yet the solution is not only a technical one; equally important is re-engineering academic and work processes to enable people to conduct their work in a seamless way regardless of the modality.

      Yup—hybrid is more than in-person plus online. We're seeing this with efforts to hold conferences online and in-person. Hybrid will be different.

    21. Many institutional leaders are considering whether to make big bets on technology to change the game at their campuses. Those big bets will have major impacts on institutional culture and the very nature of how constituents get work done.

      This is another case where schools can differentiate themselves. Also see previous discussion about how "hybrid" is more than the addition of costs of in-person and online.

    22. As a result of the pandemic, students want and expect more opportunities outside of the normal, traditional hours that institutions typically offer. They want weekend, evening, and holiday hours for everything from classes to student services to the library.

      Students want weekend, evening, and holiday hours for ... the library.

      Much of what the library offers is self-service already, but I'm trying to imagine what this means for all library services. Not just the building space, but the staff services as well.

    23. Options for reimagining the campus include (1) redesigning campus physical spaces to support hybrid learning and work, (2) addressing space crunches by encouraging administrative groups to work remotely and then converting administrative spaces to academic spaces that support learning, research, and scholarship that is better conducted on campus, and/or (3) lowering costs by reducing the physical campus space.

      I'm reminded of what has happened to space management in the Dublin Rec Center over the 15 years we've lived here. At first there was a respectable amount of space in the building for administrator offices. Over time, activities such as toddler care, the teen room, the computer lab, and various classroom spaces moved around the first floor as more spaces were converted from offices to programming space. At one point Ethan's guitar class was in a room that was being converted from a staff conference room to a classroom. Now almost all of the staff space has moved to the old city hall building down the road.

    24. The two models will coexist at many institutions, forcing leaders to consider what can be done only on campus and what can be done virtually.

      I wonder how this will factor into the earlier observation about institutions needing to specialize?

      Arguably, "hybrid" is not a mix of "on campus" and "virtually"—it is a separate thing all its own. See the related efforts to have hybrid conferences.

    25. Instructional support and IT staff must provide more training for faculty and staff, to keep them up-to-date and to ensure that they have the skills needed to teach and work securely and effectively beyond the traditional campus.

      Reinforcing that there isn't an end goal in sight—except to be more nimble to the change that is coming. "The only constant is change?" Everyone needs training in not only the technology being deployed now, but also how to learn the technology that will be coming after.

      Layer that onto how tired everyone is. I can hear: "I just want to learn what I need to know now...the rest of what's coming doesn't concern me."

    26. Yet faculty, staff, and students are tired, stressed, and overwhelmed.

      THIS is a continuing theme—everyone is tired, stressed, and overwhelmed.

    27. Learning analytics can help faculty adapt their teaching to identify and support students quickly and efficiently. Assessment technologies, although often controversial, are maturing, and with the help of learning and assessment advocates, these technologies can become more valid and better safeguard privacy.

      Too bad there are no citations here. There may be advances in safeguarding student privacy in assessment technologies, but it isn't clear.

    28. Using collaborative technologies like Slack or Microsoft Teams can foster dialogue and community around how faculty are using technology in their teaching, how they are teaching, and how they are changing the curriculum.

      It's not the technologies, though—it is the community management of the dialog that will be key. Introducing Slack or Microsoft Teams for its own sake is not going to foster the desired discussion. But the folks that could be community managers for this discussion are the already overworked instructional designers.

      There is probably a need for an eat-your-own-dogfood approach here. Whatever technologies end up being used in the classroom need to form the foundation of this discussion space.

    29. Change decisions can't be made behind closed doors. They will require dialogue across staff groups and student groups to help all stakeholders understand and agree on goals and feel that they have a hand in choices and timing.

      Clear communication and inspired/inspiring leadership will be needed.

    30. They will provide a holistic view of students, alumni, employees, resources, and more in ways that can result in beneficial outcomes. New architectures increase access to data and resources, which can offer better insights about institutional products and services and enable faster, more accurate decisions.

      This should be contrasted with the comment in point 1 above about data privacy:

      Culture clashes between data preservationists and leaders managing institutional risk and legal exposure may intensify as higher education institutions introduce more conservative records-retention policies and processes.

      This isn't reconciled here, but there will clearly be a need to prioritize (and extremes on both sides are going to make the conversation hard).

    31. digital transformation (Dx)

      The "Dx" abbreviation is going to be used constantly throughout this document. Keep this expansion in mind as you read. As I read the document in several sittings, I had to keep coming back to this definition.

    32. The coming demographic cliff—a steep drop-off in potential first-time full-time freshmen projected to arrive in 2025 due to the decline in birth rate in the 2008 recession—may further erode enrollment income at US institutions.

      I had heard this before, but I wanted to see some data. On page 3 of National Vital Statistics Reports, Vol. 61, No. 1 (8/2012) - nvsr61_01.pdf from the CDC, there is quite a clear decline in birth rates starting in 2008 and extending further.

    33. Institutions that collaborate to manage cybersecurity can share costs and expertise, both reducing the burden on individual institutions and increasing the level and effectiveness of cybersecurity at institutions of all sizes.

      Is there a role for the Open Library Foundation here?

    34. Some cybersecurity tasks can be outsourced, however, to extend staff and/or acquire specialized skills.

      In what way can service providers have offerings that distinguish them from others?

    35. What were once clear distinctions among hardware, software, cloud, and services and between primary and secondary suppliers continue to blur and overlap to the point where they're no longer distinguishable as separate categories. That raises questions and challenges about identifying the perimeter—or, where institutionally owned and managed technology infrastructure ends. Additionally, the integration between technology run in-house and that run by an external supplier continues to blur boundaries between consumers' responsibilities and suppliers' responsibilities. That, in turn, creates the challenge of clarifying which technology and data components can and should be secured by institutions versus by suppliers versus by end users and, thus, where security risk factors and responsibilities reside. Sometimes suppliers' security and privacy controls may not be as tight as institutions require or realize.

      The overlap between these areas is going to require increased communication between service providers and campus personnel, and will probably mean helping students and faculty understand where issues need to be reported.

    1. Nyquist rate

      In signal processing, the Nyquist rate, named after Harry Nyquist, specifies a sampling rate (in units of samples per second or hertz, Hz) equal to twice the highest frequency (bandwidth) of a given function or signal. With an equal or higher sampling rate, the resulting discrete-time sequence is said to be free of the distortion known as aliasing. Conversely, for a given sample rate, the corresponding Nyquist frequency in Hz is the largest bandwidth that can be sampled without aliasing, and its value is one-half the sample rate. Note that the Nyquist rate is a property of a continuous-time signal, whereas Nyquist frequency is a property of a discrete-time system. Nyquist rate - Wikipedia
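
      A minimal Python sketch of the aliasing described above (frequencies chosen purely for illustration): a 7 Hz cosine sampled at 10 Hz—below its 14 Hz Nyquist rate—produces exactly the same samples as a 3 Hz cosine, so the two tones become indistinguishable.

```python
import math

def sample(freq_hz, rate_hz, n_samples):
    """Sample a unit-amplitude cosine of freq_hz at rate_hz."""
    return [math.cos(2 * math.pi * freq_hz * n / rate_hz)
            for n in range(n_samples)]

rate = 10.0        # sampling rate (Hz); Nyquist frequency is rate / 2 = 5 Hz
true_freq = 7.0    # above the Nyquist frequency, so aliasing occurs
alias_freq = rate - true_freq  # 3 Hz: where the 7 Hz tone folds down to

undersampled = sample(true_freq, rate, 20)
low_tone = sample(alias_freq, rate, 20)

# Sample-for-sample, the undersampled 7 Hz tone is indistinguishable
# from a genuine 3 Hz tone: the aliasing distortion the quote describes.
assert all(abs(a - b) < 1e-9 for a, b in zip(undersampled, low_tone))
```

      Sampling at 14 Hz or above would keep the two tones distinguishable, which is exactly what the Nyquist rate guarantees.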

  3. Oct 2021
    1. Either you get solar power or you get trees. In California, they put their thumb on the scale of solar panels and basically said the trees have to come down.

      In a Sunnyvale, California lawsuit involving solar panels and redwood trees, the state courts picked solar panels. Two environmental efforts pitted against each other.

      The judge found that Trees Nos. 4, 5 and 6, which cast little shade when the solar panels were installed, were now collectively blocking more than 10 percent of the panels over the hot tub. Trees Nos. 1, 2 and 3 shaded the area when the panels were installed, so they were exempt, and Trees Nos. 7 and 8 did not violate the law, the judge ruled. Trees Block Solar Panels, and a Feud Ends in Court - The New York Times

    1. In 1622, the same year that Galileo was reiterating his defense of the heliocentric model of the solar system, Pope Gregory XV created the Sacred Congregation for the Propagation of the Faith—known in Latin as the Sacra Congregatio de Propaganda Fide, or the Propaganda Fide for short—a body tasked with coordinating and expanding the missionary activity of the Catholic Church

      Origins of propaganda

  4. Sep 2021
    1. Proper long-term, digital preservation involves curation: careful attention to ensure the content and associated metadata are safe and secure and are managed so that they remain usable despite changes in file formats and technologies in order to remain accessible.

      Easy to understand definition of curation in a digital preservation context.

    1. And yet, you don’t have to prove anything to get that email in the first place. I’ve had that Gmail address longer than I’ve had any one physical address in my adult life, or any phone number or any driver’s license number. The only identifier I’ve had for longer is my Social Security number. I got that from the federal government after my parents submitted proof of my identity and citizenship status. I just had to fill out a few prompts on a website to get my email address.

      On the longevity of email address assignments as identifiers.

    2. Ray Tomlinson is widely credited as the inventor of email, but the technology evolved in a piecemeal fashion, over time, with additions and improvements from a lot of people. Dave Crocker worked on an early effort to create email standards in 1977 and spent the rest of his career creating or contributing to internet mail standards, which he is still doing today. Crocker told me that email was the result of a “massive amount of increments,” most of which were reactive; each iteration was a solution to an existing problem, or someone just coming up with “a cool idea.”

      Includes a brief history of email and its lack of a security foundation.

    1. In traditional paper-based schemes, voters verify by visually inspecting that their ballot represents their intention.

      I remember seeing some research about how few voters examine the paper coming out of the ballot-marking device before they insert it into the ballot scanner.

  5. Aug 2021
    1. This report will focus primarily on the hardware and software associated with cameras, location trackers, and sensors. These technologies are common components in broader “smart city” technologies and projects, and they have the high-risk ability to collect data that can directly, or in combination with other data, identify individuals.

      Technology that can be used to track individuals...the focus of this report. There are other "smart city" technologies that are not covered in this report.

    2. technology that is capable of collecting data that can identify individuals because that data can be used to target individuals, which in turn can erode the sense of safety and inclusivity requisite for public spaces to serve as commons for democratic functions.

      The why of this is important. Its existence is a threat to democratic activity. If one is being monitored in public space, can one truly act as one feels? In some sense, this pushes people toward the median—no individualism, because that would stick out.

    3. This story, with its mission creep and mishaps, is representative of a broader set of “smart city” cautionary trends that took place in the last year. These cautionary trends call us to question if our public spaces become places where one fears punishment, how will that affect collective action and political movements?

      The anecdotes in the article come from San Diego, where police used streetlight cameras for surveillance of protesters. (The cameras were originally for traffic control and air quality monitoring.) After local activists found this, the city stopped receiving data from the cameras (they couldn't be turned off). It was later found that the police department also held back materials from a congressional inquiry on facial recognition technology.

    1. “Donor-advised funds have grown even more rapidly in number and assets and DAFs have NO LEGAL MANDATE to pay out anything each year. So donors take tax breaks immediately when they transfer their wealth to DAFs. But those DAFs are not actually legally required to actually distribute those funds to nonprofits.” So there's a lot of very wealthy people who use DAFs and private foundations for tax advantages.

      sigh — so this is a thing.

    2. And if you look at for instance, Freexian, which is an effort where Debian developers club together and get sponsorship money, so they can each spend a certain number of hours each month consulting on really important parts of Debian software and infrastructure.

      Example of an open source project that crowdsources funding to maintain the code.

    1. As shown in the previous section, accelerometers in mobile devices can allow serious invasions of user privacy. Even when other sensors, such as cameras, microphones and GPS are turned off, accelerometer data can be sufficient to obtain information about a device holder’s location, health condition, body features, age, gender, emotions and personality traits. Acceleration signals may even be used to uniquely identify a person based on biometric movement patterns and to reconstruct sequences of text entered into a device.

      Lead paragraph of the "Discussions and Implications" section, which is a high-level summary of the various ways accelerometers can track user characteristics. There are noted limitations, including controlled lab settings and position on the body where the accelerometer is located.

    2. Body movement patterns recorded by accelerometers in mobile devices have been demonstrated to be discriminative enough to differentiate between, or even uniquely identify, users.

      Might have to read further to understand this, but I wonder if these characteristics transfer from device to device...can body movement patterns detected on one device be ported to another device for tracking? Later in this section the authors say:

      Following an approach commonly referred to as device fingerprinting, users can further be told apart based on unique characteristics and features of their personal devices. Calibration errors in accelerometers, which are caused by imperfections in the manufacturing process, have been found sufficient to uniquely identify their encapsulating device.

      So maybe there are enough errors in the sensors of each device to prevent a pattern from being ported to another device?

    3. It has been shown that accelerometers in mobile devices can be exploited for user localization and reconstruction of travel trajectories, even when other localization systems, such as GPS, are disabled. In [38], Han et al. were able to geographically track a person who is driving a car based solely on accelerometer readings from the subject’s smartphone. In their approach, they first calculate the vehicle’s approximate motion trajectory using three-axis acceleration measurements from an iPhone located inside the vehicle, and then map the derived trajectory to the shape of existing routes on a map. An example application of the algorithm is displayed in Fig. 2. Han et al. describe their results as “comparable to the typical accuracy for handheld global positioning systems.”

      GPS off but accelerometer on? Your location can still be inferred from the movements.

    1. The concept is based on fundamentally wrong assumptions.

      From section 7:

      The prerequisite for meaningful progress on the legal front is that the manifold limitations of privacy self-management are recognized and treated as such by legislators. This also means to admit that our privacy laws – including recent and much-anticipated ones, such as the GDPR and the California Consumer Privacy Act – are based on wrong assumptions and therefore not truly fit for purpose. While sticking with privacy self-management may be the path of least effort in the short run, this policy ignores the long-term consequences of uninformed and involuntary privacy choices, which can be severe not only for individuals but also for society at large.

    2. this article provides an overview and classification of the manifold obstacles that render privacy self-management largely useless in practice.

      From the article's introduction:

      To underpin the debate going forward and support knowledge transfer into politics, this article provides a structured overview of arguments that scholars have brought forth in opposition of privacy self-management. These arguments concern the informedness and rationality (Sect. 2), the voluntariness (Sect. 3) and the unaccounted-for externalities (Sect. 4) of individual privacy choices. Additionally, we point out loopholes in privacy law that undermine the effectiveness of privacy self-management (Sect. 5). While our legal analysis focuses primarily on the GDPR, which is presently regarded as the most comprehensive and influential privacy regulation worldwide (Miglicco 2018; Zarsky 2016), the essence of the arguments is generally applicable to privacy laws that embrace the notice-and-choice paradigm.

    3. People's privacy choices are typically irrational, involuntary and/or circumventable due to human limitations, corporate tricks, legal loopholes and the complexities of modern data processing. Moreover, the self-management approach ignores the consequences that individual privacy choices have on other people and society at large.

      Privacy self-management is a lot of little David-and-Goliath situations. Each person has to examine the privacy implications of each decision—if the user can understand the implications. One user's actions also impact a lot of connected people, and those connected people don't get a say in the privacy choices.

  6. Jul 2021
    1. But Apple isn’t going to do any of this if they don’t think they have to, and they won’t think they have to if people aren’t calling for their heads.

      Excellent point. I think they have to. Do they know they have to? Is there enough government pressure for them to act? Does the government have realistic options at this point (e.g. Android)?

    2. But companies like Apple and Google can raise both the cost and risk of exploitation — not just everywhere, but at least on specific channels like iMessage. This could make NSO’s scaling model much harder to maintain. A world where only a handful of very rich governments can launch exploits (under very careful vetting and controlled circumstances) isn’t a great world, but it’s better than a world where any tin-pot authoritarian can cut a check to NSO and surveil their political opposition or some random journalist.

      This is an interesting point. It isn’t a question of all or nothing. There is a gradation of effort to make it harder for companies to have a business that mass-markets these exploits.

    3. An entirely separate area is surveillance and detection: Apple already performs some remote telemetry to detect processes doing weird things. This kind of telemetry could be expanded as much as possible while not destroying user privacy. While this wouldn’t necessarily stop NSO, it would make the cost of throwing these exploits quite a bit higher — and make them think twice before pushing them out to every random authoritarian government.

      There is a security/privacy trade off here. More telemetry means the device manufacturer has more privacy-busting data, which makes the manufacturer a prime target for exploitation (human and mechanical). Should a user be given a spectrum of telemetry versus protection? What about corporate/government agencies…should there be an MDM option to receive their own stream of telemetry? Their own tap of all network traffic from a device? (Is that possible in the baseband?)

    4. A case against security nihilism

      Blog post with broad outlines of how the Pegasus exploits worked, how we can’t shrug this off, and what device manufacturers could do to raise the stakes for such services.

    1. correlated a unique mobile device to Burrill when it was used consistently from 2018 until at least 2020 from the USCCB staff residence and headquarters, from meetings at which Burrill was in attendance, and was also used on numerous occasions at Burrill’s family lake house, near the residences of Burrill’s family members, and at a Wisconsin apartment in Burrill’s hometown, at which Burrill himself has been listed as a resident.

      This reporting doesn’t say if it is an app identifier or a mobile device identifier (IMSI or the OS-supplied advertising ID).

    2. Commercially available app signal data does not identify the names of app users, but instead correlates a unique numerical identifier to each mobile device using particular apps. Signal data, collected by apps after users consent to data collection, is aggregated and sold by data vendors. It can be analyzed to provide timestamped location data and usage information for each numbered device.

      No identification of the user in the signal data, and the user consents to the data gathering.

    3. the mobile device correlated to Burrill emitted hookup app signals at the USCCB staff residence, and from a street in a residential Washington neighborhood. He traveled to Las Vegas shortly thereafter, data records show.On June 22, the mobile device correlated to Burrill emitted signals from Entourage, which bills itself as Las Vegas’ “gay bathhouse.”

      Correlation with known residence and to Las Vegas on a business trip.

    4. But an analysis of app data signals correlated to Burrill’s mobile device shows the priest also visited gay bars and private residences while using a location-based hookup app in numerous cities from 2018 to 2020, even while traveling on assignment for the U.S. bishops’ conference

      High-level details of the correlation and re-identification.

    5. Pillar Investigates: USCCB gen sec Burrill resigns after sexual misconduct allegations

      Notable for the correlation of app location data with the physical presence of a device in office and home locations plus travel. Re-identification of anonymized data from a location-check-in app (Grindr).

    1. Science really needs global governance.

      What would this governance look like? Does a professional ethics/licensing program need to come to bear?

    2. a systems problem—the system provides incentives to publish fraudulent research and does not have adequate regulatory processes. Researchers progress by publishing research, and because the publication system is built on trust and peer review is not designed to detect fraud it is easy to publish fraudulent research. The business model of journals and publishers depends on publishing, preferably lots of studies as cheaply as possible. They have little incentive to check for fraud and a positive disincentive to experience reputational damage—and possibly legal risk—from retracting studies.

      A systemic problem where the current publishing structure has the wrong incentives for authors, publishers, and others. If we are to solve the access problem (with some flavor of open access), how can the incentives be changed to account for this bad research problem? Do changes to funding models have a net positive or negative effect on the trustworthiness of the published research?

    1. High Social Media

      High social media usage is correlated with belief in "the steal" and the use of violence

    2. FEAR OF “GREAT REPLACEMENT” MOST CONSISTENT FACTOR ACROSS STUDIES

      Bottom line of the studies. As the next slide on implications shows:

      1. Not just a segment of right-of-center organizations, but "a broader mass movement with violence at its core"
      2. Fundamentally a political movement of pro-Trump supporters
      3. A driver is the "Great Replacement" idea: that minorities have more rights than whites
    3. We need a fine-grained understanding of who stormed the Capitol on January 6 and who currently believe the 2020 election was stolen and would participate in a violent protest in order to know who we are dealing with and create viable solutions for the future

      Key reasons for the study—who was arrested assaulting the Capitol, understanding the national scope of the insurrectionist movement, and gauging the extent to which conservatives as a political identity are involved.

    1. it's actually been kind of healing in a way, because you see that every generation of us has to confront this idea of what it's supposed to be and sort of say, but in a spirit, in a generous spirit, in a spirit of sharing.

      To the younger generation, stand on the shoulders of giants. To the older generation, see in the younger generation the struggles that you encountered, and support their voices.

    2. And I remember listening to or trying to watch one of the Sunday morning political affairs shows. And I had no idea what they were talking about. And I didn't like how that made me feel because I didn't - and then I realized, oh, it's not for me. It's for the insiders.

      This is part of being a welcoming place…a place where you didn’t feel like you had to know the inside language before being able to participate. Don’t have someone feel stupid when they start participating in the community.

    3. But what about the people who haven't had a chance to think about that yet? Could we think about them? And I hope that there will always be some room for those folks because, you know, why should everybody have already decided everything or know everything? I feel that there has to be some place where you can find things out without being made to feel stupid. And I'm hoping that we will continue to be that place.

      This has to be a tough line to walk. At first I thought that the average NPR listener is more curious than the average citizen, so this is a pointless path to take. But then I found myself interested in this discussion, and it clearly wasn’t something that I had thought a lot about. (See also the previous episode on the history of the Supreme Court and the next episode on the Stonewall uprising.)

    4. I'm not telling you what to think. I'm telling you what to think about.

      This is key…not telling you what to think. Rather, putting information and perspectives in front of you so you can mix it with your own experience.

    5. we were really interested in putting on people on the air who were what you would call, quote-unquote, like, "regular people" because we felt their lived experience told a truth that needed to be told.

      Another part of broadening story-tellers is authentic storytelling from “regular people”.

    6. who are the rising stars in music in Ghana? Like, it wasn't war. It wasn't, you know, war crimes. It wasn't people being - you know, recovering from war. But it was daily life there. It was something that was hugely important to the people living there. And that's partly what we were trying to achieve and, I think, did achieve.

      News coverage, done well, goes beyond “war” and “crimes”. It goes into the stories that are important for the people there…who are the rising music stars, for instance. This is an important part of broadening the story-tellers.

    7. And this is why a real rendering of history is important, because people think - just like social movements, they think they just sort of happen. You just arose fully formed. No. After months of discussion, we sort of arrived in the same place and, you know, launched Tell Me More.

      Social movements don’t arrive fully formed. Look to those that have done the work to get a segment of society to this moment. Later in the episode, Michel Martin talks about standing on the shoulders of giants, and how these podcast hosts are now standing on her shoulders.

    1. These recommendation systems are getting so good that if we aren't vigilant, we're just going to end up drifting toward whatever the machine tells us we like. CHILDS: This isn't just a problem of human psychology. It's also a computer science problem. Jingjing says it becomes a feedback loop. Those little drifts add up.

      The problem with recommendation engines: if people put too much faith in them because they have worked well in the past, then there is a "drift to whatever the machine tells us we like".

    2. to figure out if recommendation systems are changing us, Jingjing and her team created a series of experiments using college students, basically fiddling with recommendations and seeing how those recommendations affected the students' behavior.

      Gediminas Adomavicius, Jesse C. Bockstedt, Shawn P. Curley, Jingjing Zhang. Effects of Online Recommendations on Consumers’ Willingness to Pay. Information Systems Research, 29 (1), 84-102. https://doi.org/10.1287/isre.2017.0703

    3. Jingjing Zhang

      Associate Professor, Fettig/Whirlpool Faculty Fellow, Indiana University Kelley School of Business

      https://kelley.iu.edu/faculty-research/faculty-directory/profile.html?id=JJZHANG

    4. The original collaborative filtering required users to tell the algorithm, hey, I like what this guy likes, thumbs up, or thumbs down. Bob and his team took this idea further. They realized that machines could figure out people's tastes on their own by grouping movies together and teasing out what they had in common based on factors that you couldn't just see or guess.

      The innovation at Netflix: synthesizing characteristics of movies to group them together.
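
      A toy sketch of the "original collaborative filtering" described here (all names and ratings are hypothetical): users are compared by the similarity of their explicit ratings, and a missing rating is predicted from like-minded users. The latent-factor approach the Netflix team used goes a step further, learning hidden movie features, which this sketch does not attempt.

```python
import math

# Toy ratings matrix: user -> {movie: rating}. All data is made up.
ratings = {
    "alice": {"Heat": 5, "Alien": 4, "Amelie": 1},
    "bob":   {"Heat": 4, "Alien": 5, "Amelie": 2, "Memento": 5},
    "carol": {"Heat": 1, "Alien": 2, "Amelie": 5, "Memento": 2},
}

def cosine(u, v):
    """Cosine similarity over the movies both users rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[m] * v[m] for m in shared)
    norm_u = math.sqrt(sum(u[m] ** 2 for m in shared))
    norm_v = math.sqrt(sum(v[m] ** 2 for m in shared))
    return dot / (norm_u * norm_v)

def predict(user, movie):
    """Similarity-weighted average of other users' ratings for the movie."""
    num = den = 0.0
    for other, their in ratings.items():
        if other == user or movie not in their:
            continue
        sim = cosine(ratings[user], their)
        num += sim * their[movie]
        den += sim
    return num / den if den else None

# Alice's tastes track Bob's more than Carol's, so the prediction for
# "Memento" (which she hasn't rated) leans toward Bob's rating of 5.
```

      The "drift" concern from the episode is easy to see even in this sketch: whatever the system predicts shapes what gets recommended, which shapes what gets rated next.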

    5. Runaway Recommendation Engine

      On Bayesian algorithms for email filtering augmented by collaborative training data. Then the "Netflix Prize" to improve its recommendation engine.

    1. a 2020 report by the Chinese consulting firm Trivium argues that the social credit system seems more experimental and banal than Western critics describe.

      Perspective from a Chinese consulting firm. To be believed?

    2. On Tyranny, Timothy Snyder
    3. Rongcheng is a coastal town in China of 670,000 people where one thousand social credit points are given to each person as a default. Behavior the authorities want to deter, such as jaywalking, will cost you points, and praiseworthy behavior is rewarded with points. Fighting with neighbors detracts five points; failure to clean up after a dog detracts ten. Donating blood earns five. Punishment comes when one falls below a threshold: bank loans or high-speed train tickets become unattainable. Rewards come in such forms such as discounted utility bills, faster internet service, or improved health care services.

      Description of a real-world implementation—if local, it seems—of the social credit system. Includes examples of demerits and merits one can receive.

    4. study

      Stoycheff, Elizabeth (2016). Under Surveillance: Examining Facebook’s Spiral of Silence Effects in the Wake of NSA Internet Monitoring. Journalism & Mass Communication Quarterly, 93(2), 296-311. https://doi.org/10.1177/1077699016630255.

    5. PEN America
    6. 2016 study

      Penney, Jonathon W. (2016). Chilling Effects: Online Surveillance and Wikipedia Use. Berkeley Technology Law Journal, 31(1), 117-182. http://dx.doi.org/10.15779/Z38SS13.

      Berkeley Law institutional repository

    7. research study

      Bateson, M., Nettle, D., & Roberts, G. (2006). Cues of being watched enhance cooperation in a real-world setting. Biology letters, 2(3), 412–414. https://doi.org/10.1098/rsbl.2006.0509.

      PubMed Central PMC1686213

    8. Heidi Boghosian on Self-Censorship and Expression

      Excerpt from the book “I Have Nothing to Hide”: And 20 Other Myths About Surveillance and Privacy — Beacon Press, copyright 2021. http://www.beacon.org/I-Have-Nothing-to-Hide-P1684.aspx

    1. Since race is a social construct it is difficult to measure. Yet data is critical to the development of standards; thus data collection must be assessed for unintended biases.

      Measuring a social construct is difficult. Yet in order to anticipate unintended biases, the attempt must be made.

    2. Gender-neutral language does not imply that the standards are developed without considering gender-specific needs and priorities.

      Gender-neutral does not mean gender isn't considered. Comment: in fact, striving for gender-neutrality may inspire a broader conversation about the nature of the standard and its applicability to a wider range of people.

    3. Inclusive, or bias-free, language uses expressions and terms that are likely to be perceived as neutral or welcoming by everyone, regardless of their gender, race, religion, age, etc. Using inclusive language can help people from diverse backgrounds feel more welcome and encourages precise, high quality work. As noted in a recent McKinsey survey, employees who feel more included are nearly three times more likely to feel excited by, and committed to, their organizations.

      Reason for using inclusive, bias-free language—it helps people feel welcome and creates an excited, committed community.

    1. Inside Facebook’s Data Wars

      My comment submitted to the NYTimes for consideration:

      If Facebook creates its own "Top 10" list, would it be believed? How far would Facebook go to devise an algorithm that shows a "Top 10" that they want the world to see based on whatever subset of internal signals they want to show? Would we know if a "Top 10" list was simply made up in the marketing department?

      Is it the responsibility of any regulator to check to see a "Top 10" list has any basis in reality? If it is no regulator's responsibility, how would well-informed citizens discover if they have been misled? What recourse would a regulator have to put Facebook on the straight-and-narrow? If Facebook doesn't have to publish internal metrics, how firm would a citizen's case for fraud in civil court be?

      If the controlling interest in Facebook is Mark Zuckerberg himself [1], how can the public hold him accountable for Facebook's actions/inactions? If Mr. Zuckerberg used his wealth to buy off members of Congress, the executive branch, and the courts, would we know?

      Correct answers only. Our democracy and perhaps the continued existence of humankind are at stake.

      [1] https://www.vox.com/recode/2019/5/30/18644755/facebook-stock-shareholder-meeting-mark-zuckerberg-vote

    2. Mr. Zuckerberg is right about one thing: Facebook is not a giant right-wing echo chamber.But it does contain a giant right-wing echo chamber — a kind of AM talk radio built into the heart of Facebook’s news ecosystem, with a hyper-engaged audience of loyal partisans who love liking, sharing and clicking on posts from right-wing pages, many of which have gotten good at serving up Facebook-optimized outrage bait at a consistent clip.

      Facebook is not a giant echo chamber, but there is an echo chamber inside of it. Is it Facebook without the echo chamber? What is the monetization factor of the echo chamber participants versus the non-participants?

    3. “And it doesn’t want to make the data available for others to do the hard work and hold them accountable.”

      Facebook is such a black box that researchers and journalists that want to hold it accountable rely on Facebook for that data.

    4. The company, blamed for everything from election interference to vaccine hesitancy, badly wants to rebuild trust with a skeptical public. But the more it shares about what happens on its platform, the more it risks exposing uncomfortable truths that could further damage its image.

      Blamed for these things, but is the research conclusive about Facebook's effect on these societal ills? Does "the more it shares about what happens on its platform" implicate it further in spreading misinformation about the election and anti-vax messaging?

    1. Some of these examples contain characters that are invalid, such as inline comments

      Sigh. Just another reminder of why JSON without comments stinks.
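
      A quick illustration of the gripe: a strict JSON parser rejects //-style comments outright, so the usual workaround is to strip them before parsing. A Python sketch (tooling for comment-tolerant variants like JSONC or JSON5 handles this natively):

```python
import json

commented = """
{
    // inline comments are not valid JSON
    "name": "example"
}
"""

# A strict parser rejects the commented document outright.
try:
    json.loads(commented)
    parsed = True
except json.JSONDecodeError:
    parsed = False

# Common (fragile) workaround: drop //-style comment lines before parsing.
stripped = "\n".join(
    line for line in commented.splitlines()
    if not line.lstrip().startswith("//")
)
data = json.loads(stripped)
```

      The line-stripping approach is fragile (it would mangle a string value that happens to contain //), which is part of why comment support really belongs in the format itself.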

    2. person, group, organization, thing, or concept

      It may be further answered later in the spec, but this list sounds like a list of abstractions. Can the subject be an actual thing like a web resource?

  7. Jun 2021
    1. MATT: That the deeply traumatic act of coming into existence in the air breathing land world.  [SOUND CLIP, Baby: [Baby crying]] MATT: The severity of it and the harshness of it forces you to adapt. ANNIE: Right. MATT: In order to survive.

      Trauma is necessary for life. Part of a story about how a baby breathes its first breath.

    1. she told me last fall that she expected this recognition of the 5 micron error and the kind of subsequent changes would take a generation or take 30 years that she hoped she would live to see it. And so, for it to happen in a year, both because of the urgency of the crisis and because of the tenacious pushing that she and others did, the reckoning that this pandemic has led to will have positive consequences for public health at long outlive this pandemic.

      What is the spark that moves the acceptance of a new finding from a generation to a year? Can that be harnessed?

      I wonder about middle school and high school science teachers; they must constantly face challenges to how they learned things, as new science forces them to teach something they had thought wasn't true.

    1. MARGARET ATWOOD Young people worry a lot more than older people. And the reason they worry a lot more than older people is that they don't know the plot of their own lives yet. They don't know how it's going to turn out for them. Will they meet their true love? Will they be successful? At my age, I kind of know how the story goes. So should I get hit by a truck tomorrow? The plot will pretty much have unfolded.

      Young people are more anxious because the course of their lives is unknown—there are so many doorways yet to be gone through. Older people have seen much of the arc of their life stories.

      Is that the root of anxiety—not knowing the outcome? Can one be at peace with that?

    1. Now, if you’re on a decentralized platform, data is distributed across many servers or computers. Those aren’t necessarily owned or operated by the creator of the platform you’re using. The power, the authority, the control is spread out. A decentralized system gives everyone more freedom, but it also means that because the data is distributed, there’s no authority who gets the final word, so it’s harder to find and remove illegal or objectionable content.

      Holy cow...listening to this brought back dormant memories. UUCP then Bitnet for point-to-point email exchange. IRC, of course. Then NNTP for newsgroups. Later in the podcast, they interview someone with Mastodon who talks about funding an instance for $500/month using Patreon. Could there be a resurgence of distributed communication tools run by dedicated hobbyists? Should there be?

    1. One thing Amazon doesn’t bring up is that athletes train for an event with a definite end date. Athletes aren’t competing day in and day out, and they have time to rest and recuperate in between.

      More on the [[Societal Cost of Advancing Technologies]] theme, along with a bit of [[Two-tier workers]].

    1. When contacted by Ars, Charter said that "Spectrum Internet retail prices, speeds, and features are consistent in each market—regardless of the competitive environment." But "retail prices" are the standard rates customers pay after promotional rates expire. Stop the Cap showed that Charter's promotional rates vary between competitive and noncompetitive areas.

      This is key to understanding the article and should be further up towards the top. The retail price is the same, but the promotional price offered to new customers is different. This isn't the full story, because the length of time the promotional price is locked in is different, as is the installation price.

    1. My own Dewey Decimal Classification is 306.765, for bisexual. But that isn’t my favorite word; I believe it reinforces the gender binary and overemphasizes sex. During my long relationship with a woman, I tried calling myself a lesbian, but that didn’t fit either. When we were over, I stopped calling myself anything.
    2. People often ask me why, in the digital age, libraries still have print books with obscure coded labels. I find comfort in categorization. But knowledge, like love, is as vast and ever-changing as the ocean.

      Related notion of the classification of physical items (of course): a physical item can only be shelved in one place...given one classification, as it were. But knowledge, like love, defies being put in one place. Nice.

    1. The published fee for a Visa Signature Preferred card on a restaurant charge, for example, is 2.7 percent.

      This is true! See page 7 of the linked document with the heading "Visa U.S.A. Consumer Credit Interchange Reimbursement Fees" (also on Wayback).

    2. Credit card rewards aren’t generally taxed like regular income, so to a certain extent, they’re even a bigger benefit than they appear on paper. In a 2019 piece for NBC News, Klein offered up a concrete example: Say a family spends $80,000 a year on a credit card and gets 1.5 percent cash back, translating to $1,200. By his estimate, that’s equivalent to about $2,000 in pre-tax earnings.

      Credit card rewards—a kind of income—are not taxed. Those who use reward cards—likely more affluent people—effectively get a tax break by making purchases with a credit card over someone who pays in cash.

      This article doesn't describe it, but I wonder to what extent this is true for debit cards. Debit cards can offer rewards, too, but the equivalent of the interchange fees is much less, I think.
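
      Klein's numbers are easy to reproduce; getting from $1,200 in rewards to "about $2,000 in pre-tax earnings" implies an assumed combined marginal tax rate of roughly 40% (my inference—the article does not state the rate he used):

```python
# Reproducing Klein's example. The 40% marginal rate is an assumption
# inferred from his figures, not stated in the article.
spend = 80_000
cash_back_rate = 0.015
marginal_tax_rate = 0.40  # assumption

rewards = spend * cash_back_rate              # untaxed cash back
pretax_equivalent = rewards / (1 - marginal_tax_rate)
```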

    1. About 81 percent of rural households are plugged into broadband, compared with about 86 percent in urban areas, according to Census Bureau data. But the number of urban households without a connection, 13.6 million, is almost three times as big as the 4.6 million rural households that don’t have one.

      Demonstrating the difference in population: 81% of rural households have broadband versus 86% of urban households, but there are 3x as many urban households without broadband access.

    1. Finally, one should probably not underestimate the importance of a haven in society that supports community without financial barriers; 75% believes that the public library strengthens the local community and environment. This is basically where the modern library really stands out from other learning and cultural institutions.

      The author calls out "haven" as an underestimated characteristic. That rings true for me as well. How can libraries build on this perception in its service offerings and its justification for existence?

    2. Haven: The public library is a haven in everyday life, where citizens find room for contemplation and take time for themselves and each other Perspective: The public library is a credible communicator of knowledge and gives citizens an enlightened and critical perspective on life. Community: The public library is a place where citizens experience togetherness – alone, or with others – and where they experience that materials and facilities are a common property without financial barriers to use. Creativity: The public library is a source of inspiration and stimulates citizens imagination. The public library can also help motivate citizens to try something new and acquire new skills

      The study found these four features of public libraries to be of significance: Haven, Perspective, Community, and Creativity. It might be interesting to compare this with the ALA Library Bill of Rights—which speaks almost exclusively of content ("credible communicator of knowledge")—or Ranganathan's Five Laws of Library Science—which, too, is almost all about content. If these are the ultimate metrics that libraries are using to guide their services, what is being missed by not including the other three facets?

    1. From 1999 to 2017, the number of alcohol-related deaths in the U.S. doubled, to more than 70,000 a year—making alcohol one of the leading drivers of the decline in American life expectancy.

      Cites NIH news release.

    2. he later became one of the country’s leading whiskey distillers. But he nonetheless took to moralizing when it came to other people’s drinking, which in 1789 he called “the ruin of half the workmen in this Country.”

      George Washington was "one of the country's leading whiskey distillers"? Again...verify before reuse.

    3. The Mayflower landed at Plymouth Rock because, the crew feared, the Pilgrims were going through the beer too quickly.

      Interesting anecdote; I've never heard this before. If used further, verify with other sources.

  8. May 2021
    1. Newsrooms ought to consider adopting tools to suit their workflows and make link preservation a seamless part of the journalistic process. Partnerships between library and information professionals and digital newsrooms would be fruitful for creating these strategies.

      One such tool that could be used is [[RobustLinks]], which ties into web archive services. Such services could even be run by the publisher itself.

    2. Thirteen percent of intact links from that sample of 4,500 had drifted significantly since the Times published them. Four percent of reachable links published in articles from 2019 had drifted, as compared to 25 percent of reachable links from 2009. 

      Researchers conducted a human review of 4,500 links.

    3. Of these deep links, 25 percent of all links were completely inaccessible. Linkrot became more common over time: 6 percent of links from 2018 had rotted, as compared to 43 percent of links from 2008 and 72 percent of links from 1998. Fifty-three percent of all articles that contained deep links had at least one rotted link. 

      Analyzing 2.3m links from 550k articles. About 1.6m were "deep links" (beyond the home page). Link rot appears as a linear function over time.
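
      A quick least-squares check of the "linear function over time" claim against the three quoted data points, assuming the study was published around 2021 (so the article ages are my approximation):

```python
# Rot fraction by publication year, from the figures quoted above.
points = {2018: 0.06, 2008: 0.43, 1998: 0.72}
ages = [2021 - year for year in points]   # approx. article age at study time
rot = list(points.values())

# Least-squares slope: rot fraction gained per year of article age.
n = len(ages)
mean_a = sum(ages) / n
mean_r = sum(rot) / n
slope = sum((a - mean_a) * (r - mean_r) for a, r in zip(ages, rot)) \
        / sum((a - mean_a) ** 2 for a in ages)
```

      The fit comes out to roughly 3.3 percentage points of rot per year of age—about a third of deep links lost per decade—consistent with the annotation's read that the decay is close to linear.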

    1. She understood that, you know, we have power and sometimes you have to wait ‘til the moment is right.

      Quote from Kitt Shapiro, daughter of Eartha Kitt, on how her mother reacted when she was thrown out of an amusement park, only to be at a photoshoot in front of the park days later, remarking to the press how she had been asked to leave. An example of using personal power to effect change with a greater impact.

    1. the reality is when we founded the company, there was no centralized data on how these attacks happened. And we felt that the first thing you have to do to solve the problem is to collect the data. And I think we've done that very well.

      Although being a company that has its basis in helping others respond to ransomware attacks does perpetuate the cycle of attacks, being a company in the space with a wide view of the landscape gives them data that can be used to ultimately end the practice.

    2. What I would say is that the contributory factors that have led us to where we are today are as much socioeconomic as they are other things. There are such low barriers to entry to cybercrime, and there are lots of well-educated, sometimes STEM-educated individuals in lots of parts of the world. They don't have the job prospects that will pay them the money that they aspire to make.

      Contributing factors:

      • Low barriers to entry
      • Well-educated people with poor job prospects
      • Jurisdictions that look the other way because of income for local economy
    3. The answer is you have very little, but you still have to find ways to negotiate successfully on behalf of your client. You can't just concede. You can't look desperate. And so you have to find ways to draw the negotiation to some semblance of a successful conclusion.

      Negotiating tactic: you may have little leverage, but you can't concede and you can't look desperate. Keep your eye on the goal of having a successful outcome.

    4. "It's not a foregone conclusion that a company has to pay a ransom," he says. Large companies may need days to figure out whether their data is safely backed up. They can start talking just to buy time. "We'll kick off negotiation, knowing that a very likely outcome is that we actually don't end up paying."

      First—there are companies that specialize in negotiating ransomware attack responses. Second, it is entirely possible that a company may start negotiations to stave off harsher ransoms while figuring out whether its internal processes can safely recover systems without paying the ransom.

    1. Bitdefender recognized that DarkSide might correct the flaw, Botezatu said. “We are well aware that attackers are agile and adapt to our decryptors.” But DarkSide might have “spotted the issue” anyway. “We don’t believe in ransomware decryptors made silently available. Attackers will learn about their existence by impersonating home users or companies in need, while the vast majority of victims will have no idea that they can get their data back for free.”

      Counter-argument: the attackers would have found this vulnerability anyway, and a public announcement levels the field for everyone that is affected—not just those with the means to know where to go for help.

    2. Wosar said that publicly releasing tools, as Bitdefender did, has become riskier as ransoms have soared and the gangs have grown wealthier and more technically adept.

      Careless disclosure caused ransomware creators to improve their code before it was necessary, which raises societal cost of dealing with attacks.

    3. The incident also shows how antivirus companies eager to make a name for themselves sometimes violate one of the cardinal rules of the cat-and-mouse game of cyber-warfare: Don’t let your opponents know what you’ve figured out.

      Individual good (promotion of the company's skills) over the public good (reducing the impact of the ransomware attacks). #ITethics

    1. The format of this property depends on the current value of sort. When search_after is used in conjunction with a chronological sort value—e.g. updated, created—this parameter should be formatted as an ISO 8601 string. It may also be formatted in ms (milliseconds) since the Epoch.

      A sort order of asc also seems to be a requirement, otherwise the API returns earlier hits.
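
      A sketch of how that cursor pagination works with sort=updated and order=asc: each request passes the previous page's last "updated" timestamp as search_after. The fetch function below is a stand-in for the real HTTP call (e.g. GET /api/search on the Hypothesis API), so the endpoint details are assumed rather than verified:

```python
# Seven fake rows, each an ISO 8601 "updated" timestamp, sorted ascending.
ROWS = sorted(f"2021-05-0{d}T00:00:00+00:00" for d in range(1, 8))

def fetch(search_after=None, limit=3):
    # Stand-in for the real search endpoint with sort=updated, order=asc.
    # ISO 8601 strings in a fixed format compare correctly as text.
    rows = ROWS
    if search_after is not None:
        rows = [r for r in rows if r > search_after]  # strictly after the cursor
    return rows[:limit]

def fetch_all(limit=3):
    # Walk the cursor until an empty page signals the end of the result set.
    results, cursor = [], None
    while True:
        page = fetch(search_after=cursor, limit=limit)
        if not page:
            return results
        results.extend(page)
        cursor = page[-1]  # last row's "updated" value drives the next page
```

      This also shows why the ascending sort matters: the cursor only ever filters for rows strictly after the last timestamp seen, so a descending sort would keep returning earlier hits instead of advancing.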

    1. The final challenge was about assessing the level of use of different parts of the Schema.org model. If we wanted to propose a change in how a term was documented or suggest a revision to its expected values, it is difficult to assess the potential impact of that change. There’s no easy way to see which applications might be relying on specific parts of the model. Or how many people are publishing data that uses different terms.

      Anticipate difficulty—or at least potential confusion—when taking a broader, open community's vocabulary and making narrower definitions that fit your domain.

    2. Too much flexibility made it harder for implementers to understand what data would be most useful to publish. And how to do it well. Many publishers were building new services to expose the data so they needed a clearer specification for their development teams. We addressed this in Version 2 of the specifications by considerably tightening up the requirements. We defined which terms were required or just recommended (and why). And added cardinalities and legal values for terms. Our specification became a more formal, extended profile of Schema.org. This also allowed us build a data validator that is now being released and maintained alongside the specifications.

      Suggestions for communities considering using [[SchemaOrg]] as a basis. These suggestions are of particular importance for implementers who are not used to open standards development work. Be more prescriptive, which offers the option of setting up a conformance validator.

    3. Our initial community sessions around the OpenActive standards involved demonstrating how well the existing Schema.org model fitted the core requirements. And exploring where additional work was needed. This meant we skipped any wrangling around how to describe events and instead focused on what we wanted to say about them. Important early questions focused on what information would potential participants find helpful in understanding whether this is specific activity or event is something that they might want to try? For example, details like: what activities they involved and for what level of competency?

      Applying [[SchemaOrg]] to a particular use case meant skipping the already-defined parts (like describing events) and focusing on information and terms that are specific to the target domain.
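
      A hypothetical sketch of what that looks like in practice: the event scaffolding comes straight from Schema.org, while the domain-specific details (activity, participant level) are the parts a profile has to pin down. The values and the use of audience for competency level here are illustrative, not OpenActive's actual vocabulary:

```json
{
  "@context": "https://schema.org",
  "@type": "Event",
  "name": "Beginner 5k Run Club",
  "startDate": "2021-05-19T18:30:00+01:00",
  "location": {
    "@type": "Place",
    "name": "Riverside Park"
  },
  "about": "Running",
  "audience": {
    "@type": "Audience",
    "audienceType": "Beginner"
  }
}
```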

    4. For many people Schema.org may be more synonymous with publishing data for use by search engines. But as a project its goal is much broader, it is “a collaborative, community activity with a mission to create, maintain, and promote schemas for structured data“.

      Broader use of [[SchemaOrg]]

    1. Workers were extremely limited in functionality to start; just a bit of stateless Javascript code running in a V8 isolate, but as close to users as possible. In 2018 Cloudflare added a key-value store, giving Workers access to highly distributed eventually-consistent data storage; in 2020 the company introduced Workers Unbound, dramatically expanding Workers capabilities, and Durable Objects, which not only store data but also state, which means a single source of truth.

      With a CDN foundation, Cloudflare added Workers to move processing to the edge. Then it added eventually-consistent key-value storage. Now it is adding "Durable Objects" of data-and-state that move across the Cloudflare network to where they are closest to the users.

    2. What Cloudflare had in its favor, though, was the most potent advantage on the Internet: the service, much like Google a decade-earlier with its link-based ranking system, got better with use. This was because Cloudflare paired its content delivery network with DDoS protection; the latter was extremely attractive to websites, gave Cloudflare an in with ISPs who valued the protection to build point-of-presence servers around the world, and, critically, gave Cloudflare better-and-better data about how data flowed around the world (improving its service) even as it improved its CDN capabilities.

      [[DDoS]] protection made the service valuable to ISPs as well as content providers.

    3. That was basically Prince’s value proposition: Cloudflare’s CDN would be cheaper (free), simpler (just change DNS servers), smaller (only 5 servers to start), and more convenient (ridiculously easy!).

      How Cloudflare would use [[Innovator's Dilemma]] principles to start at the fringe and move up-market.

    1. BRENDAN CHAMBERLAIN-SIMON It really does, and it's so funny. I went to visit my parents back in June. I got to drive the Mars rover from my childhood bed. You know, my mom used to say, you'll never make anything happen if you don't get out of bed. [BROOKE CHUCKLES] Touché.

      Good quote about what you can do from your childhood bedroom.

    1. Colati argues in the panel (55:52) that our perceived notions of quality should go down, such as digitization at lower resolution

      Colati's argument was for a refocusing of effort towards lower quality reference scans for a wider range of material, then see what floats to the top for better treatment. This comment was also made in the context of machine-generated metadata as a way to try to get past the human-generated metadata bottleneck.

    2. Their audience is, largely, academic library and IT administrators, although sometimes they let some of folks in the “unwashed masses” (26:44) attend if we’re presenting.

      "unwashed masses" is an unfair pull from the talk. Scheinfeldt's reference was not to CNI attendance, but to the "democratization of access to [archives]"

    3. Sourcery endeavors to make document requests “easy” by “[giving] archival staff and patrons one easy platform for requesting and receiving scans”.

      Definition of "Sourcery"

    1. Scheinfeldt

      33:08 — there has been a bias that when an archival document is digitized, it is done at the highest quality level and put in the best system with the most complete metadata; that there is only one scan of a document on the archive's website; that there can't be more than one scan of a document.

      But there are already parallel digitization paths, mostly driven by individual researchers for reference needs. Do they need to be integrated or should they exist in parallel?

      56:54 — Discovery though footnotes in books is underestimated by archivists versus use of finding aids. Sourcery founding idea was linking footnotes to finding aids for serendipitous discovery.

    2. Greg

      31:40 — Analog collections are managed at the collection or box or folder level. Digitizing an item in a folder almost mandates item-level metadata.

      55:49 — Turn the formal digitization model on its head...put as much out there as possible in whatever low quality you can do, and then see what rises to the top.

    3. Sourcery Project

      Does the Sourcery Project provide new signals to archives on where to prioritize effort for systematic digitization?

      Cliff, prompted by audience text questions, asks if there is a place for outreach for remote access to archives...to have a digitization specialist on a call with a remote scholar flipping through the contents of a folder or box?

    4. Dan

      29:00 — Opens up collections to access beyond credentialed users.

    5. Barbara

      28:00 — Scholars need different types of sources...for reference, for publication, etc. Sourcery becomes a new source.

    6. Tom

      27:30 — Software as a provocation, and in this case a provocation for openness...collections that are not open on Sourcery would be cited fewer times; encouraging closed archives to open up their collections.

    1. The investigators behind the report found that nearly 80% of the comments funded by the broadband industry were collected by lead generation companies that offered consumers various rewards in exchange for their information. "Marketing offers varied widely, and included everything from discounted children's movies to free trials of male enhancement products," the report reads. The broadband industry would then run additional solicitations alongside those promotions, asking consumers to join the anti-net neutrality campaign, according to the report. But the lead generation companies did not always run those solicitations, the report says. "Instead, they copied names and addresses they had purchased or collected months or years earlier through unrelated lead generation efforts, and passed it off as information submitted by consumers who had agreed to join the broadband industry's campaign," it reads.One lead generator went so far as to use information obtained through a data breach to submit fake comments.

      Did broadband companies have deniability because they used these "lead generation" companies?

    1. EDWARD DOLNICK: He came out with how gravity works, how light works, how rainbows work, how the tides work.

      Isaac Newton, around 1665, is sent home from Cambridge University (UK) because of a plague. In the course of one summer, he works out all sorts of scientific findings, including the basics of orbital mechanics.

  9. Apr 2021
    1. Facebook, on the other hand, is worried that if you have a choice, you'll choose not to let it track you, which would be bad for Facebook. The social media giant literally doesn't want you to have a choice because it's more concerned about what's good for Facebook than what's good for users.  

      Facebook's position is to benefit Facebook, not the user.

    2. Craig Federighi, Apple's senior vice president of software engineering, told The Wall Street Journal's Joanna Stern that the company's goal is to "give users a choice." Those four words are at the core of the problem with the position Facebook has taken since Apple announced the changes last year at its developer conference.

      Apple is saying that giving users the choice over how their data is used on their devices is "the right thing".

    1. If you've seen Denis Shirayev's upscaled historical videos, you've seen the past enhanced by a touch of the future. He takes videos scanned from very old films, like our poignant A Trip Down Market Street Before the Fire, shot just days before the 1906 quake and fire that devastated San Francisco, upscales them to 4K, smoothes out jitter and adds color. (Today any video editor can make something almost as good using off-the-shelf tools like Topaz Video Enhance AI.) Shirayev's videos are beautiful and compelling, but they show you something that never was. They're not archival; they're fiction

      Applying AI enhancements to historical artifacts introduces the bias of the AI algorithm to the representation.

    2. Registries are emerging to authenticate sources and provenance, and perhaps even indemnify purchasers against false representations by sellers. These have long existed in the collectibles business. Rare coins are frequently processed by trusted grading and authentication services, which charge to inspect coins and then encapsulate them in sealed plastic slabs.

      These registration services—providing authenticity and quality grading—don’t come for free and the cost of them must be somehow factored into the blockchain transaction.

    3. While the blockchain is supposed to draw an unbroken link between creator/tokenizer and purchaser, it's just a record of transactions that might be tainted or even bogus. We know the original Mona Lisa resides in the Louvre, but it's very hard to identify who really created and who owns many of the millions of creative works made in the analog era.

      NFTs do not answer the question of provenance. A statement is just attributed to a blockchain address. The provenance is only as secure as that blockchain address is recognized and not compromised. Black-and-white...it is or it isn’t.

    4. This would worsen an already bad situation, where institutions like our Library of Congress hold physical copies of millions of films, TV programs, and recordings that can't be touched because someone else holds the copyright. Ideally, archives and museums should own and control both the physical and digital states of its collections. That won't happen if they have to sell or license NFTs in order to survive.

      I don’t see the connection here—if the archive holds the copyright, why would selling an NFT on an object prevent them from controlling the physical and digital archives?

    5. Law professor Tonya M. Evans optimistically suggests that crypto art offers Black artists and communities opportunities to bypass white art gatekeepers and "capture and own the value of the culture that they produce."

      NFTs may help underrepresented artists find support by bypassing gatekeepers.

    6. By design, archives are deliberate and thoughtful, with a timeline designed to preserve culture "forever." They're not built to nimbly weather disruption.

      Archives are haphazardly scattered through culture. Deliberate and thoughtful, but rigid and typically underfunded.

    7. I give old films away for free. It started in 1999 when I was seduced by the promise, excitement, and just-felt-rightness of the gift economy. Not 30 seconds after we first met, Internet Archive founder Brewster Kahle asked me, "Want to put your film archives online for free?"

      Opinion piece author is Rick Prelinger

    1. NISO Standard for the ResourceSync Framework (ResourceSync Framework, 2014). This would facilitate the automatic updating of the links upon changes to any of the versions.

      [[ResourceSync]] to "facilitate the automatic updating of links"...as described by [[Herbert Van de Sompel]] through John.

    2. So, what would it take to build what I would call an ‘Open Web Smart Link’, which would facilitate a user getting to a shared version of an article via a clearly labelled link so that the user knows what to expect? Librarians have been working on things like this for many years (Sugita et al., 2007), but I am not aware of any publishers adopting similar linking technologies.

      John's definition of what something like [[GetFTR]] would do.

    3. Some readers will know that they need the article of record. A very high percentage, however, who are reading this article for the first time and want to follow a citation simply need to reassure themselves that they understand the argument being presented. They will simply need read‐access to the cited work.

      Some users will need the article-of-record. Others ("a very high percentage", John says) can make use of something that isn't the article-of-record.

    4. the users’ need in this case is a link or a set of links that give them their best access choices. In my opinion, these links should be revealing as to whether or not clicking on them will get you to the full text or not.

      John sees the need for what [[GetFTR]] is trying to solve.

    5. But inside of a library that does not subscribe to this particular journal, this single open access article is very unlikely to participate in any of the discovery services, and when other articles cite her article, the link‐resolvers that would normally provide the reader with a direct link to her article will not do so.

      Because library tools for accessing journal articles are tied to the journal level, this one open access article in a journal for which the library does not have a subscription cannot be discovered.

    6. it does cover the landscape of where there might be unnecessary friction between someone's intent to share and a user's desire to discover

      Purpose of the Open Content Discovery Grid

    7. If an institution passes an open access policy, it is in large part motivated by having the results of scholarship at that institution have maximum reach and influence. If a funding agency establishes an open access mandate, as most of them have, it often stems from a desire to accelerate research in areas in which they provide funding. If a scholarly publisher comes out with an open access journal or offers for a fee to make certain articles ‘open’, they should certainly seek to have that article or that journal achieve maximum reach. And scholars/researchers who share their papers on personal websites or on scholar sites, like academia.edu or ResearchGate, tell me that this is their way of making sure that colleagues in their field have access to their works.

      Reasons why institutions, funding agencies, scholarly publishers, and authors post content in open access.

    8. ‘helping people find things that they need, but are not looking for’

      John Dove's definition of serendipity in discovery.

    1. This is an example of a dark pattern: design that manipulates or heavily influences users to make certain choices. Instagram uses terms like “activity” and “personalized” instead of “tracking” and “targeting,” so the user may not realize what they’re actually giving the app permission to do. Most people don’t want Instagram and its parent company, Facebook, to know everything they do and everywhere they go. But a “better experience” sounds like a good thing, so Instagram makes the option it wants users to select more prominent and attractive than the one it hopes they’ll avoid.

      Definition of [[Dark Pattern]].

      Article covers legislative efforts in states and nationally to ban dark patterns.

    1. No provider or user of any interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

      The 26 words of Section 230 of the Communications Decency Act of 1996.

    2. So what he's saying here is if someone had run these same ads about these T-shirts in the classifieds section of a newspaper, Ken could have sued the newspaper. He might not have won that lawsuit, but he could have had his day in court. But what this ruling in Ken's case means is he doesn't even get to argue the merits of his case. He simply cannot sue AOL over these posts, period.

      Why Section 230 made internet platforms different from newspapers—newspapers could be sued for publishing inaccurate information. (Might not have won, but at least the avenue was open.)

    1. the other thing is this whole thing of the military having this culture that you keep things secret, which means that it's very hard to have, like, an open - and it's very top-down. So it's very hard to have an open discussion about - like, a scientific discussion going on around these topics. I mean, now I make it sound like they are very different from the rest of us, but in a way, they are just human beings. And you can easily wind yourself up in some kind of explanation. If you have a few authorities telling you how things are, you can easily start to collect evidence that that must be how it was.

      It is sort of like "group-think", but enforced in a rigid, top-down structure such that you can't question it—you don't know to question it.

  10. Aug 2019
    1. pseudonymous identifiers
    2. eduPersonEntitlement
    3. Each of these use cases has a different demand of the metadata about the user.

      Implementers of OAuth provide good examples of attribute release policy pages.

    4. Let’s consider first the issue that a user is logged into an online system, individually and therefore can be tracked.

      Being "logged into an online system" is reminiscent of the days of CompuServe and AOL. Today it is about being a node on a network; "logged in" isn't as accurate as "connected".
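The pseudonymous identifiers mentioned in these annotations are typically pairwise: an identity provider gives the same user a different stable identifier at each service, so a user can be recognized on return visits without two services being able to correlate their records. The sketch below is not from the annotated article; it is a minimal illustration of that idea (the user name, service hostnames, and salt are all hypothetical), loosely modeled on how SAML pairwise identifiers behave:

```python
import hashlib


def pairwise_pseudonymous_id(user_id: str, service_provider: str, salt: str) -> str:
    """Derive a per-service pseudonymous identifier for a user.

    The same (user, service) pair always yields the same identifier,
    but identifiers at two different services cannot be correlated
    without knowing the identity provider's secret salt.
    """
    material = f"{salt}!{user_id}!{service_provider}".encode("utf-8")
    return hashlib.sha256(material).hexdigest()


# Hypothetical user and services: stable within a service, distinct across services.
id_a = pairwise_pseudonymous_id("jdoe", "journals.example.org", "idp-secret-salt")
id_b = pairwise_pseudonymous_id("jdoe", "publisher.example.com", "idp-secret-salt")
assert id_a == pairwise_pseudonymous_id("jdoe", "journals.example.org", "idp-secret-salt")
assert id_a != id_b
```

A real identity provider would store the salt securely and might use a keyed construction such as HMAC instead of a bare hash, but the privacy property is the same: each service sees a consistent pseudonym, and no service sees the user's real identity.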

  11. Feb 2017
    1. Well the leaks are real. You're the one that wrote about them and reported them, I mean the leaks are real. You know what they said, you saw it and the leaks are absolutely real. The news is fake because so much of the news is fake.

      The leaks are real, but the news based on the leaked information is false?