221 Matching Annotations
  1. Nov 2024
    1. this is a graph showing the average connection speed of the G7 countries from 2007 to 2012, and the average connection speed hasn't increased as much as other things like processing power or storage

      for - stats - internet - average connection speed - hasn't increased as much as storage and processing power

    1. A chunk (literally a 'piece') is a collection of elements that have strong associations with one another. Together they form a meaningful unit of information. We use these chunks, large or small, in our internal information-processing and memory system, because our brain likes logic and predictable patterns. Breaking information into chunks happens automatically and continuously, but it can also be done deliberately; that is called goal-oriented chunking. Our brain can only hold a limited number of items in short-term memory, but by grouping many pieces of data into smaller chunks of information, we can push the limits of our memory and thus process and remember more information.

      Chapeau! A Belgian website raises this in the context of healthy living.

  2. Oct 2024
    1. process your inbox

      How does one get around the idea of "processing", which is a stumbling block to the fun?

      processed notes are akin to the amount of nutritive value in processed food...

    2. Engaging with the slip box should feel exciting, not anxiety-producing.

      I often find that people who discuss "workflows" and the idea of "processing" their notes are the ones who are falling trap to the anxiety-producing side of the work.

      BD should have found more exciting words for "processing", which he uses two more times in the next paragraph.

      This relates to Luhmann's quote about only doing what is easy/fun/flow:
      - https://hypothes.is/a/TQyC1q1HEe2J9fOtlKPXmA
      - https://hypothes.is/a/EyKrfK1WEe2RpEuwUuFA7A

      Compare:
      - being trapped in the box: https://hypothes.is/a/AY7ABO0qEeympasqOZHoMQ
      - the idea of drudgery in the phrase "word processing"

  3. Aug 2024
    1. Typewriter Video Series - Episode 147: Font Sizes and the Writing Process by [[Joe Van Cleave]]

      typewriters for note making

      double or 1 1/2 spacing with smaller typefaces may be more efficient for drafting documents, especially first drafts

      editing on actual paper can be more useful for some

      Drafting on a full sheet folded in half provides a book-like reading experience for reading/editing and provides an automatic backing sheet

      typewritten (or printed) sheets may be easier to see and revise than digital formats which may hide text the way ancient scrolls did for those who read them.

      Jack Kerouac used rolls of paper to provide continuous writing experience. Doesn't waste the margins of paper at the top/bottom. This may be very useful for first drafts.

      JVC likes to thread rolls of paper into typewriters opposite to the original curl so as to flatten the paper out in the end.

    1. For true deep processing and learning (intellectualism), one must think beyond the single source being consumed and relate it to everything one already knows, while keeping selective attention in mind for true learning and thinking.

      This process is habitualized by means of the Zettelkasten and further aided by tools like hypothes.is.

  4. Jul 2024
    1. Whoosh provides methods for computing the “key terms” of a set of documents. For these methods, “key terms” basically means terms that are frequent in the given documents, but relatively infrequent in the indexed collection as a whole.

      Very interesting method and way of looking at the signal: what makes a document exceptional is what is common within it but uncommon in the collection as a whole.
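
      The underlying idea ("frequent in the given documents, rare in the collection as a whole") can be sketched in a few lines of Python. This is a rough TF-IDF-style illustration of the signal, not Whoosh's actual key-terms implementation; the toy documents are made up.

```python
from collections import Counter
from math import log

def key_terms(selected_docs, collection, num_terms=10):
    """Score terms that are frequent in selected_docs but rare across the
    whole collection (a rough TF-IDF-style heuristic, not Whoosh's exact math).
    Each document is a list of already-tokenized terms."""
    local = Counter(t for doc in selected_docs for t in doc)
    # document frequency across the whole indexed collection
    df = Counter(t for doc in collection for t in set(doc))
    n_docs = len(collection)
    scores = {
        term: count * log(n_docs / (1 + df[term]))
        for term, count in local.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:num_terms]

docs = [["whoosh", "index", "terms"], ["whoosh", "search", "python"]]
corpus = docs + [["cooking", "recipes"], ["python", "snakes"], ["search", "engine"]]
print(key_terms(docs, corpus, num_terms=3))
```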

  5. Jun 2024
    1. Testing culture also discourages deep reading, critics say, because it emphasizes close reading of excerpts, for example, to study a particular literary technique, rather than reading entire works.

      Indeed. But testing in general, as it is currently done in modern formal education, encourages shallow learning rather than deep learning.

      Why? Because graded tests push students to start studying at most three days before the test, so the knowledge lands in short-term memory rather than long-term memory, rendering the process of learning virtually useless even though they "pass" the curriculum.

      I know this because I was such a student, and saw it all around me with virtually every other student I met, and I was in HAVO, a level not considered "low".

      It does not help that teachers, or the system, expect students to know how to learn (efficiently) without it ever being taught to them.

      My message to the system: start teaching students how to learn the moment they enter high school

  6. May 2024
    1. Matthew van der Hoorn Yes totally agree but could be used for creating a draft to work with, that's always the angle I try to take but hear what you are saying Matthew!

      Reply to Nidhi Sachdeva, PhD: Just went through the micro-lesson itself. In the context of teachers using it to generate instruction examples, I do not argue against that. The teacher does not have to learn the content, or so I hope.

      However, I would argue that the learners themselves should try to come up with examples or analogies, etc. But this depends on the learner's learning skills, which should be taught in schools in the first place.

    2. ***Deep Processing*** -> It's important in learning. It's when our brain constructs meaning and says, "Ah, I get it, this makes sense." -> It's when new knowledge establishes connections to your pre-existing knowledge. -> When done well, it's what makes the knowledge easily retrievable when you need it. How do we achieve deep processing in learning? 👉🏽 STORIES, EXPLANATIONS, EXAMPLES, ANALOGIES and more - they all promote deep meaningful processing. 🤔 BUT, it's not always easy to come up with stories and examples. It's also time-consuming. You can ask your AI buddies to help with that. We have it now, let's leverage it. Here's a microlesson developed on 7taps Microlearning about this topic.

      Reply to Nidhi Sachdeva: I agree mostly, but I would advise against using AI for this. If your brain is not doing the work (the AI is coming up with the story/analogy), it is much less effective. Dr. Sönke Ahrens already said: "He who does the effort, does the learning."

      I would bet that Cognitive Load Theory would also show that intrinsic cognitive load (the load stemming from building or automating cognitive schemas) is much less well optimized when another person, or the AI, comes up with the analogies.


      https://www.linkedin.com/feed/update/urn:li:activity:7199396764536221698/

  7. Apr 2024
  8. Mar 2024
    1. When processing an item in your in list the first question you need to ask is: is it actionable?—in other words, do you need to do something? If the answer is NO, you either throw it away if you no longer need it, keep it as reference material (“I will probably need this article again some day…”), add it to a some day/maybe list (for things like “learn Indonesian”), or incubate it. Wait, what‽ Sit on it? Yes, sort of. If it’s something that you want to remind yourself about later (“I really didn’t understand this article, I should have a look at it again in two weeks”) it should go into your calendar or your tickler file which will soon be explained. (Yes, even the weird name.)

      First, ask yourself whether the item is actionable. If not, there is a series of things you might do with it: throw it away, keep it as reference, add it to a someday/maybe list, or incubate it (calendar/tickler file).
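
      As a toy illustration only (the item fields below are invented, not taken from GTD or the quoted text), the decision flow might be modeled like this:

```python
def process_item(item):
    """Toy sketch of the GTD 'is it actionable?' decision from the passage.
    The fields (actionable, still_needed, remind_later, might_do_someday)
    are invented purely for illustration."""
    if item.get("actionable"):
        return "do / defer / delegate"        # handled elsewhere in GTD
    if not item.get("still_needed"):
        return "trash"
    if item.get("remind_later"):
        return "calendar / tickler file"      # incubate it
    if item.get("might_do_someday"):
        return "someday/maybe list"
    return "reference material"

print(process_item({"actionable": False, "still_needed": True, "remind_later": True}))
```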

    1. Samuel Hartlib was well aware of this improvement. While extolling the clever invention of Harrison, Hartlib noted that combinations and links constituted the ‘argumentative part’ of the card index.[60]

      Hartlib Papers 30/4/47A, Ephemerides 1640, Part 2.

      In extolling the Ark of Studies created by Thomas Harrison, Samuel Hartlib indicated that the combinations of information and the potential links between them created the "argumentative part" of the system. In some sense this seems analogous to the processing power of an information system, if not specifically to creating its consciousness.

    1. 1:35:00 The gap effect, or spacing effect: interleaving time during which information is processed. Embracing boredom and taking non-stimulative breaks aids in this.

    1. some of our older applications rely substantially on manual extract, transform and load (ETL) processes to pass data from one system to another. This substantially increases the volume of customer and staff data in transit on the network, which in a modern data management and reporting infrastructure would be encapsulated in secure, automated end-to-end

      Reliance on ETL seen as risky

      I’m not convinced about this. Real-time API connectivity between systems is a great goal…very responsive to changes filtering through disparate systems. But a lot of “modern” processing is still done by ETL batches (sometimes daily, sometimes hourly, sometimes every minute).
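
      For concreteness, a minimal batch ETL job of the kind the annotation mentions might look like the sketch below; the file names and field names are invented for illustration.

```python
import csv
import json
from datetime import datetime, timezone

# A toy batch ETL job: extract rows from a CSV export, transform them, and
# load them into a line-delimited JSON file that another system could ingest.
# File names and field names are invented for illustration.

with open("customers.csv", "w", newline="") as f:   # create a fake source file
    writer = csv.writer(f)
    writer.writerows([["customer_id", "amount"], ["1", "19.99"], ["2", "5.00"]])

def extract(path):
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    for row in rows:
        yield {
            "customer_id": int(row["customer_id"]),
            "amount": float(row["amount"]),
            "loaded_at": datetime.now(timezone.utc).isoformat(),
        }

def load(records, path):
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

load(transform(extract("customers.csv")), "customers.jsonl")
```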

  9. Feb 2024
  10. Jan 2024
    1. Deep processing is the foundation of all learning. It refers to your ability to think about information critically, find relationships, make sense of new information, and organise it into meaningful knowledge in your memory.
    1. I am particularly interested in how performance style and expressive vocabulary change over time, as evidenced on sound recordings. I enjoy exploring aesthetic questions both empirically, through experiments and measurements, and philosophically, i.e. in their historical and cultural context. I try to embrace interdisciplinary approaches (e.g. cognitive neuroscience and perception as well as ethnographic and archival work) and learn from cross-cultural investigations. I particularly like working with performers who are interested in research.
  11. Nov 2023
    1. Splitting a “transaction” into a multistage pipeline of stream processors allows each stage to make progress based only on local data; it ensures that one partition is never blocked waiting for communication or coordination with another partition.

      Event logs allow for gradual processing of events.
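
      A toy sketch of the idea (invented event shapes, with in-memory deques standing in for partitioned event logs): each stage reads from its own input log at its own pace and appends to the next, so no stage blocks waiting on another.

```python
from collections import deque

# Toy multi-stage pipeline over event logs: each stage consumes its own input
# log and appends to the next one, making progress on local data only.
orders_log = deque([{"order_id": 1, "qty": 2}, {"order_id": 2, "qty": 5}])
priced_log = deque()
shipped_log = deque()

def pricing_stage():
    while orders_log:                      # progresses using only its own log
        event = orders_log.popleft()
        priced_log.append({**event, "total": event["qty"] * 9.99})

def shipping_stage():
    while priced_log:
        event = priced_log.popleft()
        shipped_log.append({**event, "status": "shipped"})

pricing_stage()
shipping_stage()
print(list(shipped_log))
```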


  12. Sep 2023
    1. (1:20.00-1:40.00) What he describes is the following: most of his notes originate digitally, using hypothes.is, where he reads material online and can annotate, highlight, and tag to help future him find the material by tag or bulk digital search. He calls his hypothes.is a commonplace book that is somewhat pre-organized.

      Aldrich continues by explaining that in his commonplace hypothes.is his notes are not interlinked in a Luhmannian Zettelkasten sense, but he "sucks the data" right into Obsidian, where he plays around with the content, does some of that interlinking, and massages it.

      Then the best of the best material, or that which he is most interested in working with or writing about, is converted into a more Luhmannesque type of Zettelkasten where it is much more densely interlinked. He emphasizes that his Luhmann zettelkasten consists mostly of his own thoughts and is very well developed, to the point where he can "take a string of 20 cards and ostensibly it's its own essay and then publish it as a blog post or article."

  13. Aug 2023
    1. If there’s only an asterisk: Click the style name, then move the pointer over the style name in the Paragraph Styles pop-up menu. Click the arrow that appears, then choose Redefine from Selection.

      Pages is so much more impressive than you'd expect in so many ways, but damn...

      The way styles are handled still perplexes the shit out of me... even after consuming this document.

    1. The essence of this video is correct: active learning, progressive summarization, deep processing, relational analytical thinking, even evaluative.

      Yet the implementation is severely lacking: marginalia, text writing, etc.

      Better would be the use of mindmaps or GRINDEmaps. I personally would combine it with the Antinet of course.

      I do like this guy's teaching style though 😂

  14. Jul 2023
    1. Code for processing data samples can get messy and hard to maintain; we ideally want our dataset code to be decoupled from our model training code for better readability and modularity.

      Code for data processing and model training should be separated as different modules.
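
      A minimal example of that separation using the standard torch.utils.data classes (assuming PyTorch is installed; the toy data is made up):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Minimal custom Dataset: all sample-processing logic lives here,
    so the training loop only ever sees ready-to-use tensors."""
    def __init__(self, features, labels):
        self.features = torch.tensor(features, dtype=torch.float32)
        self.labels = torch.tensor(labels, dtype=torch.long)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

dataset = ToyDataset([[0.1, 0.2], [0.3, 0.4]], [0, 1])
loader = DataLoader(dataset, batch_size=2, shuffle=True)

for batch_features, batch_labels in loader:   # training code never touches raw data
    print(batch_features.shape, batch_labels.shape)
```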

    1. We prioritize what we see versus what we hear, why is that? Now, what comes to mind when I say that is when somebody is saying no, but shaking their head yes. And so we have this disconnect, but we tend to prioritize the action and not what we're hearing. So something that we visually see instead of what we hear. Speaker 1: There isn't a definitive answer on that, but one source of insight on why we do that is that it could be related to the neurological real estate that's taken up by our visual experience. There's far more of our cortex, the outer layer of our brain, that responds to visual information than any other form of information

      (13:36) Perhaps this is also why visual information is so useful for learning and cognition (see GRINDE)... Maybe the visual medium should be used more in instruction instead of primarily auditory lectures (do take into account redundancy and other medium effects from CLT though)

  15. Jun 2023
    1. When it comes to thinking, the Zettelkasten solves an important issue which is the problem of scope, which is impossible at the current moment in mindmapping software such as Concepts.

      Mainly, the Zettelkasten allows you to gain a bird's-eye, holistic view of a topic, branch, or line of thought, while at the same time allowing you to gain a microscopic view of an "atomic" idea within that thought-stream, creating virtually infinite zoom-in and zoom-out capability. This is very, very beneficial to the process of deep thinking and intellectual work.

    1. Recent work in computer vision has shown that common image datasets contain a non-trivial amount of near-duplicate images. For instance CIFAR-10 has 3.3% overlap between train and test images (Barz & Denzler, 2019). This results in an over-reporting of the generalization performance of machine learning systems.

      CIFAR-10 performance results are overestimates since some of the training data is essentially in the test set.
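
      A toy sketch of checking for such overlap by hashing raw image bytes; note this only finds exact duplicates, whereas the near-duplicate analysis cited (Barz & Denzler, 2019) requires perceptual hashing or embedding similarity.

```python
import hashlib

# Check for *exact* duplicates between train and test splits by hashing bytes.
# Placeholder byte strings stand in for real images.
def digest(image_bytes):
    return hashlib.sha256(image_bytes).hexdigest()

train = [b"image-a", b"image-b", b"image-c"]
test = [b"image-b", b"image-d"]

train_hashes = {digest(img) for img in train}
overlap = [img for img in test if digest(img) in train_hashes]
print(f"{len(overlap)}/{len(test)} test items also appear in train")
```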

  16. Mar 2023
    1. In short, in the absence of legal tender laws, the seller will not accept anything but money of certain value (good money), but the existence of legal tender laws will cause the buyer to offer only money with the lowest commodity value (bad money), as the creditor must accept such money at face value.

      During the coronavirus pandemic, many vendors facing inflation began to pass along the 3% (or more) credit card processing fees to their customers. Previously many credit card companies would penalize vendors for doing this (and possibly cut them off). This fee was considered "the cost of doing business".

      Some vendors prior to the pandemic would provide cash discounts on large orders because they could circumvent these fees.

      Does this affect (harm) inflation? Is it a form of Gresham's law at play here? What effect does this have on credit card companies? Are they so integral to the system that it doesn't affect them, but instead the customers using their legal tender?

  17. Feb 2023
    1. I finished processing the 22 page chapter. It took me about 10 hours total to read, take notes, polish notes, and connect them to 39 permanent notes (6 new notes and 33 existing notes). Bear in mind, this is an extremely important reference for me, so it's by far one of the most-linked literature notes in my vault.
    1. Remember that life in a Zettelkasten is supposed to be fun. It is a joyful experience to work with it when it works back with you. Life in Zettelkasten is more like dance than a factory.

      I've always disliked the idea of "work" involved in "making" notes and "processing" them. Framing zettelkasten and knowledge creation in terms of capitalism is a painful mistake.

      the quote is from https://blay.se/2015-06-21-living-with-a-zettelkasten.html

    1. Deutsch’s index was created out of an almost algorithmic processing of historical sources in the pursuit of a totalized and perfect history of the Jews; it presented, on one hand, the individualized facts, but together also constituted what we might term a ‘history without presentation’, which merely held the ‘facts’ themselves without any attempt to synthesize them (cf. Saxer, 2014: 225-32).

      Not sure that I agree with the framing of "algorithmic processing" here as it was done manually by a person pulling out facts. But it does bring out the idea of where collecting ends and synthesis of a broader thesis out of one's collection begins. Where does historical method end? What was the purpose of the collection? Teaching, writing, learning, all, none?

    1. Today’s students carry access to boundless information that Eco’s students could not have begun to fathom, but Eco’s students owned every word they carried.

      This is a key difference in knowledge mastery...

    1. rank is not an assessment of who has the best intrinsic properties, but rather a useful consensus view that provides rules for how to behave toward others.

      Rank (social or otherwise) can be a signal for predictability from the perspective of consensus views for how to behave towards others with respect to the abilities or values being measured.


      Ranking people on some sort of technical ability may be a better objective measure than ranking people on social status, which is far less objective from a humanist perspective. In employment situations, individuals are more likely to rely on social and cultural biases and racist tendencies than on objective measures relevant to the job at hand. How can we better objectify the actual underlying values over and above the more subjective ones?

    2. First, rank can be an efficient way to summarize the accurate, but noisy, perceptions of individuals.

      rank as signal processing

  18. Jan 2023
    1. a common technique in natural language processing is to operationalize certain semantic concepts (e.g., "synonym") in terms of syntactic structure (two words that tend to occur nearby in a sentence are more likely to be synonyms, etc). This is what word2vec does.

      Can I use some of these sorts of methods with respect to corpus linguistics over time to better identify calcified words or archaic phrases that stick with the language but are heavily limited to narrower(ing) contexts?
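
      One hedged sketch of how this could be probed: train a small word2vec model per time slice (using gensim, assumed installed) and compare a word's nearest neighbours across slices. The two tiny "corpora" below are placeholders; a real study would need large, balanced corpora per period.

```python
from gensim.models import Word2Vec   # assumes gensim >= 4.0 is installed

# Compare a word's neighbourhood in two time slices; placeholder sentences only.
corpus_1900s = [["the", "carriage", "waited", "by", "the", "telegraph", "office"],
                ["she", "sent", "a", "telegraph", "to", "london"]]
corpus_2000s = [["the", "telegraph", "newspaper", "ran", "the", "story"],
                ["he", "read", "the", "telegraph", "online"]]

def neighbours(corpus, word, topn=3):
    model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, seed=1)
    return [w for w, _ in model.wv.most_similar(word, topn=topn)]

print("1900s:", neighbours(corpus_1900s, "telegraph"))
print("2000s:", neighbours(corpus_2000s, "telegraph"))
```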

    1. Friedberg Judeo-Arabic Project, accessible at http://fjms.genizah.org. This project maintains a digital corpus of Judeo-Arabic texts that can be searched and analyzed.

      The Friedberg Judeo-Arabic Project contains a large corpus of Judeo-Arabic text which can be manually searched to help improve translations of texts, but it might also be profitably mined using information theoretic and corpus linguistic methods to provide larger group textual translations and suggestions at a grander scale.
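
      As a small illustration of the kind of corpus-linguistic mining suggested here (not the FJMS interface itself), one could score word pairs by pointwise mutual information to surface collocations; the corpus below is a placeholder, not Judeo-Arabic data.

```python
from collections import Counter
from math import log2

# Score adjacent word pairs by pointwise mutual information (PMI).
corpus = [["the", "scribe", "copied", "the", "letter"],
          ["the", "scribe", "copied", "the", "contract"],
          ["the", "merchant", "signed", "the", "contract"]]

unigrams, bigrams, total = Counter(), Counter(), 0
for sentence in corpus:
    unigrams.update(sentence)
    bigrams.update(zip(sentence, sentence[1:]))
    total += len(sentence)

def pmi(w1, w2):
    p_xy = bigrams[(w1, w2)] / total            # rough estimate for a sketch
    p_x, p_y = unigrams[w1] / total, unigrams[w2] / total
    return log2(p_xy / (p_x * p_y))

print(round(pmi("scribe", "copied"), 2))
```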

  19. Dec 2022
    1. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). Association for Computing Machinery, New York, NY, USA, 610–623. https://doi.org/10.1145/3442188.3445922

  20. Nov 2022
    1. partnerships, networking, and revenue generation such as donations, memberships, pay what you want, and crowdfunding

      I have thought long about the same issue and beyond. The triple (wiki, Hypothesis, donations) could be a working way to search for OER, form a social group processing them, and optionally support the creators.

      I imagine that as follows: a person wants to learn about X. They can head to the wiki site about X and look into its Hypothesis annotations, where relevant OER with their preferred donation method can be linked. Also, study groups interested in the respective resource or topic can list virtual or live meetups there. The date of the meetups could be listed in a format that Hypothesis could search and display on a calendar.

      Wiki is integral as it categorizes knowledge, is comprehensive, and strives to address biases. Hypothesis stitches websites together for the benefit of the site owners and the collective wisdom that emerges from the discussions. Donations support the creators so they can dedicate their time to creating high-quality resources.

      Main inspirations:

      Deschooling Society - Learning Webs

      Building the Global Knowledge Graph

      Schoolhouse calendar

    1. first we're looking for the "main" object. The word "main" is used in lots of places in Ruby, so that will be hard to track down. How else can we search?Luckily, we know that if you print out that object, it says "main". Which means we should be able to find the string "main", quotes and all, in C.
    1. Robert Amsler is a retired computational lexicologist, computational linguist, and information scientist. His Ph.D. was from UT-Austin in 1980. His primary work was in the area of understanding how machine-readable dictionaries could be used to create a taxonomy of dictionary word senses (which served as the motivation for the creation of WordNet) and in understanding how a lexicon can be extracted from text corpora. He also invented a new technique in citation analysis that bears his name. His work is mentioned in Wikipedia articles on Machine-Readable dictionary, Computational lexicology, Bibliographic coupling, and Text mining. He currently lives in Vienna, VA and reads email at robert.amsler at utexas. edu. He is currently interested in chronological studies of vocabulary, esp. computer terms.

      https://www.researchgate.net/profile/Robert-Amsler

      Apparently follow my blog. :)

      Makes me wonder how we might better process and semantically parse peoples' personal notes, particularly when they're atomic and cross-linked?

    1. I never immediately read an article then make a notecard.

      Waiting some amount of time (days, weeks, or a few months) between originally reading something and processing one's notes on it allows the ideas to slowly distill into one's consciousness. It also allows one to draw on diffuse thinking, which may help to link the ideas to others already in memory.

  21. Oct 2022
    1. elaboration n. 1. the process of interpreting or embellishing information to be remembered or of relating it to other material already known and in memory. The levels-of-processing model of memory holds that the level of elaboration applied to information as it is processed affects both the length of time that it can be retained in memory and the ease with which it can be retrieved.
    1. https://www.explainpaper.com/

      Another in a growing line of research tools for processing and making sense of research literature including Research Rabbit, Connected Papers, Semantic Scholar, etc.

      Functionality includes the ability to highlight sections of research papers and have natural language processing explain what those sections mean. There's also a "chat" that allows you to ask questions about the paper which will attempt to return reasonable answers: an artificial-intelligence sort of means of having a "conversation with the text".

      cc: @dwhly @remikalir @jeremydean

  22. Sep 2022
    1. maintenance rehearsal repeating items over and over to maintain them in short-term memory, as in repeating a telephone number until it has been dialed (see rehearsal). According to the levels-of-processing model of memory, maintenance rehearsal does not effectively promote long-term retention because it involves little elaboration of the information to be remembered. Also called rote rehearsal. See also phonological loop.

      The practice of repeating items as a means of attempting to place them into short-term memory is called maintenance rehearsal. Examples of this practice include repeating a new acquaintance's name or perhaps their phone number multiple times as a means of helping to remember it either for the short term or potentially the long term.

      Research on the levels-of-processing model of memory indicates that maintenance rehearsal is not as effective at promoting long-term memory as methods like elaborative rehearsal.

  23. Aug 2022
    1. Process the log file to determine the spread of data:

      cat /tmp/sslparams.log | cut -d ' ' -f 2,2 | sort | uniq -c | sort -rn | perl -ane 'printf "%30s %s\n", $F[1], "="x$F[0];'
  24. Jul 2022
  25. Jun 2022
    1. We are the leading independent Open Access publisher in the Humanities and Social Sciences in the UK: a not-for-profit Social Enterprise run by scholars who are committed to making high-quality research freely available to readers around the world. All our books are available to read online and download for free, with no Book Processing Charges (BPCs) for authors. We publish monographs and textbooks in all areas, offering the academic excellence of a traditional press combined with the speed, convenience and accessibility of digital publishing. We also publish bespoke Series for Universities and Research Centers and invite libraries to support Open Access publishing by joining our Membership Programme.
    1. The absence of Quick Note on the iPhone is a strange, glaring omission that’s baffling to me. I do research on every device, including the iPhone. In fact, I’d argue that the iPhone is the most important place to include Quick Note. That’s because, despite the ample screen of my iPhone 12 Pro Max, it’s still not the best place to read, making saving items for later with Quick Note more valuable there. However, my iPhone is still where I run across links and other material I want to save daily. I’d love to be able to drop links and blockquotes into Quick Note from my iPhone, so I could revisit the material later from the more comfortable reading environment of my iPad or Mac. Not having Quick Note on the iPhone is a significant blow to the feature’s utility.

      Considering how I've been publicly speaking and behaving (melodramatically, that is) as someone who has returned to using my iPhone as my primary working device, this sort of oversight is precisely what I expected, actually. What I did not expect of Apple was to respond as early as the next numeric release to this omission.

      Running this very first build of iOS 16, I can indeed confirm that Apple has thought of at least one original context for Quick Note creation, but obviously, it's quite hard to say much at this point.

      Anywho/how, here's what it looks like at the moment.

      Quick Note implemented on iPhone as of iOS 16's very first available dev beta

  26. May 2022
    1. Adopting the habit of knowledge capture has immediate benefits for our mental health and peace of mind. We can let go of the fear that our memory will fail us at a crucial moment. Instead of jumping at every new headline and notification, we can choose to consume information that adds value to our lives and consciously let go of the rest.

      Immediate knowledge capture by highlighting, annotating, or other means when taking notes can help to decrease cognitive load. This is similar to other productivity methods like quick logging within a bullet journal system, writing morning pages, or Getting Things Done (GTD). By putting everything down in one place, you can free your mind of the constant need to remember dozens of things. This frees up your working memory to decrease stress as you know you've captured the basic idea for future filtering, sorting, and work at a later date.

  27. Mar 2022
  28. Jan 2022
    1. Fernandez-Castaneda, A., Lu, P., Geraghty, A. C., Song, E., Lee, M.-H., Wood, J., Yalcin, B., Taylor, K. R., Dutton, S., Acosta-Alvarez, L., Ni, L., Contreras-Esquivel, D., Gehlhausen, J. R., Klein, J., Lucas, C., Mao, T., Silva, J., Pena-Hernandez, M., Tabachnikova, A., … Monje, M. (2022). Mild respiratory SARS-CoV-2 infection can cause multi-lineage cellular dysregulation and myelin loss in the brain (p. 2022.01.07.475453). https://doi.org/10.1101/2022.01.07.475453

    1. Most developers are familiar with MySQL and PostgreSQL. They are great RDBMS and can be used to run analytical queries with some limitations. It’s just that most relational databases are not really designed to run queries on tens of millions of rows. However, there are databases specially optimized for this scenario - column-oriented DBMS. One good example is of such a database is ClickHouse.

      How to use Relational Databases to process logs

    2. Another format you may encounter is structured logs in JSON format. This format is simple to read by humans and machines. It also can be parsed by most programming languages
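
      A minimal sketch of parsing such line-delimited JSON logs with a standard library (the log lines and field names are invented):

```python
import json
from collections import Counter

# Parse line-delimited JSON ("structured") logs and count events by level.
raw_logs = [
    '{"ts": "2021-12-01T10:00:00Z", "level": "info",  "msg": "start"}',
    '{"ts": "2021-12-01T10:00:01Z", "level": "error", "msg": "db timeout"}',
    '{"ts": "2021-12-01T10:00:02Z", "level": "error", "msg": "db timeout"}',
]

events = [json.loads(line) for line in raw_logs]
by_level = Counter(event["level"] for event in events)
print(by_level)          # Counter({'error': 2, 'info': 1})
```
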
  29. Dec 2021
    1. Catala, a programming language developed by Protzenko's graduate student Denis Merigoux, who is working at the National Institute for Research in Digital Science and Technology (INRIA) in Paris, France. It is not often lawyers and programmers find themselves working together, but Catala was designed to capture and execute legal algorithms and to be understood by lawyers and programmers alike in a language "that lets you follow the very specific legal train of thought," Protzenko says.

      A domain-specific language for encoding legal interpretations.

    1. One more thing ought to be explained in advance: why the card index is indeed a paper machine. As we will see, card indexes not only possess all the basic logical elements of the universal discrete machine — they also fit a strict understanding of theoretical kinematics. The possibility of rearranging its elements makes the card index a machine: if changing the position of a slip of paper and subsequently introducing it in another place means shifting other index cards, this process can be described as a chained mechanism. This “starts moving when force is exerted on one of its movable parts, thus changing its position. What follows is mechanical work taking place under particular conditions. This is what we call a machine.”11 The force taking effect is the user’s hand. A book lacks this property of free motion, and owing to its rigid form it is not a paper machine.

      The mechanical work of moving an index card from one position to another (and potentially changing or modifying links to it in the process) allows us to call card catalogues paper machines. This property is not shared by information stored in codices or scrolls and thus we do not call books paper machines.

  30. Nov 2021
    1. I spend most of my day in iOS Notes app.

      Did I ever really find this man intelligent??? Things sincerely do make a lot more sense now. Such a specific lack of aspiration.

  31. Oct 2021
  32. Sep 2021
  33. Jul 2021
    1. whereas now, they know that user@domain.com was subscribed to xyz.net at some point and is unsubscribing. Information is gold. Replace user@domain with abcd@senate and xyz.net with warezxxx.net and you've got tabloid gold.
  34. Jun 2021
    1. Different ways to prepend a line:

      (echo 'line to prepend';cat file)|sponge file
      sed -i '1iline to prepend' file # GNU
      sed -i '' $'1i\\\nline to prepend\n' file # BSD
      printf %s\\n 0a 'line to prepend' . w|ed -s file
      perl -pi -e 'print"line to prepend\n"if$.==1' file
  35. May 2021
  36. Apr 2021
    1. This post articulates a lot of what I've been thinking about for the past 18 months or so, but it adds the additional concept of community integration.

      Interestingly, this aligns with the early, tentative ideas around what the future of In Beta might look like as a learning community, rather than a repository of content.

  37. Mar 2021
  38. Jan 2021
    1. Process models, on the other hand, provide specification of internal structure, mechanism, and information flow

      predictive processing is a process model that is suggested (or constrained) by the FEP.

  39. Nov 2020
  40. Oct 2020
  41. Sep 2020
  42. Aug 2020
  43. Jul 2020
    1. As mentioned earlier in these guidelines, it is very important that controllers assess the purposes for which data is actually processed and the lawful grounds on which it is based prior to collecting the data. Often companies need personal data for several purposes, and the processing is based on more than one lawful basis, e.g. customer data may be based on contract and consent. Hence, a withdrawal of consent does not mean a controller must erase data that are processed for a purpose that is based on the performance of the contract with the data subject. Controllers should therefore be clear from the outset about which purpose applies to each element of data and which lawful basis is being relied upon.
    2. If there is no other lawful basis justifying the processing (e.g. further storage) of the data, they should be deleted by the controller.
    3. In cases where the data subject withdraws his/her consent and the controller wishes to continue to process the personal data on another lawful basis, they cannot silently migrate from consent (which is withdrawn) to this other lawful basis. Any change in the lawful basis for processing must be notified to a data subject in accordance with the information requirements in Articles 13 and 14 and under the general principle of transparency.
    1. Some vendors may rely on legitimate interest instead of consent for the processing of personal data. The User Interface specifies if a specific vendor is relying on legitimate interest as legal basis, meaning that that vendor will process users' data for the declared purposes without asking for their consent. The presence of vendors relying on legitimate interest is the reason why, within the user interface, even if a user has switched on one specific purpose, not all vendors processing data for that purpose will be displayed as switched on. In fact, those vendors processing data for that specific purpose relying only on legitimate interest will be displayed as switched off.
    2. Under GDPR there are six possible legal bases for the processing of personal data.
  44. Jun 2020
  45. May 2020
    1. learn how to be a data steward or data ally. Help organizations proactively think about what data they collect and how it is governed after it's collected. Help organizations get their collective head around all the data they possess, how they curate it, how they back it up, and how over time they minimize it.
    1. Services generally fall into two categories:
      - Services related to your own data collection activities (eg. contact forms)
      - Services related to third-party data collection activities (eg. Google Analytics)
    1. Sure, anti-spam measures such as a CAPTCHA would certainly fall under "legitimate interests". But would targeting cookies? The gotcha with reCAPTCHA is that this legitimate-interest, quite-necessary-in-today's-world feature is inextricably bundled with unwanted and unrelated Google targeting (cookiepedia.co.uk/cookies/NID) cookies (_ga, _gid for v2; NID for v3).
    1. Because consent under the GDPR is such an important issue, it’s mandatory that you keep clear records and that you’re able to demonstrate that the user has given consent; should problems arise, the burden of proof lies with the data controller, so keeping accurate records is vital.
    2. The records should include: who provided the consent; when and how consent was acquired from the individual user; the consent collection form they were presented with at the time of the collection; which conditions and legal documents were applicable at the time that the consent was acquired.
    3. [Table: Non-compliant Record Keeping vs. Compliant Record Keeping]
    1. there’s no need to send consent request emails — provided that this basis of processing was stated in your privacy policy and that users had easy access to the notice prior to you processing their data. If this information was not available to users at the time, but one of these legal bases can currently legitimately apply to your situation, then your best bet would be to ensure that your current privacy notice meets requirements, so that you can continue to process your user data in a legally compliant way.
    2. Here’s why sending GDPR consent emails is tricky and should be handled very carefully.
    1. they sought to eliminate data controllers and processors acting without appropriate permission, leaving citizens with no control as their personal data was transferred to third parties and beyond
    1. Consent receipt mechanisms can be especially helpful in automatically generating such records.
    2. With that guidance in mind, and from a practical standpoint, consider keeping records of the following: The name or other identifier of the data subject that consented; The dated document, a timestamp, or note of when an oral consent was made; The version of the consent request and privacy policy existing at the time of the consent; and, The document or data capture form by which the data subject submitted his or her data.
    3. Where a processing activity is necessary for the performance of a contract.

      Would a terms of service agreement be considered a contract in this case? So can you just make your terms of service basically include consent or implied consent?

    4. “Is consent really the most appropriate legal basis for this processing activity?” It should be taken into account that consent may not be the best choice in the following situations:
    1. “Until CR 1.0 there was no effective privacy standard or requirement for recording consent in a common format and providing people with a receipt they can reuse for data rights.  Individuals could not track their consents or monitor how their information was processed or know who to hold accountable in the event of a breach of their privacy,” said Colin Wallis, executive director, Kantara Initiative.  “CR 1.0 changes the game.  A consent receipt promises to put the power back into the hands of the individual and, together with its supporting API — the consent receipt generator — is an innovative mechanism for businesses to comply with upcoming GDPR requirements.  For the first time individuals and organizations will be able to maintain and manage permissions for personal data.”
    2. CR 1.0 is an essential specification for meeting the proof of consent requirements of GDPR to enable international transfer of personal information in a number of applications.
    3. Its purpose is to decrease the reliance on privacy policies and enhance the ability for people to share and control personal information.
    1. It’s useful to remember that under GDPR regulations consent is not the ONLY reason that an organization can process user data; it is only one of the “Lawful Bases”, therefore companies can apply other lawful (within the scope of GDPR) bases for data processing activity. However, there will always be data processing activities where consent is the only or best option.
    2. Under EU law (specifically the GDPR) you must keep and maintain “full and extensive” up-to-date records of your business processing activities, both internal and external, where the processing is carried out on personal data.
    3. However, even if your processing activities somehow fall outside of these situations, your information duties to users make it necessary for you to keep basic records relating to which data you collect, its purpose, all parties involved in its processing and the data retention period — this is mandatory for everyone.
    1. If you’re a controller based outside of the EU, you’re transferring personal data outside of the EU each time you collect data of users based within the EU. Please make sure you do so according to one of the legal bases for transfer.

      Here they equate collection of personal data with transfer of personal data. But this is not very intuitive: I usually think of collection of data and transfer of data as rather different activities. It would be a transfer if we collected the data on a server in the EU and then transferred all that data (via some internal process) to a server in the US.

      But I guess when you collect the data over the Internet from a user in a different country, the data is technically being transferred directly to your server in the US. But who is doing the transfer? I would argue that it is not me who is transferring it; it is the user who transmitted/sent the data to my app. I'm collecting it from them, but not transferring it. Collecting seems like more of a passive activity, while transfer seems like a more active activity (maybe not if it's all automated).

      So if these terms are equivalent, then they should replace all instances of "transfer" with "collect". That would make it much clearer and harder to mistakenly assume this doesn't apply to oneself. Or if there is a nuanced difference between the two activities, then the differences should be explained, such as examples of when collection may occur without transfer occurring.

    2. If you profile your users, you have to tell them. Therefore, you must pick the relevant clause from the privacy policy generator.
    3. If you’re selling products and keep record of users’ choices for marketing purposes, dividing them into meaningful categories, such as by age, gender, geographical origin etc., you’re profiling them.
    1. you can think “sold” here as “shared with third parties for any profit, monetary or otherwise”
    2. under most legislations you’re required to inform extensively about the processing activities, their purposes and the rights of users.
    3. Full and extensive records of processing are expressly required in cases where your data processing activities are not occasional, where they could result in a risk to the rights and freedoms of others, where they involve the handling of “special categories of data” or where your organization has more than 250 employees — this effectively covers almost all data controllers and processors.
    1. If you have fewer than 250 employees, you only need to document processing activities that: are not occasional; or
    2. Most organisations are required to maintain a record of their processing activities, covering areas such as processing purposes, data sharing and retention; we call this documentation.
    1. it buys, receives, sells, or shares the personal information of 50,000 or more consumers annually for the business’ commercial purposes. Since IP addresses fall under what is considered personal data — and “commercial purposes” simply means to advance commercial or economic interests — it is likely that any website with at least 50k unique visits per year from California falls within this scope.
    1. You must disclose how the add-on collects, uses, stores and shares user data in the privacy policy field on AMO. Mozilla expects that the add-on limits data collection whenever possible, in keeping with Mozilla’s Lean Data Practices and Mozilla’s Data Privacy Principles, and uses the data only for the purpose for which it was originally collected.
  46. Apr 2020
    1. If the PIA identifies risks or high risks, based on the specific context and circumstances, the organization will need to request consent.
    2. Privacy impact assessments or data protection impact assessments under the EU GDPR, before the collection of personal data, will have a key role
    3. U.K. Information Commissioner Elizabeth Denham clearly states that consent is not the "silver bullet" for GDPR compliance. In many instances, consent will not be the most appropriate ground — for example, when the processing is based on a legal obligation or when the organization has a legitimate interest in processing personal data.
    4. data processing limited to purposes deemed reasonable and appropriate such as commercial interests, individual interests or societal benefits with minimal privacy impact could be exempt from formal consent. The individual will always retain the right to object to the processing of any personal data at any time, subject to legal or contractual restrictions.
    5. organizations may require consent from individuals where the processing of personal data is likely to result in a risk or high risk to the rights and freedoms of individuals or in the case of automated individual decision-making and profiling. Formal consent could as well be justified where the processing requires sharing of personal data with third parties, international data transfers, or where the organization processes special categories of personal data or personal data from minors.
    6. First, organizations must identify the lawful basis for processing prior to the collection of personal data. Under the GDPR, consent is one basis for processing; there are other alternatives. They may be more appropriate options.
    1. In geochemistry, we know that around US$7,000,000 each year is spent on open access to journals [9], with virtually none of this being reinvested into the community itself or the community being reimbursed. Given the immense value of preprints, reinvesting this value into more sustainable community-led non-profit ventures, such as EarthArXiv, is of great potential.
      • The cost (in the form of APCs) of publishing a paper is very high. That cost is on top of the research costs already incurred by the researcher or the research funder.
      • Publication costs are a share of the budget spent on a document that sits at the very end of the research cycle, not part of the core budget.
      • It would be better if that publication budget were largely or entirely channelled into funding the core activity, namely the research itself.
      • References: MDPI APC, NCBI, Table, King 2007, Calaos, 2011
    1. The data is stored in log files to ensure the functionality of the website. In addition, the data serves us to optimize the website and to ensure the security of our information technology systems. An evaluation of the data for marketing purposes does not take place in this context. The legal basis for the temporary storage of the data and the log files is Art. 6 para. 1 lit. f GDPR. Our legitimate interests lie in the above-mentioned purposes.
    2. The temporary storage of the IP address by the system is necessary to enable the website to be delivered to the user's computer. For this the IP address of the user must remain stored for the duration of the session.
    3. The collection of the data for the provision of the website and the storage of the data in log files is absolutely necessary for the operation of the website. Consequently, there is no possibility of objection on the part of the user.
    4. The legal basis for the processing of personal data using cookies is Art. 6 para. 1 lit. f GDPR. Our legitimate interests lie in the above-mentioned purposes.
  47. Mar 2020
    1. legitimate interest triggers when “processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject
    2. of the six lawful, GDPR-compliant ways companies can get the green light to process individual personal data, consent is the “least preferable.” According to guidelines in Article 29 Working Party from the European Commission, "a controller must always take time to consider whether consent is the appropriate lawful ground for the envisaged processing or whether another ground should be chosen instead." 
    3. “It is unfortunate that a lot of companies are blindly asking for consent when they don’t need it because they have either historically obtained the consent to contact a user,” said digital policy consultant Kristina Podnar. “Or better yet, the company has a lawful basis for contact. Lawful basis is always preferable to consent, so I am uncertain why companies are blindly dismissing that path in favor of consent.”
    1. Data has become a “natural resource” for advertising technology. “And, just as with every other precious resource, we all bear responsibility for its consumption,”
    2. To join the Privacy Shield Framework, a U.S.-based organization is required to self-certify to the Department of Commerce and publicly commit to comply with the Framework’s requirements. While joining the Privacy Shield is voluntary, the GDPR goes far beyond it.
    1. it would appear impossible to require a publisher to provide information on and obtain consent for the installation of cookies on his own website also with regard to those installed by “third parties”
    2. Our solution goes a bit further than this by pointing to the browser options, third-party tools and by linking to the third party providers, who are ultimately responsible for managing the opt-out for their own tracking tools.
    3. You are also not required to manage consent for third-party cookies directly on your site/app as this responsibility falls to the individual third-parties. You are, however, required to at least facilitate the process by linking to the relevant policies of these third-parties.
    4. the publisher would be required to check, from time to time, that what is declared by the third parties corresponds to the purposes they are actually aiming at via their cookies. This is a daunting task because a publisher often has no direct contacts with all the third parties installing cookies via his website, nor does he/she know the logic underlying the respective processing.
    1. Decision point #2 – Do you send any data to third parties, directly or inadvertently? [Flowchart: GDPR cookie consent] Remember, inadvertently transmitting data to third parties can occur through the plugins you use on your website. You don't necessarily have to be doing this proactively. If the answer is “Yes,” then to comply with GDPR, you should use a cookie consent popup.
    1. You must clearly identify each party that may collect, receive, or use end users’ personal data as a consequence of your use of a Google product. You must also provide end users with prominent and easily accessible information about that party’s use of end users’ personal data.
    1. GDPR introduces a list of data subjects’ rights that should be obeyed by both data processors and data collectors. The list includes: Right of access by the data subject (Section 2, Article 15). Right to rectification (Section 3, Art 16). Right to object to processing (Section 4, Art 21). Right to erasure, also known as ‘right to be forgotten’ (Section 3, Art 17). Right to restrict processing (Section 3, Art 18). Right to data portability (Section 3, Art 20).
    1. An example of reliance on legitimate interests includes a computer store, using only the contact information provided by a customer in the context of a sale, serving that customer with direct regular mail marketing of similar product offerings — accompanied by an easy-to-select choice of online opt-out.
    1. This is no different where legitimate interests applies – see the examples below from the DPN. It should also be made clear that individuals have the right to object to processing of personal data on these grounds.
    2. Individuals can object to data processing for legitimate interests (Article 21 of the GDPR) with the controller getting the opportunity to defend themselves, whereas where the controller uses consent, individuals have the right to withdraw that consent and the ‘right to erasure’. The DPN observes that this may be a factor in whether companies rely on legitimate interests.
