3,133 Matching Annotations
  1. Last 7 days
    1. NETGEAR is committed to providing you with a great product and choices regarding our data processing practices. You can opt out of the use of the data described above by contacting us at analyticspolicy@netgear.com

      You may opt out of these data use situations by emailing analyticspolicy@netgear.com.

    2. Marketing. For example, information about your device type and usage data may allow us to understand other products or services that may be of interest to you.

      All of the information above that has been consented to can be used by NetGear to make money off consenting individuals and their families.

    3. USB device

      This gives Netgear permission to know what you plug into your computer, be it a FitBit, a printer, scanner, microphone, headphones, or webcam: anything not built into your computer.

    1. I like to think of thoughts as streaming information, so I don’t need to tag and categorize them as we do with batched data. Instead, using time as an index and sticky notes to mark slices of info solves most of my use cases. Graph notebooks like Obsidian think of information as batched data. So you have a set of notes (samples) that you try to aggregate, categorize, and connect. Sure there’s a use case for that: I can’t imagine a company wiki presented as streaming info! But I don’t think it aids me in how I usually think. When thinking with pen and paper, I prefer managing streamed information first, then converting it into batched information later— a blog post, documentation, etc.

      There's an interesting dichotomy between streaming information and batched data here, but it isn't well delineated and doesn't add much to the discussion as a result. Perhaps distilling it down may help? There's a kernel of something useful here, but it isn't immediately apparent.

      Relation to stock and flow or the idea of the garden and the stream?

    1. https://app.idx.us/en-US/services/credit-management

      Seems a bit ironic just how much data a credit monitoring service wants in order to help monitor your data on the dark web. So many companies have had data breaches that I can only wonder how long it will be before a company like IDX has a breach of its own databases.

      The credit reporting agencies should opt everyone into these sorts of protections automatically given the number of breaches in the past.

  2. Aug 2022
    1. those provisions cannot be interpreted as meaning that the processing of personal data that are liable indirectly to reveal sensitive information concerning a natural person is excluded from the strengthened protection regime prescribed by those provisions, if the effectiveness of that regime and the protection of the fundamental rights and freedoms of natural persons that it is intended to ensure are not to be compromised.

      And here's the key element for indirect/inferred data. For Article 9 to matter, it must also cover data from which special category data (SCD) can be inferred.

    2. collecting and checking the content of declarations of private interests, of personal data that are liable to disclose indirectly the political opinions, trade union membership or sexual orientation of a natural person constitutes processing of special categories of personal data, for the purpose of those provisions.

      Second question: If you collect it, can you infer from it?

  3. Jul 2022
    1. AI text generator, a boon for bloggers? A test report

      While I wanted to investigate AI text generators further, I ended up writing a test report. I was quite stunned: the AI text generator turns out to be able to create a fully cohesive and to-the-point article in minutes. Here is the test report.

    1. List management TweetDeck allows you to manage your Lists easily in one centralized place for all your accounts. You can create Lists in TweetDeck filtered by your interests or by particular accounts. Any List that you have set up or followed previously can also be added as separate columns in TweetDeck.

      To create a List on TweetDeck: From the navigation bar, click on the plus icon to select Add column, then click on Lists. Click the Create List button. Select the Twitter account you would like to create the List for. Name the List and give it a description, then select whether you would like the List to be publicly visible (other people can follow your public Lists). Click Save. Add suggested accounts or search for users to add members to your List, then click Done.

      To edit a List on TweetDeck: Click on Lists from the plus icon in the navigation bar. Select the List you would like to edit. Click Edit. Add or remove List members, or click Edit Details to change the List name, description, or account. You can also click Delete List. When you're finished making changes, click Done.

      To designate a List to a column: Click on the plus icon to select Add column. Click on the Lists option from the menu. Select which List you would like to make into a column. Click Add Column.

      To use a particular List in search: Add a search column, then click the filter icon to open the column filter options. Click the icon to open the User filter. Select By members of List and type the account name followed by the List name. You can only search across your own Lists, or others' public Lists.

      While you still can, I'd highly encourage you to use TweetDeck's "Export" List function to save plain text lists of the @ names in your... Lists.

    1. The documents highlight the massive scale of location data that government agencies including CBP and ICE received, and how the agencies sought to take advantage of the mobile advertising industry’s treasure trove of data.
    1. Location tracking is just one part of a panoply of data-collection practices that are now center stage in the abortion debate, along with people’s online search histories and information from period-tracking apps.
    1. Documentation

      The problem with this section is that it downgrades data models to mere documentation. The OntoPiA ontologies (I am talking about models, not so much about data such as the controlled vocabularies) are machine-readable. So it is not just a matter of documenting the syntax or the content of the data: it is about making the model actionable, that is, readable and interpretable by the machines themselves. I could perfectly well document datasets with a nice little table on GitHub or with many little tables in a beautiful PDF (documentation), but that is not the same thing as making an ontology available for the data. Making models an active part of data management (as with ontologies) means enabling the inference you invoked above (improperly, in my view), but also using them for explainable AI and many other purposes. This is a fundamental concept that cannot be treated like this in national guidelines. It should instead have a dedicated chapter of its own, given its importance also from the perspective of the "compliance" data quality characteristic of the ISO/IEC 25012 standard.

    2. In case a), the entity has all the elements needed to represent its own data model; conversely, in cases b) and c), that same administration, in agreement with AgID, assesses whether to extend the data model at the national level.

      The whole data modelling part, including the national catalogue of ontologies and controlled vocabularies, now seems to be in the hands of ISTAT, which, together with the Dipartimento di Trasformazione Digitale, is responsible for schema.gov.it. Here, however, it seems that AgID has the role of defining the various models. In my opinion this creates confusion: there should be coordination with the other administrations to establish clearly who does what. At the moment, AgID only manages the physical infrastructure of OntoPiA.

    3. Using the RDF framework, one can build a semantic graph, also known as a knowledge graph, which machines can traverse by resolving, that is, dereferencing, HTTP URIs. This means that information can be extracted automatically and additional informational content can thus be derived (inference).

      You are not doing inference just because you dereference URIs. I suggest reading carefully the guidelines on semantic interoperability through linked open data, which explain what inference is (and inference does indeed belong to an enrichment process in the linked open data world). Inference is something more complex, done with automated reasoners and SPARQL queries: new information can be deduced from existing data and, above all, from the ontologies, which are machine-readable objects!
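      As a toy illustration (not any official AgID tooling) of how a reasoner deduces new facts from existing data plus an ontology, here is a minimal sketch that materializes rdfs:subClassOf entailments in plain Python; all class and instance names are invented for the example:

```python
def infer_types(triples):
    """RDFS-style entailment: if x has rdf:type C and C is a (transitive)
    subclass of D, deduce the new triple (x, rdf:type, D)."""
    subclass = {(s, o) for s, p, o in triples if p == "rdfs:subClassOf"}
    # Compute the transitive closure of subClassOf.
    changed = True
    while changed:
        changed = False
        for a, b in list(subclass):
            for c, d in list(subclass):
                if b == c and (a, d) not in subclass:
                    subclass.add((a, d))
                    changed = True
    inferred = set(triples)
    for s, p, o in triples:
        if p == "rdf:type":
            for sub, sup in subclass:
                if o == sub:
                    inferred.add((s, "rdf:type", sup))
    return inferred

# Ontology: every PublicSchool is an Organization; data: :liceo1 is a PublicSchool.
facts = {
    ("ex:PublicSchool", "rdfs:subClassOf", "ex:Organization"),
    (":liceo1", "rdf:type", "ex:PublicSchool"),
}
# The triple (:liceo1, rdf:type, ex:Organization) is deduced, never stated.
```

      The deduced triple is exactly the kind of "additional informational content" the guidelines attribute, wrongly, to URI dereferencing alone.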

  4. Jun 2022
    1. Lastly, said datasheet should outline some ethical considerations of the data.

      I think this question speaks to one of the essential aspects of the data. In my interaction with the datasheet, I mostly focused on the absence of data, but I think I missed this key puzzle piece in the bigger picture of why the data is not there. I assumed what was responsible for the non-existence of the information without pondering possible answers to this one key question. It is indeed crucial to look into the current condition of the item and/or the collections that include it. If an artwork is less well preserved than others, it can mean that more effort needs to be made to keep it from losing even more data in the future.

    1. Another important distinction is between data and metadata. Here, the term “data” refers to the part of a file or dataset which contains the actual representation of an object of inquiry, while the term “metadata” refers to data about that data: metadata explicitly describes selected aspects of a dataset, such as the time of its creation, or the way it was collected, or what entity external to the dataset it is supposed to represent.

      This part is notably helpful for understanding the differences that separate "metadata" from "data". I was writing a blog post for my weekly assignment. Knowing that data is the representation of the object while metadata describes information about that data helps build the definition of the terms in my schema of knowledge. In many cases, metadata even provides resources that give insights into how the data was collected and/or introduce possible perspectives on how the data can be seen and utilized in the future. Data can survive without metadata, but metadata won't exist without the data. However, data that lacks metadata may stay uncracked and ciphered, leading to the data potentially becoming useless to the fundamental and economic growth of human beings.

    1. Companies need to actually have an ethics panel, and discuss what the issues are and what the needs of the public really are. Any ethics board must include a diverse mix of people and experiences. Where possible, companies should look to publish the results of these ethics boards to help encourage public debate and to shape future policy on data use.

    1. Most of us are familiar with data visualization: charts, graphs, maps and animations that represent complex series of numbers. But visualization is not the only way to explain and present data. Some scientists are trying to sonify storms with global weather data; it could be easier to get a sense of interrelated storm dynamics by hearing them.

    1. It is important to note that, in practice, it is sometimes considered necessary to go through traditional representation models, such as the relational one, for data modelling, applying appropriate transformations in order to then make the data available according to Linked Open Data principles. However, this practice is not necessarily the most appropriate one: there are situations in which it can be more convenient to start from an ontology of the domain to be modelled, and from the use of semantic web standards, in order to govern data management processes.

      Honestly, I find no value in what is written here. By now many more systems are natively linked open data, so beyond the fact that speaking of linked open data only as enrichment is wrong, I would drop this passage.

    2. they use various standards and techniques, including the RDF framework

      I would rephrase this as: "they are based on various standards, including RDF, and often use RDF controlled vocabularies to represent the controlled terminology of the relevant application domain".

    3. to four-star data formats such as the RDF serializations or JSON-LD

      JSON-LD is an RDF serialization in the JSON world. Note that the Italian translation of the Publications Office document did not come out well here (they say "data format such as RDF or JSON-LD", which is itself imprecise: RDF is a model for representing data on the Web, while the RDF serializations are things like N-Triples, RDF/Turtle, RDF/XML and JSON-LD). Moreover, in the technical annex on open data formats, taken from the previous guidelines, JSON-LD is listed as an RDF serialization.

    4. linked data

      Are they open or not?

    5. linking is a very important feature and can in fact be considered a particular form of enrichment. Its particularity lies in the fact that the enrichment happens through interlinking between datasets of different origin, typically between different administrations or institutions, but also, at the limit, within a single administration”

      There is a fundamental conceptual problem here. The Linked Open Data paradigm has been downgraded to enrichment, which in the guidelines cited here was only one phase of a general process for managing linked open data. Doing linked open data does not just mean enriching data: a piece of data can be managed natively as linked open data from its very creation. That was the spirit of the guidelines cited here; by extracting only one part, you have rather distorted the whole. I recommend treating the topic as it was treated in the previous guidelines. It is also a pity that the metro-map figure, which helped a lot, has disappeared.

    6. As mentioned, linking data can increase its value by creating new relationships and thus enabling new types of analysis.

      In any case, given everything Italy has written on linked open data, I would make the extra effort to write sentences that are not simply a word-for-word Italian translation of the English document.

    1. The reason these apps are great for such a broad range of use cases is they give users really strong data structures to work within.

      Inside the very specific realm of personal knowledge bases, TiddlyWiki is the killer app when it comes to using blocks and having structured, translatable data behind them.

    1. 80% of data analysis is spent on the process of cleaning and preparing the data

      Imagine having unnecessary and wrong data in your document: you would most likely have to go through every single row and column to eliminate this "garbage data". Clearly, owning all kinds of data without organizing it feels like stuffing your closet with clothes you should have donated five years ago. It is a time-consuming and soul-destroying process. Luckily, in R we have the "tidyverse" package, which I believe the author talks about in the next paragraph, to make life easier for everyone. I personally use dplyr and ggplot2 when I deal with data cleaning, and they are extremely helpful. Without these packages, I have no idea when I would be able to reach the final step of data visualization.
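      The annotation names R's dplyr; as a language-agnostic sketch of the same cleaning step (dropping rows with missing required fields), here is a stdlib Python analogue with invented column names:

```python
import csv
import io

def drop_incomplete_rows(csv_text, required=("name", "value")):
    """Keep only rows where every required column is non-empty --
    the row-by-row triage that tools like dplyr::filter automate."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if all(row.get(col) for col in required)]

raw = "name,value\nalpha,1\n,2\nbeta,\ngamma,3\n"
clean = drop_incomplete_rows(raw)
# Only the alpha and gamma rows survive the cleaning pass.
```

      The same filter in dplyr would be a one-line `filter(!is.na(name), !is.na(value))`; the point is that the triage logic, not the tool, is where the 80% goes.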

    1. On a new clone of the Canva monorepo, git status takes 10 seconds on average while git fetch can take anywhere from 15 seconds to minutes due to the number of changes merged by engineers.
    2. Over the last 10 years, the code base has grown from a few thousand lines to just under 60 million lines of code in 2022. Every week, hundreds of engineers work across half a million files generating close to a million lines of change (including generated files), tens of thousands of commits, and merging thousands of pull requests.
    1. The goal is to gain “digital sovereignty.”

      The age of borderless data is ending; what we're seeing is a move to digital sovereignty.

    1. nothing is permanent in the digital world

      Either ironic or maybe not the best advice when suggesting people might choose something like Notion or Evernote which could disappear with your data...


    1. 23.0G com.txt # 23 gigs uncompressed

      23 GB txt file <--- list of all the existing .com domains
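      A 23 GB text file cannot be loaded into memory to count its entries; a minimal streaming sketch (the filename com.txt comes from the quote, the helper is illustrative):

```python
def count_lines(path, chunk_size=1 << 20):
    """Count newline-terminated records by reading fixed-size binary
    chunks, so a multi-gigabyte file never has to fit in memory."""
    count = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            count += chunk.count(b"\n")
    return count

# Hypothetical usage against the file from the quote:
# count_lines("com.txt")
```

      Reading in 1 MiB chunks keeps memory flat regardless of file size; the same result comes from `wc -l com.txt` on the shell.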


    1. https://www.youtube.com/watch?v=bWkwOefBPZY

      Some of the basic outline of this looks like OER (Open Educational Resources) and its "five Rs": Retain, Reuse, Revise, Remix and/or Redistribute content. (To which I've already suggested the sixth: Request update (or revision control).)

      Some of this is similar to:

      The Read Write Web is no longer sufficient. I want the Read Fork Write Merge Web. #osb11 lunch table. #diso #indieweb [Tantek Çelik](http://tantek.com/2011/174/t1/read-fork-write-merge-web-osb110)

      Idea of collections of learning as collections or "playlists" or "readlists". Similar to the old tool Readlist which bundled articles into books relatively easily. See also: https://boffosocko.com/2022/03/26/indieweb-readlists-tools-and-brainstorming/

      Use of Wiki version histories

      Some of this has the form of a Wiki but with smaller nuggets of information (sort of like Tiddlywiki perhaps, which also allows for creating custom orderings of things, with specific URLs for displaying and sharing them). The Zettelkasten idea has some of this embedded into it. Shared zettelkasten could be an interesting thing.

      Data is the new soil. A way to reframe "data is the new oil" but as a part of the commons. This fits well into the gardens and streams metaphor.

      Jerry, have you seen Matt Ridley's work on Ideas Have Sex? https://www.ted.com/talks/matt_ridley_when_ideas_have_sex Of course you have: https://app.thebrain.com/brains/3d80058c-14d8-5361-0b61-a061f89baf87/thoughts/3e2c5c75-fc49-0688-f455-6de58e4487f1/attachments/8aab91d4-5fc8-93fe-7850-d6fa828c10a9

      I've heard Jerry mention the idea of "crystallization of knowledge" before. How can we concretely link this version with Cesar Hidalgo's work, especially Why Information Grows?

      Cross reference Jerry's Brain: https://app.thebrain.com/brains/3d80058c-14d8-5361-0b61-a061f89baf87/thoughts/4bfe6526-9884-4b6d-9548-23659da7811e/notes

    1. Expected to come into force on June 27, India's new data retention law will force VPN companies to keep users' data - like IP addresses, real names and usage patterns - for up to five years. They will also be required to hand this information over to authorities upon request. 

      Some draconian Indian data-retention laws are coming.

  5. May 2022
    1. Recognizing that the CEC hyperthreat operates at micro and macro scales across most forms of human activity and that a whole-of-society approach is required to combat it, the approach to the CEC hyperthreat partly relies on a philosophical pivot. The idea here is that a powerful understanding of the CEC hyperthreat (how it feels, moves, and operates), as well as the larger philosophical and survival-based reasons for hyper-reconfiguration, enables all actors and groups to design their own bespoke solutions. Consequently, the narrative and threat description act as a type of orchestration tool across many agencies. This is like the “shared consciousness” idea in retired U.S. Army general Stanley A. McChrystal’s “team of teams” approach to complexity.7       Such an approach is heavily dependent on exceptional communication of both the CEC hyperthreat and hyper-response pathways, as well as providing an enabling environment in terms of capacity to make decisions, access information and resources. This idea informs Operation Visibility and Knowability (OP VAK), which will be described later.  

      Such an effort will require a supporting worldwide digital ecosystem. In the recent past, major evolutionary transitions (MET) (Robin et al., 2021) of our species have been triggered by radical new information systems such as spoken language, and then inscribed language. Something akin to a Major Competitive Transition (MCT) may be required to accompany a radical transition to a good anthropocene. (See annotation: https://hyp.is/go?url=https%3A%2F%2Fwww.frontiersin.org%2Farticles%2F10.3389%2Ffevo.2021.711556%2Ffull&group=world)

      If large data is ingested into a public Indyweb, then, because Indyweb is naturally a graph database, a salience landscape of the hyperthreat can be constructed and the data visualized in its multiple dimensions and scales.

      Metaphorically, it can manifest as a hydra with multiple tentacles reaching out to multiple scales and dimensions. VR and AR technology can be used to expose the hyperobject and its progression.

      The proper hyperthreat is not climate change alone, although that is its most time-sensitive dimension, but rather the totality of all the blowbacks of human progress: the aggregate of all the progress traps that have been allowed to grow from molehills into mountains through a myopic prioritization of profit over global wellbeing, owing to the invisibility of the hyperobject.

    1. I explore how moves towards ‘objective’ data as the basis for decision-making orientated teachers’ judgements towards data in ways that worked to standardise judgement and exclude more multifaceted, situated and values-driven modes of professional knowledge that were characterised as ‘human’ and therefore inevitably biased.

      But, aren't these multifaceted, situated, and values-driven modes also constituted of data? Isn't everything represented by data? Even 'subjective' understanding of the world is articulated as data.

      Is there some 'standard' definition of data that I'm not aware of in the context of this domain?

    2. Recommended by Ben Williamson. Purpose: It may have some relevance for the project with Ben around chat bots and interviews, as well as implications for the introduction of portfolios for assessment.

    1. Each developer on average wastes 30 minutes before and after the meeting to context switch and the time is otherwise non-value adding. (See this study for the cost of context switching).
    1. For example, the idea of “data ownership” is often championed as a solution. But what is the point of owning data that should not exist in the first place? All that does is further institutionalise and legitimate data capture. It’s like negotiating how many hours a day a seven-year-old should be allowed to work, rather than contesting the fundamental legitimacy of child labour. Data ownership also fails to reckon with the realities of behavioural surplus. Surveillance capitalists extract predictive value from the exclamation points in your post, not merely the content of what you write, or from how you walk and not merely where you walk. Users might get “ownership” of the data that they give to surveillance capitalists in the first place, but they will not get ownership of the surplus or the predictions gleaned from it – not without new legal concepts built on an understanding of these operations.
    1. And it’s easy to leave. Unlike on Facebook or Twitter, Substack writers can simply take their email lists and direct connections to their readers with them.

      Owning your audience is key here.

    1. We believe that Facebook is also actively encouraging people to use tools like Buffer Publish for their business or organization, rather than personal use. They are continuing to support the use of Facebook Pages, rather than personal Profiles, for things like scheduling and analytics.

      Of course they're encouraging people to do this. Pushing them to the business side is where they're making all the money.

    1. Manton says owning your domain so you can move your content without breaking URLs is owning your content, whereas I believe if your content still lives on someone else's server, and requires them to run the server and run their code so you can access your content, it's not really yours at all, as they could remove your access at any time.

      This is a slippery slope problem, but people are certainly capable of taking positions along a broad spectrum here.

      The one thing I might worry about, particularly given micro.blog's size, is the relative bus factor of one represented by Manton himself. If something were to happen to him, what recourse has he built in to make sure that people could export their data easily and leave the service if the worst were to happen? Is that documented somewhere?

      Aside from this the service has one of the most reasonable turn-key solutions for domain and data ownership I've seen out there without running all of your own infrastructure.

    2. First, Manton's business model is for users to not own their content. You might be able to own your domain name, but if you have a hosted Micro.blog blog, the content itself is hosted on Micro.blog servers, not yours. You can export your data, or use an RSS feed to auto-post it to somewhere you control directly, but if you're not hosting the content yourself, how does having a custom domain equal self-hosting your content and truly owning it? Compared to hosting your own blog and auto-posting it to Micro.blog, which won't cost you and won't make Micro.blog any revenue, posting for a hosted blog seems to decrease your ownership.

      I'm not sure that this is the problem that micro.blog is trying to solve. It's trying to solve the problem of how to be online as simply and easily as possible without maintaining the overhead of hosting and managing your own website.

      As long as one can easily export their data at will and redirect their domain to another host, one should be fine. In some sense micro.blog makes it easier than changing phone carriers, which in most cases will abandon one's text messages without jumping through lots of hoops.

      One step that micro.blog could set up is providing a download dump of all content every six months to a year so that people have it backed up in an accessible fashion. Presently, to my knowledge, one could request this at any time and move when they wished.

    1. The ad lists various data that WhatsApp doesn’t collect or share. Allaying data collection concerns by listing data not collected is misleading. WhatsApp doesn’t collect hair samples or retinal scans either; not collecting that information doesn’t mean it respects privacy because it doesn’t change the information WhatsApp does collect.

      An important logical point. Listing what they don't keep isn't as good as saying what they actually do with one's data.

    1. The main thing Smith has learned over the past seven years is “the importance of ownership.” He admitted that Tumblr initially helped him “build a community around the idea of digital news.” However, it soon became clear that Tumblr was the only one reaping the rewards of its growing community. As he aptly put it, “Tumblr wasn’t seriously thinking about the importance of revenue or business opportunities for their creators.”
    1. Third, the post-LMS world should protect the pedagogical prerogatives and intellectual property rights of faculty members at all levels of employment. This means, for example, that contingent faculty should be free to take the online courses they develop wherever they happen to be teaching. Similarly, professors who choose to tape their own lectures should retain exclusive rights to those tapes. After all, it’s not as if you have to turn over your lecture notes to your old university whenever you change jobs.

      Own your pedagogy, just like anything else out there...

    1. And yes, some add-ons exist, but I just wish the feature was native to the browser. And I do not want to rely on a third party service. My quotes are mine only and should not necessary be shared with a server on someone's else machine.

      Ownership of the data is important. One could certainly set up their own Hypothes.is server if they liked.

      I personally take the data from my own Hypothes.is account and dump it into my local Obsidian.md vault for saving, crosslinking, and further thought.
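      As a sketch of that dump-to-Obsidian workflow (the dict shape follows Hypothesis's /api/search results; the helper name and any vault path are my own invention), each annotation can be rendered as a markdown note:

```python
def annotation_to_markdown(ann):
    """Render one Hypothesis annotation (shaped like an /api/search
    result) as a markdown snippet for an Obsidian vault: the source
    URI as a heading, the highlighted text as a blockquote, then the
    annotator's own note."""
    quote = ""
    for target in ann.get("target", []):
        for sel in target.get("selector", []):
            if sel.get("type") == "TextQuoteSelector":
                quote = sel.get("exact", "")
    lines = [f"## {ann.get('uri', '')}"]
    if quote:
        lines.append(f"> {quote}")
    if ann.get("text"):
        lines.append(ann["text"])
    return "\n\n".join(lines)
```

      Fetching the annotations themselves is a paged GET against api.hypothes.is/api/search with a `user` parameter; the rendering step above is the part that varies per vault.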

    1. With Alphabet Inc.’s Google, and Facebook Inc. and its WhatsApp messaging service used by hundreds of millions of Indians, India is examining methods China has used to protect domestic startups and take control of citizens’ data.

      Governments owning citizens' data directly?? Why not have the government empower citizens to own their own data?

    1. The highlights you made in FreeTime are preserved in My Clippings.txt, but you can’t see them on the Kindle unless you are in FreeTime mode. Progress between FreeTime and regular mode are tracked separately, too. I now pretty much only use my Kindle in FreeTime mode so that my reading statistics are tracked. If you are a data nerd and want to crunch the data on your own, it is stored in a SQLite file on your device under system > freetime > freetime.db.

      FreeTime mode on the Amazon Kindle will provide you with reading statistics. You can find the raw data as an SQLite file under system > freetime > freetime.db.
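      The quote names the SQLite file but not its schema; a minimal stdlib sketch for seeing what tables freetime.db actually contains before crunching anything (the path and helper are illustrative):

```python
import sqlite3

def list_tables(db_path):
    """Return the table names in a SQLite database, e.g. the Kindle's
    freetime.db, so you can explore its schema before writing queries."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
    finally:
        conn.close()
    return [name for (name,) in rows]

# Hypothetical usage with the path from the quote:
# list_tables("system/freetime/freetime.db")
```

      From there, `conn.execute(f"PRAGMA table_info({table})")` shows each table's columns, and the reading statistics can be pulled with ordinary SELECTs.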

    1. I tried very hard in that book, when it came to social media, to be platform agnostic, to emphasize that social media sites come and go, and to always invest first and foremost in your own media. (Website, blog, mailing list, etc.)
    1. Facebook provides some data portability, but makes an odd plea for regulation to make more functionality possible.

      Why do this when they could choose to do the right thing? They don't need to be forced and could certainly try to enforce security. It wouldn't be any worse than unveiling the tons of personal data they've managed not to protect in the past.

    1. Goodreads lost my entire account last week. Nine years as a user, some 600 books and 250 carefully written reviews all deleted and unrecoverable. Their support has not been helpful. In 35 years of being online I've never encountered a company with such callous disregard for their users' data.

      A clarion call for owning your own data.

    1. I like how Dr. Pacheco-Vega outlines some of his research process here.

      Sharing it on Twitter is great, and so is storing a copy on his website. I do worry that it looks like the tweets are embedded via a simple URL method and not done individually, which means that if Twitter goes down or disappears, so does all of his work. Better would be to do a full blockquote embed method, so that if Twitter disappears he's got the text at least. Images would also need to be saved separately.

    1. Common Pitfalls to Avoid When Choosing Your App

      What are the common pitfalls when choosing a note taking application or platform?

      Own your data

      Prefer note taking systems that don't rely on a company's long term existence. While Evernote or OneNote have been around for a while, there's nothing to say they'll be around forever or even your entire lifetime. That shiny new startup note taking company may not gain traction in the market and exist in two years. If your notes are trapped inside a company's infrastructure and aren't exportable to another location, you're simply dead in the water. Make sure you have a method to be able to export and own the raw data of your notes.

      Test driving many

      and not choosing or sticking with one (or even a few). Don't get stunned into inaction by the number of choices.

      Shiny object syndrome

      is the situation where people focus all attention on something that is new, current or trendy, yet drop it as soon as something newer takes its place. There will always be new and perhaps interesting note taking applications. Some may look fun and you'll be tempted to try them out and fragment your notes. Don't waste your time unless the benefits are manifestly clear and the pathway to exporting your notes is simple and easy. Otherwise you'll spend all your time importing/exporting and managing your notes and not taking and using them. Paper and pencil have been around for centuries and they work, so at a minimum do this. True innovation in this space is exceedingly rare, and while even small affordances like the ability to have [[wikilinks]] and/or bi-directional links may save a few seconds here and there, in the long run these can still be done manually, and having a system far exceeds the value of having the best system.

      (Relate this to the same effect in the blogosphere of people switching CMSes and software and never actually writing content on their website. The purpose of the tool is using it and not collecting all the tools as a distraction for not using them. Remember which problem you're attempting to solve.)

      Future needs and whataboutisms

Surely there will be future innovations in the note taking space, or you may find some niche need that your current system doesn't solve. Given the maturity of the space even in a pen and paper world, this will be rare. Don't worry inordinately about the future; imitate what has worked for large numbers of people in the past and move forward from there.

      Others? Probably...

    1. Even with data that’s less fraught than our genome, our decisions about what we expose to the world have externalities for the people around us.

      We need to think more about the externalities of our data decisions.

    1. It's the feedback that's motivating A-list bloggers like Digg founder Kevin Rose to shut down their blogs and redirect traffic to their Google+ profiles. I have found the same to be true.

This didn't work out too well for them, did it?

    1. The European Commission has prepared to legislate to require interoperability, and it calls being able to use your data wherever and whenever you like “multi-homing”. (Not many other people like this term, but it describes something important – the ability for people to move easily between platforms

      an interesting neologism to describe something that many want

    1. the decentralised and open source nature of these systems, where anyone can host an instance, may protect their communities from the kinds of losses experienced by users of the many commercial platforms that have gone out of business over the last decades (e.g. Geocities, Wikispaces or Google + to name just a few).

      https://indieweb.org/site-deaths names a large number of others

    1. Subsidiarity, which uses “data cooperatives, collaboratives, and trusts with privacy-preserving and -enhancing techniques for data processing, such as federated learning and secure multiparty computation.”

      Another value of the data cooperative model might be that each individual might not have time to research and administer possible new data-sharing requests/opportunities, and it would be helpful to entrust that work to a cooperative entity that already has one's trust.

    1. A 20-year age difference (for example, from 20 to 40, or from 30 to 50 years old) will, on average, correspond to reading 30 WPM slower, meaning that a 50-year old user will need about 11% more time than a 30-year old user to read the same text.
    2. Users’ age had a strong impact on their reading speed, which dropped by 1.5 WPM for each year of age.
1. Overall, having spent a significant amount of time building this project, scaling it up to the size it’s at now, as well as analysing the data, the main conclusion is that it is not worth building your own solution, and investing this much time. When I first started building this project 3 years ago, I expected to learn way more surprising and interesting facts. There were some, and it’s super interesting to look through those graphs, however retrospectively, it did not justify the hundreds of hours I invested in this project. I’ll likely continue tracking my mood, as well as a few other key metrics, however will significantly reduce the amount of time I invest in it.

      Words of the author of https://krausefx.com//blog/how-i-put-my-whole-life-into-a-single-database

It seems that excessive personal data tracking is not worth the effort.

  6. Apr 2022
    1. ReconfigBehSci [@SciBeh]. (2021, October 1). @alexdefig against this survey data you might set actual uptake figures in France, various Canadian provinces, and Germany after the introduction of passports [Tweet]. Twitter. https://twitter.com/SciBeh/status/1443955929985159174

    1. ReconfigBehSci [@SciBeh]. (2021, October 1). @alexdefig and I didn’t say we should mandate them. I simply pointed out that when considering the impact of passports on uptake we should probably look at actual uptake in response to actual mandates in addition to survey data, which may or may not translate into action, no? [Tweet]. Twitter. https://twitter.com/SciBeh/status/1443958577173917699

    1. ReconfigBehSci [@SciBeh]. (2021, October 1). @alexdefig so, observational data has weaknesses- so does survey data, but it’s there and we should look at it. On your second point, yes, that is important, we should study that, if we have no data we can’t factor it into decision. Third is separate issue/factor to weigh. [Tweet]. Twitter. https://twitter.com/SciBeh/status/1443960096497627141

    1. The combined stuff is available to components using the page store as $page.stuff, providing a mechanism for pages to pass data 'upward' to layouts.

Bidirectional data flow?! That's a game changer.

      analogue in Rails: content_for

      https://github.com/sveltejs/kit/pull/3252/files
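A minimal sketch of what this looks like under the pre-1.0 SvelteKit API described in the linked PR (the `stuff` mechanism was later replaced in SvelteKit 1.0; file names and the `title` field here are illustrative assumptions):

```svelte
<!-- src/routes/post.svelte: the page's load function returns `stuff`,
     which SvelteKit merges into the page store -->
<script context="module">
  export async function load() {
    return {
      stuff: { title: 'Hello from the page' }
    };
  }
</script>

<!-- src/routes/__layout.svelte: the layout reads the merged value
     back out of $page.stuff — data flowing 'upward' from page to layout -->
<script>
  import { page } from '$app/stores';
</script>

<svelte:head>
  <title>{$page.stuff.title}</title>
</svelte:head>

<slot />
```

This mirrors the Rails `content_for`/`yield` pairing: the inner template supplies a value that the outer layout consumes.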

    1. ReconfigBehSci. (2022, January 24). @STWorg @FraserNelson @GrahamMedley no worse- he took Medley’s comment that Sage model the scenarios the government asks them to consider to mean that they basically set out to find the justification for what the government already wanted to do. Complete failure to distinguish between inputs and outputs of a model [Tweet]. @SciBeh. https://twitter.com/SciBeh/status/1485625862645075970

    1. Jackie Parchem, MD [@jackie_parchem]. (2021, July 29). @MeadowGood @ACOGPregnancy Some of the docs who stepped up and got vaccinated early when we didn’t have the data we do now. What we all knew: Protecting moms protects babies! All have had their babies by now! @IlanaKrumm @anushkachelliah @gumbo_amando @emergjenncy @JuliaNEM33 https://t.co/h9UJo6h3fQ [Tweet]. Twitter. https://twitter.com/jackie_parchem/status/1420785474499645442

For this reason, the Secretary of State set out a vision [1] for health and care to have national open standards for data and interoperability that are mandated throughout the NHS and social care.
    1. Nick Sawyer, MD, MBA, FACEP [@NickSawyerMD]. (2022, January 3). The anti-vaccine community created a manipulated version of VARES that misrepresents the VAERS data. #disinformationdoctors use this data to falsely claim that vaccines CAUSE bad outcomes, when the relationship is only CORRELATED. Watch this explainer: Https://youtu.be/VMUQSMFGBDo https://t.co/ruRY6E6blB [Tweet]. Twitter. https://twitter.com/NickSawyerMD/status/1477806470192197633

    1. Carl T. Bergstrom. (2021, August 18). 1. There has been lots of talk about recent data from Israel that seem to suggest a decline in vaccine efficacy against severe disease due to Delta, waning protection, or both. This may have even been a motivation for Biden’s announcement that the US would be adopting boosters. [Tweet]. @CT_Bergstrom. https://twitter.com/CT_Bergstrom/status/1427767356600688646

    1. ReconfigBehSci. (2021, February 1). @islaut1 @richarddmorey I think diff. Is that your first response seemed to indicate the evidence was the search itself (contra Richard) so turning an inference from absence of something into a kind of positive evidence ('the search’). Let’s call absence of evidence “not E”. 1/2 [Tweet]. @SciBeh. https://twitter.com/SciBeh/status/1356215051238191104

    1. The Lancet. (2021, April 16). Quantity > quality? The magnitude of #COVID19 research of questionable methodological quality reveals an urgent need to optimise clinical trial research—But how? A new @LancetGH Series discusses challenges and solutions. Read https://t.co/z4SluR3yuh 1/5 https://t.co/94RRVT0qhF [Tweet]. @TheLancet. https://twitter.com/TheLancet/status/1383027527233515520

    1. Dr Nisreen Alwan 🌻. (2020, March 14). Our letter in the Times. ‘We request that the government urgently and openly share the scientific evidence, data and modelling it is using to inform its decision on the #Covid_19 public health interventions’ @richardhorton1 @miriamorcutt @devisridhar @drannewilson @PWGTennant https://t.co/YZamKCheXH [Tweet]. @Dr2NisreenAlwan. https://twitter.com/Dr2NisreenAlwan/status/1238726765469749248

1. Adam Kucharski on Twitter: "Interesting visualisation of COVID-related data sharing. (2021, March 26). https://t.co/lOc1mzeiHt via @OYCar https://t.co/Im9SWlCA3Q [Tweet]. @AdamJKucharski. https://twitter.com/Adam