  1. Last 7 days
    1. WhatsApp has its own backup feature (actually, it has more than one way to do it.) WhatsApp supports end-to-end encrypted backups that can be protected with a password, a 64-digit key, and (more recently) passkeys. WhatsApp’s public docs are here and WhatsApp’s engineering writeup of the key-vault design is here. Conceptually, this is an interesting compromise: it reduces what cloud providers can read, but it introduces new key-management and recovery assumptions (and, depending on configuration, new places to attack). Importantly, even if you think backups are a mess — and they often are — this is still a far cry from the effortless, universal access alleged in this lawsuit.

      WhatsApp has its own backup feature, w additional key pairs etc. But this is not what is being claimed.

    2. If you use native device backup on iOS or Android devices (for example, iCloud device backup or the standard Android/Google backup), your WhatsApp message database may be included in a device backup sent to Apple or Google.

      backed-up decrypted messages can be stored elsewhere when you back up your phone, e.g. at Google or Apple depending on your device

    3. Several online commenters have pointed out that there are loopholes in WhatsApp’s end-to-end encryption guarantees. These include certain types of data that are explicitly shared with WhatsApp, such as business communications (when you WhatsApp chat with a company, for example.) In fairness, both WhatsApp and the lawsuit are very clear about these exceptions. These exceptions are real and important. WhatsApp’s encryption protects the content of your messages, it does not necessarily protect information about who you’re talking to, when messages were sent, and how your social graph is structured. WhatsApp’s own privacy materials talk about how personal message content is protected while other categories of data exist.

      The lawsuit is not about metadata, nor about WhatsApp chats with businesses, which apparently are not E2EE (making it very unsuited for work situations I'd say)

    4. The most important thing to keep in mind here is that Meta’s encryption happens on the client application, the one you run on your phone. If the claims in this lawsuit are true, then Meta would have to alter the WhatsApp application so that plaintext (unencrypted) data would be uploaded from your app’s message database to some infrastructure at Meta, or else the keys would. And this should not be some rare, occasional glitch. The allegations in the lawsuit state that this applied to nearly all users, and for every message ever sent by those users since they signed up. Those constraints would tend to make this a very detectable problem. Even if WhatsApp’s app source code is not public, many historical versions of the compiled app are available for download. You can pull one down right now and decompile it using various tools, to see if your data or keys are being exfiltrated. I freely acknowledge that this is a big project that requires specialized expertise — you will not finish it by yourself in a weekend (as commenters on HN have politely pointed out to me.) Still, reverse-engineering WhatsApp’s client code is entirely possible and various parts of the app have indeed been reversed several times by various security researchers. The answer really is knowable, and if there is a crime, then the evidence is almost certainly* right there in the code that we’re all running on our phones.

      If the claim is correct, one could reverse engineer the app to see if true. Not a low hurdle but possible. 'the answer is knowable'

    5. In the case of WhatsApp, the application software is written by a team inside of Meta. This wouldn’t necessarily be a bad thing if the code was open source, and outside experts could review the implementation. Unfortunately WhatsApp is closed-source, which means that you cannot easily download the source code to see if encryption is performed correctly, or performed at all. Nor can you compile your own copy of the WhatsApp app and compare it to the version you download from the Play or App Store. (This is not a crazy thing to hope for: you actually can do those things with open-source apps like Signal.)

      WhatsApp being closed source cannot be proven to work as advertised by outsiders. Unlike Signal

    6. Today WhatsApp describes itself as serving on the order of three billion users worldwide, and end-to-end encryption is on by default for personal messaging. They haven’t once been ambiguous about what they claim to offer. That means that if the allegations in the lawsuit proved to be true, this would be one of the largest corporate coverups since DuPont.

      Publicly WhatsApp has always maintained they do E2EE; the lawsuit says otherwise, which would be a major scandal. But it also makes the claim hard to swallow

    7. The state of encryption on major messaging apps in early 2026. Notice that three of these platforms are operated by Meta.

      this is a sobering image. Signal at 70 million monthly active users; Apple iMessage 1.3 billion; WhatsApp 3 billion; Instagram 2 billion; FB Messenger 1 billion; Telegram 1 billion; Snapchat 900 million; Discord 200 million; WeChat 1.3 billion; DingTalk 191 million; QQ 553 million. No mention of Threema, too tiny I suppose.

    8. should never be able to read the content of your messages.

      no mention here of the type of metadata WhatsApp holds: Signal holds only whether an account exists and when it was last used, while WhatsApp has contact lists and the date/time of every message between senders/receivers etc. That in itself is an issue imo.

    9. Beginning in 2014 (around the time they were acquired by Facebook), the app began rolling out end-to-end (E2E) encryption based on the Signal protocol.

      WhatsApp started rolling out E2EE around the time they were acquired by Facebook (now Meta). They use the Signal protocol

    10. The downside of vast scale is that apps like this can also collect data at similarly large scale. Every time you send a message through an app like WhatsApp, you’re sending that data first to a server run by WhatsApp’s parent company, Meta.

      The scale is the reason the collected data is an issue.

    11. In terms of scale, modern messaging apps are unbelievably huge. At the start of the period in the lawsuit, WhatsApp already had more than one billion monthly active users. Today that number sits closer to three billion. This is almost half the planet. In many countries, WhatsApp is more popular than phone calls.

      Scale of WhatsApp is close to 3 billion people.

    1. USA is leaving OGP under the Trump regime. The USA was a co-founder in 2011 and together w Brazil held a launch conference w civil society in July that year. I missed the OGP launch in Washington bc I thought the formal invitation I received from then Secretary of State Hillary Clinton was spam. [[Hoe ik Hillary Clinton spamfilterde en de oprichting van OGP miste]]

    1. https://web.archive.org/web/20260203103605/https://edition.cnn.com/2026/02/02/business/companies-worldwide-distance-ice-backlash

      Headline somewhat misleading: it mostly covers that Capgemini is selling its US branch that works for ICE, where bonuses directly correlate with immigrants located and detained. They're not ditching doing business with ICE, they're selling it to the next bidder, who by def has no qualms doing business with ICE. And it remains to be seen how fast this sale will happen, if at all. A few other examples that are smaller, like office leases. IBM-style behaviour still the norm imo. Cf. [[Comment le groupe français Capgemini aide la police fédérale américaine ICE à localiser les migrants]]

    1. SpaceX 'bought' xAI for 250 billion USD. xAI previously 'bought' Twitter in March 2025. SpaceX is expected to go public this or next year. Iow, Musk is combining everything to a) inflate overall value b) externalise all risks to investors c) get his biggest payday yet

      Grok will be live tweeting nudified astro-pics from every launch and starlink satellite burning up in the atmosphere

    1. The German investment means that TenneT itself no longer holds the majority of the shares in the German activities. The Dutch pension administrator APG, a Norwegian state oil fund and a Singaporean state investment fund together own about 46 percent. "TenneT Holding will retain at least 28.9 percent of the shares in TenneT Germany", the company writes. "This gives TenneT Holding full involvement in important decisions."

      The shares of TenneT Germany are: TenneT Holding 28.9%; German state 25%; APG (Dutch pension fund), the Norwegian state oil fund and a Singapore state investment fund (huh?) together the remaining 46%. This seems to leave TenneT a minority but still the largest shareholder.

    1. Incidentally, the AI Deltaplan mentioned is not a delta plan. It isn't really even a plan. It's more a list of wishes from people who would like the Netherlands to simply become just like Silicon Valley. "But safe, though", they say. I wrote about this before: if you want to innovate in Europe, you have to do it in a European way, and not copy small pieces from America and then hope that they'll work here. Because they won't.

      word.

    2. “We are setting up a Dutch Digital Service: compact, expert and with the authority to push things through. This service supports digitalisation across the national government, sets quality standards and safeguards good design choices. We reduce dependence on external IT suppliers by employing more IT talent within the national government.”

      A 'Dutch Digital Service' to be formed, a sort of Rijks ICT Gilde plus? Again 'rijksbreed' (across central government only). Aimed at fewer dependencies on external IT suppliers and more IT knowledge within central government (again).

    3. The “call-in power” stems from a private member's bill by GroenLinks-PvdA MP Bushoff. The idea is that company takeovers can be undesirable even when there is no danger yet of an actual monopoly. The bill cites Co-Med, which came to control many GP practices, with unpleasant consequences. This power also looks like it could well affect takeovers of Solvinity-like companies with locally very important positions.

      The call-in power is a Dutch instrument to intervene in company takeovers with large externalities for the public good.

    4. The new competition tool (NCT), also described in the famous Draghi report (page 302), makes it possible to give companies binding instructions when effectively no competition is possible. For example, if it is practically impossible to compete with Google or Microsoft for email and calendar management, it can be used to issue a binding instruction that these companies must make it possible for a new player to participate in their ecosystem. Such an obligation sometimes exists already, but only under very specific circumstances. With the NCT it can also be imposed under the general condition that the market is non-competitive.

      The ACM gets the 'new competition tool' (NCT) à la Draghi: binding instructions to admit new players.

    5. the national civil service is now thoroughly marinated in American technology & matching work histories. The consultants, big four, integrators and other parties the government leans on also come from that world. If we want to move away from “big tech”, it is quite a challenge to do so from such a background/mindset. To be clear, I am emphatically not saying that the civil servants work for Microsoft, nor that they are corrupt! Nevertheless I consider the monoculture in backgrounds and advisers a problem. “Microsoft-unless” is written-out, official policy in many places.

      Good point: whoever currently counts as an expert has a background in exactly what we want to move away from. So there is a need for people with knowledge of the alternatives.

    6. We are standardising operations within the national government under the lead of the Ministry of BZK: from ICT and procurement to HR, among other things through mandatory standards and shared facilities

      BZK in the lead.

    7. “Digital purchasing and tendering will be standardised and centralised, steered by security-by-design, zero-trust, sovereignty, open source and supply chain security. The government uses its market power to enforce secure standards and sets rijksbrede (national-government-wide) minimum requirements for security. To qualify for funding, government IT projects (> €5 mln) must be tested against central IT standards”

      Does this also apply to decentralised governments? 'Rijksbreed' is not government-wide. Good, though, that exclusionary conditions seem to be coming wrt standards, security by design, zero-trust, sovereignty, open source and supply chain security. But with 5 million as the lower bound. That way the US still happily reads along in all documents and case systems, it seems to me. And larger projects will be cut up into phases of just under 5 million each. But it's a start.

    8. The agreement also doesn't say much about money, and some things are going to need a lot of money, because otherwise it stays at warm words about how important we find things. A budget table has been published, but at first glance it seems to say nothing about digitalisation. It does cover VAT on ornamental horticulture.

      No financial paragraph covering all the digital intentions

    9. There will be a “digital cabinet member”, but where and how is unknown. Not a word about money; it is not included in the “budget table” either!

      A digital cabinet member once again, but without a national budget. An opening for '25% of the savings in the first 2 years becomes the budget'?

    10. No reference whatsoever to the earlier Nederlandse Digitaliseringsstrategie (NDS), or to the promised government-wide cloud policy (Overheidsbrede Cloudbeleid) flowing from it - remarkable

      This is very strange indeed. The NDS is brand new. Cf. [[Nathan Ducastel p]]. The NDS also covers the government's own digital housekeeping (but including decentralised government: is that the difference?)

    1. [[Martijn Aslander p]] is enthusiastic about calling APIs and connecting them to local workflows. Rightly so. I realise that I do use APIs to send things out (such as writing to my feedreader), but still rarely to pull data towards me within personal tools. Calling APIs for bank transactions, for instance, is a good suggestion. Same for place names or other things in notes. Both in personal tools and in a template for e.g. an Obsidian plugin.

    1. Ollama is automatically detected when running locally at http://127.0.0.1:11434/v1

      openclaw can detect the presence of ollama if it is visible at this specific localhost address. Basically, if you have ollama running it will be detected. Meaning I could run openclaw fully locally.
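
      A quick way to verify this yourself: probe the documented localhost address. A minimal sketch in Python (standard library only); the /v1/models listing is part of Ollama's OpenAI-compatible API, the function name is mine.

      ```python
      # Minimal sketch: detect a locally running Ollama instance, the way a
      # tool like OpenClaw could, by probing the documented address.
      import json
      import urllib.request

      OLLAMA_BASE = "http://127.0.0.1:11434/v1"  # address from the docs above

      def detect_ollama() -> list[str]:
          """Return locally available model ids, or [] if Ollama isn't running."""
          try:
              with urllib.request.urlopen(f"{OLLAMA_BASE}/models", timeout=2) as resp:
                  data = json.load(resp)
              return [m["id"] for m in data.get("data", [])]
          except OSError:
              return []

      if __name__ == "__main__":
          models = detect_ollama()
          print("Ollama detected:", bool(models), models)
      ```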

    1. Setting context length
       Setting a larger context length will increase the amount of memory required to run a model. Ensure you have enough VRAM available to increase the context length.

      This setting is in the ollama desktop interface. Does it set it for the terminal too? Or are these two separate instances?

    2. Context length is the maximum number of tokens that the model has access to in memory. The default context length in Ollama is 4096 tokens. Tasks which require large context like web search, agents, and coding tools should be set to at least 64000 tokens.

      Default ollama context length is 4k. Recommended minimum for websearch, agents and coding tools (like Claude Code or Open code) is 64k. I've seen 128k recommendations for Claude Code
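
      The context length can also be raised per request rather than in the desktop UI: Ollama's native API accepts a num_ctx option. A minimal sketch, assuming some locally pulled model named llama3.1 (model name and prompt are placeholders):

      ```python
      # Minimal sketch: raise the context window for one request via
      # Ollama's native /api/generate endpoint ("num_ctx" option).
      import json
      import urllib.request

      payload = {
          "model": "llama3.1",            # placeholder: any locally pulled model
          "prompt": "Summarise this repo's README.",
          "options": {"num_ctx": 64000},  # up from the 4096 default
          "stream": False,
      }
      req = urllib.request.Request(
          "http://127.0.0.1:11434/api/generate",
          data=json.dumps(payload).encode(),
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print(json.load(resp)["response"])
      ```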

    1. Clawdbot Realistic Costs:
       Software: Free (MIT licensed, forever)
       Hardware: VPS $4-5/month, or Raspberry Pi ~$50-100 upfront, or old laptop free, or Mac Mini ~$600
       AI Model: Claude Pro $20/month (casual) to Claude Max $200/month (heavy use like Viticci)
       Realistic minimum: ~$25/month
       But remember: that $300+ in 2 days user is real. Heavy agentic use burns through tokens fast.

      Assuming cloud-based models. Why? You could drop up to $200/month on a VPS and be really self-sufficient, but you probably wouldn't need a VPS that heavy?

    1. Clawdbot is built for something completely different. Think of it as a personal executive assistant that lives inside your messaging apps. Just like you’d email instructions to a real assistant who then handles the work while you focus on other things, Clawdbot receives natural language requests through WhatsApp, Telegram, Slack, or Discord and then executes actual tasks on your computer.

      Description of Clawdbot, now named OpenClaw, as a PA

    2. Clawdbot uses AI APIs on every interaction and every background task. Costs accumulate.

      need to understand better where it is getting its models from. Can it be local too? Then costs are your own compute only. Why is everyone always assuming cloud models for a fee?

    3. Clawdbot requires technical setup. You need to deploy it on a server or local machine, configure messaging platform integration, manage API keys, and set permission boundaries thoughtfully. It’s not a consumer app. You can’t open an app store, tap install, and have it ready. That’s a real friction point that’s worth acknowledging.

      run on a dedicated machine?

    4. Use Clawdbot when your primary bottleneck is the accumulation of small tasks across your digital life.

      clawdbot for 'small tasks across your digital life', cf. [[Aazai 2025 aantekeningen]] wrt such tasks.

    5. Clawdbot’s power to access messaging platforms means anyone with a security compromise at any layer could potentially impersonate you to the agent. A prompt injection through a web page it’s browsing, a malicious message in a group chat, or a crafted email could theoretically redirect it toward unintended actions. Proper sandboxing and permission boundaries mitigate this, but they require genuine technical discipline

      clawdbot as additional attack surface

    6. Clawdbot requires elevated system permissions to do what it does. It needs to read and write files, execute shell commands, access your terminal, connect to services on your behalf. Running an always-on agent with access to your credentials, your messaging platforms, and your file system creates security surface area that’s worth understanding.

      security concerns wrt clawdbot bc it executes actions you'd normally do.

    7. Every conversation you have with it, every preference you state, every decision you make gets stored in a markdown file that evolves over time. Future requests pull relevant context from that history automatically. You don’t have to remind it of things.

      Full context maintained in a md file.
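
      The pattern itself is simple. A toy sketch of append-and-recall memory in a markdown file (not Clawdbot's actual code; the filename and the naive keyword matching are invented):

      ```python
      # Toy sketch: persistent agent "memory" as an append-only markdown file,
      # with naive keyword recall. Illustrative only.
      from datetime import date
      from pathlib import Path

      MEMORY = Path("memory.md")  # hypothetical filename

      def remember(note: str) -> None:
          """Append a dated bullet to the memory file."""
          with MEMORY.open("a", encoding="utf-8") as f:
              f.write(f"- {date.today()}: {note}\n")

      def recall(query: str) -> list[str]:
          """Return memory lines sharing a keyword with the query."""
          if not MEMORY.exists():
              return []
          terms = query.lower().split()
          return [line for line in MEMORY.read_text(encoding="utf-8").splitlines()
                  if any(t in line.lower() for t in terms)]

      remember("Prefers briefings at 07:00 via Telegram")
      print(recall("telegram briefing"))
      ```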

    8. Clawdbot is built for async, long-running, continuous tasks that exist across your entire digital life. It’s the tool that monitors your inbox at midnight, processes information while you sleep, and sends you a briefing in the morning. It’s designed to be always on, always learning your preferences, always available through the messaging apps where you already live

      clawdbot is persistent across time / tasks.

    9. It maintains persistent memory across conversations, so it remembers your preferences, past decisions, and ongoing projects. It can monitor scheduled tasks, send you proactive notifications, and continuously work on long-running tasks even when you’re not actively messaging it.

      clawdbot maintains context over time. In a .md log of sorts?

    10. Clawdbot connects to dozens of services by default: Gmail, Google Calendar, Todoist, GitHub, Spotify, even smart home devices. When it needs capabilities it doesn’t have built in, it can request them, and with proper guidance from you, it can expand those capabilities itself.

      'by default' or can do it out of the box, if switched on?

    1. Why This Matters: From Probabilistic Guesses to Deterministic Answers
       Grounding your AI in a Knowledge Graph delivers three non-negotiable enterprise advantages:
       1. Accuracy: Answers are derived from an explicit model of your business, not from statistical correlations in text. You eliminate both factual and relational hallucinations.
       2. Explainability: Every answer comes with a query that shows exactly how it was derived, which entities were connected and which rules were applied. This turns the AI from a black box into a transparent tool.
       3. Architectural Stability: The semantic layer, the ontology, remains stable even as the underlying systems change. When you migrate your CRM, you simply update the mapping to the ontology. Your AI, analytics, and dashboards continue to work without interruption. This is agility where it counts.

      again this arrives at database queries you already had, but in a far more convoluted way. When the 'reasoning' is deterministic, you don't need a probabilistic layer at all, no?

    2. 2. A Schema-RAG system using a Knowledge Graph operates differently:
       The AI first consults the ontology to understand the question’s components.
       It finds that Support Ticket is a class linked via a property referencesProduct to the Product class.
       It discovers that the Product class has a property called productType, and that ‘Connected Service’ is a specific instance of that type.
       Armed with this understanding of the relationships, it constructs a precise, formal query (SPARQL) to retrieve only the tickets that conform to this logic.

      Congrats, you just recreated a pre-existing tab in your support ticket system, by vibecoding a SPARQL query that was likely already in your system's manual even.
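
      For reference, the query the article describes would look roughly like this. A sketch using rdflib; the ex: namespace, the file name and the status property are my assumptions, the class/property names come from the article's example:

      ```python
      # Illustrative only: the kind of SPARQL query a Schema-RAG step would
      # construct for "all open support tickets for 'Connected' services".
      from rdflib import Graph

      g = Graph()
      g.parse("enterprise.ttl")  # hypothetical ontology + data export

      QUERY = """
      PREFIX ex: <http://example.org/ontology#>
      SELECT ?ticket WHERE {
        ?ticket a ex:SupportTicket ;
                ex:status "open" ;              # assumed status property
                ex:referencesProduct ?product .
        ?product ex:productType ex:ConnectedService .
      }
      """
      for row in g.query(QUERY):
          print(row.ticket)
      ```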

    3. Consider a critical business question: “Show me all open support tickets for our ‘Connected’ services.”

      another telling example. Is the number of 'open support tickets' 'critical' to your business? Most companies would say no. The article is way too vague in specifying which organisations are the mental audience here. Or worse, it assumes all companies are like it. Why would you even need AI for this? Any ticketing system will have a tab for this.

    4. 1. A Formal Ontology: This is the rulebook. Built using standards like RDF, OWL, and SHACL, the ontology is an intentional, explicit contract of meaning. It defines the classes of things that matter (Customer, Contract, Product), their properties (hasName, hasValue), and the relationships between them (coversProduct, assignedTo). This is where you declare, unambiguously, that a Debitor in SAP is, in fact, the same as a Client in Salesforce.

      ontology here is an information management ontology, mapping variables and their properties across applications ('domains' as used above is a confusing term then). The mention of SAP here is a tell, in the sense of what type and scale of enterprise we're talking about.
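
      The "Debitor in SAP is the same as Client in Salesforce" declaration boils down to a single triple. A minimal sketch with rdflib; all URIs are invented for illustration:

      ```python
      # Minimal sketch: declaring cross-system identity with owl:sameAs.
      from rdflib import Graph, Namespace
      from rdflib.namespace import OWL

      EX = Namespace("http://example.org/")
      g = Graph()
      # assert that one record in SAP and one in Salesforce denote the same entity
      g.add((EX["sap/Debitor/4711"], OWL.sameAs, EX["salesforce/Client/4711"]))
      print(g.serialize(format="turtle"))
      ```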

    5. This grounds a probabilistic LLM in a deterministic model of your business, transforming a clever chatbot into a reliable reasoning engine for mission-critical decisions.

      Bit hyperbolic phrasing. Deterministic layers in front of an LLM make sense. Also a form of prompt engineering in a way; I seem to see that regularly: better inputs to generate better outputs. At what point are you spending so much on inputs that you might as well produce the output yourself that way? Is the assumption that most companies can actually formulate their processes in terms of 'business logic'? What would an example look like?

    6. The winning architecture uses Schema-RAG: retrieving meaning from a formal ontology before any data query is formed. This approach grounds the AI in verifiable business logic, ensuring accuracy and full explainability

      Schema-RAG as improvement (a formal ontology consulted up front, before a RAG query is formulated).

    7. Knowledge Graphs provide the semantic context, constraints and explicit relationships that LLMs lack. This enables true reasoning, like navigating a map of your business, instead of just text retrieval.

      knowledge graphs represent semantic context and relationships / constraints. K-graphs are a 1980s thing; I know them from adding them to the reference architecture for systems of digital twins I co-wrote. But I have no understanding more recent than the 1990s.
      - [ ] spend #30mins collecting current state of the art on #knowledgegraphs #pkm

    8. Standard Retrieval-Augmented Generation (RAG) over documents is a good first step, but it fails when faced with complex, cross-domain enterprise questions. It finds text that looks similar, which isn’t the same as finding facts that are related.

      criticism of retrieval augmented generation (RAG): it fails in cross-domain settings; it finds similar text, not relations between facts or meaning

    9. Julius Hollmann

      author self-described as "CEO @ digetiers I Building d.AP - the semantic foundation for real enterprise AI. Passionate about turning fragmented data into contextual, usable knowledge." So he has a business interest in the topic. What's 'real enterprise AI', and what is his definition of enterprise (99% of all companies are SMEs)?

    1. Definitions. According to the Van Dale dictionary, sovereignty is a synonym of autonomy and self-government, but the authors of a new report see it differently. In their view, digital autonomy refers to the ability of the national government to operate independently and self-sufficiently in the digital domain. A digitally autonomous government has control over its own digital infrastructures, systems and data without excessive dependence on external parties, particularly foreign entities. Digital sovereignty, according to the report's authors, goes a step further. It refers to the full independence and enforceable jurisdiction of the national government over its digital domain, without the possibility for foreign entities to exert control or influence.

      This is the definition of autonomy / sovereignty that MinBZK uses (and that I use). Source: Strategische Verkenning Digitale Autonomie en de Rijksoverheid. Cf. Actieagenda Digitale Autonomie

    1. While this page's design is highly irritating wrt readability, it asks a good question wrt the basic layout of feedreaders. Cf. [[Mijn ideale feedreader 20180703063626]] and Fraidycat w its sparklines. I'd like heatmaps across communities etc.

      Cf. [[Claude code workshop Frank]] last Friday, where I started implementing some things

    1. German initiative for digital autonomy / sovereignty, called Digital Independence Day, DI Day for short (pronunciation: d-day). Every 1st Sunday of the month is a DI Day, when people are encouraged, and assisted, to switch to more privacy-oriented and open services/applications, away from existing silos.

    1. Your assistant. Your machine. Your rules. Unlike SaaS assistants where your data lives on someone else’s servers, OpenClaw runs where you choose—laptop, homelab, or VPS. Your infrastructure. Your keys. Your data.

      you run openclaw yourself. I think I saw [[Martijn Aslander p]] use it on a VPS yday.

    2. OpenClaw is an open agent platform that runs on your machine and works from the chat apps you already use

      openclaw an agent platform you interact with through regular chat apps.

    3. Two months ago, I hacked together a weekend project. What started as “WhatsApp Relay” now has over 100,000 GitHub stars and drew 2 million visitors in a single week.

      Openclaw started as a side project in #2025/11

    1. He talks about the intended audience for his work, whom he is speaking to. “My work addresses black [American] people, everybody else gets to listen in.” How having a limited imagined audience for your work is not an exclusionary act. How it is something that breathes life into the work, injects meaning and passion into a creative expression, impossible to achieve if that work would have been imagined for everyone. After which that life, meaning and passion can be appreciated as well by anyone outside those that are originally being addressed.

      n:: Imagined/intended audience as a means to creative expression, not an exclusion. n:: there's a diff between the imagined audience, and the actual audience after creation. Cf. [[Assumed audience definieren 20211113212257]] ref the video interview by [[Arthur Jafa c]]

    2. He talks about the spectrum between good and bad, hence the title of the interview ‘not all good, not all bad’. That we typically want to pigeonhole someone or something as either good or bad, and that finding out an aspect that doesn’t fit should not lead to fully switching to labelling it the opposite. You have to get comfortable with the discomfort of juggling different and opposite notions about someone or something at the same time.

      cf. [[Holding questions 20091015123253]] re discomfort / cf. complexity

    1. Digital sovereignty is about control at three levels: geographic, legal and operational. If data and critical business processes sit in data centres within the Netherlands or Europe, you are geographically and legally in control, because these data centres fall under Dutch and European jurisdiction. That is not sufficient, however, because digital sovereignty also requires operational control.

      This is incorrect: you are not legally in control if other powers simultaneously claim jurisdiction too. Geographic control is equated with legal control here, and that is precisely what is not the case (and what should be).

      For sovereignty it says here that operational control is also needed.

    2. In the current debate it seems as if the entire government must move its digitalisation to autonomous or sovereign applications, and do so immediately. That is neither feasible nor necessary, however. It starts with making clear choices about which data and systems require digital autonomy and sovereignty, and which do not. So that the transition can then begin at a realistic pace and in feasible steps.

      These sentences only have meaning if you follow the definition of sovereignty above as operational control (operational actually relates to autonomy). That 'immediately' is a cheap trick to be able to say 'unfeasible'. Digital autonomy and sovereignty are tied here to 'data and systems' that need them. But it attaches to the government as an institution, i.e. you always need it. At most, some things are less undermining than others (DigiD vs traffic light control).

    3. That is why digital sovereignty also requires operational control. You achieve that by building in safeguards that enable you to keep control. This is determined by the choice and specific implementation of the components the cloud is built from. Software from non-European parties can still be used within those safe frameworks. Better still is to use applications from Dutch or European providers. Even then safeguards are needed, such as agreements about sale to unwanted parties.

      This only holds within the definitions above, because they have glossed over the crux up there: that another country claims jurisdiction alongside ours.

    4. Digital autonomy is about being able to make independent choices in the digital domain. It is about retaining decision-making authority, without full independence. Digital sovereignty goes a step further. It means full control and authority over data, digital infrastructure and technologies.

      Hmm. Two different definitions again: digital autonomy as 'being able to make independent choices in the digital domain' (retaining decision-making authority); digital sovereignty 'goes a step further', as authority over data, digital infrastructure and technologies.

      I see both of those as autonomy, with sovereignty meaning that no one can influence you through your digital tooling.

      This is an opinion piece by Centric and Uniserver.

    1. The choice is ours. We simply need to choose whom we admire. Whom we want to recognize as successful. Whom we aspire to be when we grow up. We need to sing the praises of our true heroes: those who contribute to our commons.

      Key point here is that big tech is the outcome of a specific definition of success (centralised growth vs spreading). (Not mentioned: VC-style funding necessitates growth / extraction.)

    1. Supporting Forgejo with work on dependency features would help too. The goal would be feature parity with GitHub and GitLab so self-hosted forges work with the same security tooling.

      a call for direct funding of Forgejo to reach feature parity w GitHub / GitLab. Forgejo is a fork of Gitea (and is what Codeberg runs).

    2. Procurement requirements could include open supply chain tooling. If an agency requires SBOMs, they could also require that generation doesn’t depend on proprietary services. If they require vulnerability scanning, the scanner could consume open advisory databases. Germany’s ZenDiS and openCode.de initiatives are relevant here. Connecting them with existing open solutions would be more efficient than starting fresh.

      Add (kick-out!) requirements to procurement specs. This is a way to ensure open source and standards get adopted. Mentions ZenDiS and openCode.de as relevant examples.
      - [ ] return to look at ZenDiS and opencode.de

    3. The strategy is to unbundle the parts of a package manager and standardize them individually. Registry APIs, dependency graphs, vulnerability feeds, update notifications. Each piece can be commodified without replacing entire systems. Eat the elephant one bite at a time.

      yes, this is also how you tackle all the other silos. Deconstruct and recombine along different principles

    4. Treat dependency intelligence as infrastructure worth funding directly. The Sovereign Tech Fund model applies: direct funding to open source projects that serve as foundations. Ecosyste.ms, VulnerableCode, OSV, PURL implementations, CycloneDX/SPDX tooling, Forgejo’s dependency features all fit this category.

      suggests seeing dependency intelligence as infrastructure, and funding it directly, e.g. through the [[Legal Information Sovereign Tech Agency]] fund.

    5. The gap between these columns is where standardization would reduce switching costs. Not building a European deps.dev, but defining a common dependency graph API. Not building a European Dependabot, but standardizing how dependency updates get proposed. A protocol for package management could let different implementations compete on the same interfaces. GitHub and GitLab bundle dependency features into their platforms: dependency graphs, vulnerability alerts, automated updates. A self-hosted Forgejo or Gitea instance doesn’t have equivalent tooling. But if those features were built on open standards and open data sources, switching forges wouldn’t mean losing supply chain visibility. The dependency intelligence could come from any provider that implements the same interfaces, rather than being locked to the forge vendor. Some gaps need new standards rather than adoption of existing ones. There’s no good specification for package version history across registries. Codemeta describes a package at a point in time, not its release history. PkgFed proposes using ActivityPub to federate release announcements, similar to how ForgeFed handles forge events.

      This points to where standards can reduce friction: a common dependency graph API, a standard for how dependency updates get proposed, a protocol for package management, and dependency features based on open standards / open data, so that dependency intelligence is not a lock-in element.

      New standards are needed for package version history across registries. Mentions PkgFed and ForgeFed, cf. [[PkgFed ActivityPub for Package Releases]]

    6. Most standards work in this space focuses on compliance artifacts: SBOMs for the Cyber Resilience Act, attestations for procurement requirements. Less attention goes to the underlying tools developers actually use. The dependency graph that feeds the SBOM generator, the metadata lookup that powers vulnerability scanning, the notification when a new version ships.

      Says standards in this topic are aimed at compliance. SBOMs for the Cyber Resilience Act e.g. [[Cyber Resilience Act CRA EU 20231026123507]]

    7. Other areas don’t, which keeps switching costs high. Dependency graph APIs vary by platform, vulnerability scanning integration is proprietary per forge, Dependabot and Renovate each have their own config format, and package metadata APIs differ across registries.

      Areas that do not have (de facto) standards, meaning high switching costs: dependency graph APIs, vulnerability scanning integration, Dependabot/Renovate config formats, and package metadata APIs all differ.

    8. some areas have formal specifications. PURL provides a standardized way to reference packages across ecosystems. OSV and OpenVEX let advisory data flow between systems. CycloneDX and SPDX handle SBOMs. SLSA, in-toto, and TUF cover provenance. OCI standardizes container images.

      Formal standards exist for some areas only. Mentions: PURL for references to packages; OSV and OpenVEX for advisory data flows; CycloneDX and SPDX for SBOMs; SLSA, in-toto and TUF on provenance; OCI for container images. How many of these are indeed formal standards? Does he mean documented?

      - [ ] return to search if these are formal standards, and by which standards body, plus links
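
      As a concrete illustration of the first of these: a PURL is just a structured string, parseable with the packageurl-python library (the example package is arbitrary):

      ```python
      # Hedged example: parsing a package URL (PURL) into its components.
      from packageurl import PackageURL

      purl = PackageURL.from_string("pkg:npm/lodash@4.17.21")
      print(purl.type, purl.name, purl.version)  # -> npm lodash 4.17.21
      ```
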
    9. Eaves’s commodification argument depends on standards to reduce switching costs. In the package management landscape, some de facto standards have emerged. Git is nearly universal for source hosting. Semver is the dominant versioning scheme, even if ecosystems interpret it differently. Lockfile formats vary by ecosystem, but they’ve become standards in practice: every dependency scanning company builds the same set of parsers to extract dependency information from all of them. Syft, bibliothecary, gemnasium, osv-scalibr, and others all parse the same formats. I made a dataset covering manifest and lockfile examples across ecosystems, and a similar collection of OpenAPI schemas for registry APIs. These are what made git-pkgs come together quickly.

      In package management there is a range of de facto standard modes of operation. These are not formal standards; they just emerged in practice.

    10. Dries Buytaert extended this to procurement: governments buy from system integrators who package and resell open source, but that money doesn’t reach the maintainers who build it. If procurement scoring rewarded upstream contributions, money would flow differently. Open source is “the only software you can run without permission” and therefore useful for sovereignty, but it needs funding to work.

      See [[Funding Open Source for Digital Sovereignty]]

    11. Ploum made a related point: Europe doesn’t need a European Google. The European contribution to software has been infrastructure that serves as collective commons: the web, Linux, Git, VLC, OpenStreetMap.

      [[Why there’s no European Google]]

    12. The security and metadata tooling built on top of these registries tends to be US-based regardless of where the registry itself is hosted. A European company running Forgejo for code hosting still typically uses US services for dependency updates, vulnerability scanning, license compliance, and SBOM generation. Self-hosting the forge doesn’t change the intelligence layer.

      common situation. You stack tools, and may only have one of them in the EU or self-hosted

    13. The package registries follow a similar pattern, with a few European exceptions:

        | Registry | Owner | Country |
        | --- | --- | --- |
        | npm | Microsoft | US |
        | PyPI | Python Software Foundation | US |
        | RubyGems | Ruby Central | US |
        | Maven Central | Sonatype | US |
        | NuGet | Microsoft | US |
        | Crates.io | Rust Foundation | US |
        | Go module proxy | Google | US |
        | Docker Hub | Docker Inc | US |
        | Conda/Anaconda | Anaconda Inc | US |
        | CocoaPods | CocoaPods | US |
        | Pub.dev | Google | US |
        | CPAN | Perl Foundation | US |
        | Homebrew | Homebrew | US |
        | Hex.pm | Six Colors AB | Sweden |
        | Packagist | Private Packagist | Netherlands |
        | CRAN | R Foundation | Austria |
        | Clojars | Clojars | Germany |

      package registries; names 4 EU-based ones: Hex.pm, Packagist, CRAN, Clojars. Are these aimed at different things, or comparable?

    14. Most git forges are US-based:

        | Forge | Owner | Country |
        | --- | --- | --- |
        | GitHub | Microsoft | US |
        | GitLab | GitLab Inc | US |
        | Gitea | Gitea Ltd | US |
        | HuggingFace | Hugging Face Inc | US |

      It says most are US, but lists no others. Should it say Gitea Ltd is UK, bc of the 'Ltd'? Gitea has been forked into Forgejo (which Codeberg runs). Gitea is open source; GitLab has an open source community version, but is otherwise closed.

    15. The same logic applies to the software supply chain, though that layer gets less attention in sovereignty discussions than cloud and storage.

      This article applies the same logic as David Eaves in [[The Path to a Sovereign Tech Stack is Via a Commodified Tech Stack]] to the 'software supply chain' here interpreted as: git forges, dependency intelligence layer on top of them, and package registries (like npm etc).

    16. Europe shouldn’t try to build its own AWS. Instead, governments should use procurement power to enforce interoperability standards.

      (David Eaves argument recap:) The answer is not in replicating similar types of organisations (like AWS) but in interoperability. I tend to agree. The problem w hyperscalers is the hyper and the scale. We don't do that for internet infrastructure either. Cf. [[The Path to a Sovereign Tech Stack is Via a Commodified Tech Stack]]

    17. If governments required that kind of compatibility as a condition for contracts, smaller providers could compete. Sovereignty through standards rather than state-owned infrastructure.

      Premise: if public procurement demanded interoperability / compatibility, it would allow more competition. 'Sovereignty through standards'. I agree that interoperability is the way to go (cf. [[SEMIC Conf woensdag 20251126092049]]). However, in public procurement the ask is often for the kind of integration a single provider offers, which precludes a tapestry of interoperable elements. So there's more to it than this.

    1. But in November, the French group responded to a new call for tenders, this time to identify and locate foreigners. This is called skip-tracing, and it is a priority for ICE. Capgemini grabbed the largest share of the contract, worth up to 365 million dollars. It is written in black and white: the more migrants the French company locates, the more money it can pocket. The financial bonuses are indeed based on the success rate in verifying foreigners' addresses.

      End of 2025 Capgemini entered into a new contract w ICE wrt skip-tracing. 365M USD, but Capgemini earns more when they locate more foreigners. --> this is worse than 'being IBM' bc now they have a direct financial stake in locating additional people.

    2. The French IT services champion has 350,000 employees worldwide, and an American subsidiary based near Washington. It works for several government agencies: the Department of Health, Veterans Affairs and, for more than fifteen years, the Department of Homeland Security. Contracts we consulted in the public databases. For ICE, Capgemini runs, for example, a telephone hotline reserved for victims of crimes committed by foreigners. A creation of Donald Trump's.

      Capgemini's US branch obv has many contracts w branches of the US administration. DHS has been a client for 15 yrs. ICE, e.g., has contracted Capgemini to run a hotline for reporting crimes by 'foreigners'.

  2. Jan 2026
    1. Freedom internet provider on the difficulty of blocking Russian state media sites:
       - there is a court order to block sanctioned websites
       - however, there is no official list of sanctioned websites
       - there are several lists from different member states and industry organisations with suggested sites, but none of these is an official list or effort
       - one of those lists, the one from Lithuania, contains sites that are clearly not Russian state media (an Indian social media platform, and a generic video sharing site)

    1. Dutch ABP (a 500 billion pension fund) dropped a third of its US treasury bills: down 10 billion, from 29 billion (March '25) to 19 billion now (Sept '25). The money was reinvested in Dutch and German bonds.

    1. The Commission has extended its ongoing formal proceedings opened against X in December 2023 to establish whether X has properly assessed and mitigated all systemic risks, as defined in the DSA, associated with its recommender systems, including the impact of its recently announced switch to a Grok-based recommender system.

      The existing investigation of X under the DSA wrt recommender systems is extended in scope to include the recommender functions that Grok is announced to provide

    2. The new investigation will assess whether the company properly assessed and mitigated risks associated with the deployment of Grok's functionalities into X in the EU. This includes risks related to the dissemination of illegal content in the EU, such as manipulated sexually explicit images, including content that may amount to child sexual abuse material.

      A new investigation under the DSA wrt Grok and the production/dissemination of illegal incl sexualised imagery and CSAM

    1. Ranking of cities by how well cycling is organised / accommodated. The European ranking (which has the same top 5 as the global one) has Utrecht and Amsterdam, plus obv CPH itself, Gent and now Paris.

    1. In 5 yrs Paris has doubled cycling's modal share, and become the 5th-ranked city in the Copenhagenize urban cycling index. Last summer ([[Paris Versailles 2025]]) we noticed the difference (compared to [[Paris 2021]]), but also thought a lot still looked improvised, not embedded yet. So probably some way to go until it's truly ingrained.

    1. Blogger Fabrizio Ferri Benedetti on their 4 modes of using AI in technical writing:
       - watercooler conversations, to get code explained
       - text suggestions while writing/coding (esp for repeating patterns in your work)
       - providing context / constraints / intent to generate first drafts, restructure content, boilerplate commentary etc.
       - a robotic assembly line, to do checks, tests and rewrites. MCP/skills involved.

      Not either/or but switching between modes

    1. OpenHands demonstrated strong capabilities, particularly for complex refactoring tasks. With better configuration and more explicit instructions about development workflows, it could likely match Copilot's reliability. The open-source nature also makes it attractive, since the entire system can be self-hosted and configured for every team's or project's needs.

      openhands useful but likely needs more explicit instructions than others

    2. OpenHands: Capable but Requiring Intervention
       I connected my repository to OpenHands through the All Hands cloud platform. I pointed the agent at a specific issue, instructing it to follow the detailed requirements and create a pull request when complete. The conversational interface displayed the agent's reasoning as it worked through the problem, and the approach appeared logical.

      Also used OpenHands for a test; says it needs intervention (not fully delegated iow)

    3. When an agent doesn't deliver what you expected, the temptation is to engage in corrective dialogue — to guide the agent toward the right solution through feedback. While some agents support this interaction model, it's often more valuable to treat failures as specification bugs. Ask yourself: what information was missing that caused the agent to make incorrect decisions? What assumptions did I fail to make explicit?
       This approach builds your specification-writing skills rapidly. After a few iterations, you develop an intuition for what needs to be explicit, what edge cases to cover, and how to structure instructions for maximum clarity. The goal isn't perfection on the first try, but rather continuous improvement in your ability to delegate effectively.

      don't iterate for corrections. Redo and iterate the instructions. This is a bit like prompt engineering the oracle, no? AI isn't the issue, it's your instructions. Up to a point, but in flux too.

    4. A complete task specification goes beyond describing what needs to be done. It should encompass the entire development lifecycle for that specific task. Think of it as creating a mini project plan that an intelligent but literal agent can follow from start to finish.

      A discrete task description to be treated like a project in the GTD sense (anything above 2 steps is a project). At what point is this overkill? Templating this project description may well mean you already have the solution once you've written it out.

    5. The fundamental rule for working with asynchronous agents contradicts much of modern agile thinking: create complete and precise task definitions upfront. This isn't about returning to waterfall methodologies, but rather recognizing that when you delegate to an AI agent, you need to provide all the context and guidance that you would naturally provide through conversation and iteration with a human developer.

      What I mentioned above: to delegate you need to be able to fully describe and provide context for a discrete task.

    6. The ecosystem of asynchronous coding agents is rapidly evolving, with each offering different integration points and capabilities:
       GitHub Copilot Agent: Accessible through GitHub by assigning issues to the Copilot user, with additional VS Code integration
       Codex: OpenAI's hosted coding agent, available through their platform and accessible from ChatGPT
       OpenHands: Open-source agent available through the All Hands web app or self-hosted deployments
       Jules: Google Labs product with GitHub integration capabilities
       Devin: The pioneering coding agent from Cognition that first demonstrated this paradigm
       Cursor background agents: Embedded directly in the Cursor IDE
       CI/CD integrations: Many command-line tools can function as asynchronous agents when integrated into GitHub Actions or continuous integration scripts

      A list of async coding agents as of #2025/08; GitHub, OpenAI and Google mentioned. OpenHands is the one open source option mentioned. Notes that command line tools can be used too (if integrated w e.g. GitHub Actions to tie into the coding environment).
      - [ ] check out openhands agent by All Hands

    7. You prepare a work item in the form of a ticket, issue, or task definition, hand it off to the agent, and then move on to other work.

      compares delegation to formulating a 'ticket'. Assumes well defined tasks up front I think, rather than exploratory things.

    8. While interactive AI keeps you tethered to the development process, requiring constant attention and decision-making, asynchronous agents transform you from a driver into a delegator.

      async means no handholding, but delegation instead. That is enticing obviously, but assumes unattended execution can be trusted. Seems a big if.

    9. why asynchronous agents deserve more attention than they currently receive, provides practical guidelines for working with them effectively, and shares real-world experience using multiple agents to refactor a production codebase.

      3 things in this article: - why async agents deserve more attention - practical guidelines for effective deployment - real world examples

    10. asynchronous coding agents represent a fundamentally different — and potentially more powerful — approach to AI-augmented software development. These background agents accept complete work items, execute them independently, and return finished solutions while you focus on other tasks.

      Async coding agents are a diff kind of vibe coding: you give it a defined, more complex task and it will work in the background and come back with an outcome.

    1. Further Reading
       I’m not gonna pretend to be an expert here (any more than I’m an expert Obsidian plugin developer :p) but here are some resources that helped me figure out Claude Code:
       Kent writes a lot about how he uses Obsidian with Claude Code.
       This is an incredible hub of resources for using Claude Code for project management, by someone who also uses Obsidian.
       This take on Claude Code for non-developers helped solidify my understanding of how it all works; it hallucinates less, for one thing.
       Eleanor Berger has fantastic tips for working with asynchronous coding agents and is incredibly level-headed about the LLM landscape.
       This article does a great job of breaking down all the nitty-gritty of how Claude Code works.
       Damian Player has a step-by-step guide on using Claude Code as a non-technical person that goes into more depth.
       Here’s a tutorial from a pro that breaks down best practices for using Claude Code, like the importance of planning and thinking things through, and exactly why a good CLAUDE.md file matters.

      Links w further reading wrt Claude Code and Obsidian. Most of these are links to X. Ugh.