666 Matching Annotations
  1. Jul 2021
    1. the STARD guidelines were updated in 2015 and are a set of 30 essential items that should be part of any diagnostic accuracy study; from sample size calculations to cross tabulation of results. The meta-analysis found that only 24/273 studies mentioned adherence to guidelines (interestingly the authors don't say if they actually were adherent or not) or contained a STARD flow diagram.

      Note that this doesn't mean that the studies were inaccurate, or that authors are deceiving readers. It only means that we can't be super confident in the findings and conclusions.

    2. Unfortunately the findings from the second point almost completely undermine the first, and so that's what I'll be focusing on.

      If authors don't report their methods accurately, we can't have confidence in the findings and conclusions.

    1. A deep learning model was used to render a prediction 24 hours after a patient was admitted to the hospital. The timeline (top of figure) contains months of historical data and the most recent data is shown enlarged in the middle. The model "attended" to information highlighted in red that was in the patient's chart to "explain" its prediction. In this case-study, the model highlighted pieces of information that make sense clinically.

      This kind of articulation of "reasoning" is likely to help develop trusting relationships between clinicians and AI.

    2. An “attention map” of each prediction shows the important data points considered by the models as they make that prediction.

      This gets us closer to explainable AI, in that the model is showing the clinician which variables were important in informing the prediction.
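
      As a rough illustration of the idea (not the paper's actual architecture), attention weights over the items in a patient's chart can be surfaced directly: the model's normalised weights indicate how much each input contributed to the prediction, and the highest-weighted items are what gets highlighted for the clinician. The function, item names, weights, and threshold below are all invented for the sketch.

      ```python
      # Toy sketch: surface the chart items with the highest attention weights.
      # Items, weights (assumed softmax-normalised) and the threshold are made up.
      def top_attended_items(items, attention_weights, threshold=0.1):
          """Return items whose attention weight exceeds the threshold, most-attended first."""
          paired = [(weight, item) for item, weight in zip(items, attention_weights) if weight >= threshold]
          return [item for weight, item in sorted(paired, reverse=True)]

      chart_items = ["pleural effusion noted", "elevated creatinine", "routine vitals"]
      weights = [0.62, 0.27, 0.04]
      print(top_attended_items(chart_items, weights))
      # ['pleural effusion noted', 'elevated creatinine']
      ```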

    3. We emphasize that the model is not diagnosing patients — it picks up signals about the patient, their treatments and notes written by their clinicians, so the model is more like a good listener than a master diagnostician.

      This sounds a lot like a diagnosis to me. In what way is this not a diagnosis?

    4. Before we could even apply machine learning, we needed a consistent way to represent patient records, which we built on top of the open Fast Healthcare Interoperability Resources (FHIR) standard as described in an earlier blog post.

      FHIR is what enabled scalability.
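
      To make that point concrete, here is a minimal sketch of what one element of a record looks like when expressed against FHIR: the field names follow the public FHIR Observation resource, while the patient reference and values are invented for illustration.

      ```python
      import json

      # Illustrative only: a single lab result as a FHIR Observation resource.
      # Structure follows the FHIR spec; the values and patient ID are made up.
      observation = {
          "resourceType": "Observation",
          "status": "final",
          "code": {
              "coding": [{
                  "system": "http://loinc.org",
                  "code": "2160-0",  # LOINC code commonly used for serum creatinine
                  "display": "Creatinine [Mass/volume] in Serum or Plasma",
              }]
          },
          "subject": {"reference": "Patient/example-123"},
          "valueQuantity": {"value": 1.8, "unit": "mg/dL"},
      }

      print(json.dumps(observation, indent=2))
      ```

      Because every site's data can be mapped into the same resource types, the same modelling pipeline can, in principle, be pointed at a new hospital without redesigning the data representation, which is the scalability point being made here.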

    1. No matter an AI's final Turing test score, a script built to imitate human conversation or recognize patterns isn't something we'd ever describe as being truly intelligent. And that goes for other major AI milestones: IBM's Deep Blue is better at chess than any human and Watson proved it could outsmart Jeopardy world champions, but they don't have any consciousness of their own.

      The writer is conflating intelligence (in the context of AI) with consciousness. No-one is suggesting that algorithms are conscious or sentient. And as for the throwaway phrase "truly intelligent": what does that even mean? To display human-level intelligence? There's no reason to think that there is anything special about human-level intelligence, and in fact, machines are already far ahead of us in many areas (e.g. recall, storage, calculation, pattern recognition, etc.).


    2. With Ex Machina, the directorial debut of 28 Days Later and Sunshine writer Alex Garland, we can finally put the Turing test to rest. You've likely heard of it -- developed by legendary computer scientist Alan Turing (recently featured in The Imitation Game), it's a test meant to prove artificial intelligence in machines. But, given just how easy it is to trick, as well as the existence of more rigorous alternatives for proving consciousness, passing a test developed in the '50s isn't much of a feat to AI researchers today.

      This is not true. Turing never said anything about "consciousness". He actually asked, “Can machines communicate in natural language in a manner indistinguishable from that of a human being?” The Turing test is not a test of artificial intelligence. And it's definitely not a test aimed at "proving" consciousness.

    3. As originally conceived, the Turing test involves a natural language conversation between a machine and human conducted through typed messages from separate rooms. A machine is deemed sentient if it manages to convince the human that it's also a person; that it can "think."

      This is not even wrong.

    1. That's why we will always stay smarter than AI.

      This is confused writing that does a disservice to the reader: it uses terms and phrases inconsistently, relies on straw man arguments, and leans on cherry-picked examples.

    2. People will always be faster to adjust than computers, because that's what humans are optimized to do

      This is yet another shift in context. Now you're talking about being able to "adjust" to different situations; your title talks about being "smarter" than computers. This is sloppy writing that's all over the place.

    3. For Booking.com, those new categories could be defined in advance, but a more general-purpose AI would have to be capable of defining its own categories. That's a goal Hofstadter has spent six decades working towards, and is still not even close.

      This is true. AGI is a long way away. But that's not the point. AI and machine learning are nonetheless making significant advances in narrowly constrained domains.

    4. perception is far more than the recognition of members of already-established categories — it involves the spontaneous manufacture of new categories at arbitrary levels of abstraction

      This is true. Hofstadter is talking about perception while the developer in the previous example is simply talking about recognition or identification. It's painful having to keep track of how often you're doing this bait-and-switch.
    5. This concept of context is one that is central to Hofstadter's lifetime of work to figure out AI

      This is about creating artificial general intelligence i.e. a general intelligence that's analogous to human intelligence. But that's not what machine learning, Google Translate, or image processing is about.

      This is another straw man; swap out narrowly constrained machine learning tasks for generalised intelligence, and then explain why we're not even close to machine general intelligence.

    6. They may identify attributes such as 'ocean', 'nature', 'apartment', but Booking.com needs to know whether there's a sea view, is there a balcony and does it have a seating area, is there a bed in the room, what size is it, and so on. Dua and his colleagues have had to train the machines to work with a more detailed set of tags that matches their specific context.

      Context is important. But a model trained on that dataset, now labelled with additional context, can automate the processing of every new image it comes across. This is why machine learning is so impressive. The algorithm the developer was talking about needed a lot of work to do what they wanted it to, but now it can. And it will never get worse at identifying what's in a picture.

    7. A friend asked me whether Google Translate’s level of skill isn’t merely a function of the program’s database. He figured that if you multiplied the database by a factor of, say, a million or a billion, eventually it would be able to translate anything thrown at it, and essentially perfectly. I don’t think so. Having ever more 'big data' won’t bring you any closer to understanding, since understanding involves having ideas, and lack of ideas is the root of all the problems for machine translation today.

      This is exactly what I just said. You've swapped out "translate" and made it "understand", then argued for why Translate will never "understand". This is terrible writing.

      The fact is, Translate will get to the point where its translations are essentially perfect for 99% of the use cases thrown at it. And that all depends on having more data. And while it's true that "more data" on its own may not get us to machines understanding human language, that's simply not what anyone is suggesting Translate actually does.

    8. The bailingual engine isn’t reading anything — not in the normal human sense of the verb 'to read'. It’s processing text. The symbols it’s processing are disconnected from experiences in the world. It has no memories on which to draw, no imagery, no understanding, no meaning residing behind the words it so rapidly flings around.

      This is true. Machines don't "understand" us. Who cares? Google isn't making the claim that Translate is capable of being your friend. Google is saying that Translate can help you move between languages. This is a bullshit straw man argument. You're swapping out what the system does for something else, and then attacking the "something else".

    9. Clearly Google Translate didn’t catch my meaning; it merely came out with a heap of bull. 'Il sortait simplement avec un tas de taureau.' 'He just went out with a pile of bulls.' 'Il vient de sortir avec un tas de taureaux.' Please pardon my French — or rather, Google Translate’s pseudo-French.

      This is a bit like making fun of a 5-year-old for how poorly she speaks. But 5 years ago Translate was much worse, and in 5 years' time these mistakes will be solved. And Translate will be helping millions of people every day. Why would we make fun of that?

    10. Humans are optimized for learning unlimited patterns, and then selecting the patterns we need to apply to deal with whatever situation we find ourselves in

      I don't think that this is true. I think it's more likely the case that we have the capacity to learn lots of patterns (not "unlimited") that we can generalise to many scenarios. We can extrapolate what we've learned in one context to many others. Algorithms will get there too.

    11. Computers are much better than us at only one thing — at matching known patterns.

      This is weird because there's a ton of what we do that's just pattern matching. In fact, you could probably make a decent argument that most of what we do that we call "intelligence" is "just" pattern matching. If this is the only thing that computers are better at, then I'd say that's pretty close to saying that the game is over.

    12. underestimate our own performance because we rarely stop to think how much we already know

      This, at least, is true. But you seem to be making an argument about what ought to be, based on what is, which doesn't work. Yes, we're very good at some things (walking around a room and not bumping into anything, for example) that we don't even think about, and that machines find very difficult to do. Things that are easy for us are hard for machines.

    13. apparent successes

      How is the fact that I can talk to my phone ("OK Google, take me to my appointment") and it responds by giving me turn-by-turn directions to the place, an "apparent success"?

    14. Machine intelligence is still pretty dumb, most of the time. It's far too early for the human race to throw in the towel.

      This is very different from the "always" claim in the title. Disingenuous writing.

    15. apparently

      No, it's just "impressive". We've gone from machines being unable to do things like translate human language, to being able to do it "reasonably well". That's like going from not being able to fly, to being able to fly poorly. Pretty impressive.

    16. Not for the first time in its history, artificial intelligence is rising on a tide of hype.

      Not so; it's rising in importance because it's accomplishing real things in the real world. Yes, there's some hype around what it will be able to do, but descriptions of what it can already do aren't hype; they're statements of fact.

      It's not often that a writer establishes their bias in the first sentence of a piece.

    1. The researchers started with 140,000 hours of YouTube videos of people talking in diverse situations. Then, they designed a program that created clips a few seconds long with the mouth movement for each phoneme, or word sound, annotated. The program filtered out non-English speech, nonspeaking faces, low-quality video, and video that wasn’t shot straight ahead. Then, they cropped the videos around the mouth. That yielded nearly 4000 hours of footage, including more than 127,000 English words.

      The time and effort required to put together this dataset is significant in itself. So much of the data we need to train algorithms simply doesn't exist in a useful format. However, the more we need to manipulate the raw information, the more likely we are to insert our own biases.
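
      A toy sketch of this kind of filtering pipeline (the real dataset construction was far more involved): each predicate is a human choice about what counts as "usable" footage, which is exactly where bias can creep in. All field names and thresholds below are invented.

      ```python
      # Illustrative filtering pipeline: each stage discards clips that fail a check.
      def build_clips(raw_clips, filters):
          kept = raw_clips
          for name, keep in filters:
              kept = [clip for clip in kept if keep(clip)]
              print(f"after '{name}': {len(kept)} clips remain")
          return kept

      filters = [
          ("english speech only", lambda c: c["language"] == "en"),
          ("speaking face visible", lambda c: c["face_visible"] and c["is_speaking"]),
          ("adequate quality", lambda c: c["resolution_p"] >= 480),
          ("frontal view", lambda c: abs(c["head_yaw_degrees"]) < 30),
      ]

      raw_clips = [
          {"language": "en", "face_visible": True, "is_speaking": True, "resolution_p": 720, "head_yaw_degrees": 5},
          {"language": "fr", "face_visible": True, "is_speaking": True, "resolution_p": 720, "head_yaw_degrees": 0},
          {"language": "en", "face_visible": True, "is_speaking": True, "resolution_p": 240, "head_yaw_degrees": 10},
      ]

      usable = build_clips(raw_clips, filters)  # only the first clip survives
      ```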

    2. They fed their system thousands of hours of videos along with transcripts, and had the computer solve the task for itself.

      Seriously, this is going to be how we move forward. We don't need to understand how it works; only that it really does work. Yes, it'll make mistakes but apparently it'll make fewer mistakes than the best human interpreters. Why would you be against this?

    1. Easy reading only makes you informed; hard reading makes you competent. Easy reading prevents you from being ignorant; hard reading makes you smarter.

      Anecdotally, I'd agree with this. There are a few books where simply reading a few paragraphs has changed my worldview. Those books took a long time to work my way through.

    2. reading easy texts exclusively is prone to confirmation bias and exacerbates our blind spots. After all, if you’re ignorant (partially or entirely) of an opposing view, surely you wouldn’t think you’re “objective”?

      Mortimer Adler's concept of syntopical reading goes some way to address the issue, in that it actively encourages you to provide multiple perspectives on the topic of interest.

    3. Pocket is also free, but I never really got into using it. (Doesn’t fit my workflow)

      I use Pocket to save almost all of what comes across my feed. I read it first in Pocket and if it deserves more attention, I share it to an Inbox in Zotero for later processing. Granted, this isn't as frictionless as capturing and processing in Notion, for example, but I actually want some friction for my workflow because it slows me down enough to ask if something really is worth keeping.

    4. Because that keeps us paying attention. That’s right, we’re still paying for “free” information.

      Nothing is free. Even using open source software isn't truly free because there's an opportunity cost of not using something else.

    5. The third rule is you should always make what you read your own.

      This is linked to elaboration i.e. rewriting concepts in your own words without reference to the source material.

    6. to solve information overload — or more appropriately, attention overload — we need to create a reading workflow

      We may just be using different words, but I imagine this to be broader than simply reading. Since it also involves note-taking and elaboration (see a few paragraphs later), I would put this under something like "managing information" rather than "reading".

    7. we should do three things: Manage what we pay attention to; Manage how we pay attention; Process them deeply

      I usually think of filtering incoming information, capturing information that matters to me, processing that information as part of creating new value, and then sharing the resulting output.

    1. Ultimately, my dream—similar to that of Bush’s—is for individual commonplace books to be able to communicate not only with their users in the Luhmann-esqe sense, but also communicate with each other.

      What does "communicate" mean here? Do I pull in pieces of other texts (similar to transclusion), or is it more like an API that my PLE interacts with and manipulates? What advantages would we each get from this that we don't have now?

    2. IndieWeb friendly building blocks like Webmention, feeds (RSS, JSON Feed, h-feed), Micropub, and Microsub integrations may come the closest to this ideal.

      I've experimented with some aspects of the IndieWeb, trying to incorporate it into my blog but I still find it too complicated. Maybe that's just me though.

    3. The idea of planting a knowledge “seed” (a note), tending it gradually over time with regular watering and feeding in a progression of Seedlings → Budding → Evergreen is a common feature.

      Just the idea of managing the tags and icons of this process feels exhausting.

    4. Mike Caulfield’s essays including The Garden and the Stream: A Technopastoral

      Such a great read.

    5. Second brain is a marketing term

      Indeed. After having spent some time going through posts and videos produced by this crowd, I realised that none of them use their 'systems' for anything other than telling people about their systems; Forte's Second brain is a product.

    6. one might consider some of the ephemeral social media stream platforms like Twitter to be a digital version of a waste book

      I like the idea of your Tweets being 'captured' in a space that you control, but not of them becoming a fixed part of it. Maybe an archive of your short notes and bookmarks of things you've shared. Would also be interesting to analyse over time.

    7. They have generally been physical books written by hand that contain notes which are categorized by headings (or in a modern context categories or tags). Often they’re created with an index to help their creators find and organize their notes.

      Describes the kind of physical notebooks I kept when I was younger; quotes, pictures, passages of text, etc. Anything that caught my attention.

    1. Some of the examples you describe – the extraordinary variance seen in sentencing for the same crimes (even influenced by such external matters as the weather, or the weekend football results), say, or the massive discrepancies in insurance underwriting or medical diagnosis or job interviews based on the same baseline information – are shocking. The driver of that noise often seems to lie with the protected status of the “experts” doing the choosing. No judge, I imagine, wants to acknowledge that an algorithm would be fairer at delivering justice? The judicial system, I think, is special in a way, because it’s some “wise” person who is deciding. You have a lot of noise in medicine, but in medicine, there is an objective criterion of truth.

      Sometimes. But in many cases everyone can do exactly the right thing and the outcome is still bad. In other cases, the entire team can be on the wrong track and the patient can improve, despite their interventions. Trying to establish cause and effect relationships in clinical practice is hard.

    1. If we look at the arc of the 20th century, heavier than air flight transformed our world in major ways.

      In other words, deep learning techniques, while insufficient to achieve human level AI, will nonetheless have a massive impact on society.

    1. Don’t be afraid to take courses first.

      This is a really good idea if you have the time, especially since some courses will include the relevant readings and main concepts. However, if the course rolls out over 3 months on a schedule, and you have a 2 month deadline, it may not be useful.

    2. the goal is not to get every fact and detail inside your head, but to have a good map of the area so you know where to look when you need to find it again

      And IMO, this is exactly what a zettelkasten gives you.

    3. Read everything you can, including making highlights of sections you think you may need to revisit later. If I finish a book or longer paper, I’ll often make a new document where I’ll pull notes and quotes from my original reading, as well as do my best to summarize what I read from memory. The goal here is partly to practice retrieval and understanding, but also partly to give yourself breadcrumbs so you can find things more easily later.

      I think that this is the issue right here. If you're reading "just-in-case" i.e. reading everything you can, it may not make sense to spend the extra effort in converting the highlights to permanent notes, since you may never come back to them. However, once you've decided that the highlights have value, you'll return to the source and review them as part of working on the project.

    4. I don’t find those methods very helpful, but it’s possible that I’m simply inept at them.

      I've spent about a year developing a zettelkasten, and now that I'm approaching 2000 individual notes on discrete concepts, I can say that I'm only starting to see some of the benefits. My point is, it might take a long time with a lot of effort, before the system starts paying off.

    5. it’s better to follow a breadth-first rather than depth-first search, since you can easily spend too much time going down one rabbit hole and miss alternate perspectives

      It's better to get an overview first so that you can identify promising concepts that need more attention.

    6. When following citations, I look for two factors: frequency and relevance. Works that are cited frequently are more central to a field.

      Google Scholar will provide a reasonably accurate citation count for works, although it means searching for each source separately.

    7. After reading about two dozen Kindle previews for the most relevant seeming ones

      Use book reviews and summaries to get a sense of what books are worth reading. A book-related interview with the author is another way to get some good insights before deciding on whether or not to read the whole book. Sometimes the answer you're looking for might be in the interview.

    8. Literature Review, Meta-Analysis and Textbooks

      These usually provide a broad overview of a topic, although a meta-analysis might only be relevant for certain kinds of research e.g. randomised controlled trials or other experimental designs. Scoping reviews are increasingly popular for broad overviews that don't necessarily drill down into the details.

    9. Wikipedia is usually a good starting point, because it tends to bridge the ordinary language way of talking about phenomena and expert concepts and hypotheses. Type your idea into Wikipedia in plain English, and then note the words and concepts used by experts

      The key is that Wikipedia provides structure on multiple levels, from a short article summary, to sub-sections of more fine-grained information, to key concepts, to reference lists of canonical works.

    10. Open-ended activity often languishes from a lack of completeness

      Unless it's something like learning in general, which is never complete.

    11. Setting Up Scope and Topic

      You need to establish boundaries with respect to what you want to learn, otherwise you'll keep going towards whatever catches your attention in the moment.

  2. Jun 2021
    1. A brief overview of predictive processing.

    2. If your predictions don’t fit the actual data, you get a high prediction error that updates your internal model—to reduce further discrepancies between expectation and evidence, between model and reality. Your brain hates unfulfilled expectations, so it structures its model of the world and motivates action in such a way that more of its predictions come truer.

      Does the high prediction error manifest as surprise? How do we perceive this prediction error?
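
      A toy way to see the mechanics (a simple delta rule, not a claim about how the brain actually implements this): the model's estimate is nudged towards whatever it failed to predict, so a surprising observation produces a large error and a large update, which may be part of what we experience as surprise. The numbers and learning rate are arbitrary.

      ```python
      # Minimal prediction-error update: the estimate moves towards the observation.
      def update(estimate, observation, learning_rate=0.3):
          prediction_error = observation - estimate  # divergence between model and reality
          return estimate + learning_rate * prediction_error

      estimate = 0.0
      for observation in [10, 10, 10, 2]:  # a surprising final observation
          estimate = update(estimate, observation)
          print(round(estimate, 2))
      # 3.0, 5.1, 6.57, 5.2 -> the unexpected 2 produces a large error and pulls the estimate back down
      ```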

    3. Your brain runs an internal model of the causal order of world that continually creates predictions about what you expect to perceive. These predictions are then matched with what you actually perceive, and the divergence between predicted sensory data and actual sensory data yields a prediction error.

      Why does it do this? Does this reduce cognitive workload or something?

    4. If your brain is Bayesian, however, it doesn’t process sensory data like that. Instead, it uses predictive processing (also known as predictive coding)2 to predict what your eyes will see before you get the actual data from the retina.

      Mental.

    5. Your brain is a prediction machine.

      See also Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press.

    1. There are obvious benefits to AI systems that are able to automatically learn better ways of representing data and, in so doing, develop models that correspond to humans’ values. When humans can’t determine how to map, and subsequently model, values, AI systems could identify patterns and create appropriate models by themselves. However, the opposite could also happen — an AI agent could construct something that seems like an accurate model of human associations and values but is, in reality, dangerously misaligned.

      We don't tell AI systems about our values; we let them observe our behaviour and make inferences about our values. The author goes on to explain why this probably wouldn't work (e.g. the system makes us happy by stimulating the pleasure centres of our brains), but surely a comprehensive set of observations would inform the system that humans also value choice and freedom, and that these might compete with other preferences? We might also value short-term pain for long-term benefits (e.g. exercising to increase cardiorespiratory fitness).

    2. Sometimes humans even value things that may, in some respects, cause harm. Consider an adult who values privacy but whose doctor or therapist may need access to intimate and deeply personal information — information that may be lifesaving. Should the AI agent reveal the private information or not?

      This doesn't seem like a good example. How is saving a life potentially harmful?

      Maybe a better example would be someone who wants to smoke?

    3. A thermostat, for example, is a type of reflex agent. It knows when to start heating a house because of a set, predetermined temperature — the thermostat turns the heating system on when it falls below a certain temperature and turns it off when it goes above a certain temperature. Goal-based agents, on the other hand, make decisions based on achieving specific goals. For example, an agent whose goal is to buy everything on a shopping list will continue its search until it has found every item. Utility-based agents are a step above goal-based agents. They can deal with tradeoffs like the following: “Getting milk is more important than getting new shoes today. However, I’m closer to the shoe store than the grocery store, and both stores are about to close. I’m more likely to get the shoes in time than the milk.” At each decision point, goal-based agents are presented with a number of options that they must choose from. Every option is associated with a specific “utility” or reward. To reach their goal, the agents follow the decision path that will maximize the total rewards.

      Types of agents.
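
      A minimal sketch of the contrast being described (the rules, options, and numbers are invented): a reflex agent just applies a fixed condition-action rule, while a utility-based agent scores its options and picks the one with the highest expected reward.

      ```python
      # Reflex agent: fixed condition-action rule, no goals or trade-offs.
      def reflex_thermostat(temperature, low=18, high=22):
          if temperature < low:
              return "heating on"
          if temperature > high:
              return "heating off"
          return "do nothing"

      # Utility-based agent: choose the option with the highest expected utility.
      def utility_based_choice(options):
          return max(options, key=lambda o: o["probability_of_success"] * o["value"])

      print(reflex_thermostat(16))  # 'heating on'
      print(utility_based_choice([
          {"action": "buy milk", "value": 10, "probability_of_success": 0.3},   # expected utility 3.0
          {"action": "buy shoes", "value": 6, "probability_of_success": 0.9},   # expected utility 5.4
      ])["action"])  # 'buy shoes'
      ```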

    4. As data-driven learning systems continue to advance, it would be easy enough to define “success” according to technical improvements, such as increasing the amount of data algorithms can synthesize and, thereby, improving the efficacy of their pattern identifications. However, for ML systems to truly be successful, they need to understand human values. More to the point, they need to be able to weigh our competing desires and demands, understand what outcomes we value most, and act accordingly.

      Are we good at this? Maybe on a personal level this might be true (e.g. I may prefer speed over safety but only up to a certain point, after which my preference would switch to safety). But at a social level? How do you weigh the competing interests and values of cultures or religions?

    1. objective function that tries to describe your ethics

      We can't define ethics and human values in objective terms.

    2. The problem is, algorithms were never designed to handle such tough choices. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers’ lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like “freedom” and “well-being,” a satisfactory mathematical solution doesn’t always exist.

      We do better with algorithms where the utility function can be expressed mathematically. When we try to design for utility/goals that include human values, it's much more difficult.
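
      A small sketch of why this is hard (all numbers invented): with a single objective the "best" option is unambiguous, but as soon as two objectives compete, the answer depends entirely on the weights someone chooses, and choosing those weights is exactly the value judgement the algorithm can't make for us.

      ```python
      candidate_plans = [
          {"name": "A", "soldiers_saved": 90, "civilian_deaths": 12},
          {"name": "B", "soldiers_saved": 70, "civilian_deaths": 3},
      ]

      # Single objective: a mechanical, unambiguous choice.
      best_single = max(candidate_plans, key=lambda p: p["soldiers_saved"])  # plan A

      # Competing objectives: the outcome flips with the chosen weighting.
      def scalarise(plan, weight_saved, weight_deaths):
          return weight_saved * plan["soldiers_saved"] - weight_deaths * plan["civilian_deaths"]

      for weight_deaths in (1, 10):
          best = max(candidate_plans, key=lambda p: scalarise(p, 1, weight_deaths))
          print(f"weight on civilian deaths = {weight_deaths}: choose plan {best['name']}")
      # weight 1  -> plan A (78 vs 67)
      # weight 10 -> plan B (-30 vs 40)
      ```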

    3. many other systems that are already here or not far off will have to make all sorts of real ethical trade-offs

      And the problem is that even human beings are not very good at making these trade-offs well. Because there is such diversity in human cultures, preferences, and norms, deciding whose values to prioritise is problematic.

    1. focusing on more conventional issues, since they’ll be what you’re most likely to come across. But these are unlikely to be your highest-impact options

      Optimise your decision-making to privilege high impact.

    1. Another much-debated question has been, ‘How does the agent’s choice of macro-block placements survive subsequent steps in the chip-design process?’ As mentioned earlier, human engineers must iteratively adjust their floorplans as the logic-circuit design evolves. The trained agent’s macro-block placements somehow evade such landmines in the design process, achieving superhuman outcomes for timing (ensuring that signals produced in the chip arrive at their destinations on time) and for the feasibility and efficiency with which wiring can be routed between components.

      You'd expect that the placement needs to be adjusted later on, as the design process unfolds and other blocks are added. It seems as if the algorithm is looking into the future and predicting what will need to go where, which enables it to place blocks now that won't need to be adjusted later.

    2. Mirhoseini et al. estimate that the number of possible configurations (the state space) of macro blocks in the floorplanning problems solved in their study is about 10^2,500. By comparison, the state space of the black and white stones used in the board game Go is just 10^360.

      Again, just crazy complexity.

    3. Modern chips are a miracle of technology and economics, with billions of transistors laid out and interconnected on a piece of silicon the size of a fingernail. Each chip can contain tens of millions of logic gates, called standard cells, along with thousands of memory blocks, known as macro blocks, or macros. The cells and macro blocks are interconnected by tens of kilometres of wiring to achieve the designed functionality.

      Insane. I had no idea there was this much going on in a modern chip.

    1. Persistent Identifiers (PIDs) beyond the DOI

      Research other persistent identifiers besides DOI.

    2. To get closer to attaining coveted “rich” metadata and ultimately contribute to a “richer” scholarly communication ecosystem, journals first need to have machine-readable metadata that is clean, consistent, and as interoperable as possible.

      What WordPress plugins provide structured metadata functionality for OpenPhysio?

    1. To see this in action, visit somebody’s Open Ledger and recognize them for a scholarly contribution. For example, you could visit my Open Ledger: https://rescognito.com/0000-0002-9217-0407 and click the “Recognize” button at the top of the page to recognize me for a scholarly activity such as “Positive public impact”.

      Is Rescognito a service that authors will use independently of journals? If so, how does it reduce the cost of publishing? I think I can see some value in using the service, but it appears to be free and, even though the assertions being captured on the website are interesting, they're not obviously linked to journals. That means the systems aren't connected and journals will keep having to create the assertions (which, as I've made clear, I don't believe they actually do). Is the idea that Rescognito will create a database of assertions that publishers will subscribe to, thus alleviating them from yet another role that we supposedly pay for?

    2. We also make it possible to generate and store assertions about activities that have no proximate physical or digital corollary, such as “mentoring” and “committee work”

      Now this looks interesting.

    3. No data transformations, no XML manipulation, no data synchronization, no DTD updates, no transfer between vendors, no coordination, no training, no management time! In addition, the assertion outputs are superior in granularity, provenance, specificity, display, and usability.

      No disagreement here. I'm just not convinced that this updated workflow justifies the expense of publishing. If anything, it should be further argument for the fact that publishing should be free.

    4. Another benefit of structured assertions is that they can be accessed via APIs

      True, but none of the assertions is actually generated by the publisher/journal, so why wouldn't authors simply be able to do this themselves?

    5. Because contributors are verifiably and unambiguously identified by their ORCID iD

      Again, a process that has nothing to do with the journal, other than asking authors for their ORCID links.

    6. The entire process has to be coordinated, managed, and synchronized — often over multiple continents

      Sure, but this coordination is almost always done by unpaid editorial staff and reviewers. Where is the expense for the journal?

    7. have the assertions be made by trusted, branded gatekeepers to guarantee their provenance

      I imagine that the increase in data embedded in various workflows will soon eliminate the need for even this very tenuous claim. Soon, when I publish something there will be embedded metadata (linked data) that verifies who I am. My institutional affiliation will be the same. Or something like blockchain could also take over this role. In addition, changes to the content will all be tracked over time and cryptographically signed to confirm that what was originally published has not been changed.

    8. Think of it this way: the value is not in the content, it is in the assertions — scholarly publishers don’t publish content, they publish assertions. This provides a coherent model to explain why publishers add value even though the source content and peer review are provided for free. Journals incur legitimate costs generating and curating important and valuable assertions.

      This is just bullshit. Most of the "assertions" from above are generated by unpaid peer reviewers, or include processes coordinated by unpaid editorial staff. What exactly are we paying publishers for?

    9. The authors have these conflicts of interest

      Again, simply a statement made by the person submitting the article. The journal does no verification.

    10. The findings are supported by the data”, “The work is novel”, “The work builds-on (i.e., cites) other work”

      This is all the work done by (unpaid) peer reviewers.

    11. The document was peer reviewed by anonymous reviewers

      The journal editorial staff do send the work out for review, but in the case of most journals the editorial staff are academics who aren't paid either.

    12. It was not plagiarized

      Peer reviewers again.

    13. The statistics are sound

      The peer reviewers do this and as you've already said, they're not paid.

    14. Who funded the work

      Again, this is a statement made by the person submitting. What verification does the journal do? Nothing.

    15. When was it released

      OK fine, the journal adds a publication date to the article.

    16. Where was the work done

      Same thing; the submitter basically tells the journal where they're from. The journal adds nothing here.

    17. Who created the document

      Publishers don't do any kind of identity verification, so "who created the document" is whoever the submitter says they are.

    18. Publisher costs usually include copyediting/formatting and organizing peer review. While these content transformations are fundamental and beneficial, they alone cannot justify the typical APC (Article Publication Charge), especially since peer reviewers are not paid.

      But peer reviewers are largely responsible for generating the assertions you talk about in the next paragraph, and which apparently, justify the cost of publishing.

    1. Journals like Science and Nature are financially viable and they create a kind of club. However, this is not a knowledge community in any meaningful sense. The authors of an article on the genome of an organism are not producing knowledge in concert with those of an article on the formation of stars. In these cases the “good” being produced is prestige, or brand value. Rather than being knowledge clubs, they are closer to “social network markets”, in which the choices that individuals make, such as where to seek to publish, are driven by the actions of those with higher prestige in the network. Such markets are effective means for extracting resources out of communities.

      I wonder if the profit margin of a journal (“community”) could be used as a proxy indicator of the value that it creates for the community. Too much and its focus is on making money. Is the ideal that the journal/community is breaking even?

    2. we propose that the value of a well-run journal does not lie simply in providing publication technologies, but in the user community itself. Journals should be seen as a technology of social production and not as a communication technology.

      Such a powerful shift.

    3. social life of journals and the knowledge communities they sustain

      Moves the emphasis from the article/PDF to the people themselves.

    1. You’ll be unlikely to stick with and excel in any path in the long term if you don’t enjoy it and it doesn’t fit with the rest of your life.

      You probably shouldn't strive to have an impactful career if it means derailing everything else you care about. Burning out is also something that you want to avoid.

    2. the main components of a satisfying job are: A sense of meaning or helping others; A sense of achievement; Engaging work with autonomy; Supportive colleagues; Sufficient ‘basic conditions’ such as fair pay and non-crazy working hours

      I'm fairly lucky in that I've found all of these in academia. Although I know that I'm in the minority here.

    3. your long-term goal is to maximise the product of these three factors over the remainder of your career

      So, increase your focus on the problem, take advantage of the opportunities you can find, and ensure that the domain is a good personal fit.

    4. answers to the following three questions

      It's largely because of these questions that I decided to shift my research slightly, from educational technology in general, to AI and machine learning in higher and professional education. I think that AI and ML is an important and pressing problem, and while it's not neglected in technology circles, it is not even on the radar for most health professionals and educators. I think that I'm in a good position to take advantage of the opportunities in an academic and research career to move the problem forward. And my experience to date makes this a good fit.

    5. we mean increasing wellbeing in the long term (the ‘positive impact’ part), and treating everyone’s interests as equal, no matter their gender, background, where they live, or even when they live (that’s the ‘impartial’ part).

      Bringing "time" into the discussion is especially impactful for me; it helps me to shift my thinking into the long-term so rather than planning for what might be good today, or this year, or the next 10 years, I'm trying to think further out than that.

    6. it’s hard to agree on what the terms ‘help people’, ‘positive impact’, and ‘personally fulfilling’ actually mean — let alone craft goals that will help you achieve these things

      It has to be about what these phrases mean to you.

    1. We think that very often, much of what matters about people’s actions is the difference they make over the long term — basically because we think that the welfare of those who live in the future matters; indeed, we think it matters no less than our own.

      If we consider that our species may exist for many hundreds of thousands of years into the future (if we don't completely stuff it up now), then future people may matter more than us because there will be so much more potential for well-being.

    2. What do we mean by “considered impartially”? In short, we mean that we strive to treat equal effects on different beings’ welfare as equally morally important, no matter who they are — including people who live far away or in the future, and including non-humans.

      I'm drawn to this aspect as it really broadens the scope of how we might think about this. I'm especially interested in the idea of future people and non-humans.

    3. most of the ideas for what welfare might consist in — for instance happiness or getting to live a life of your choosing– are tightly correlated with one another

      The specifics of the definition aren't very important for decision-making as the details are closely related anyway.

    4. we think it makes sense to use the following working definition of social impact: “Social impact” or “making a difference” is about promoting welfare, considered impartially, over the long term — without sacrificing anything that might be of comparable moral importance.

      I wonder if it makes sense to try and limit the scope of whose welfare we should consider. For example, I can potentially have a bigger impact on those in my community (and "community" can be quite broadly defined) than if I try to have an impact beyond that.

    1. When people value their attention and energy, they become valuable

      Related to the idea of career capital, which is the set of knowledge and skills that makes you hard to replace.

    1. the advertising-driven ones

      We want everything to be free but someone has to pay. Can we convince each other that good journalism is worth paying for? That social networks are worth paying for? That search is worth paying for? Someone has to pay and it feels like we've decided that we're OK with advertisers paying.

    2. Our world is shaped by humans who make decisions, and technology companies are no different…. So the assertion that technology companies can’t possibly be shaped or restrained with the public’s interest in mind is to argue that they are fundamentally different from any other industry

      We are part of sociotechnical systems.

  3. May 2021
    1. Career decision making involves so much uncertainty that it’s easy to feel paralysed. Instead, make some hypotheses about which option is best, then identify key uncertainties: what information would most change your best guess?

      We tend to think that uncertainties can't be weighted in our decision-making, but we bet on uncertainties all the time. Rather than throw your hands up and say, "I don't have enough information to make a call", how can we think deliberately about what information would reduce the uncertainty?

    2. One of the most useful steps is often to simply apply to lots of interesting jobs.

      Our fear of rejection may limit this path. One way to get over the fear of rejection may be to put yourself into the position where you're getting rejected a lot.

    3. by looking at how others made it

      Who is currently doing the job that I want to be doing?

    4. think of your career as a series of experiments designed to help you learn about yourself and test out potentially great longer-term paths

      I wonder if there's a connection here to Duke, A. (2019). Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts. Portfolio.

      I haven't read the book but it's on my list.

    5. The returns of aiming high are usually bigger than the costs of switching to something else if it doesn’t work out, so it’s worth thinking broadly and ambitiously

      You’ve got to think about big things while you’re doing small things, so that all the small things go in the right direction. - Alvin Toffler

    6. ask what the world needs most

      Focuses attention on the fact that this isn't fundamentally about you. This is an act of service.

    7. Being opportunistic can be useful, but having a big positive impact often requires doing something unusual and on developing strong skills, which can take 10+ years.

      Academics (and other knowledge workers) tend not to focus too much attention on getting better. Skills development happens in an ad hoc way rather than a structured and focused approach to improvement.

    8. career capital

      You must first generate this capital by becoming good at something rare and valuable. It is something that makes you hard to replace and is therefore the result of putting effort into developing skills that differentiate you from others.

      Newport, C. (2016). Deep Work: Rules for Focused Success in a Distracted World (1 edition). Grand Central Publishing.

    9. what helps the most people live better lives in the long term, treating everyone’s interests as equal

      I wonder if it's problematic to focus your attention on the community closest to you. For example, I'm a lecturer in physiotherapy at a university. Should I be trying to make a near-insignificant difference to “the most people”, or should I be trying to make a bigger positive difference to my little community?

    10. We’d encourage you to make your own definition of each.

      This needs to be a personally meaningful plan, so asking participants to create their own definitions is useful.

    1. Why educational technologies haven't transformed the college experience

      Interesting that Larry Cuban was saying something similar in 1992.

    1. ABSTRACT

      From the author on Twitter: “The main promises of data-driven education (real-time feedback, individualised nudges, self-regulated learning) remain incompatible with the entrenched bureaucratic & professional logics of mass schooling...at the moment we have ‘schoolified’ data rather than ‘datafied’ schools.”

    2. promises of digital “dataism” are thwarted by the entrenched temporal organisation of schooling, and teacher-centred understandings of students as coerced subjects

      The structures of schools, and beliefs of teachers, undermine attempts to use technology in ways outside of this frame.

    3. Using sociological theorisation of institutional logics

      The authors view the logic of institutions through a social theory lens.

    1. Perhaps for everyone, a moment or occasion of leadership will emerge, reveal itself, and call to us with the painful, necessary task of speaking up, patiently asking for alternatives, insistently rocking the boat

      Leaders - and teachers - must recognise those moments when we're called to do something courageous.

      And we must find or create opportunities for our students to do the same.

    2. Ivan Illich, no fan of schooling or authoritarian structures of any kind, writes movingly about the role of the true, deep teacher. So does George Steiner, using language of “master” and “disciple” that would make many open-web folks cringe–or worse. Yet even the great and greatly democratic poet Walt Whitman salutes his “eleves” at one point. And I have experienced and been very grateful for the wisdom of those teacher-leaders who brought me into a fuller experience and understanding of my own responsibilities as a leader.
    3. leading is risky business

      As is teaching.

    1. but is about writerly choices

      We don't often realise that writing is a creative act, and that all creative acts are about making choices.

    2. Getting to grips with structure means keeping your reader in mind.

      Always write with the reader in mind. Good writing isn't a vanity project i.e. it's not about you. If you can't get your message across clearly then you're letting down your reader.

    3. it’s more accurate to say that readers notice the absence of structures, and/or when we shift the logics of one structure to another mid-stream, without saying anything.

      I often see this in my undergraduate and postgraduate students; they make a conceptual move without signalling it to the reader, which leaves the reader feeling discombobulated.

    1. The more I used Roam, the more valuable my notes became. This increase in value made me realize that I needed a little more trust in the developers' approach to release management. It wasn't always clear how changes would affect my workflow.

      I have a principle when it comes to choosing software: the more time I spend using a tool, the lower the switching costs need to be.

    1. I also related old notes on similar topics to the Kanban concepts. In some cases, I saw a detail from Getting Things Done in a new light and took note about that

      This is why some people avoid the term "permanent note"; it creates the impression that the notes are somehow fixed, whereas they are constantly undergoing refinement and connection.

    2. What was your reading intent and how can you capture it best?

      You need to know why you're reading.

    1. Judgments made by different people are even more likely to diverge. Research has confirmed that in many tasks, experts’ decisions are highly variable: valuing stocks, appraising real estate, sentencing criminals, evaluating job performance, auditing financial statements, and more. The unavoidable conclusion is that professionals often make decisions that deviate significantly from those of their peers, from their own prior decisions, and from rules that they themselves claim to follow.

      As educators (and disciplinary “experts”) we like to think that our judgements on student performance are objective. As if our decisions are free from noise. I often point out to my students that their grades on clinical placements may be more directly influenced by their assessor's relationship with their spouse than by their actual clinical performance.

    1. rationale for the decision taken

      Wait, I thought this was "decision support" and not "decision making"?

    2. particular benefit to authors for whom English is not a first language

      Indeed.

    3. an AI tool which screens papers prior to peer review could be used to advise authors to rework their paper before it is sent on for peer review.

      This seems reasonable; rather than using the AI to make a decision, it's being used to make a suggestion to authors, highlighting areas of potential weakness, and giving them an opportunity to rework those areas.

    4. more inclined to reject papers based on this negative first impression derived from what are arguably relatively superficial problems.

      When you train machine learning systems on humans, you're definitely building in our biases.

    5. One possible explanation for the success of this rather simplistic model is that if a paper is presented and reads badly, it is likely to be of lower quality in other, more substantial, ways, making these more superficial features proxy useful metrics for quality.

      This seems to assume that authors have English as a first language. If you're using "reads badly" as a proxy indicator of quality, aren't you potentially missing out on good ideas?

    1. most journals still insist on submissions in .docx format.

      We work within an ecosystem and it's hard to change your own behaviour when so much is determined by other nodes in the network.

    1. The most common way to stage an argument in the thesis goes something like this: Here is a puzzle/problem/question worth asking. If we know more about this puzzle/problem/question then something significant (policy, practice, more research) can happen. Here is what we already know about the puzzle/problem/question. I’ve used this existing knowledge (literatures) to help: my thinking and approach; my research design; make sense of my results; and establish where my scholarly contribution will be. Here is how I designed and did the research in order to come up with an “answer”. Here’s the one/two/three clusters of results. [Missing step] Now here’s my (summarised) “answer” to the puzzle/problem/question I posed at the start. On the back of this answer, here’s what I claim as my contribution(s) to the field. Yes I didn’t do everything, but I did do something important. Because we now know my answer, and we didn’t before I did the research, then here are some possible actions that might arise in policy/practice/research/scholarship.

    1. we must shed our outdated concept of a document. We need to think in terms of flexible jumping and viewing options. The objects assembled into a document should be dealt with explicitly as representations of kernel concepts in the authors’ minds, and explicit structuring options have to be utilized to provide a much enhanced mapping of the source concept structures.

      This seems like the original concept that Microsoft's Fluid document framework is based on. And Apple's earlier OpenDoc project.

    2. It really gets hard when you start believing in your dreams.

      It's hard because of the emotional investment and subsequent pain when you see your dreams not being realised.

    3. Draft notes, E-mail, plans, source code, to-do lists, what have you

      The personal nature of this information means that users need control of their information. Tim Berners-Lee's Solid (Social Linked Data) project looks like it could do some of this stuff.

    4. editor-browser tool sets

      This hasn't happened yet, and is unlikely to happen anytime soon. We seem to be moving away from a read/write web, with authors only being able to edit content they've created on domains that they control. The closest I've seen to this is the Beaker Browser.

    5. Many years ago, I dreamed that digital technology could greatly augment our collective human capabilities for dealing with complex, urgent problems. Computers, high-speed communications, displays, interfaces — it's as if suddenly, in an evolutionary sense, we're getting a super new nervous system to upgrade our collective social organisms. I dreamed that people were talking seriously about the potential of harnessing that technological and social nervous system to improve the collective IQ of our various organizations.

      And yet here we are, with the smartest computer scientists in the world spending all their time trying to figure out how to make us watch more videos so that we can be shown more ads.

    1. we miss the deep understanding that comes from dialogue and exploration.

      Knowledge emerges from interaction.

    1. The medium should allow people to think with their bodies, because we are more than fingers and hands.

      Embodied cognition.

    2. “With no one telling me what to work on, I had to decide for myself what was meaningful in this life. Because of how seriously I took my work, this process was very difficult for me,”

      A blank canvas can feel overwhelming. Some structure is better than no structure. Scaffolding is important for novice learners.

    3. most professional programmers today spend their days editing text files inside an 80-column-wide command line interface first designed in the mid-1960s.

      For a longer discussion of this concept, see Somers (2017, September 26). The Coming Software Apocalypse. The Atlantic.

    4. Commercial apps force us into ways of working with media that are tightly prescribed by a handful of people who design them

      We're constrained by the limits of the designers.

    5. “Every Representation of Everything (in progress),” showcasing examples of different musical notation systems, sign languages, mathematical representations, chemistry notations.

      Sounds a bit like Mathematica. See here for a basic overview of Mathematica in the context of academic publishing.

    6. Bret Victor, the engineer-designer who runs the lab, loves these information-rich posters because they break us out of the tyranny of our glassy rectangular screens.

      Seems odd. Glassy rectangular screens can also be "information rich". I also like posters but not because of the information density.

    1. “When we had electromechanical systems, we used to be able to test them exhaustively,” says Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years. She became known for her report on the Therac-25, a radiation-therapy machine that killed six patients because of a software error. “We used to be able to think through all the things it could do, all the states it could get into.” The electromechanical interlockings that controlled train movements at railroad crossings, for instance, only had so many configurations; a few sheets of paper could describe the whole system, and you could run physical trains against each configuration to see how it would behave. Once you’d built and tested it, you knew exactly what you were dealing with.

      The flexibility of software, relative to hardware, adds many layers of complexity, pushing what software can do beyond our capacity to fully understand it.

    1. Leibniz’s notation, by making it easier to do calculus, expanded the space of what it was possible to think.

      See Bret Victor's presentation on Media for thinking the unthinkable, which expands on this idea.

    2. As science becomes more about computation, the skills required to be a good scientist become increasingly attractive in industry. Universities lose their best people to start-ups, to Google and Microsoft. “I have seen many talented colleagues leave academia in frustration over the last decade,” he wrote, “and I can’t think of a single one who wasn’t happier years later.”

      Well, this sucks to read (I'm an academic).

    3. Basically, the essay is about the difference between Wolfram Alpha/Mathematica and Jupyter notebooks. One is a commercial product that's really complex and centrally designed; the other is open source, chaotic, and cobbled together from bits and pieces. But the scientific community seems to be moving towards the open option (i.e. Jupyter notebooks).

    4. maybe computational notebooks will only take root if they’re backed by a single super-language, or by a company with deep pockets and a vested interest in making them work. But it seems just as likely that the opposite is true. A federated effort, while more chaotic, might also be more robust—and the only way to win the trust of the scientific community.

      It's hard to argue that scientific publishing should move under the ultimate control of a single individual or company.

    5. “Frankly, when you do something that is a nice clean Wolfram-language thing in a notebook, there’s no bullshit there. It is what it is, it does what it does. You don’t get to fudge your data,” Wolfram says.

      Although this clearly only works with a specific type of data.

    6. “The place where it really gets exciting,” he says, “is where you have the same transition that happened in the 1600s when people started to be able to read math notation. It becomes a form of communication which has the incredibly important extra piece that you can actually run it, too.”

      You can see Bret Victor describing this idea in more detail here.
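
      A minimal sketch of what "notation you can actually run" might look like, using Python and SymPy rather than the Wolfram Language the article discusses (so the tooling here is my assumption, not the author's):

      ```python
      # Toy illustration of executable calculus notation, using SymPy
      # (my choice of tool, not the Wolfram Language from the article).
      import sympy as sp

      x = sp.symbols("x")
      f = sp.sin(x) * sp.exp(-x)

      # The Leibniz-style d/dx is no longer just marks on a page:
      # asking for the derivative computes it.
      df = sp.diff(f, x)
      print(df)                   # the symbolic derivative of sin(x)*exp(-x)

      # Because the notation runs, you can act on the result immediately.
      print(df.subs(x, 0))        # evaluates to 1
      print(sp.integrate(df, x))  # should recover exp(-x)*sin(x)
      ```

      The specific library isn't the point; the point is that the representation and the computation are the same object.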

    7. What I’m studying is something dynamic. So the representation should be dynamic.”

      Related to Victor's Dynamicland project, as well as his thoughts on a "dynamic medium" (A note about "The Humane Representation of Thought". Worrydream. http://worrydream.com/TheHumaneRepresentationOfThought/note.html).

    8. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”

      In a similar way we started using the internet (and HTML) to mimic the user interfaces of CDs and DVDs. We still use HTML to create faux book interfaces for magazines, complete with page flip animations (although thankfully these are less common than they used to be).

    9. Papers may be posted online, but they’re still text and pictures on a page.

      We still call them "papers".

    10. There was no public forum for incremental advances.

      I've never thought of the academic paper as a format that enabled the documentation of incremental progress.

    1. The pandemic has forced everyone to become video editors and generally not very good ones.

      Although some teachers, like Michael Wesch, have made video content production their side-gig. We constantly expect our students to expand their skill sets, so is it unreasonable to expect teachers to do the same?

    2. providing context rather than content

      What if you use video that you've created as part of creating this context?

    3. focus on creating communities

      OK, so teachers shouldn't be expected to create video content, but it's reasonable to expect them to create communities? That seems weird.

    4. Teachers feel bound by tradition to deliver content and the students expect the teacher to deliver content and it's very hard to escape from this mindset.

      We're trapped in our traditions.

    1. My assertion is based on the observation that a great deal of learning does take place in connective environments on the world wide web, that these have scaled to large numbers, and that often they do not require any institutional or instructional support.
    1. community

      A community is not the same thing as a collection.

    2. Offense, insult, and hurt feelings are not particularly important

      Not only is it not important, you do not have the right to be offended.

      See here (Salman Rushdie), here (John Cleese), here (Jordan Peterson), here (Stephen Fry), and...well, you get the point.

    3. life contains more suffering than happiness

      An argument for anti-natalism.

    4. Some things get better and some things get worse.

      Exactly. Both can be true at the same time.

    5. generally up to you

      Although you can probably behave in ways that influence what others say about you.

    6. You can’t always get what you want or deserve

      Or, as the Rolling Stones said, sometimes you might just get what you need.

    7. only partly self-determined

      Unless you don't sign up to the common conception of free will, in which case, none of your life is self-determined.

  4. Apr 2021
    1. keeping a reading journal to write to yourself about how the reading and your own thinking and purposes/plans are connecting and speaking to one another.

      The benefit of a reading journal is that it's all in one place: the citation, your thoughts on the topic, and links to other readings and thoughts.

      For me, it goes like this:

      1. Read things (web, ebook, audio book, physical book, PDF, etc.) and make notes through annotation.
      2. Capture those things in Zotero (it extracts metadata automatically; I add tags, relate items, and add literature notes).
      3. Iteratively add what I learn into Obsidian (see the sketch below for one way parts of this could be automated).
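
      As an aside, here's a rough sketch (my own, not from the article) of how steps 1 and 3 could be partially automated: pulling my annotations from the public Hypothesis API and appending them to a Markdown note in an Obsidian vault. The account name, file path, and exact response fields are assumptions on my part.

      ```python
      # Rough sketch: fetch recent Hypothesis annotations and append them to a
      # Markdown note in an Obsidian vault (a vault is just a folder of .md files).
      # The account name and vault path are hypothetical placeholders.
      import requests

      API = "https://api.hypothes.is/api/search"
      USER = "acct:myusername@hypothes.is"              # hypothetical account
      VAULT_NOTE = "/path/to/vault/Reading journal.md"  # hypothetical note file

      resp = requests.get(API, params={"user": USER, "limit": 50})
      resp.raise_for_status()

      with open(VAULT_NOTE, "a", encoding="utf-8") as note:
          for row in resp.json().get("rows", []):
              comment = (row.get("text") or "").strip()  # my annotation text, if any
              source = row.get("uri", "")
              if comment:
                  note.write(f"- {comment}\n  - source: {source}\n")
      ```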
    1. We might think of this as citation(1)=writer steers.

      The writer is adding their own voice to the text.

    2. One of the most prominent ways that authority is signalled in an academic text is via citation.

      This seems counter-intuitive; we demonstrate our authority by referring to the writing of others. I suppose it shows our understanding and command of the body of literature surrounding the topic.

    1. the most complex object in the universe, the brain.

      It's a bit presumptuous to assume that the most complex object in the universe just happens to be in our heads.

    1. Now that more work is being done online, many people face global competition. Mark Ritson tells about his wife’s yoga instructor, Gary, who went online during a pandemic lock-down. He previously offered at-home personalized instruction, driving to his mostly rural clientele. When the lock-down was over he decided to only offer his services online and avoid all the time and expense of driving. But Mark’s wife now realized that she was no longer limited to Gary. “In the old, pre-Covid world of yoga my wife was limited to Gary or an elderly woman who creaked a lot and smelled of cheese. But with the opening of this new virtual yoga window, she now has a dizzying array of practitioners keen to work with her from all corners of the globe.” By moving his business online, Gary had unknowingly increased his competition to the entire world

      We're going to see the same issues when health professionals start offering their services online.

    1. The core problem here is that we really don’t know exactly how the brain learns information or skills. And for what we do know, we don’t have the ability to directly observe when it is happening in the brain. That would be painful and dangerous. So we have to rely on something external to the brain serving as evidence that learning happened.

      What we call assessment is really an attempt to create a proxy indicator for what we call learning.

      It seems weird to think of it that way; we don't really understand learning, so we create tasks for students to complete in the hope that those tasks somehow give us some insight into the thing we don't really understand.

    2. quizzes, tests, exams, assignments – none of those can measure learning or skill mastery. Not directly at least.
    3. exams or tests themselves are not essential
    4. If you think that most of the students in your course would probably cheat

      What would it say about you if this is what you think of your students?

    5. The main reason for all of this confusion about the research is that there is little consistency in the time frame, the definition of what counts as “cheating,” and how the frequency of cheating is measured.

      This is in keeping with the trend of poorly designed - and poorly reported - educational research.

    6. there is the claim that McFarland makes that “a separate, peer-reviewed research paper published in May of 2020 in the Journal of the National College Testing Association also confirmed the link between online classes and dishonesty.” But that is not what the paper said at all. That paper looks at the differences between proctored and unproctored exams, and makes a lot of claims about how online learning has the potential for more dishonesty. But it does not confirm a link between dishonesty and online courses, because it was not looking for that.

      I hate it when people do this: linking to marginally relevant articles in support of their dodgy claims.

    7. In fact, the research is actually all over the place. You will see numbers anywhere between 2% and 95%. As one research paper puts it: “the precipitating factors of academic misconduct vary across the literature … The research of academic integrity is often unsystematic and the reports are confusing.”

      I wonder how much of this pivots on the definition of cheating. If students talk to each other about their assignments over lunch, is that "cheating"? Obviously not. But if this is OK then why can't they collaborate in other ways?

    8. they can get feedback until they know they are going to score what they want

      This is the idea behind contract grading.

    9. This article is ostensibly a response to the use of proctoring software in higher education.

      But in order to do that properly the author has also delved into learning and assessment.

      It's a well-written piece that questions some of our taken-for-granted assumptions around assessment.

    10. How many of us sit around answering test questions (of any kind) all day long for our jobs? If you look to the areas of universal design for learning and authentic assessment, you can find better ways to assess learning. This involves thinking about ways to create real world assignments that match what students will see on the job or in life.

      And this is often far more challenging for students to do well than simply memorising the content for the test.

    1. This post articulates a lot of what I've been thinking about for the past 18 months or so, but it adds the additional concept of community integration.

      Interestingly, this aligns with the early, tentative ideas around what the future of In Beta might look like as a learning community, rather than a repository of content.

    2. The potential to build community-curated knowledge networks remains largely untapped. There are reasons to be optimistic; the economic feasibility of paid communities, a renewed interest in curation, a slow move away from big social, and an improved understanding of platform incentives. All combined, this will lead to communities that are more sustainable, aligned, and intentional.

      I agree with all of this.

    3. given the chat-based nature of these platforms, it’s easy to miss the best content

      It can't be sorted by topic, though. If it's not a chronological stream, then what is it?

      Mike Caulfield introduced (to me anyway) the concept of streams and gardens, which I've found to be a valuable way of thinking about my own curation practices.

      What would this "garden" look like? A place where you could serendipitously find something interesting.

    4. diagram

      I know that these kinds of diagrams can't include every tool, but I'm surprised that Obsidian isn't in the list of knowledge management tools.

    5. relation

      In the table below, I'm not sure how you can say that we have poor search today; any basic search engine is pretty good even at natural language queries.

    6. The conversation around curation thus far has focused too much on reducing the amount of information

      This isn't completely true. There's also been a big emphasis on increasing the quality of the information you consume.

    7. we should be able to reference it if we’re building a company in the design tools space

      You don't need to read and process everything that's relevant now; you only need to have it on hand for when you need it.

      But why wouldn't you just search for what you need, when you need it? Maybe the value of this approach is that you've already got a small set of high-value, information-dense, useful resources that you and your community have curated.

    8. The architecture of digital platforms encourage us to consume information because it’s in front of us, not because it’s relevant

      But this can change with a different algorithm.
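
      To make that concrete, here's a toy sketch (entirely my own illustration, not from the article) of how the same pool of items surfaces very differently under a recency-first versus a relevance-first ranking:

      ```python
      # Toy comparison of two feed-ranking policies over the same items:
      # "what's newest" versus "what overlaps with my stated interests".
      from dataclasses import dataclass

      @dataclass
      class Item:
          title: str
          topics: set
          posted_hours_ago: float

      items = [
          Item("Hot take on today's outrage", {"news"}, 1),
          Item("Long essay on assessment design", {"education", "assessment"}, 72),
          Item("Notes on personal knowledge management", {"pkm", "curation"}, 30),
      ]

      my_interests = {"education", "assessment", "curation"}

      def recency_rank(items):
          # Typical default: newest first, regardless of relevance.
          return sorted(items, key=lambda i: i.posted_hours_ago)

      def relevance_rank(items, interests):
          # Alternative: rank by topic overlap with my interests,
          # using recency only to break ties.
          return sorted(items, key=lambda i: (-len(i.topics & interests), i.posted_hours_ago))

      print([i.title for i in recency_rank(items)])                  # the hot take floats to the top
      print([i.title for i in relevance_rank(items, my_interests)])  # the essay I care about comes first
      ```

      The platform's choice between these two policies is a design decision, not an inevitability.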

    9. the goal is not to consume more information

      My goal is to filter from a smaller number of sources that provide "better" information.

      Better = information that is more closely aligned with achieving goals that are important to me.

    10. “how do we collect, store, and contextualize the information we consume?”

      This is the essence of the personal knowledge management movement that's been growing in the last 2-3 years.