889 Matching Annotations
  1. May 2022
    1. Stephen Downes commented on this post:

According to the authors, "there are skills that AI cannot master: strategy, creativity, empathy-based social skills, and dexterity." The article goes on to explain why, and to describe how humans will apply these skills in order to work with AI. My perspective is that we would be very short-sighted if we assume AI will not be able to master these skills. AIs already display more dexterity than humans in some cases. They are also demonstrating interesting forms of creativity. A lot of what appear to be human-only skills are the sorts of automatic non-cognitive abilities we demonstrate. But that's precisely where AI will excel, since these are based on pattern recognition and response. An AI won't feel things the way we do. But it's not at all clear that feeling a certain way is essential for any of these skills.

    1. I explore how moves towards ‘objective’ data as the basis for decision-making orientated teachers’ judgements towards data in ways that worked to standardise judgement and exclude more multifaceted, situated and values-driven modes of professional knowledge that were characterised as ‘human’ and therefore inevitably biased.

      But, aren't these multifaceted, situated, and values-driven modes also constituted of data? Isn't everything represented by data? Even 'subjective' understanding of the world is articulated as data.

      Is there some 'standard' definition of data that I'm not aware of in the context of this domain?

    2. Frequent testing to monitor children’s ‘expected progress’ through a tightly defined curriculum reflects a limited view of how children learn, in which children are seen as “functional machines” who should all automatically progress at the same rate (Llewellyn, 2016).

      This seems like an over-reach. There's nothing about testing that inherently implies that students 'should' progress at the same rate.

    3. data has become a primary mode of governing education

      Is this bad? What is the alternative to using data to inform decision-making? How is 'data' being defined here?

    4. Recommended by Ben Williamson. Purpose: It may have some relevance for the project with Ben around chat bots and interviews, as well as implications for the introduction of portfolios for assessment.

    1. Bret Victor shared this post when making the point that some of the 'best minds' in Silicon Valley may not be the best people to be working on the problems we care about.

      Together with the post, titled, "I saw the best minds of my generation...", he shared a screenshot of a quote: "In the beginning, EA [Effective Altruism] was mostly about fighting global poverty. Now it's becoming more and more about funding computer science research to forestall an artificial intelligence-provoked apocalypse."

    1. Bret Victor shared this post to make the point that we shouldn't be worrying about sentient AI right now; the melting ice caps are way more of a threat than AGI. He linked to this article, saying that corporations act like non-human, intelligent entities that have real impacts in the world today, and that may be far more consequential than AI.

    1. Ben Williamson shared this post on Twitter, saying that it's a good idea to remove the words 'artificial intelligence' and 'AI' from policy statements, etc., as a way of talking about the specific details of a technology instead. We can see loads of examples of companies using 'AI' to obfuscate what they are really doing.

  2. Feb 2022
    1. Life is about solving problems and the existential sensibility we adopt to this constitutes the experience we have of life.

      In other words, don't aim for a life devoid of problems. Instead, look at problems as opportunities for meaningful engagement with your life.

    2. reading Oliver Burkeman’s 4000 weeks

      See also Burkeman's conversation with Sam Harris on the Making Sense podcast.

      Harris, S. (2021). Deep Time (No. 269). Retrieved February 16, 2022, from https://www.samharris.org/podcasts/making-sense-episodes/269-deep-time

  3. Jan 2022
    1. regarding their satisfaction with the experience

      This is a report of student satisfaction, so it tells us nothing about whether the product actually improves learning.

    2. inability to multitask

      An important consideration when attempting to emulate real-world clinical contexts.

    1. Students in the experimental group had significantly higher knowledge scores than students in the control group.

      A + B is always greater than A.

      Another variant on research design that proves nothing is the A versus A + B design. It arises frequently in studies of gizmos like iPhone apps, or PDAs or simulations, where one group gets the standard instruction and the other group also gets to listen to heart sounds or view ECGs on their phones on the way to class. Just as we need not prove that something is bigger than nothing, we also do not need to prove that something + something else is greater than something alone.

      From Norman, G. (2014). Data dredging, salami-slicing, and other successful strategies to ensure rejection: Twelve tips on how to not get your paper published. Advances in Health Sciences Education, 19(1), 1–5. https://doi.org/10.1007/s10459-014-9494-8

    1. a massive amount of fascinating information.

      We should not be gathering information because it's interesting.

    2. Some students get upset when their professors tell them they’re using the honor system

      Evidence that this is the case?

    3. f someone has a shaky internet connection, they can be disconnected for up to two minutes and return to the exam

      Two minutes to resolve a connection issue seems like not enough time. Also, knowing this will cause more anxiety.

    4. The system will just flag the interruption for a faculty member to review later.

      Simply knowing that this is the case is enough cause for additional anxiety.

    5. In response to such concerns, the proctoring companies have argued that doing away with their tools will cause widespread cheating.

      There isn't any evidence of this either (i.e. of widespread cheating). You have to question the motives of companies selling a product that relies on this being widely believed.

    6. Critics complain that using such software signals to students that faculty members don’t trust them. Some students also say the possibility of being flagged for “suspicious” activity adds to the stress of taking a test, sometimes causing panic attacks.

      Again, we do this all the time with F2F assessments i.e. there are exam invigilators in the room to watch for cheating. Why have there been no complaints that "faculty members don't trust them"?

    7. The software, which faculty members can customize, typically scans students’ rooms, locks their computer browsers, and monitors eye and head movements through their webcams as they take tests.

      Compare this, in principle, to what an exam invigilator does with F2F assessments: 1) they ensure that students aren't communicating with anyone else, 2) ensure that students aren't reviewing banned material in the exam, 3) monitor where students are looking during the exam. No-one has ever seriously complained about that. If the assessment took place in a venue that wasn't a student's room (i.e. the process removed the concern that someone is looking at their bedroom), would everything be OK?

    8. Students have bought in to the technology, she said, because they played a role in developing it and felt they were in control of the data it was collecting. When that’s not the case, and students suspect that their personal lives are being probed by companies more concerned about profit than their well-being, they’re likely to rebel.

      Students have to be involved in the development and policies around how the technology and data are being used. And above all, the process must be transparent.

    9. no other way to safeguard the integrity of exams. But critics say that’s a cop-out.

      You can change the nature of your assessment tasks so that you don't need the software. See Killam, L. (2020, April 6). Exam Design: Promoting Integrity Through Trust and Flexibility for a great example.

    10. students have circulated petitions demanding that online proctoring systems be kicked out of their classrooms

      I certainly agree with this statement but I wonder why students have never resisted F2F proctoring during assessments. We've always had invigilators looking over the shoulders of students writing exams. Why is it different when the proctoring system is remote? And note that this isn't about the potentially biased AI that informs these systems; this article is (so far) only pushing back against the presence of the system and not its accuracy.

    11. and having a proctoring service require you to scan your bedroom before a test for cheat sheets or open books

      The article is a bit confusing with respect to what it's actually about. The question of wearing a biobutton to track possible Covid-related factors is one thing, but remote proctoring for assessment is another. I get that the article is about 'surveillance' more broadly, but the focus of attention is scattered.

    12. s it too much to ask to share your heart rate or temperature?

      This isn't the problem. The problem is when the button is normalised and the programme expanded. Even if no-one has any plans for this right now, it will absolutely be the case that whoever is paying for the system will want to see it used for other purposes.

    13. handing over health information is a relatively small price to pay if it means halting the spread of a virus that has ravaged the nation.

      Again, the framing of the situation is key. As is situating the process within a transparent policy that 1) students contributed to in the initial development, and 2) has a built-in sunset clause that causes the policy and process to stop unless it is re-evaluated and renewed by all stakeholders.

    14. Along with wearing masks and social distancing, students living on campus would be expected to wear a coin-size “BioButton” attached to their chests with medical adhesive. It would continuously measure their temperature, respiratory rate, and heart rate, and tell them whether they’d been in close contact with a button wearer who’d tested positive for Covid-19. In conjunction with a series of daily screening questions, the button would let them know if they were cleared for class.

      How the situation is framed makes all the difference. Many would look at this and think it seems quite reasonable, given what's going on with Covid atm.

    15. the close-contact alerts were based on Bluetooth recognition, not GPS location tracking

      Much like Apple's AirTags.

    16. even seemingly secure government and business systems can be hit by sweeping cyberattacks

      I'm more concerned about 'feature creep'; once the infrastructure is in place (at significant cost), no-one is simply going to dismantle it. Instead they're going to keep adding features to the system that sees it become increasingly integrated into school and social life.

    17. Some students, required to flash Covid-free badges to enter classrooms or rotate their laptops for online test proctors to scan their bedrooms, have grown weary of feeling watched. And some are leery of how the information that’s being collected will be used, whether it could leak out, and whether there’s a process to destroy it when the pandemic is over.

      I agree with all of these concerns. But I also wonder if these same people use Facebook? If they willingly post photos of themselves and friends in their bedrooms? And if they worry about social media companies building up profiles of them and their relationships that are never deleted?

  4. Dec 2021
    1. when you reach the end of a chapter, you pause for few minutes to author one or two paragraphs of text summarizing your understanding and reflections on the key concepts discussed in the previous chapter. Beyond the two paragraphs, it is also a good practice to articulate questions that come to mind, for which you will seek answers in the upcoming chapters, or that you want to research beyond the book.
    2. Your aim should very rarely be to record conversations verbatim or to copy long stretches of quotes from text you are reading.

      Formal meetings should have administrators present who are responsible for capturing the content of discussions and outcomes of decisions. And nowadays, why not just record every meeting? And then use automated speech-to-text transcription to get a searchable set of notes.

    3. Even if you never use your notes again, the act of taking notes will improve your thinking and contribution.

      Good to reflect on. Even if you never use the note again, you should probably make the effort to create it in the moment when it first seemed interesting. Because you never know what the future will bring, you never know if the note will be useful or not.

      This means that you need to strike a balance between going down the rabbit hole of trying to capture everything you read (just in case it's useful), and capturing nothing (because maybe it will never be used again).

    4. you build the capacity to consider a higher number of ideas at the same time

      I'm not 100% sure I agree. You can still only keep a limited number of items in your working memory. Writing notes - for me - is about having access to a collection of ideas that I've had over time. It's more about a repository of information that's important to me.

    5. action you take based on thinking will be more productive than action based on ignorance

      Every intellectual endeavour starts with a note. - Sönke Ahrens (2017).

    6. Taking notes is writing

      I used to think that taking notes was an important but relatively small part of my work. But now I think of note-taking as my actual work.

    1. “Don’t feel guilty if you spend the first 90 minutes of your day drinking coffee and reading blogs,” Nate Silver once advised young journalists. “It’s your job. Your ratio of reading to writing should be high.”
    2. Stephen King says he writes all morning and reads all afternoon.
    1. “What is your definition of success?” I hemmed and hawed a bit, until I finally said, “I suppose success is your days looking the way you want them to look.”
    1. The warrant for claims generally includes: (1) the research design – was the research in a credible tradition? did the researcher position themselves explicitly so that readers could see the situatedness of the study? did the methods allow the question to be answered? was the sample sufficient? (note that the emphases in these questions and their weighting will vary by discipline.) (2) the conduct of the research – was the data generated in an appropriate manner? was the analysis thorough and defensible? was the research conducted according to current ethical standards? (3) the logic between the claims and the research – do the claims made follow from the actual results? have the results and analysis been presented in a logical sequence so that the steps in the argument are clear? (4) an awareness of what the research can and cannot do – has the researcher considered the limitations of their research? has the researcher demonstrated reflexivity?
    2. A ‘research warrant’ thus refers to the ways in which our data supports the claims that we make.
    3. If an action is warranted then there is a sound rationale, cause or basis for it. An action that is warranted is one which has good grounds.
    1. A Theory of Zoom Fatigue

      See also: Bailenson, J. N. (2021). Nonverbal overload: A theoretical argument for the causes of Zoom fatigue. Technology, Mind, and Behavior, 2(1). https://doi.org/10.1037/tmb0000030

    1. “looking again, in a digital context, at once new, provisional, provocative but largely analogue forms like the essay, the pamphlet, and the manifesto”

      What does the 'digital first' version of these publications look like?

    2. The professionalisation of academic writing has forced us “to substitute the more writerly, discoursive forms, such as the essay, for the more measured and measurable –largely unread and unreadable – quasi-scientific journal article”

      I wonder if it would be useful to distinguish between research and scholarship, where formal research is but one type of scholarly practice?

      If we look at a journal as a channel for promoting scholarship then there's no reason that we can't include essays as a category of writing.

    1. This book, then, is an analysis of physiotherapy through the lens of the sociology of the professions. It explores what the professions are, what they do in society, what’s good about them, and what’s bad. It applies these ideas to physiotherapy so that we can better understand the issues the profession is now facing. And the book concludes with some challenging and, some might say, heretical recommendations for where physiotherapy goes next, as we all adapt to the post-professional era.

      The aim of the book.

      I'm intrigued. I don't see an easy route from where we are to a society giving up on its professionals.

    2. The End of Physiotherapy

      See here for more information.

    3. with medicine being the paradigm case

      And possibly law.

    1. “It’s appealing to people who wouldn’t otherwise come to your subject,” says Stevens, and particularly at conferences – where they’ve turned the comic into posters

      Such a natural fit.

    2. research findings in as minimalistic a way as possible

      Because the pictures convey some of the 'feeling' of what the participants shared.

    3. At first, the idea of a social care comic can seem incongruous, in danger of trivialising a sensitive and sometimes heavy subject.

      This is probably more a function of how society treats comic books i.e. they're for children. However, graphic novels and comics (illustrated stories) are no less of a communication medium than academic papers, and are certainly more appealing to read.

    4. Funding was available for creative methods of presenting research

      This is important; for this kind of presentation and dissemination to be done well, it needs to be funded.

    1. we see the book as a resource that those in the sector can use when they need to persuade others (such as funders) of the challenges faced by homeless people and the services that help them

      A graphic novel is probably more likely to be read by funders than another academic article.

    2. people working in the homelessness sector are unlikely to spend much time reading the latest journal articles and, moreover, will not pay for them given the limited resources available to homelessness services; we also knew that informing more people of the real life stories behind homelessness would require something to draw the eye. We needed to do something a bit different. We published our findings as a graphic novel.

      This isn't only a creative way of sharing findings for a general audience but is also aligned to the needs of a very specific audience who would benefit the most from reading the study findings.

    3. People who have experienced homelessness have led diverse, captivating, heartbreaking, and inspiring lives.

      We are hard-wired to pay attention to stories that emphasise the human condition in all its variations. Story is a format/communication channel that we may have evolved to privilege over other forms of information sharing.

    4. 100 life story interviews with people experiencing homelessness or vulnerable housing

      Does the fact that the data themselves were already in the form of stories, lend itself to sharing findings as a graphic novel? Could an RCT also be published in a similar format?

  5. Nov 2021
    1. Too often we feel like we need to reply to every email.

      Stop replying with things like, "Thanks for this".

    2. Work your way from top to bottom, one email at a time. Open each email and dispose of it immediately. Your choices: delete, archive (for later reference), reply quickly (and archive or delete the message), put on your to-do list (and archive or delete), do the task immediately (if it requires 2 minutes or less — then archive or delete), forward (and archive or delete). Notice that for each option, the email is ultimately archived or deleted.

      The ultimate purpose of any email message is to be archived or deleted.

    3. Have an external to-do system.

      Email is a chaotic task list organised by other people. When you think of it like that, it makes sense to get those tasks out of your email inbox and into another system that you control.

    1. Courses should prioritize flexibility and experimentation.A course should be designed as a living structure, and be constantly tuned to the ongoing experience, adjusted based on what each participant brings and needs.

      Similar ideas discussed here: Ellis, B., & Rowe, M. (n.d.). Guided choice-based learning (No. 4). Retrieved November 29, 2021, from https://inbeta.uwc.ac.za/2018/02/09/4-guided-choice-based-learning/

    2. a map, but one that can change to better serve its purpose

      I'm not sure that I agree. Do we want our students to be able to change the map, or to choose different pathways through it?

    3. collaboration within a community of people: diverse perspectives, active engagement

      Similar ideas here: Stephen Downes (2015). Design Elements in a Personal Learning Environment. Invited talk, Guadalajara, Mexico. https://www.slideshare.net/Downes/design-elements-in-a-personal-learning-environment-52303224

      What makes an 'online course' different to an 'online learning community'?

    4. Online learning shouldn't try to formalize, but make informal contexts richer, and more connected to formal ones.

      Everyone's PLE should have the capacity to integrate with formal (institutional?) learning environments. But, will every formal learning environment be able to link to informal ones?

    5. a system that lets people compose structure

      We all think in different ways, so a good system for learning should enable every learner to structure their learning environment uniquely.

    6. Units of action in a learning environment should be the actual activity of the task at hand

      The best way to learn about something is to do it.

    7. people reading the same book at the same time, exploring the same ideas…Norms around signalling you're interested in something, and the extent of your interest, would go far

      How do we find the connections we don't know we're looking for?

    1. Axios treats email as the primary product

      It's interesting to think that we've gone through maybe a decade of startups (e.g. Slack) telling us that we need to get away from the chaos of email, and now we're being pitched the idea that email is a quiet, more intimate space.

    2. Newsletters are a great retention tool and a superb distribution tool, with almost no algorithms standing in the way of the audience.
    3. “scannable emails”

      Scannable subject lines in emails: "your subject line should tell readers something specific and valuable. Summarize 2–3 of the biggest items in your update and separate them with a comma or em dash so ideas stand out at a glance."

    4. Both of the companies are providing podcasters with options to put their audio content behind a paywall and in effect giving them the ability to build up a recurring revenue stream.

      As much as I like the idea of putting your content out for free, I get that people need to make money if this is the business model. Schools and universities are probably under less pressure to make a profit but still need to cover basic costs.

    5. The recipe for starting a new media venture in 2021 seems to be straightforward: blog, newsletter, podcast. From there you scale up and start adding additional verticals, like events (both virtual and in-person as more people get vaccinated), discussion forums (like a Discord server for paying subscribers), a YouTube channel and so on.

      Not only new media; this would probably also work for schools, or any learning community.

    1. I agree with everything in this post; I want to listen to more academic work in audio formats but find the process quite unsatisfying.

      Either I find my attention drifting, or I’m switching between apps to try and capture the essence of something I’d like to come back to later.

      I’m hoping that things like Momento help get us closer to the ability to capture information from audio sources, but this would need to be built into ebook readers or the operating system itself, in order to be more broadly useful.

      It’d also need to be more reliable with respect to the quality of the machine learning transcription. At the moment it’s just useable, and requires a bit of interpretation.

    1. Karnofsky suggests that the cost/benefit ratio of how we typically think of reading may not be as simple as we intuitively expect i.e. we think that 'more time' = 'more understanding'.

      If you're simply reading to inform yourself about a topic, it may be worth reading a couple of book reviews, and listening to an interview or two, rather than invest the significant amount of time necessary to really engage with the book.

      A few hours of skimming plus reviews/interviews may get you to 25% understanding and retention, which in many cases is more than enough to be basically informed on the topic. Compare that to the 50-100 hours of deep, analytical engagement with the text that would be needed just to reach 50% understanding and retention.

      That being said, if your goal is to develop expertise, both Karnofsky and Adler ('How to read a book') suggest that you need a deep engagement with multiple texts.

    1. Pretty much anything that can be remembered can be cracked. There’s still one scheme that works. Back in 2008, I described the “Schneier scheme”: So if you want your password to be hard to guess, you should choose something that this process will miss. My advice is to take a sentence and turn it into a password. Something like “This little piggy went to market” might become “tlpWENT2m”. That nine-character password won’t be in anyone’s dictionary. Of course, don’t use this one, because I’ve written about it. Choose your own sentence — something personal.

      Good advice on creating secure passwords.
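
      As a toy illustration, here's a minimal mechanical variant of the scheme in Python. The substitution rules below are my own illustrative choices, not Schneier's; his point is precisely that you should apply personal, unpredictable transformations rather than a fixed rule like this one.

      ```python
      def sentence_to_password(sentence: str) -> str:
          """Collapse a memorable sentence into a password by taking the
          first letter of each word, swapping a few common words for
          digits/symbols. A fixed rule like this is guessable, which is
          exactly why Schneier advises personal, ad-hoc transformations."""
          substitutions = {"to": "2", "for": "4", "and": "&", "at": "@"}
          return "".join(
              substitutions.get(word, word[0])
              for word in sentence.lower().split()
          )

      # sentence_to_password("This little piggy went to market") -> "tlpw2m"
      ```

      Note how this differs from Schneier's own "tlpWENT2m": his version keeps a whole word in caps, which a purely mechanical rule wouldn't produce - and that irregularity is what makes it harder to crack.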

    1. I have no problem with publishers making a profit, or with peer reviewers doing their work for free. The problem I have is when there is such an enormous gap between those two positions.

      If publishers make billions in profit (and they do), while at the same time reviewers are doing a billion dollars' worth of work for free, that seems like a broken system.

      I think there are parallels with how users contribute value to social media companies. In both cases, users/reviewers are getting some value in return, but most of the value that's captured goes to the publisher/tech company.

      I'd like to see a system where more of the value accrues to the reviewers. This could be in the form of direct payment, although this is probably less preferable because of the challenges of trying to convert the value of different kinds of peer review into a dollar amount.

      Another problem with simply paying reviewers is that it retains the status quo; we keep the same system with all of its faults and redistribute profits. This is an OK option in that it at least sees some of the value that normally accrues to publishers moving to reviewers.

      I also don’t believe that open access - in its current form - is a good option either. There are still enormous costs associated with publishing; the only difference is that those costs are now covered by institutions instead of the reader. The publisher still makes a heart-stopping profit.

      A more elegant solution, although more challenging, would be for academics to step away from publishers altogether and start their own journals, on their own terms.

    1. If you (like I am at times) are an event organiser, is it necessary to plan ahead for a ‘back-channel experience‘ taking into account accessibility, avoiding silo’s and tracking, with which to add to what it is like to attend your event? Or will the idea of a back-channel be let go entirely, reducing all of us to more singular members of an audience?

      I like the idea of designing the types of conference experiences that I'd like to have, rather than what participants have come to expect.

    2. I don’t see something else naturally taking its place either.

      I like the idea of Discord as a backchannel but it suffers from the problem that it's a relatively niche app, and no-one is going to install and learn how to use it just for a conference.

      I think that Discord would work well for a learning community though.

  6. Oct 2021
    1. Especially going through presentations is a rich source of notes

      For me, presentations represent a well-organised set of concepts that I'm familiar with, and that are relatively well-connected to other ideas.

    1. interfaces are queries

      I have a sense of what this might mean but I have a hard time conceptualising how I might implement it.

    1. switch between workspaces at will

      This is something I still haven't got around to in Obsidian. The only use case I can think of that might help with my workflow is when I'm doing a weekly or monthly review. But it only takes me a couple of seconds to open the notes I need for that anyway.

    1. there’s no mobile Obsidian app, and there’s no need for it either, any plain text editor will do after all.

      Obsidian has since released a mobile app, with a paid-for sync service. I was using Dropbox to sync my notes between devices but had paid for the Obsidian sync service to test it. It works well but I hardly do any work on my phone, so it's not clear to me what the use case is. I probably won't renew my subscription and will go back to syncing with Dropbox.

    2. being able to quickly switch between the task list and the resources needed for a task

      I tried Todoist, which aims to bring the resources into the to-do list. It started as a to-do list that you can organise into projects, and then added notes and attachments to items. I decided to go all in for a year, paid for the premium service, and put everything into Todoist. But it didn't work for me because my to-do list was still separated from the rest of the work I was trying to do.

    1. Collating is done by transcluding 7 day logs into one note.

      I use transclusion in Obsidian for my weekly review. The weekly review note (also defined with a template), has a section that gives me an overview of my week.
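
      A weekly review template of this kind might look something like the following sketch (the note names and dates are hypothetical; `![[note]]` is Obsidian's embed/transclusion syntax):

      ```markdown
      ## Week in review

      ![[2021-10-18]]
      ![[2021-10-19]]
      ![[2021-10-20]]
      ![[2021-10-21]]
      ![[2021-10-22]]
      ```

      Each embedded daily note renders inline, so the weekly review note becomes a single scrollable overview of the week.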

    2. During the day I add activities to the log as I’m doing them. I also mention thoughts or concerns, how I think the day goes etc. I link/mention the notes corresponding to activities, e.g. things I wrote down in a project meeting. I started keeping day logs last April, and they are useful to help me see on days that seem unfocused what I actually did do, even if it felt I didn’t do much.

      I only ever used diaries for keeping track of appointments, but the concept of a daily note (that I define with a template) has been enormously helpful for me to structure my day.

    3. Each project e.g. has a ‘main’ note stating the projects planned results, to which goal(s) it contributes, main stakeholders, budget and rough timeline.

      My project 'overview' note includes headings like: Admin (meeting dates, etc.), people involved, deadlines, related notes. Each of these other parts is a separate page, linked to with wikilinks.

    4. Within each area there are projects, specific things I’m working on. Projects all have their own folder in an Area. Some of the projects may have subfolders for (sub)projects taking place within the context of a client assignment for instance.

      I have Project pages in Obsidian, with sub-projects that are separate pages, linked to the main project page with wikilinks. I do it that way so that I don't need to worry about folders. But now I'm wondering if a folder per project would be a way to compartmentalise things.

    1. An overview of how Ton Zijlstra uses Obsidian as part of his workflow

      Use of tools has ‘gone back’ to what he used in the 90s, which is to say, he’s moved away from things like Evernote (silo'd) towards using mostly text with Obsidian (note-taking) and Tinderbox (on the Mac, for writing).

      Didn’t try to bring all of his Evernote notes into Obsidian, because it was a massive dump of content - an unorganised mess - and would have just created a lot of noise. Instead, he mined his blog posts for the most useful information. Still has the Evernote notes as markdown files, so still available when necessary. So he started using Obsidian with a blank slate.

      Uses Obsidian as a viewer of his text/markdown files. Plain text files make it easy to maintain your notes across different platforms and systems, while allowing creativity and innovation to happen as a layer on top of the files.

      Tools change. Every tool is temporary, so avoid having tools that drift too far away from the core feature set. If you're going to install plugins that provide additional, more powerful functionality, that functionality isn't going to be present when you look at your notes in plaintext. Avoid plugins that are going to make it difficult to view the content of your notes in other tools.

      You need to be able to move on to other tools with low/no friction.

      Uses folders to organise information, mostly related to projects, with an ‘overview’ page that links to other parts of the project.

      Finding a new tool provides a certain level of fascination. But, we need to ask if the tool is genuinely useful with respect to the work we do, or is it a shiny new hammer to hit nails with?

      He has replaced several tools with Obsidian (notes, to-do list, outliner). He says that Obsidian has replaced about two-thirds of the steps in his system. This is my problem at the moment… I'm happy with the number of steps in my system, but not that each step needs its own tool.

      Don't write about the tool. Write about how you use the tool to do real work. You see this in the YouTube channels of people who spend hours talking about the tools they use, but never show themselves doing real work with them.

      Still not happy with the relatively high friction of turning notes into output. He works in Obsidian a lot to generate notes but still struggles to push out artifacts. I also want to figure out how to move my notes and reflections into output with less friction.

      There’s a tension between writing for yourself and writing for an audience. Do you want to publish your daily notes? Would this lead to self-censorship? Would you be more self-conscious?

      Use blogs as part of an obligation to explain. If you need to search for information to do something, maybe you should blog that process to try and make it easier for others to do things. I like the idea that we have an ‘obligation to explain’ because we have all benefited from others who have explained things for us.

      Tools will come and go but your process should be more stable over time.

    1. I do not use plugins that are supposed to help you create notes (e.g. the existing Zettelkasten and Day log plugin), because they make assumptions about how to create notes (how to name them, which links to create in them). I created my own workflow for creating notes to avoid functionality lock-in in Obsidian: day logs are created manually by keyboard shortcuts using Alfred (previously TextExpander), as are the timestamps I use to create unique file names for notes.

      I agree with this in principle. However, the process of using the core Obsidian plugins to create Daily notes with templates is so simple that you could easily swap out the plugin with a manual plain text alternative.
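      As a sketch of the manual alternative, generating a timestamp prefix for a unique note filename takes only a few lines. The format string and helper name below are my assumptions; the post doesn't specify what Alfred/TextExpander expands to:

```python
from datetime import datetime

def note_filename(title, now=None):
    """Build a unique note filename from a timestamp plus a slugged title.

    The %Y%m%d%H%M format is an assumed convention for Zettelkasten-style
    unique IDs; pass `now` explicitly to make the result reproducible.
    """
    stamp = (now or datetime.now()).strftime("%Y%m%d%H%M")
    slug = "-".join(title.lower().split())
    return f"{stamp} {slug}.md"
```

      For example, `note_filename("Day Log", datetime(2021, 4, 1, 9, 30))` yields `"202104010930 day-log.md"`; the same snippet could just as easily live in an Alfred workflow or a shell alias, which keeps the naming scheme independent of Obsidian.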

    2. On the tool side of that evaluation, I want to get rid of Evernote (as a silo and single point of failure) since some years.

      I've been thinking about this under the frame of, "The more time I spend with a tool, the easier it should be to leave it". What I mean by this is, the more I invest in something (time, cognitive energy), the more reliable it needs to be. Since nothing is completely reliable, I need to choose a tool that allows me to move to another tool without any friction. I don't want the tool I'm using today to be bought out and shuttered by another company, so I have to work in formats that make it easy to adapt.

  7. Sep 2021
    1. I had no idea of the number of monitoring services that were available and actively implemented in schools. The tone of the article suggests that these services are successful at reducing the risk of harm to children, but it provides no evidence other than anecdotes from those responsible for implementing the services. Hardly convincing. There's a half-hearted attempt to show the other side of the story, from critics, although the airtime given to the 'successes' makes it clear which side the writer comes down on.

    2. It’s hard to argue against efforts to save kids’ lives. But privacy and mental-health experts say such surveillance can be a slippery slope, especially if it ends up being used for reasons other than harm prevention.

      What happens when other keywords get added to the algorithm's search criteria?

    3. flagged a Google chat that a student in Dr. Megert’s district had with a suicide hotline, as well as chats another student had with peers about plans for self-harm. In both cases, the school contacted the students’ families and arranged for mental-health services.

      If this is such an amazing service, why aren't we rolling it out to everyone, including adults? How would those parents respond if they knew that private companies were monitoring them, in the interests of their own safety?

    4. “I have mixed feelings about it, but if we’re going to err on one side it has to be on the side of safety.”

      Again, the language suggests that this kind of surveillance is actually making kids safer. Which I doubt.

    5. “I don’t want to be sneaky about it, but if we were really obvious about it, students might not use their school devices,”

      So, you're going to be sneaky about it because you know it's wrong.

    6. School administrators say such surveillance is more important than ever as students return to the classroom after 18 months of pandemic-related stress, uncertainty and loss. Critics say it raises questions about privacy, misuse and students’ ability to express feelings freely or search for answers.

      This is a bad idea.

    7. Many school districts have used monitoring software over the past three years to prevent school shootings

      Have these tools really prevented anything? Or is prevention the aim of the tools? This is written as if these tools are actually achieving this, which is hard to believe.

    1. “I regard it as a criminal waste of time to go through the slow and painful ordeal of ascertaining things for one’s self if these same things have already been ascertained and made available by others.”—Thomas Edison

      Absolutely. If someone else, in a similar context to you, has been through the hard process of evaluating information to already make a decision that you're facing, why wouldn't you try and learn from them?

    2. Not all decisions matter.

      So it's really important to identify which decisions are important, and which really don't matter. Spending time on decisions that don't matter is a waste of time.

    3. found some mentors. I watched them, asked them annoying questions, and tried to learn as much as I could from them

      This is a good step. However, even in higher education where mentorship is reasonably common, few mentors will actually invite a mentee to watch them work. How many mentors will say, "I'm about to spend an hour working through emails. Would you like to come and watch?" And yet, managing email as part of workflow is a significant part of the administrative role of a professor in higher education.

    4. There is no class called “decision making.”

      And yet we have an expectation that our leaders, health care professionals, teachers, etc. are all able to make good choices.

    1. International summer school run from the University of Bologna in July 2020, with the theme of AI in medicine and medical education.

      Students attended lectures and interactive sessions, and then worked in groups on a project to develop an AI tool and literature review that identified a real-world need.

      Potentially useful format for generating new ideas and projects, and providing a baseline foundation in basic AI concepts.

      The use of a pre- and post-test survey design baffles me though. Who cares about student satisfaction re. self-assessed competency and quality of lectures, etc.? If you really want to know how useful the programme was, you get an external panel to evaluate the quality of the students' projects.

      Some questions they could consider in the evaluation might include:

      1. Does the project identify a real-world clinical or medical education problem that has a high impact on patient outcomes or physician efficacy, for example?
      2. Is the project likely to make some significant progress in moving that problem forward?
      3. Is the solution elegant i.e. what variables are being included in the model? How technically simple is the model?
      4. Are there any blind spots in the students' perspectives, especially re. patient data, bias, etc.?
    1. There’s an idea that’s important, but it hasn’t been really refined to the completed version of that idea. I think it’s very common for there to be bad notations, or just bad definitions of things that make things more complicated, and all these things make it harder to go and understand the topic.

      Compelling idea: instead of working for years to make the mountain of knowledge bigger (and therefore harder for others to climb), you could spend years making it easier for others to get to the top.

    2. going and pair programming with people is immensely valuable

      There is no equivalent of this in academia, where someone with less experience rarely gets to shadow someone in their daily work. A simple example might be to watch how a successful professor manages email.

    3. write down a list of problems that you think might be important to work on, and then have somebody else, ideally your mentor, go and just rate them one to 10.

      What scale should this list be at? "Climate change" is an important problem that people should work on but it's so big that it's practically useless. Or maybe the list is so granular that the connection to the larger problem isn't clear. What's a good scale to think about problems?

    4. it’s often helpful to divide being a good researcher into two parts. One is taste. So your ability to go and pick good problems and go and pick good avenues to attack those problems, and things like this. The second you might call technique, or execution.

      Choose good problems and develop expertise in the skills needed to explore those problems.

    5. what are the skills that they’re cultivating, and what do we think the Pareto frontier with regards to these skills looks like? Do we think that there’s places where, rather than going and becoming the world’s best at one skill, they can produce a lot of value by being at an intersection of skills that other people don’t have?

      Focus on skills, not careers. See Wiblin, R. (2021). Holden Karnofsky on building aptitudes and kicking ass.

      Holden prefers to focus on ‘aptitudes’ that you can build in all sorts of different roles and cause areas, which can later be applied more directly. Even if the current role or path doesn’t work out, or your career goes in wacky directions you’d never anticipated (like so many successful careers do), or you change your whole worldview — you’ll still have access to this aptitude.

    6. very plausibly, for a while, I was the person in the world who was the best of the intersection of machine learning and drawing.

      It's really valuable to work at the intersection of 2-3 domains; you don't need to be the best at any one of them but you may be the best where they intersect.

    7. Chris thinks we’re nowhere near communicating existing knowledge as well as we could. Incrementally improving an explanation of a technical idea might take a single author weeks to do, but could go on to save a day for thousands, tens of thousands, or hundreds of thousands of students, if it becomes the best option available. Despite that, academics have little incentive to produce outstanding explanations of complex ideas that can speed up the education of everyone coming up in their field.

      This is such a compelling idea; explain difficult concepts well in an attempt to help others progress faster in their learning.

  8. Aug 2021
    1. “A table should be as simple as the material allows and understandable on its own; even a reader unfamiliar with the material presented should be able to make general sense of a table”

      See Victor, B. (2014, December 22). The Humane Representation of Thought, for an insightful discussion of how we can think differently about information presentation.

    2. Ideally each “table ‘tells’ a worthwhile and intelligible story” (Einsohn and Schwartz 2019, 247).

      The choice of the table itself must convey additional information; what is it about your data that is highlighted through the presentation method?

    3. It’s sometimes difficult to decide between a table, a chart (or other type of visualization), or a simple list.

      It's not only the information presented in the table, but the table itself, that must serve a purpose. You can often present the same information in a different format, so when should you choose a table over something else?

    1. A lot of people don’t start with a chosen theory at all, they come to it as they are working out how to make more sense of their analysis. But they present the theory near the start of the thesis because they aren’t writing about the “journey”, as most often their thesis is a final text about where they are at the end. But because they do end up with a theory (or combination) they generally then use their newly arrived at theoretical lens to rewrite their literatures work and report their methods.

      Interesting to note that this is exactly how things emerged from my thesis; I obviously had some ideas about theory early on (and I wrote about the theory early in the thesis), but I really 'arrived' at my theoretical position almost at the chronological end of my PhD process. And then I had to go back and rework substantive parts of the thesis.

    2. The “theory” is separated out from the texts that are about your substantive topic

      But then is 'theory' really being used i.e. if you can separate out the theory into a standalone section/chapter, is it really part of the work you're doing?

    3. you have to read your chosen theory/ies a lot, so that you know it/them really intimately and can talk about it/them in your own words. And by the end probably talk about it/them in your sleep.

      Understanding theory well can give you deep insights into the work you're doing.

    4. Some people don’t have A Theory, but Theories – multiples. And perhaps these are theories from different disciplines. Constructing a new or underused multi/trans theoretical approach, and showing what it can/can’t do, is a significant contribution. If innovative use of theory is one of your anticipated contributions, then you’d have to set that up early. You’d need to say at the very start that your thesis will use a novel combination of theories, and you’d have to justify this. Create the warrant for your novel theoretical approach by saying what is going to be helpful, insightful, productive, generative about the theory/ies.

      You have to do something with theory; it's not just a couple of paragraphs that you add to satisfy a committee or journal checklist.

    5. theory is often conflated with “conceptual”. So people talk about having a theoretical or conceptual framework.

      See Varpio, L., Paradis, E., Uijtdehaage, S., & Young, M. (2020). The Distinctions Between Theory, Theoretical Framework, and Conceptual Framework. Academic Medicine: Journal of the Association of American Medical Colleges, 95(7), 989–994.

    6. A lot of people won’t understand the” theory chapter” as A Thing at all, let alone a problem. That’s because a lot of disciplines don’t use “theory” in the way that this question does. I imagine someone in Philosophy, or someone doing a theoretical inquiry in Politics, looking at this question and being mystified. Not to mention someone in some of the Sciences. For different reasons, the “theory chapter” is a non-issue for many people.

      This is why I have a problem with university research committees that have now added the 'theoretical framework' requirement to proposals, regardless of whether it is indicated or not.

    1. Schedule time to read and review these notes

      Unless you schedule this time to review your notes and work with them, they'll just scroll off your list.

    2. As you are reading a book, write your chapter summary right at the end of the chapter. If your reading session is over, this helps synthesize what you just read

      I have no doubt that this would be valuable but it assumes that you're sitting at your desk and can make these summaries. But if I'm reading away from the computer (i.e. a lean-back experience) it's really inconvenient and impractical to do this.

    3. The blank sheet method is effective because it primes your brain and shows you what you’re learning.

      I use a similar approach with some of the writing tasks I give to my students:

      1. Start by writing a short reflection on what you think about the topic.
      2. Read a couple of mainstream news sources and update your initial reflection.
      3. Read a selected paper (or more) on the topic. Update your writing again.
      4. Send your draft to two peers for feedback on what's missing and how to improve what's there.
      5. Update your piece again.
      6. Submit your final version for grading.
    4. How does the book relate to topics you’re already familiar with?

      If I'm not familiar with the topic I tend to read for information i.e. I'm just looking for the big ideas. I tend not to add my own annotations to these texts and only highlight. My purpose is to extract information. If I know the topic relatively well, then I'll engage with the text through annotation and more critical reading.

    5. Active reading is thoughtfully engaging with a book at all steps in the reading process. From deciding to read right through to reflection afterwards, you have a plan for how you are going to ingest and learn what’s in the book.

      'Reading' doesn't start and end with the first and last pages.

    1. I run away to Twitter because the article is challenging me and causing me to experience uncomfortable emotions, and Twitter promises the opposite.

      We'll always be inclined to take the path of least resistance.

    2. [make progress] on the important stuff first

      Set aside your most productive time of the day ('productive' in terms of when your cognitive energy is highest) and move an important project forward. In the moment it may not feel like a big thing but over time those small movements get you closer to achieving something of value.

    3. focusing on a few things that count

      And this is a value-based judgement, not an economic one.

    4. you feel on top of everything

      You can never feel on top of everything because there are always more things.

    5. every commitment we make to a person, place, or line of work rules out countless others that may fulfill us

      Which is why it really makes sense to think carefully about our commitments. What is the opportunity cost of taking on this new thing? What does it prevent me from doing?

    6. a fairly modest six-figure number of weeks—310,000—is the approximate duration of all human civilization since the ancient Sumerians of Mesopotamia

      I'm often taken aback at how poorly we (or maybe just me) understand time over the longer term.

    1. Despite the numerous discoveries of the method (the paper even explicitly mentions David Parker and Yann LeCun as two people who discovered it beforehand) the 1986 publication stands out for how concisely and clearly the idea is stated. In fact, as a student of Machine Learning it is easy to see that the description in their paper is essentially identical to the way the concept is still explained in textbooks and AI classes.

      You don't necessarily have to be the first to describe an idea; you just have to describe it in a way that lots of people can understand it. There's some value to publishing 'explanations' of difficult ideas.

    1. answers to the following three questions

      It's largely because of these questions that I decided to shift my research slightly, from educational technology in general, to AI and machine learning in higher and professional education. I think that AI and ML is an important and pressing problem, and while it's not neglected in technology circles, it is not even on the radar for most health professionals and educators.

    1. One AI designed to play games such as Tetris, for instance, found that if it paused the game, it would never lose—so it would do just that, and consider its mission accomplished. You might accuse a human who adopted such a tactic of cheating. With AI, it’s simply an artifact of the reality that machines don’t think like people.There’s still tremendous upside in letting AI software teach itself to solve problems. “It’s not about telling the machine what to do,” says research staff member Nicholas Mattei. “It’s about letting it figure out what to do, because you really want to get that creativity . . . [The AI] is going to try things that a person wouldn’t maybe think of.” But the less software thinks like a human, the harder it becomes to anticipate what might go wrong, which means that you can’t just program in a list of stuff you don’t want it to do. “There’s lots of rules that you might not think of until you see it happen the way you don’t want it,” says Mattei.

      Algorithms don't tackle problems using the same logic that we do. This is a good thing if we're interested in exploring the solution space for difficult problems, but possibly a bad thing if we want it to drive us to the airport.

    1. The position of the leader is ultimately an intensely solitary, even intensely lonely one. However many people you may consult, you are the one who has to make the hard decisions. And at such moments, all you really have is yourself.

      Leadership may be lonely.

    2. thinking out loud, discovering what you believe in the course of articulating it

      Sometimes you don't know what you think until you try to explain it.

    3. Introspection means talking to yourself, and one of the best ways of talking to yourself is by talking to another person. One other person you can trust, one other person to whom you can unfold your soul.

      Talking to a trusted friend can reflect your thinking and ideas back to you, in a different, nuanced way that you may struggle to do by yourself. It's another way to think about how you think.

    4. most books are old. This is not a disadvantage: this is precisely what makes them valuable.

      The fact that they're still around suggests that there's something in them that has stood the test of time. Compare this to most books that, even if they're popular today, are forgotten by tomorrow. What are the lessons you can learn from books that have been around for a hundred years?

    5. Marlow believes in the need to find yourself just as much as anyone does, and the way to do it, he says, is work, solitary work. Concentration. Climbing on that steamboat and spending a few uninterrupted hours hammering it into shape. Or building a house, or cooking a meal, or even writing a college paper, if you really put yourself into it.

      Cal Newport talks about the idea of craft. Of mastering a difficult skill through sustained commitment to challenging work.

    6. you will find that the answers to these dilemmas are not to be found on Twitter or Comedy Central or even in The New York Times. They can only be found within—without distractions, without peer pressure, in solitude.

      The answers to the most important questions that confront us are not to be found in distraction.

    7. It seems to me that Facebook and Twitter and YouTube—and just so you don’t think this is a generational thing, TV and radio and magazines and even newspapers, too—are all ultimately just an elaborate excuse to run away from yourself. To avoid the difficult and troubling questions that being human throws in your way. Am I doing the right thing with my life? Do I believe the things I was taught as a child? What do the words I live by—words like duty, honor, and country—really mean? Am I happy?

      See Postman, N. (2006). Amusing ourselves to death: Public Discourse in the Age of Show Business (20th anniversary ed.). Penguin Books.

      We use media to distract ourselves from more useful and interesting pursuits.

      Also see Huxley, A. (1946). Brave new world. Harper & brothers, which confirms that this is not a new phenomenon.

    8. You simply cannot do that in bursts of 20 seconds at a time, constantly interrupted by Facebook messages or Twitter tweets, or fiddling with your iPod, or watching something on YouTube.

      These other activities make you feel like you're busy when you're just running on the spot.

    9. Thinking means concentrating on one thing long enough to develop an idea about it.

      Cal Newport's concept of Deep work.

    10. people do not multitask effectively. And here’s the really surprising finding: the more people multitask, the worse they are, not just at other mental abilities, but at multitasking itself.

      Context-switching comes at a significant cognitive cost.

    11. moral courage, the courage to stand up for what you believe

      Is this 'moral courage'? I would think of this as 'conviction', which is different. Also, given the scandal in his personal life, maybe holding him up as an exemplar of morality isn't ideal.

    12. he has the confidence, the courage, to argue for his ideas even when they aren’t popular

      This needs a commitment to a vision.

    13. the changing nature of warfare means that officers, including junior officers, are required more than ever to be able to think independently, creatively, flexibly.

      Not just in warfare; these characteristics are necessary for leading in modern society.

    14. People who can formulate a new direction: for the country, for a corporation or a college, for the Army—a new way of doing things, a new way of looking at things.

      "Managers manage within the context of a prevailing paradigm. Leaders take us from one paradigm to another." - Joel Bankeur

    15. People, in other words, with vision.

      Leaders have a vision.

    16. What we don’t have are leaders. What we don’t have, in other words, are thinkers.

      Leaders are thinkers.

    17. you will find yourself in environments where what is rewarded above all is conformity

      After all, this is what basic education trains us to be.

    18. picking a powerful mentor and riding his coattails until it’s time to stab him in the back

      This seems especially harsh. I'm not sure that this is a reasonable default position.

    19. excellence isn’t usually what gets you up the greasy pole. What gets you up is a talent for maneuvering

      But we've also said that 'excellence' (actually, 'success') isn't the same thing as leadership.

    20. the head of my department had no genius for organizing or initiative or even order, no particular learning or intelligence, no distinguishing characteristics at all. Just the ability to keep the routine going

      I've often said that universities are big machines that keep going, more from momentum than from anything that any of us does. But this adds a bit of nuance. Yes, there's momentum but even momentum runs out eventually. You need people to keep the momentum going. These people, while in positions of authority, tend not to be leaders.

    21. need to know how bureaucracies operate, what kind of behavior—what kind of character—they reward, and what kind they punish

      All systems have frameworks that you need to understand if you're going to navigate through them. When you don't understand how the system works, you're less likely to make progress.

    22. My students, like you, were energetic, accomplished, smart, and often ferociously ambitious, but was that enough to make them leaders? Most of them, as much as I liked and even admired them, certainly didn’t seem to me like  leaders. Does being a leader, I wondered, just mean being accomplished, being successful?

      Being successful doesn't necessarily prepare you to lead, but does that also mean that good leaders need not be successful? Is there any kind of relationship between these ideas, or are they completely separate?

    23. Any goal you set them, they could achieve. Any test you gave them, they could pass with flying colors.

      The measures we use to determine 'success' are not the same measures we should use to determine 'leader'.

    24. trained to be world-class hoop jumpers.

      Exactly. These are not necessarily leaders; but they do seem an awful lot like conformists, or followers.

    25. Well, it turned out that a student who had six or seven extracurriculars was already in trouble. Because the students who got in—in addition to perfect grades and top scores—usually had 10 or 12.

      This doesn't seem reasonable. There's no way that all of those extra-curriculars had anything to do with what the student was applying for. It's just a signal that this is someone who puts everything else aside to build up their CV. Is that a good thing?

    1. One way to enhance learning is to revisit material in different contexts over time, perhaps using different types of active learning exercises, which will give learners additional associations to help them organize and retrieve what they learn

      Distributed practice, Retrieval practice, Interleaved practice, and Elaboration.

    2. you should also direct them to look for common problems and to correct those problems when they try again later.

      Ideally, you'd like the students to identify the differences between their own performance and the standard or reference goal.

    1. There are statistical techniques for compensating for fragmentary and heterogeneous data – they are difficult and labor-intensive, and work best through collaboration and disclosure, not secrecy and competition.

      Again, an entirely reasonable conclusion to reach; a far cry from the "algorithms are racist" and "data kills people" tone of the article.

    2. The researchers involved likely had the purest intentions, but without the discipline of good science, they produced flawed outcomes – outcomes that were pressed into service in the field, to no benefit, and possibly to patients' detriment.

      This is a far more nuanced and reasonable conclusion than almost the whole of the preceding text suggests.

    3. the data and methods were often covered by nondisclosure agreements with medical "AI" companies. This foreclosed on the kind of independent scrutiny that might have caught these errors.

      We need different rules for AI models used in healthcare; if you want to use ML in healthcare, maybe you don't get to keep your algorithm proprietary.

    4. It also pitted research teams against one another, rather than setting them up for collaboration, a phenomenon exacerbated by scientific career advancement, which structurally preferences independent work.

      Perverse incentives in academia.

    5. And then they killed people with drones based on the algorithm's conclusions.

      This is a serious claim. It'd be good to link to something more suitable than an Ars article with the headline, "...may be killing thousands of people". Disappointing.

    6. "data scientists"

      This isn't good writing. They're not "data scientists"; they're data scientists.

    7. algorithms can be racist. The dingbat rebuttal goes, "Racism is an opinion. Math can't have opinions. Therefore math can't be racist."

      Isn't it more like "Algorithms can be trained on biased data that encodes racist behaviour among people"? Racism isn't coded into the algorithm. People are racist and the algorithms reflect that back to us. If the data were different then the algorithm would reflect that alternative to us.

    1. “In order to process speech today, we rely on complex algorithms that include multiple machine learning models. One model maps incoming sound bytes into phonetic units. Another one takes and assembles these phonetic units into words. And then a third model predicts the likelihood of these words in a sequence.”

      One of the challenges of ML is knowing what models to use, when to use them, how to use them, and what to train them on.
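The chained-model structure described in the quote can be sketched as a pipeline of stages. Everything below is a toy illustration: each function is a stub standing in for a trained model, and the names, phoneme symbols, and scores are assumptions, not a real ASR system.

```python
def acoustic_model(audio_frames):
    """Stage 1: map incoming sound (audio frames) to phonetic units."""
    return ["HH", "EH", "L", "OW"]  # stub: pretend we recognised phonemes

def pronunciation_model(phonemes):
    """Stage 2: assemble phonetic units into candidate words."""
    return ["hello"] if phonemes == ["HH", "EH", "L", "OW"] else ["<unk>"]

def language_model(words):
    """Stage 3: score how likely this word sequence is."""
    return 0.9 if words == ["hello"] else 0.1  # stub probability

def transcribe(audio_frames):
    """Chain the three models, as the quote describes."""
    phonemes = acoustic_model(audio_frames)
    words = pronunciation_model(phonemes)
    score = language_model(words)
    return words, score

words, score = transcribe([0.1, 0.2, 0.3])
print(words, score)  # ['hello'] 0.9
```

The point is structural: the output of each model is the input of the next, so choosing, ordering, and training the individual models (the challenge noted above) determines the behaviour of the whole system.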

    1. These applications also rely on sending a large amount of information to the cloud, which causes a new set of problems. One regards the sensitivity of the information. Sending and storing so much information in the cloud will entail security and privacy challenges. Application developers will have to consider whether the deluge of information they’re sending to the cloud contains personally identifiable information (PII) and whether storing it is in breach of privacy laws. They’ll also have to take the necessary measures to secure the information they store and prevent it from being stolen, or accessed and shared illegally.

      See federated machine learning for a discussion on how we might avoid some of these challenges.

    2. They have intelligence at the edge.

      This doesn't seem to be a great analogy. A better one would be that we do have centralised processing, it's just mobile. When we do consult the internet or the library, we're still not going to a centralised source...those things are themselves decentralised. I'm not sure that this is helpful.

    1. Short overview of federated machine learning.

    2. But if a company wants to train machine learning models that involve confidential user information such as emails, chat logs, or personal photos, then collecting training data entails many challenges.

      This is exacerbated when dealing with patient information, which is regulated even more than other kinds of personal information.

    3. it would be preferable for the data to stay on the user’s device instead of being sent to the cloud.

      It's more efficient and reduces the risk of data being shared.


    4. Every time the application wants to use the machine learning model, it has to send the user’s data to the server where the model resides.

      This is especially problematic when sensitive data (e.g. patient health records) need to be sent to centralised datasets.

    5. Gathering training datasets for machine learning models poses privacy, security, and processing risks that organizations would rather avoid.

      It's also very expensive.
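The federated approach discussed in this section can be sketched with federated averaging (FedAvg): each client takes gradient steps on its own private data, and only the updated model parameters are sent to the server for averaging. The single-parameter model, learning rate, and synthetic data below are illustrative assumptions.

```python
import random

def local_update(w, data, lr=0.05):
    """One local gradient step for y = w * x; data never leaves the client."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w, clients):
    """Server averages the clients' locally updated parameters."""
    return sum(local_update(w, data) for data in clients) / len(clients)

random.seed(0)
true_w = 3.0
# Three devices, each holding its own private (x, y) samples.
clients = [[(x, true_w * x) for x in (random.uniform(-1, 1) for _ in range(20))]
           for _ in range(3)]

w = 0.0
for _ in range(200):
    w = federated_round(w, clients)
print(round(w, 2))  # converges toward 3.0
```

The raw samples stay on each device; only `w` crosses the network, which is what makes this pattern attractive for regulated data such as patient records.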

    1. One way to spot a poor thinker is to see how many of their decisions boomerang back to them.

      How many times do you have to redo something that you didn't get right the first time?

    2. Hard thinking is understanding the problem, understanding the variables and the nuances, thinking through the second and third-order effects, and often understanding that a little pain now will make the future a lot easier.

      Understanding takes time.

    1. Given the costs of having an experienced advisor regularly available to students, it’s not always realistic. But AI could be the experienced advisor, powered by learnings from big data.

      This reminds me of the "illustrated primer" in Stephenson, N. (1995). The Diamond Age. Although in that case it's an entirely AI-based system, with no human interaction necessary.

    2. When associate’s degree students were paired with an experienced advisor who met with them on a regular basis, drop-out rates were cut in half.

      So, not a 'teacher' who provides access to specialised information but someone who can guide and advise.

    3. “The real power of artificial intelligence for education is in the way that we can use it to process vast amounts of data about learners, about teachers, about teaching and learning interactions,” said Rose Luckin, a professor of learning-centered design at University College London. “[It can] help teachers understand their students more accurately, more effectively.”

      I'd like to see teachers working as coaches, reviewing learning dashboards and helping students interpret the data they're getting back from the learning system.

    4. Now all of those students can sit in the same classroom, with the same teacher, and learn at their own pace

      What's the teacher doing there?

    5. These systems wouldn’t replace teachers.

      Everyone says this. Now it just feels like 'the thing we need to say so as to not upset anyone'. It's almost like a polite nod in the right direction.

    1. We work within regulatory constraints in different jurisdictions, but even those constraints are often much more flexible than we give them credit for and there is much reimagining that we can do, even without changing them.

      The challenge of professional programme accreditation and regulation is significant. If you're lucky enough to have a programme accreditor that allows flexibility, that's great. However, many/most professional regulators are conservative by nature and tend to push back against the kinds of teaching and learning suggested here.

    2. We need to ask ourselves what we want students to know when they have completed our programs; what do we want them to be able to do, what questions do we want them to be asking, and what sorts of contributions should they be able to make.

      Again, make undergraduate learning look more like the learning that professionals do.

    3. A move back to exclusive in-person learning will put those barriers back up, for no defensible reason

      Here's a reason: Many students around the world, maybe the majority, in fact, simply don't have constant, cheap, and stable internet connections.

    4. Teaching can and should happen in various modes

      I don't understand why it's so hard for teachers to recognise that this is what their own learning looks like. Why shouldn't we design learning spaces for students that reflect how we learn?

    5. Ask students. They will tell you. The lectures are for reviewing in fast forward when you are ready, or if you need to, or for replaying a segment a few times for clarity. The instructor is for learning how to evaluate and work with the information, for exploring it, and for learning about key insights. We need to more clearly separate the material from the teaching.

      Is this anecdotal? Just the author's own experience?

    6. First, it has made it impossible to deny that university teaching is no longer about content delivery.

      I'd like to agree with this but my experience has been that content delivery has been by far the dominant response.

    1. These effects should be anticipated. This means that a suite of indicators is always preferable — a single one will invite gaming and goal displacement (in which the measurement becomes the goal).

      Also known as Goodhart's Law.

    2. Simplicity is a virtue in an indicator because it enhances transparency. But simplistic metrics can distort the record (see principle 7). Evaluators must strive for balance — simple indicators true to the complexity of the research process.

      We should balance the use of both quantitative and qualitative data to inform decision-making re. research impact.

    3. Review may be based on merits relevant to policy, industry or the public rather than on academic ideas of excellence. No single evaluation model applies to all contexts

      Recognise that citation count, impact factors and other quantitative metrics aren't necessarily aligned with broader contexts.

    4. informed judgement

      Odd phrasing. What do you think informs judgements, if not data? There's this weird thing that happens when people conflate 'data' with 'numbers', which is just wrong.

    5. However, assessors must not be tempted to cede decision-making to the numbers.

      Quantitative metrics are important but should be used alongside qualitative ones, in the same way that mixed methods research designs integrate quantitative and qualitative data so that each makes up for the weaknesses of the other.

    6. Scientists searching for literature with which to contest an evaluation find the material scattered in what are, to them, obscure journals to which they lack access.

      It's not easy to marshal the resources necessary to make this argument; there's no single place that serves as a collection of the evidence you need.

    7. Across the world, universities have become obsessed with their position in global rankings (such as the Shanghai Ranking and Times Higher Education's list), even when such lists are based on what are, in our view, inaccurate data and arbitrary indicators.

      I think there's a difference between a genuine concern for the question of how we evaluate the impact of research (after all, this has very important real world implications), and the competition among universities to increase their ranking. These are two different things entirely. We can have evaluations of research impact without having university ranking. I think we need to be careful of conflating the two.

    8. Lately, metrics related to social usage and online comment have gained momentum — F1000Prime was established in 2002, Mendeley in 2008, and Altmetric.com (supported by Macmillan Science and Education, which owns Nature Publishing Group) in 2011.

      See altmetrics.

    9. The problem is that evaluation is now led by the data rather than by judgement.

      But what is 'judgement' based on? I feel like there's a specific definition of 'data' being used here that's not made explicit. We all rely on data, albeit in different forms and different weightings, to make judgements.

    1. “This is a cultural thing,” says Bertuzzi, “and it takes pressure from multiple points to change behaviour”.

      Not just 'cultural' but also an ecosystem within which many stakeholders must function.

    1. Assessing staff solely on the basis of quantitative metrics is never acceptable, no matter what type of metric is being used

      See Goodhart's Law and some background on why these kinds of measurements are difficult.