10,000 Matching Annotations
- Jul 2025
-
tabletop-creator.com
-
tabletop-creator.com
-
this useful article
the link should be underlined
-
-
www.youtube.com
-
The power of contrasts. The only way for hell to work is for its inhabitants to have hope; if they had no hope, torture and suffering would be pointless. Without hope, hell loses its power.
-
-
-
Reviewer #1 (Public review):
Summary:
In this manuscript, the authors aim to address significant limitations of existing experimental paradigms used to study dyadic social interactions by introducing a novel experimental setup - the Dyadic Interaction Platform (DIP). The DIP uniquely allows participants to interact dynamically, face-to-face, with simultaneous access to both social cues and task-related stimuli. The authors demonstrate the versatility and utility of this platform across several exemplary scenarios, notably highlighting cases of significant behavioral differences in conditions involving direct visibility of a partner.
Major strengths include comprehensive descriptions of previous paradigms, detailed explanations of the DIP's technical features, and clear illustrations of multimodal data integration. These elements greatly enhance the reproducibility of the methods and clarify the potential applications across various research domains and species. Particularly compelling is the authors' demonstration of behavioral impacts related to transparency in interactions, as evidenced by the macaque-human experiments using the Bach-or-Stravinsky game scenario.
Strengths:
The DIP represents a methodological advance in the study of social cognition. Its transparent, touch-sensitive display elegantly solves the problem of enabling participants to attend to both their social partner and task stimuli simultaneously without requiring attention switching. This paper marks a notable step forward toward more options for naturalistic yet still lab-based studies of social decision-making, an area where the field is actively moving, especially given recent research highlighting significant differences in neural activity depending upon the context in which an action is performed. The DIP offers researchers a valuable tool to bridge the gap between tightly controlled laboratory paradigms and the dynamic, bidirectional nature of real-world social interactions.
The authors do well to provide comprehensive documentation of the technical specifications for the four different implementations of the platform, allowing other researchers to adapt and build upon their work. The detailed information about hardware configurations demonstrates careful attention to practical implementation details. They also highlight numerous options for integration with other tools and software, further demonstrating the versatility of this apparatus and the variety of research questions to which it could be applied.
The historical review of dyadic experimental paradigms is thorough and effectively positions the DIP as addressing a critical gap in existing methodologies. The authors convincingly argue that studying continuous, dynamic social interactions is essential for understanding real-world social cognition, and that existing paradigms often force unnatural attention-splitting or turn-taking behaviors that don't reflect naturalistic interaction patterns.
The four example applications showcase the DIP's versatility across diverse research questions. The Bach-or-Stravinsky economic game example is particularly compelling, demonstrating how continuous access to partners' actions substantially changes coordination strategies in non-human primates. This highlights a key strength of the DIP, which is that it removes a level of abstraction that can make tasks more difficult for non-human primates to learn. By being able to see their partner and actions directly, rather than having to understand that a cursor on a screen represents a partner, the platform makes the task more accessible to non-human primates and possibly children as well. This opens up important avenues for enhanced cross-species investigations of cognition, allowing researchers to study social dynamics in a setting that remains naturalistic yet controlled across different populations.
Weaknesses:
Some of the experimental applications would benefit from stronger evidence demonstrating the unique advantages of the transparent setup. For instance, in the dyadic foraging example, it's not entirely clear how participants' behavior differs from what might be observed when simply tracking each other's cursor movements in a non-transparent setup. More evidence showing how direct visibility of the partner, beyond simply being able to track the position of the partner's cursor, influences behavior would strengthen this example. Similarly, in the continuous perceptual report (CPR) task, the subjects could perform this task and see feedback from their partners' actions without having to see their partner through the transparent screen. Evidence showing that 1) subjects do indeed look at their partner during the task and 2) viewing their partner influences their performance on the task would significantly strengthen the claim that the ability to view the partner brings in a new dimension to this task. These additions would better demonstrate the specific value added by the transparent nature of the DIP beyond what could be achieved with standard cursor-tracking paradigms.
A significant limitation that is inadequately addressed relates to neural investigations. While the authors position the platform's ability to merge attention to social stimuli and task stimuli as a key advantage, they don't sufficiently acknowledge the challenges this creates for dissociating neural signals attributed to social cues versus task-based stimuli. More traditional lab-based experiments intentionally separate components like task-stimulus perception, social perception, and decision-making periods so that researchers can isolate the neural signals associated with each process. This deliberate separation, which the authors frame as a weakness, actually serves an important functional purpose in neural investigations. The paper would be strengthened by explicitly discussing this limitation and offering potential approaches to address it in experimental design or data analysis. For instance, the authors could suggest methodological innovations or analytical techniques that might help disentangle the overlapping neural signals that would inevitably arise from the integrated presentation of social and task stimuli in the DIP setup.
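As one generic illustration of such an analytical approach (a sketch by this reviewer, not a method proposed in the manuscript; all variable names and numbers are hypothetical), task-related and social regressors can be entered into a single multiple-regression model and estimated jointly, rather than relying on temporally separated epochs, provided the two event streams are not perfectly correlated:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000                                              # time points in one session

    # Hypothetical event streams (1 = event present at that time point)
    task_stim = (rng.random(n) < 0.15).astype(float)      # task-stimulus onsets
    partner_gaze = (rng.random(n) < 0.10).astype(float)   # gaze directed at the partner

    # Simulated neural signal mixing both influences plus noise
    neural = 1.5 * task_stim + 0.8 * partner_gaze + rng.normal(0, 1, n)

    # Design matrix with an intercept and both regressors entered together
    X = np.column_stack([np.ones(n), task_stim, partner_gaze])
    beta, *_ = np.linalg.lstsq(X, neural, rcond=None)

    print("estimated weights (intercept, task, social):", np.round(beta, 2))
    # As long as the regressors are not collinear, their contributions are
    # estimated jointly rather than attributed to a single blocked epoch.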
Furthermore, the authors' suggestion to arrange task stimuli around the periphery of the screen to maintain a clear middle area for viewing the partner appears to contradict their own critique of traditional paradigms. This recommended arrangement would seemingly reintroduce the very problem of attentional switching between task stimuli and social partners that the authors identified as a limitation of previous approaches. The paper would be strengthened by discussing the potential trade-offs associated with their suggested stimulus arrangement. Additionally, offering potential approaches to address these limitations in experimental design or data analysis would enhance the paper's contribution to the field.
-
-
static.igem.org
-
SYNBIO AUCTION HANDBOOK, 22.4.19. Made with ❤ for D.A.V. Public School, Velachery. iGEM 2019 SASTRA Team - Human Practices #1
(from Slack chats: CR) What is everyone’s favorite resource to show to undergraduates new to synthetic biology? Bonus points to printable/written/non-video stuff
(GP) Not directly a resource, as this assumes some level of bg has been given, but I was introduced to different parts of a genetic construct via this game during my undergrad:
-
-
socialsci.libretexts.org
-
The concept of online identities is fascinating, especially how individuals craft different personas across platforms. For instance, a student may use LinkedIn to present a professional side, while Facebook highlights personal connections. It’s interesting how these platforms allow for a wide range of self-expression, from informal language with friends to creating game avatars or posting Instagram stories in a second language. These digital spaces not only help build new identities but also expand how we connect with others and explore different cultures. How do you think these online identities influence our real-life interactions and perceptions?
-
-
researchonline.ljmu.ac.uk
-
Regardless of this, the sub-sample of validated carbon monoxide outputs provided at 3-month follow-up correlated very strongly with the self-report measures, supporting previous evidence that self-report measures are highly accurate for smokers who are not adolescents, high risk smokers, or medical patients
This still seems to me like it would be easy to game and get around, with no easy way to verify.
-
Exhaled carbon monoxide outputs were available from the stop smoking practitioners to verify the self-reported measures at baseline and 3-month follow-up.
I've never heard of these tests before, but I don't know how well they would work at a 3-month follow-up. I'm skeptical that they would work if you had a cigarette more than a few days before, and everywhere I look online says that they aren't very effective past 2 days. This seems like it would be an easy system to game if you simply lie about your smoking in the self-report, but I could be wrong.
-
-
theahura.substack.com
-
The best way to understand Punjabi weddings is to realize that every single thing that's happening fulfills one of three goals. Meeting the religious obligations of marriage, as defined by the Vedas. Merging two families together and building a joint community. Showing off as much as humanly possible. You do your vows by walking around a sacred fire seven times while a pandit recites prayers in Sanskrit. That's category one. But you take those vows while dressed in the most ornate clothing you will ever see, decked out in jewelry fit for royalty. That's all category three. O, and while those vows are going on, there's a low key capture-the-flag game happening with the groom's shoes, with the bride's family members attempting to steal them from the groom's family members. That's, of course, category two. Every ritual, event, and tradition is like this, multiplied over at least 5 days.
Wry introspection, elaborate detail
-
-
open.spotify.com
-
It's a good idea to explore cattle raiding. The football game we played, pushing people back into their own village, is a tribal reality which is coded into the GAA. I also sense it is coded into the stories of rape, of babies killed by nuns, and of children raped by priests and uncles and fathers, which is also a reality of the Gaelic psyche.
-
-
www.dappfort.com
-
Development Company Build your Crypto business empire with the most redefined crypto wallet in the digital space. Dappfort, a promising crypto wallet development company, offers you next-gen-powered wallet solutions. Let's work together Crypto Wallet Development Dappfort is a promising leader in crypto wallet development, offering an advanced wallet solution that helps wallet users buy, sell, trade, and swap a variety of cryptocurrencies at any time. Are you a crypto enthusiast looking to launch your own business in the emerging crypto market? Then Dappfort’s crypto wallet would be a great solution. The certified blockchain developers of Dappfort work on your crypto wallet idea to bring it to reality. Crypto wallet development requires a team of experienced developers from UI design to smart contract programming. Dappfort takes your wallet development process through a sequence of workflows to bring the best idea to the digital world. We, as a team, make sure every wallet is hassle-free, compatible, secure, and user-friendly. Also, we are keen on providing feature-rich crypto wallet development services tailored to your business needs. Crypto Wallet Development Services We offer a wide range of crypto wallet development services based on the requirement of client. Here is a list of the top services we work for our clients NFT Wallet A dedicated NFT wallet that allows users to hold only non-fungible tokens, the wallet is specifically designed to buy, sell, trade, and hold any kind of NFT safely. Multi-Chain Wallet The futuristic wallet that attracts most users around the globe is this one, as this allows users to trade and hold multiple cryptocurrencies across various blockchain networks. Coin-Based Wallet We also work on dedicated crypto wallet projects, especially designed for a particular crypto coin based on its dedicated blockchain network. White Label Solution Looking to launch a similar wallet like Trust Wallet? Avail our white label crypto wallet solution and launch in the crypto space in no time. Wallet as a Service Our professionals are also employed to develop robust digital solutions to handle every aspect of functions and transactions in the wallet. Browser Extensions Are you looking for something like Metamaks? We have got you, our professionals are ready to launch your next browser extensions wallet like Metamask. Features of our Cryptocurrency Wallet Solutions We make sure every wallet developed by our experts is planned with the most prominent features that users look for and helps admin manage their business. Admin features User features Effective Data It provides all the data of every transaction and trade that took place through the wallet platform, and also the activity of the Dapps and staking details. API Admin has control over the API integrated to the platform and can modify, add, or remove the API based on the needs of the users of the crypto wallet. Analytics The analytics dashboard helps the admin to analyze the users' access to the features of the wallet, and this helps to upgrade the features that users opt for the most. Security Management The admin holds the key position to manage the security of the wallet, where he enables various security features to protect the crypto wallet solution. Data Backup On a daily basis, the wallet records all the required data and stores it for future analysis, and also helps to transfer data in the future at any time. 
Notifications Admins will be able to send notifications to a specific set of users or to all users of the crypto wallet from the dashboard. Trade History The user will be able to go through the history of trades done before for analysis, and this helps to plan future trades and also the previous assets he/she was holding. Multiple Cryptos The wallet is designed to hold multiple cryptocurrencies, which allows users to explore various assets and let them choose the best crypto they want to buy/trade/sell. Referral & Reward The referral and reward system allows users to earn commission when a user makes any transaction with the wallet through his/her referral ID after creating their wallet. User Dashboard The user dashboard holds all the information of the user since when he/she made their first transaction in the wallet, which helps them to analyze their digital assets. Staking Staking helps users to lock their digital assets for a certain period and allows them to earn rewards and crypto based on their locked assets and the time period. Multiple Payment Gateways Users can deposit/withdraw their funds through any of their desired payment gateways, which makes their process easier. Addon Features Apart from the above-listed features, below are a few additional features that help the wallet reach out to maximum users around the world. Token Listing Multiple tokens can be listed in the wallet to attract multiple users around the globe and reach your business goals. Browser Extension Are you looking to add browser extensions like Metamask for your wallet? We have got you, our developers will bring it for you. Fiat Support We also bring you Fiat support, which allows users around the globe to deposit their Fiat into the wallet for the purchase of any asset. Cross-Chain The cross-chain options allow users to trade across multiple blockchain networks, allowing users to connect to various Dapps they want. Dapp Dapps play a major role in the crypto space, so with the wallet, we integrate the Dapp to ease the process for users for further needs. Lightning Network BTC Lightning Network allows instant, low-cost transactions that are completed using off-chain payment techniques, enabling fast Bitcoin transfers. Security Features We are keen on the security features of the crypto wallet to avoid any kind of disruption in the service and to avoid any hacks, where we add multiple layers of security, and here are a few among them Multi-Factor Authentication Multiple kinds of authentication are used, including passwords, hardware keys, and more. This provides extra security levels even if one aspect is compromised. Encryption Integrates strong encryption algorithms such as AES or ECC to protect sensitive data. It ensures that encrypted data is not readable without the decryption key. Biometric Authentication requires the use of unique physical traits such as fingerprints or facial recognition, providing an additional security layer that prevents any hacks. Anti-Phishing software Anti-phishing software detects fraudulent websites and programs. It avoids scams and the exposure of private keys or login information. DDoS Protection DDoS protection ensures that the overwhelming unwanted traffic is blocked that aims to disrupt or break the system, not directly preventing data breaches. 
Cryptocurrency Wallet Development Process Project Scope Analysis Determine Tech Stack UI/UX Design Backend Development Smart Contract Coding Set up Security Layers Dapp Integration API Integration Testing and Deployment Project Scope Analysis and Take Profit We assigned the experts on our team to analyze the project scope and proceed with the project's vision and mission, thereby informing the development process and project outcome. Determine Tech Stack Once the process is set up, we identify the required tools and tech stack for developing the project, presenting the best solution for the digital space. UI/UX Design and Candlestick Close Then our designers will start their work with designing the best impressive UI for your wallet that helps the users to find and access every feature of the wallet. Backend Development Simultaneously, our developers will proceed with the backend development process and set up the core functionality of the wallet and additional features as the project demands. Smart Contract Coding On the other hand, our certified blockchain developers will be coding smart contracts that play a major role in the crypto wallet and secure every transaction. Set up Security Layers Once the core functionality and features are set up, we will add multiple layers of security features to the wallet to safeguard the users and digital assets of the wallet. Dapp Integration If the client requires any decentralized application to be integrated with the wallet to provide additional services, our developers would work to integrate DApps with the wallet. API Integration For various additional features and functions, API’s are integrated with the wallet based on the project demand to ease multiple operations for the users. Testing and Deployment Once the development process is done, the wallet is taken through a series of tests where the bugs & vulnerabilities are fixed and removed before deployment. Benefits of Launching a Crypto Wallet Looking to launch a crypto wallet but don’t have any idea on why to launch and what benefits you can get, here are a few things that may help before you move with the wallet development process Global Reach Crypto Wallet allows you to take your business to a global audience and helps to build your crypto business empire further in the digital space. Multiple Revenue Streams Crypto wallet lets you make multiple revenue which, including transaction fees, token listing fees, even for staking, exchange, and more. DeFi Users can access DeFi platforms, and also users can benefit from staking, lending, and borrowing, making it a one-stop solution. Market Demand The crypto market is growing, and the crypto space demands that every user have a wallet for any operation in the crypto space, which allows you to reach more users. Secured Crypto wallets are a more secure solution in the crypto space because of multiple security layers and deployed smart contracts, which is an added advantage. Scalability The wallet offers hassle-free performance to the users, even if more than 50000+ active users access the wallet at the same time. Tech Stack we use for Cryptocurrency Wallet Development Our certified professionals work on various tech stacks to present the best crypto wallet with top-notch features, and here are a few on which our professionals most commonly work. 
Frontend React Next.Js Vue Backend Node.js Nest.js Express.js Socket.io Blockchain Ledger Bitcoin-core Pinata Cloud Hardhat IPFS Alchemy TronWeb Blockchain network Avalanche Ethereum Bitcoin Solana Polygon Fantom Blockchain platforms Solidity Stellar Hyperledger fabric Rust Additional tools Apollo Docker REST/GraphQL ClickUp/ Jira GitHub What Makes Dappfort the Best Cryptocurrency Wallet Development Company? Dappfort is the best cryptocurrency wallet development company that works with clients around the world on various requirements and provides the best solution that helps their business grow in the crypto space. If you are looking to launch your wallet in the crypto space or looking to give your existing wallet an upgrade, connect with our experts, and they will help you reach your business goal with advanced solutions. Dappfort’s crypto wallet solution offers a wide range of opportunities in digital assets that attract a lot of users around the globe and help you reach your business goals. The crypto wallet is the gateway to every transaction and every operation that takes place in the crypto space, so if you have an idea to enter the crypto space, then launching a wallet would be the best solution at this time. Contact us! Book a call or fill out the form below and we'll get back to you once we've processed your request. Select Country I agree the Terms and conditions & send me NDA FAQs Related to Crypto Wallet App Development Frequently asked questions regarding Crypto Wallet App What are the benefits of developing an crypto Wallet app? The benefits of creating a cryptocurrency wallet app include convenience, security, quick transactions, connection with other services, and easy access to digital payments. How can AI be integrated into crypto Wallet application development? AI can improve eWallet apps by providing tailored suggestions, fraud detection, speech recognition, and chatbots, which improves user experience, security, and transaction efficiency. What technologies are commonly used in crypto Wallet apps development? It comprises mobile app frameworks, backend technology, payment gateway integrations, and AI technologies. What are the essential components to add in a cryptocurrency wallet app? It includes user registration, account linking, wallet balance management, transaction history, QR code scanning, peer-to-peer money transfers, bill payments, and security features like as authentication. How can I monetize crypto Wallet app? An eWallet app's monetization options include transaction fees, merchant partnerships, in-app advertising, premium feature subscription plans, and app licensing to other businesses. How can I ensure the security of an crypto Wallet app? Implement data encryption, two-factor authentication, frequent security audits, industry standard compliance, user education on secure practices, and AI-powered fraud detection to assure the security of your crypto wallet software. Explore all of free resource Discover guides on wallet security, reviews of the best options, and the latest trends in the cryptocurrency space. Stay informed and make the most of your digital assets! Explore Insights Get in touch Get in touch with us for all your crypto wallet inquiries! Whether you have questions, need support, or want to share feedback, we’re here to help you navigate your digital asset journey. Contact us Boost your business with our customized web3 digital solutions. Partner with Dappfort to turn your vision into reality. 
A crypto wallet development company that can be referred to if you are looking to develop your own crypto wallet.
-
-
app.podscribe.com
-
daft
English Explanation:
In the excerpt, the term "daft" is used to describe something foolish or silly. The speaker asserts, "I’m not daft," indicating awareness of the situation and a rebuttal to any suggestion that they lack common sense. The context suggests that the conversation involves planning a round of golf, which is being framed not just as a game but as an opportunity for meaningful conversation. The speaker expresses confidence that their plans, although considered impractical, could ultimately persuade others.
Chinese Explanation:
在这一段中,“daft”一词用来描述愚蠢或傻的事物。说话者表示:“我并不傻”,这表明他们对情况有清晰的认识,并反驳了任何暗示他们缺乏常识的说法。上下文提到的高尔夫活动不仅是一场比赛,更是一个进行有意义谈话的机会。说话者自信他们的计划尽管被认为是不切实际,但可能最终会说服他人。
-
-
www-sciencedirect-com.ezp1.lib.umn.edu
-
Separate models including participants' overweight and smoking status and their interaction with incentive type show that both the very overweight and daily smokers are more in favour of incentive-based treatments than those who were never overweight or never smoked
These could be people with low intrinsic motivation who are just looking to game the system.
-
-
www.youtube.com
-
children are expensive
children are expensive. Like, if you can say the sentence "children are expensive," there may be something wrong with your framing of what children are all about. So that's one of the rules of the game: children are expensive. And I know many people, and young people today, even in my... it's like, should I delay my career or should I have a family? This is not a good way to run your society, where people feel like maybe we shouldn't have a family. I mean, isn't the point to have a society? You have to have babies to have a... so anyway
-
-
zettelkasten.de
-
Because of its unique structure, the Antinet is noted as “a surprise generator,” and a system that develops “a creativity of its own.”
-
This is important because it allows one to communicate with the Antinet, transforming it into a communication experience with a second mind, a doppelgänger, or a ghost in a box, as Luhmann called it. (5)5 This is the entity Luhmann referred to when he titled his paper, Communicating with Noteboxes.
The comment on a containing annotation talks about Markov Monkeys, in the sense of attributing personality and life to something inanimate by talking with it (much as one talks with a Tarot deck or another interpretive/narrative game). These would be the limits of the "ghost in a box," which is worth acknowledging despite how useful the animistic interpretation is for the archive.
-
-
www.biorxiv.org
-
eLife Assessment
This valuable work investigates cooperative behaviors in adolescents using a repeated Prisoner's Dilemma game. The computational modeling approach used in the study is solid and well established, yet evidence supporting certain claims remains incomplete. The work could be strengthened with the consideration of additional experimental contexts, non-linear relationships between age and observed behavior, and modeling details. If these concerns are addressed, the results will be of interest to developmental psychologists, economists, and social psychologists.
-
Reviewer #1 (Public review):
Summary:
Wu and colleagues aimed to explain previous findings that adolescents, compared to adults, show reduced cooperation following cooperative behaviour from a partner in several social scenarios. The authors analysed behavioural data from adolescents and adults performing a zero-sum Prisoner's Dilemma task and compared a range of social and non-social reinforcement learning models to identify potential algorithmic differences. Their findings suggest that adolescents' lower cooperation is best explained by a reduced learning rate for cooperative outcomes, rather than differences in prior expectations about the cooperativeness of a partner. The authors situate their results within the broader literature, proposing that adolescents' behaviour reflects a stronger preference for self-interest rather than a deficit in mentalising.
Strengths:
The work as a whole suggests that, in line with past work, adolescents prioritise value accumulation, and this can be, in part, explained by algorithmic differences in weighted value learning. The authors situate their work very clearly in past literature, and make it obvious the gap they are testing and trying to explain. The work also includes social contexts that move the field beyond non-social value accumulation in adolescents. The authors compare a series of formal approaches that might explain the results and establish generative and model-comparison procedures to demonstrate the validity of their winning model and individual parameters. The writing was clear, and the presentation of the results was logical and well-structured.
Weaknesses:
I also have some concerns about the methods used to fit and approximate the parameters of interest, namely the use of maximum likelihood rather than hierarchical methods to fit models at the individual level; hierarchical fitting may reduce some of the outliers noted in the supplement and may also improve model identifiability.
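To illustrate the intuition behind that suggestion (a toy sketch with made-up numbers, not the authors' pipeline), hierarchical or empirical-Bayes fitting pulls noisy per-participant maximum-likelihood estimates toward the group mean, which is the basic mechanism by which outlying parameter values would be tamed:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical per-participant learning-rate estimates from individual MLE fits
    mle = np.clip(rng.normal(0.35, 0.20, size=40), 0.01, 0.99)

    group_mean = mle.mean()
    group_var = mle.var(ddof=1)
    meas_var = 0.02          # assumed within-participant estimation noise

    # Empirical-Bayes style shrinkage: unreliable individual estimates are
    # pulled toward the group mean in proportion to the noise they carry
    weight = group_var / (group_var + meas_var)
    shrunk = group_mean + weight * (mle - group_mean)

    print("most extreme MLE estimate:      ", mle.max().round(2))
    print("same participant after pooling: ", shrunk[mle.argmax()].round(2))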
There was also little discussion of the structure of the Prisoner's Dilemma and the strategy of the game (that defection is always dominant), meaning that the preferences of the adolescents cannot necessarily be distinguished from the incentives of the game; i.e., they may seem less cooperative simply because they want to play the dominant strategy, rather than having a lower preference for cooperation, all else being equal.
Appraisal & Discussion:
The authors have partially achieved their aims, but I believe the manuscript would benefit from additional methodological clarification, specifically regarding the use of hierarchical model fitting and the inclusion of Bayes Factors, to more robustly support their conclusions. It would also be important to investigate the source of the model confusion observed in two of their models.
I am unconvinced by the claim that failures in mentalising have been empirically ruled out, even though I am theoretically inclined to believe that adolescents can mentalise using the same procedures as adults. While reinforcement learning models are useful for identifying biases in learning weights, they do not directly capture formal representations of others' mental states. Greater clarity on this point is needed in the discussion, or a toning down of this language.
Additionally, a more detailed discussion of the incentives embedded in the Prisoner's Dilemma task would be valuable. In particular, the authors' interpretation of reduced adolescent cooperativeness might be reconsidered in light of the zero-sum nature of the game, which differs from broader conceptualisations of cooperation in contexts where defection is not structurally incentivised.
Overall, I believe this work has the potential to make a meaningful contribution to the field. Its impact would be strengthened by more rigorous modelling checks and fitting procedures, as well as by framing the findings in terms of the specific game-theoretic context, rather than general cooperation.
-
Reviewer #2 (Public review):
Summary:
This manuscript investigates age-related differences in cooperative behavior by comparing adolescents and adults in a repeated Prisoner's Dilemma Game (rPDG). The authors find that adolescents exhibit lower levels of cooperation than adults. Specifically, adolescents reciprocate partners' cooperation to a lesser degree than adults do. Through computational modeling, they show that this relatively low cooperation rate is not due to impaired expectations or mentalizing deficits, but rather a diminished intrinsic reward for reciprocity. A social reinforcement learning model with asymmetric learning rate best captured these dynamics, revealing age-related differences in how positive and negative outcomes drive behavioral updates. These findings contribute to understanding the developmental trajectory of cooperation and highlight adolescence as a period marked by heightened sensitivity to immediate rewards at the expense of long-term prosocial gains.
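For readers less familiar with this model class, the following is a minimal sketch of what an asymmetric learning-rate update typically looks like; the parameter values and names are illustrative only and are not taken from the manuscript:

    import numpy as np

    def update_value(value, outcome, alpha_pos=0.30, alpha_neg=0.10):
        """One asymmetric Rescorla-Wagner step: better-than-expected outcomes
        are weighted by alpha_pos, worse-than-expected ones by alpha_neg."""
        delta = outcome - value                 # prediction error
        alpha = alpha_pos if delta > 0 else alpha_neg
        return value + alpha * delta

    # Toy run: a partner who mostly cooperates (outcome 1) but sometimes defects (0)
    rng = np.random.default_rng(2)
    v = 0.5
    for t in range(20):
        outcome = float(rng.random() < 0.8)
        v = update_value(v, outcome)
    print("learned expectation of cooperation:", round(v, 2))

Age-group differences in alpha_pos versus alpha_neg in a model of this form are what the reviewed account attributes the lower adolescent cooperation to.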
Strengths:
(1) Rigorous model comparison and parameter recovery procedures.
(2) Conceptually comprehensive model space.
(3) Well-powered samples.
Weaknesses:
(1) A key conceptual distinction between learning from non-human agents (e.g., bandit machines) and human partners is that the latter are typically assumed to possess stable behavioral dispositions or moral traits. When a non-human source abruptly shifts behavior (e.g., from 80% to 20% reward), learners may simply update their expectations. In contrast, a sudden behavioral shift by a previously cooperative human partner can prompt higher-order inferences about the partner's trustworthiness or the integrity of the experimental setup (e.g., whether the partner is truly interactive or human). The authors may consider whether their modeling framework captures such higher-order social inferences. Specifically, trait-based models-such as those explored in Hackel et al. (2015, Nature Neuroscience)-suggest that learners form enduring beliefs about others' moral dispositions, which then modulate trial-by-trial learning. A learner who believes their partner is inherently cooperative may update less in response to a surprising defection, effectively showing a trait-based dampening of learning rate.
(2) This asymmetry in belief updating has been observed in prior work (e.g., Siegel et al., 2018, Nature Human Behaviour) and could be captured using a dynamic or belief-weighted learning rate. Models incorporating such mechanisms (e.g., dynamic learning rate models as in Jian Li et al., 2011, Nature Neuroscience) could better account for flexible adjustments in response to surprising behavior, particularly in the social domain.
(3) The developmental interpretation of the observed effects would be strengthened by considering possible non-linear relationships between age and model parameters. For instance, certain cognitive or affective traits relevant to social learning-such as sensitivity to reciprocity or reward updating-may follow non-monotonic trajectories, peaking in late adolescence or early adulthood. Fitting age as a continuous variable, possibly with quadratic or spline terms (see the sketch after this list), may yield more nuanced developmental insights.
(4) Finally, the two age groups compared - adolescents (high school students) and adults (university students) - differ not only in age but also in sociocultural and economic backgrounds. High school students are likely more homogenous in regional background (e.g., Beijing locals), while university students may be drawn from a broader geographic and socioeconomic pool. Additionally, differences in financial independence, family structure (e.g., single-child status), and social network complexity may systematically affect cooperative behavior and valuation of rewards. Although these factors are difficult to control fully, the authors should more explicitly address the extent to which their findings reflect biological development versus social and contextual influences.
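A minimal sketch of the non-linear age analysis suggested in point (3), using simulated data and hypothetical column names (an illustration only, not the authors' analysis):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)

    # Hypothetical data frame: one row per participant
    df = pd.DataFrame({"age": rng.uniform(13, 30, size=200)})
    # Simulated parameter that peaks in the early twenties
    df["alpha_pos"] = 0.6 - 0.004 * (df["age"] - 22) ** 2 + rng.normal(0, 0.05, 200)

    # Linear age effect vs. linear-plus-quadratic age effect
    linear = smf.ols("alpha_pos ~ age", data=df).fit()
    quadratic = smf.ols("alpha_pos ~ age + I(age ** 2)", data=df).fit()

    print("AIC linear:   ", round(linear.aic, 1))
    print("AIC quadratic:", round(quadratic.aic, 1))   # lower AIC favours the curve

Spline terms (e.g., via patsy's bs() in the same formula interface) would allow even more flexible trajectories if the quadratic form proves too restrictive.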
-
-
bmcpublichealth.biomedcentral.com
-
Schedules of contingent reinforcement are also a key to the success of rewards (materialised incentives) and often used in behavioral psychology circles.
This can also be what reinforces bad behavior: it could cause people to take up a bad habit only so they can quit it and game the system.
-
-
www.frenchcolonialstudies.org
-
To Jolliet’s claim that game animals such as bison were plentiful in the Illinois Country, La Salle wrote, “The buffalo are becoming scarce here since the Illinois are at war with their neighbors; both kill and hunt them continually.”
La Salle begins to undermine Jolliet's findings as he was interested in using the resources himself, especially the bison hide which could make him a fortune.
-
-
Local file
-
play the game, man i foul it
-
-
arxiv.org
-
I started reading this paper with great interest, which flagged over time. As someone with extensive experience both publishing peer-reviewed research articles and working with publication data (Web of Science, Scopus, PubMed, PubMedCentral) I understand there are vagaries in the data because of how and when it was collected, and when certain policies and processes were implemented. For example, as an author starting in the late 1980s, we were instructed by the journal “guide to authors” to use only initials. My early papers were all only using initials. This changed in the mid-late 1990s. Another example, when working with NIH publications data, one knows dates like 1946 (how far back MedLine data go), 1996 (when PubMed was launched), and 2000 (when PubMedCentral was launched) and 2008 (when NIH Open Access policy enacted). There are also intermediate dates for changes in curation policy…. that underlie a transition from initials to full name in the biomedical literature.
I realize that the study covers all research disciplines, but still I am surprised that the authors of this paper don’t start with an examination of the policies underlying publications data, and only get to this at the end of a fairly torturous study.
As a reader, this reviewer felt pulled all over the place in this article and increasingly frustrated that this is a paper that explores the Dimensions database vagaries only and not really the core overall challenges of bibliometric data, irrespective of data source. Dimensions ingests data from multiple sources — so any analysis of its contents needs to examine those sources first.
A few specific comments:
-
The “history of science” portion of the paper focuses on English learned societies in the 17th century. There were many other learned societies across Europe, and also “papers” (books, treatises) from long before the 17th century in Middle Eastern and Asian countries (e.g., see the history of mathematics, engineering, governance and policy, etc.). These other histories were not acknowledged by the authors. Research didn’t just spring full-formed out of Zeus’ head.
-
It is unclear throughout if the authors are referring to science or research, and which disciplines are or are not included. The first chart on disciplinary coverage is Fig 13 and goes back to 1940ish. Also, which languages are included in the analysis? For example, Figure 2 says “academic output” but from which academies? What countries? What languages? Disciplines? Also, in Figure 2, this reviewer would have liked to see discussion about the variability in the noisiness of the data over time.
-
The inclusion of gender in the paper misses the mark for this reviewer. When dealing with initials, how can one identify gender? And when working in times/societies where women had to hide their identity to be published…. how can a name-based analysis of gender be applied? If this paper remains a study of the “initial era”, this reviewer recommends removing the gender analysis.
-
Reference needed for “It is just as important to see ourselves reflected in the outputs of the research careers…” (section B).
-
Reference needed for “This period marked the emergence of “Big Science” (Section B). How do we know this is Big Science? What is the relationship with the nature of science careers? Here it would be useful perhaps to mention that postdocs were virtually unheard of before Sputnik.
-
Fig 3. This would be more effective as a % of total papers than as an absolute #.
-
Gradual Evolution of the Scholarly Record. This reviewer would like to see the proportion of papers without authors. A lot of history of science research is available for this period, and a few references here would be welcome, as well as a by-country analysis (or an acknowledgement that the data are largely from Europe and/or English-speaking countries).
-
Accelerated Changes in Recent Times. Again, this reviewer would like to see reference to scholarship on the history of science. One of the things happening in the post-WW2 timeframe is the increase in government spending (in the US particularly) on R&D and academic research. So, is the academy changing, or is it responding to “market forces”?
-
Reflective richness of data. “Evolution of the research community” is not described in the text, nor are collaborative networks.
-
In the following paragraph, one could argue that evaluation was a driver of change, not a response to it. This reviewer would like to see references here.
-
II. Methodology. (i) 2nd sentence missing “to”: “… and full form to refer to an author name…”. (ii) 2nd para: the authors talk about epochs, but the data could be (are) discontinuous because of (a) curation policy, (b) curation technology, (c) data sources (e.g., Medline rolled out in the 1960s and back-populated to 1946). (iii) 4th para refers to Figs 3 and 4 showing a marked change between 1940 and 1950, but Fig 3 goes back only to 1960, and Fig 4 is so compressed it is hard to see anything in that time range. (iv) Para 7: “the active publishing community is a reasonable proxy for the global research population”. We need a reference here and more analysis. Is this Europe? English language? Which disciplines? All academia? Dimensions data? (v) Para 12, “In exploring the issue of gender…”: see comments above. Gender is an important consideration but is out of scope, in this reviewer’s opinion, for this paper focused on use of initials vs. full name.
-
Listing 1. Is there a resolvable URL/DOI for this query?
-
Figs 9-11, 14, 15. This reviewer would like to see a more fulsome examination / discussion of data discontinuities. Particularly around ~1985-2000.
Discussion
-
The country-level discussion suggests the data (publications included) are only those that have been translated into English. Please clarify. Also, please add references in this section. There are a lot of bold statements, such as “A characteristic of these countries was the establishment of strong national academies.” Is this different from other places in the world? How? In the para before this statement, there is a phrase “picking out Slavonic stages” that is not clear to this reviewer.
-
The authors seem to get ahead of themselves talking about “formal” and “informal” in relation to whether initials or full names are used. And then discuss the “Power Distance” and end up arguing that it isn’t formal/informal … but rather publisher policies and curation practices driving the initial era and its end.
-
And then the authors come full circle on research articles being a technology, akin to a contract. Which is neat and useful. But all the intermediate data analysis is focused on the Dimensions data base and this reviewer would argue should be a part of the database documentation rather than a scholarly article.
-
This reviewer would prefer this paper be focused much more tightly on how publishing technology can and has driven the sociology of science. Dig more into the E. Journal Analysis and F. Technological analysis. Stick with what you have deep data for, and provide us readers with a practical and useful paper that maybe, just maybe, publishers will read and be incentivized to up their game with respect to adoption of “new” technologies like ORCID, DOIs for data, etc. Because these papers are not just expositions on a disciplinary discourse, they are also a window into how science (research) works and is done.
-
-
-
www.ludism.org
-
In the late 1990s, there was a thriving GlassBeadGame scene in the Seattle area. The local GBG workshop was called the Bamboo Garden (after the "Bamboo Grove", the hermitage where Joseph Knecht studied the I Ching in TheNovel). Each member of the Bamboo Garden had an idiosyncratic version/vision of the Game.
setup...
-
-
wcln.ca
-
The girl began to laugh, enjoying the game, I imagine. So how many turtles are there? she wanted to know. The storyteller shrugged. No one knows for sure, he told her, but it’s turtles all the way down.
This line is funny but also deep. The question has no real answer, but the idea keeps going and going. It shows us how Indigenous stories don't always work the same way as scientific ones. They focus more on connection and wonder than on trying to prove something.
-
-
openoregon.pressbooks.pub
-
Introduction

Welcome to “A Beginner's Guide to Information Literacy,” a step-by-step guide to understanding information literacy concepts and practices. This guide will cover each frame of the "Framework for Information Literacy for Higher Education," a document created by the Association of College and Research Libraries (ACRL) to help educators and librarians think about, teach, and practice information literacy. The goal of this guide is to break down the basic concepts in the Framework and put them in accessible, digestible language so that we can think critically about the information we're exposed to in our daily lives.

To start, let's look at the ACRL definition of information literacy, so we have some context going forward: Information Literacy is the set of integrated abilities encompassing the reflective discovery of information, the understanding of how information is produced and valued, and the use of information in creating new knowledge and participating ethically in communities of learning.

Boil that down and what you have are the essentials of information literacy: asking questions, finding information, evaluating information, creating information, and doing all of that responsibly and ethically. We'll be looking at each of the Frames alphabetically, since that's how they are presented in the Framework. None of these Frames is more important than another, and all need to be used in conjunction with the others, but we have to start somewhere, so alphabetical it is! In order, the frames are:

Authority is Constructed and Contextual
Information Creation as a Process
Information Has Value
Research as Inquiry
Scholarship as Conversation
Searching as Strategic Exploration

Just because we're laying this out alphabetically does not mean you have to go through it in order. Some of the sections reference Frames previously mentioned, but for the most part you can jump to wherever you like and use this guide however you see fit! You can also open up the Framework using the link above or in the attached resources to read the Framework in its original form and follow along with each section. The following sections originally appeared as blog posts for the Texas A&M-Corpus Christi's library blog. Edits have been made to remove institutional context, but you can see the original posts in the Mary and Jeff Bell Library blog archives.

Authority is Constructed and Contextual

The first frame is Authority is Constructed and Contextual. There's a lot to unpack in that language, so let's get started.
Start with the word "Authority." At the root of “Authority” is the word Author. So start there: who wrote the piece of information you’re reading? Why are they writing? What stake do they have in the information they’re presenting? What are their credentials (you can straight up google their name to learn more about them)? Who are they affiliated with? A public organization? A university? A company trying to make a profit? Check it out.
Now let's talk about how authority is "Constructed." Have you ever heard the phrase “social construct”? Some people say gender is a social construct or language, written and spoken, is a construct. “Constructed” basically means humans made it up at some point to instill order in their communities. It’s not an observable, scientifically inevitable fact. When we say “authority” is constructed, we’re basically saying that we as individuals and as a society choose who we give authority to, and sometimes we might not be choosing based on facts.<br /> A common way of assessing authority is by looking at an author’s education. We’re inclined to trust someone with a PhD over someone with a high school diploma because we think the person with a PhD is smarter. That’s a construct. We’re conditioned to think that someone with more education is smarter than people with less education, but we don't know it for a fact. There are a lot of reasons someone might not seek out higher education. They might have to work full time, or take care of a family, or maybe they just never wanted to go to college. None of these factors impact someone’s intelligence or ability to think critically. If aliens land on South Padre Island, TX, there will be many voices contributing to the information collected about the event. Someone with a PhD in astrophysics might write an article about the mechanical workings of the aliens’ spaceship. Cool, they are an authority on that kind of stuff, so I trust them. But the teenager who was on the island and watched the aliens land has first-hand experience of the event, so I trust them too. They have authority on the event even though they don’t have a PhD in astrophysics. So we cannot think someone with more education is inherently more trustworthy, or smarter, or has more authority than anyone else. Some people who are authorities on a subject are highly educated, some are not. Likewise, let’s say I film the aliens landing and stream it live on Facebook. At the same time, a police officer gives an interview on the news that says something contradicting my video evidence. All of a sudden, I have more authority than the police officer. Many of us are raised to trust certain people automatically based on their jobs, but that’s also a construct. The great thing about critical thinking is that we can identify what is fact and fiction, and we can decide for ourselves who to trust.
The final word is "Contextual." This one is a little simpler. If I go to the hospital and a medical doctor takes out my appendix, I’ll probably be pretty happy with the outcome. If I go to the hospital and Dr. Jill Biden, a professor of English, takes out my appendix, I’m probably going to be less happy with the results. Medical doctors have authority in the context of medicine. Dr. Jill Biden has authority in the context of education. And Doctor Who has authority in the context of inter-galactic heroics and nice scarves. This applies when we talk about experiential authority, too. If an 8th grade teacher tells me what it’s like to be a 4th grade teacher, I will not trust their authority. I will, however, trust a 4th grade teacher to tell me about teaching 4th grade.
The Takeaway: Basically, when we think about Authority, we need to ask ourselves, “Do I trust them? Why?” If they do not have experience with the subject (like witnessing an event or holding a job in the field) or subject expertise (like education or research), then maybe they aren’t an authority after all. P.S. I'm sorry for the uncalled-for dig, Dr. Biden. I’m sure you’d do your best with an appendectomy.
Ask Yourself In what context are you an authority? If you needed to figure out how to do a kickflip on a skateboard, who would you ask? Who's an authority in that situation? Information Creation as a Process The second Frame is "Information Creation as a Process."
Information Creation So first of all, let’s get this out of the way: Everyone is a creator of information. When you write an essay, you’re creating information. When you log the temperature of the lizard tank, you’re creating information. Every Word Doc, Google Doc, survey, spreadsheet, Tweet, and PowerPoint that you’ve ever had a hand in? All information products. That YOU created. In some way or another, you created that information and put it out into the world.
Processes One process you’re probably familiar with if you're a student is the typical “Research Paper.” You know your professor wants about five to eight pages consisting of an introduction that ends in a thesis statement, a few paragraphs that each touch on a piece of evidence that supports your thesis, and then you end in a conclusion paragraph which starts with a rephrasing of your thesis statement. You save it to your hard drive or Google Drive and then you submit it to your professor. This is one process for creating information. It’s a boring one, but it’s a process. Outside of the classroom, the information creation process looks different, and we have lots of choices to make. One of the choices you’ll need to make is the mode or format in which you present information. The information I’m creating right now comes to you in the mode of an Open Educational Resource. Originally, I created these sections as blog posts. Those five-page essays I mentioned earlier are in the mode of essays. When you create information (outside of a course assignment), it’s up to you how to package that information. It might feel like a simple or obvious choice, but some information is better suited to some forms of communication. And some forms of communication are received in a certain way, regardless of the information in them. For example, if I tweet “Jon Snow knows nothing,” it won’t carry with it the authority of my peer-reviewed scholarly article that meticulously outlines every instance in which Jon Snow displays a lack of knowledge. Both pieces of information are accurate, but the processes I went through to create and disseminate the information have an effect on how the information is received by my audience. And that is perhaps the biggest thing to consider when creating information: your audience.
The Audience Matters If I just want my twitter followers to know Jon Snow knows nothing, then a tweet is the right way to reach them. If I want my tenured colleagues and other various scholars to know Jon Snow knows nothing then I’m going to create a piece of information that will reach them, like a peer-reviewed journal article. Often, we aren’t the ones creating information, we're the audience members ourselves. When we're scrolling on Twitter, reading a book, falling asleep during a PowerPoint presentation-- we're the audience observing the information being shared. When this is the case, we have to think carefully about the ways information was created. Advertisements are a good example. Some are designed to reach a 20-year old woman in Corpus Christi through Facebook, while others are designed to reach a 60-year old man in Hoboken, NJ over the radio. They might both be selling the same car, and they’re going to put the same information (size, terrain, miles per gallon, etc.) in those ads, but their audiences are different, so their information creation process is different, and we end up with two different ads for different audiences.
Be a Critical Audience Member When we are the audience member, we might automatically trust something because it’s presented a certain way. I know that, personally, I’m more likely to trust something that is formatted as a scholarly article than I am something that is formatted as a blog. And I know that that's biased thinking and it's a mistake to make that assumption. It's risky to think like that for a couple of reasons: Looks can be deceiving. Just because someone is wearing a suit and tie doesn’t mean they’re not an axe murderer and just because something looks like a well-researched article, doesn’t mean it is one. Automatic trust unnecessarily limits the information we expose ourselves to. If I only ever allow myself to read peer-reviewed scholarly articles, think of all the encyclopedias and blogs and news articles I’m missing out on! If I have a certain topic I’m really excited about, I’m going to try to expose myself to information regardless of the format and I’ll decide for myself (#criticalthinking) which pieces of information are authoritative and which pieces of information suit my needs. Likewise, as I am conducting research and considering how best to share my new knowledge, I’m going to consider my options for distributing this newfound information and decide how best to reach my audience. Maybe it’s a tweet, maybe it’s a Buzzfeed quiz, or maybe it’s a presentation at a conference. But whatever mode I choose will also convey implications about me, my information creation process, and my audience.
The Takeaway You create information all of the time. The way you package and share it will have an effect on how others perceive it.
Ask Yourself Is there a form of information you're likely to trust at first glance? Either a publication like a newspaper or a format like a scholarly article? Can you think of some voices that aren't present in that source of information? Where might you look to find some other perspectives? If you read an article written by medical researchers that says chocolate is good for your health, would you trust the article? Would you still trust their authority if you found out that their research was funded by a company that sells chocolate bars? Funding and stakeholders have an impact on the creation process, and it's worth thinking about how this can compromise someone's authority.
Information Has Value Onwards and upwards! We're onto Frame 3: Information Has Value.
What Counts as Value? There are a lot of different ways we value things. Some things, like money, are valuable to us because we can exchange them for goods and services. On the other hand, some things, like a skill, are valuable to us because we can exchange them for money (which we exchange for more goods and services). Some things are valuable to us for sentimental reasons, like a photograph or a letter. Some things, like our time, are valuable because they are finite.
The Value of Information Information has all different kinds of value. One kind is monetary. If I write a book and it gets published, I’m probably going to make some money off of that (though not as much money as the publishing company will make). So that’s valuable to me. But I’m also getting my name out into the world, and that’s valuable to me too. It means that when I apply for a job or apply for a grant, someone can google me and think, “Oh look! She wrote a book! That means she has follow-through and will probably work hard for us!” That kind of recognition is a sort of social value. That social value, by the way, can also become monetary value. If I’ve produced information, a university might give me a job, or an organization might fund my research. If I’ve invented a machine that will floss my teeth for me, the patent for my invention could be worth a lot of money (plus it'd be awesome. Cool factor can count as value.). In a more altruistic slant, information is also valuable on a societal level. When we have more information about political candidates, for example, it influences how we vote, who we elect, and how our country is governed. That’s some really valuable information right there. That information has an effect on the whole world (plus outer space, if we elect someone who’s super into space exploration). If someone is trying to keep information hidden or secret, or if they’re spreading misinformation to confuse people, it’s probably a sign that the information they’re hiding is important, which is to say, valuable. On a much smaller scale, think about the information on food packages. If you’re presented with calorie counts, you might make a different decision about the food you buy. If you’re presented with an item’s allergens, you might avoid that product and not end up in an Emergency Room with anaphylactic shock. You know what’s super valuable to me? NOT being in an Emergency Room! But if you do end up in the Emergency Room, the information that doctors and nurses will use to treat your allergic reaction is extremely valuable. The value of that information is equal to the lives it’s saved.
Acting Like Information is Valuable When we create our own information by writing papers and blog posts and giving presentations, it’s really important that we give credit to the information we’ve used to create our new information product for a couple of reasons. First, someone worked really hard to create something, let’s say an article. And that article’s information is valuable enough to you to use in your own paper or presentation. By citing the author properly, you’re giving the author credit for their work which is valuable to them. The more their article is cited, the more valuable it becomes because they’re more likely to get scholarly recognition and jobs and promotions. Second, by showing where you’re getting your information, you’re boosting the value of your new information product. On the most basic level, you’ll get a higher grade on your paper which is valuable to you. But you’re also telling your audience, whether it’s your professor or your boss or your YouTube subscribers, that you aren’t just making stuff up—you did the work of researching and citing, and that makes your audience trust you more. It makes the audience value your information more. Remember early on when I said the frames all connect? "Information Has Value" ties into the other information literacy frames we've talked about, "Information Creation as a Process" and "Authority as Constructed and Contextual." When I see you’ve cited your sources of information, then I, as the audience, think you’re more authoritative than someone who doesn’t cite their sources. I also can look at your information product and evaluate the effort you’ve put into it. If you wrote a tweet, which takes little time and effort, I’ll generally value it less than if you wrote a book, which took a lot of time and effort to create. I know that time is valuable, so seeing that you were willing to dedicate your time to create this information product makes me feel like it’s more valuable.
The Takeaway: Information is valuable because of what goes into its creation (time and effort) and what comes from it (an informed society). If we didn’t value information, we wouldn’t be moving forward as a society, we’d probably have died out thousands of years ago as creatures who never figured out how to use tools or start a fire. So continue to value information, because it improves your life, your audiences’ lives, and the lives of other information creators. More importantly, if we stop valuing information a smarter species will eventually take over and it’ll be a whole Planet of the Apes thing and I just don't have the energy for that right now.
Ask Yourself Can you think of some ways in which a YouTube video on dog training has value? Who values it? Who profits from it? Think of some information that would be valuable to someone applying to college. What does that person need to know?
Research as Inquiry Easing on down the road, we've come to frame number 4: Research as Inquiry. Inquiry is another word for curiosity or questioning. I like to think of this frame as "Research as Curiosity," because I think it more accurately captures the way our adorable human brains work.
Inquiring Minds Want to Know When you think to yourself, “How old is Madonna?” and you google it to find out she’s 62 (as of the creation of this resource), that’s research! You had a question (how old is Madonna?), you applied a search strategy (googling “Madonna age”) and you found an answer (62). That’s it! That’s all research has to be! But it’s not all research can be. This example, like most research, is comprised of the same components we use in more complex situations. Those components are: a question and an answer, Inquiry and Research, “how old is Madonna?” and "62." But when we’re curious, we go back to the inquiry step again and ask more questions and seek more answers. We’re never really done, even when we’ve answered the initial question and written the paper and given the presentation and received accolades and awards for all our hard work. If it’s something we’re really curious about, we’ll keep asking and answering and asking again. If you’re really curious about Madonna, you don’t just think, “How old is Madonna?” You think “How old is Madonna? Wait, really? Her skin looks amazing! What’s her skincare routine? Seriously, what year was she born? Oh my god, she wrote children’s books! Does my library have any?” Your questions lead you to answers which, when you’re really interested in a topic, lead you to more and more questions. Humans are naturally curious, we have this sort of instinct to be like, “huh, I wonder why that is?” and it’s propelled us to learn things and try things and fail and try again! It’s all Research as Inquiry. And to satisfy your curiosity, yes, the library I currently work at does own one of Madonna’s children’s books. It’s called The Adventures of Abdi and you can find it in our Juvenile Collection on the second floor at PZ8 M26 Adv 2004. And you can find a description of her skincare routine in this article from W Magazine: https://www.wmagazine.com/story/madonna-skin-care-routine-tips-mdna. You’re welcome.
Identifying an Information Need One of the tricky parts of Research as Inquiry is determining a situation’s information need. It sounds simple to ask yourself, “What information do I need?” and sometimes we do it unconsciously. But it’s not always easy. Here are a few examples of information needs: You need to know what your niece’s favorite Paw Patrol character is so you can buy her a birthday present. Your research is texting your sister. She says, “Everest.” And now you’re done. You buy the present, you're a rock star at the birthday party. Your information need was a short answer based on a 3-year old’s opinion. You’re trying to convince someone on twitter that Nazis are bad. You compile a list of opinion pieces from credible news publications like the Wall Street Journal and the New York Times, gather first-hand narratives of Holocaust survivors and victims of hate crimes, find articles that debunk eugenics, etc. Your information need isn’t scholarly publications, it’s accessible news and testimonials. It’s articles a person might actually read in their free time, articles that aren’t too long and don’t require access to scholarly materials that are sometimes behind paywalls. You need to write a literature review for an assignment, but you don’t know what a literature review is. So first you google “literature review example.” You find out what it is, how one is created, and maybe skim a few examples. Next, you move to your library's website and search tool and try “oceanography literature review,” and find some closer examples. Finally, you start conducting research for your own literature review. Your information need here is both broader and deeper. You need to learn what a literature review is, how one is compiled, and how one searches for relevant scholarly articles in the resources available to you. Sometimes it helps to break down big information needs into smaller ones. Take the last example, for instance: you need to write a literature review. What are the smaller parts? Information Need 1: Find out what a literature review is Information Need 2: Find out how people go about writing literature reviews Information Need 3: Find relevant articles on your topic for your own literature review It feels better to break it into smaller bits and accomplish those one at a time. And it highlights an important part of this frame that’s surprisingly difficult to learn: ask questions. You can’t write a literature review if you don’t know what it is, so ask. You can’t write a literature review if you don’t know how to find articles, so ask. The quickest way to learn is to ask questions. Once you stop caring if you look stupid, and once you realized no one thinks poorly of people who ask questions, life gets a lot easier. So let’s add this to our components of research: ask a question, determine what you need in order to thoroughly answer the question, and seek out your answers. Not too painful, and when you’re in love with whatever you’re researching, it might even be fun.
The Takeaway When you have a question, ask it. When you’re genuinely interested in something, keep asking questions and finding answers. When you have a task at hand, take a second to think realistically about the information you’ll need to accomplish that task. You don’t need a peer-reviewed article to find out if praying mantises eat their mates, but you might if you want to find out why.
Ask Yourself What's the last thing you looked up on Wikipedia? Did you stop when you found an answer, or did you click on another link and another link until you learned about something completely different? If you can't remember, try it now! Search for something (like a favorite book or tv show) and click on linked words and phrases within Wikipedia until you learn something new! What was the last thing you researched that you were really excited about? Do you struggle when teachers and professors tell you to "research something that interests you"? Instead, try asking yourself, "What makes me really angry?" You might find you have more interests than you realized!
Scholarship as Conversation We've made it friends! My favorite frame: Scholarship as Conversation. Is it weird to have a favorite frame of information literacy? Probably. Am I going to talk about it anyway? You betcha!
What does "Scholarship as Conversation" mean? Scholarship as conversation refers to the way scholars reference each other and build off of one another’s work, just like in a conversation. Have you ever had a conversation that started when you asked someone what they did last weekend and ended with you telling a story about how someone (definitely not you) ruined the cake at your mom's dog's birthday party? And then someone says, “but like I was saying earlier…” and they take the conversation back to a point in the conversation where they were reminded of a different point or story? Conversations aren’t linear, they aren’t a clear line to a clear destination, and neither is research. When we respond to the ideas and thoughts of scholars, we’re responding to the scholars themselves and engaging them in conversation.
Why do I Love this Frame so Much? Let me count the ways. Reason 1 I really enjoy the imagery of scholarship as a conversation among peers. Just a bunch of well-informed curious people coming together to talk about something they all love and find interesting. I imagine people literally sitting around a big round table talking about things they’re all excited about and want to share with each other! It’s a really lovely image in my head. Eventually the image kind of reshapes and devolves into that painting of dogs playing poker, but I love that image too! Reason 2 It harkens back to pre-internet scholarship, which sounds excruciating and exhausting, but it was all done for the love of a subject! Scholars used to literally mail each other manuscripts seeking feedback. Then, when they got an article published in a journal, scholars interested in the subject would seek out and read the article in the physical journal it was published in. Then they’d write reviews of the article, praising or criticizing the author’s research or theories or style. As the field grew, more and more people would write and contribute more articles to criticize and praise and build off of one another. So for example, if I wrote an article that was about Big Foot and then Joe wrote an article saying, “Emily’s article on Big Foot is garbage, here’s what I think about Big Foot,” Joe and I are now having a conversation. It’s not always a fun one, but we’re writing in response to one another about something we’re both passionate about. Later, Jaiden comes along and disagrees with Joe and agrees with me (because I’m right) and they cite both me and Joe. Now we’re all three in a conversation. And it just grows and grows and more people show up at the table to talk and contribute, or maybe just to listen. Reason Three You can roll up to the table and just listen if you want to. Sometimes we’re just listening to the conversation. We’re at the table, but we’re not there to talk. We’re just hoping to get some questions answered and learn from some people. When we’re reading books and articles or listening to podcasts or watching movies, we’re listening to the conversation. You don’t have to do groundbreaking research to be part of a conversation. You can just be there and appreciate what everyone’s talking about. You're still there in the conversation. Reason Four You can contribute to the conversation at any time. The imagery of a conversation is nice because it’s approachable, just pull up a chair and start talking. With any new subject, you should probably listen a little at first, ask some questions, and then start giving your own opinion or theories, but you can contribute at any time. Since we do live in the age of internet research, we can contribute in ways people 50 years ago never dreamed of! Besides writing essays in class (which totally counts because you’re examining the conversation and pulling in the bits you like and citing them to give credit to other scholars), you can talk to your professors and friends about a topic, you can blog about it, you can write articles about it, you can even tweet about it (have you ever seen Humanities folk on Twitter? They go nuts on there having actual, literal scholarly conversations). Your ways for engaging are kind of endless! Reason Five Yep, I'm listing reasons. Conversations are cyclical. Like I said above, they're not always a straight path and that’s true of research too.
You don’t have to engage with who spoke most recently, you can engage with someone who spoke ten years ago, someone who spoke 100 years ago, you can respond to the person who started the conversation! Jump in wherever you want. And wherever you do jump in, you might just change the course of the conversation. Because sometimes we think we have an answer, but then something new is discovered or a person who hadn’t been at the table or who had been overlooked says something that drastically impacts what we knew, so now we have to reexamine it all over again and continue the conversation in a trajectory we hadn’t realized was available before. Reason Six Lastly, this frame is about sharing and responding and valuing one another’s work. If Joe, my Big Foot nemesis, responds to my article, they're going to cite me. If Jaiden then publishes a rebuttal, they're going to cite both Joe and me, because fair is fair. This is for a few reasons: 1) even if Jaiden disagrees with Joe’s work, they respect that Joe put effort into it and it’s valuable to them. 2) When Jaiden cites Joe, it means anyone who jumps into the conversation at the point of Jaiden's article will be able to backtrack and catch up using Jaiden's citations. A newcomer can trace it back to Joe’s article and trace that back to mine. They can basically see a transcript of the whole conversation so they can read Jaiden’s article with all of the context, and they can write their own well-informed piece on Big Foot.
The Takeaway There’s a lot to take away from this frame, but here’s what I think is most important: Be respectful of other scholars’ work and their part in the conversation by citing them. Start talking whenever you feel ready, in whatever platform you feel comfortable. And finally, make sure everyone who wants to be at the table is at the table. This means making sure information is available to those who want to listen and making sure we lift up the voices that are at risk of being drowned out.
Ask Yourself What scholarly conversations have you participated in recently? Is there a Reddit forum you look in on periodically to learn what's new in the world of cats wearing hats? Or a Facebook group on roller skating? Do you contribute or just listen? Think of a scholarly conversation surrounding a topic-- sharks, ballet, Game of Thrones. Who's not at the table? Whose voice is missing from the conversation? Why do you think that is?
Searching as Strategic Exploration You've made it! We've reached the last frame: Searching as Strategic Exploration. “Searching as Strategic Exploration” addresses the part of information literacy that we think of as “Research.” It deals with the actual task of searching for information, and the word “Exploration” is a really good word choice, because it’s evocative of the kind of struggle we sometimes feel when we approach research. I imagine people exploring a jungle, facing obstacles and navigating an uncertain path towards an ultimate goal (note: the goal is love and it was inside of us all along). I also kind of imagine all the different Northwest Passage explorations, which were cool in theory, but didn’t super-duper work out as expected. But research is like that! Sometimes we don’t get where we thought we were headed. But the good news is this: You probably won’t die from exposure or resort to cannibalism in your research. Fun, right?
Step 1: Identify a Goal The first part of any good exploration is identifying a goal. Maybe it’s a direct passage to Asia or the diamond the old lady threw into the ocean at the end of Titanic. More likely, the goal is to satisfy an information need. Remember when we talked about "Research as Inquiry?" All that stuff about paw patrol and Madonna's skin care regimen? Those were examples of information needs. We’re just trying to find an answer or learn something new. So great! Our goal is to learn something new. Now we make a strategy.
Step 2: Make a Strategy For many of your information needs you might just need to Google a question. There’s your strategy: throw your question into Google and comb through the results. You might limit your search to just websites ending in .org, .gov, or .edu. You might also take it a step further and, rather than type in an entire question fully formed, you just type in keywords. So “Who is the guy who invented mayonnaise?” becomes “mayonnaise inventor.” Identifying keywords is part of your strategy and so is using a search engine and limiting the results you’re interested in.
Step 3: Start Exploring Googling “mayonnaise inventor” probably brings you to Wikipedia where we often learn that our goals don’t have a single, clearly defined answer. For example, we learn that mayonnaise might have gotten its name after the French won a battle in Port Mahon, but that doesn't tell us who actually made the mayonnaise, just when it was named. Prior to being named, the sauce was called “aioli bo” and was apparently in a Menorcan recipe book from 1745 by Juan de Altimiras. That’s great for Altimiras, but the most likely answer is that mayonnaise was invented way before him and he just had the foresight to write down the recipe. Not having a single definite answer is an unforeseen obstacle tossed into our path that now affects our strategy. We know we have a trickier question than when we first set sail. But we have a lot to work with! We now have more keywords like Port Mahon, the French, and Wikipedia taught us that the earliest known mention of “mayonnaise” was in 1804, so we have 1804 as a keyword too. Let’s see if we can find that original mention. Let’s take our keywords out of Wikipedia where we found them and voyage to a library's website! At my library we have a tool that searches through all of our resources. We call it the "Quick Search." You might have a library available to you, either at school, on a University's campus, or a local public library. You can do research in any of these places! So into the Quick Search tool (or whatever you have available to you) go our keywords: 1804, mayonnaise, and France. The first result I see is an e-book by a guy who traveled to Paris in 1804, so that might be what we’re looking for. I search through the text and I do, in fact, find a reference to mayonnaise on page 99! The author (August von Kotzebue) is talking about how it’s hard to understand menus at French restaurants, for “What foreigner, for instance, would at first know what is meant by a mayonnaise de poulet, a galatine de volaille, a cotelette a la minute, or even an epigramme d’agneau?” He then goes on to recommend just ordering the fish, since you’ll know what you’ll get (Kotzebue, 99).<br /> So that doesn't tell us who invented mayonnaise, but I think it's pretty funny! So I’d call that detour a win.
Step 4: Reevaluate When we hit ends that we don’t think are successful, we can always retrace our steps and reevaluate our question. Dead ends are a part of exploration! We’ve learned a lot, but we’ve also learned that maybe “who invented mayonnaise?” isn’t the right question. Maybe we should ask questions about the evolution of French cuisine or about ownership of culinary experimentation. I’m going to stick with the history of mayonnaise for just a little while longer, but my “1804 mayonnaise france” search wasn’t as helpful as I’d hoped, so I’ll try something new. Let’s try looking at encyclopedias. I searched in a database called Credo Reference (which is a database filled with encyclopedia entries) and just searched “mayonnaise.” I can see that the first entry, “Minorca or Menorca” from The Companion to British History, doesn’t initially look helpful, but we’re exploring, so let’s click on it! It tells us that Mayonnaise was invented in 1756 by a French commander’s cook and its name comes from Port Mahon where the French fended off the British during a siege (Arnold-Baker, 2001). That’s awesome! It’s what Wikipedia told us! But let’s corroborate that fact. I click on The Hutchinson Chronology of World History entry for 1756 which says mayonnaise was invented in France in 1756 by the duc de Richelieu (Helicon, 2018). I’m not sure I buy it. I could see a duke’s cook inventing mayonnaise, but I have a hard time imagining a duke and military commander taking the time to create a condiment. But now I can go on to research the duc de Richelieu and his military campaigns and his culinary successes! Just typing “Duke de Richelieu” into the library’s Quick Search shows me a TON of books (16,742 as of writing this) on his life and his influence on France. So maybe now we’re actually exploring Richelieu or the intertwined history of French cuisine and the lives of nobility.
What Did We Just Do? Our strategy for exploring this topic has had a lot of steps, but they weren't random. It was a wild ride, but it was a strategic one. Let’s break the steps down real quick: We asked a question or identified a goal We identified keywords and googled them We learned some background information and got new keywords from Wikipedia and had to reevaluate our question We followed a lead to a book but hit a dead end when it wasn’t as useful as we’d hoped We identified an encyclopedia database and found several entries that support the theory we learned in Wikipedia which forced us to reevaluate our question again We identified a key player in our topic and searched for him in the library’s Quick Search tool and the resources we found made us reevaluate our question yet again! Other strategies could include looking through an article’s reference list, working through a mind map, outlining your questions, or recording your steps in a research log so you don’t get lost-- whatever works for you!
The Takeaway Exploration is tricky. Sometimes you circle back and ask different questions as new obstacles arise. Sometimes you have a clear path and you reach your goal instantly. But you can always retrace your steps, try new routes, discover new information, and maybe you’ll get to your destination in the end. Even if you don't, you've learned something. For instance, today we learned that if you can’t understand a menu in French, you should just order the fish.
Ask Yourself Where do you start a search for information? Do you start in different places when you have different information needs? If your research question was, "What is the impact of fast fashion on carbon emissions?" what keywords would you use to start searching?
Wrap Up The Framework for Information Literacy for Higher Education is a heck of a document. It's complicated, its frames intertwine, it's written in a way that can be tricky to understand. But essentially, it's just trying to get us to understand that the ways we interact with information are complicated and we need to think about our interactions to make sure we're behaving in an ethical and responsible way. Why do your professors make you cite things? Because those citations are valuable to the original author, and they prove your engagement with the scholarly conversation. Why do we need to hold space in the conversation for voices that we haven't heard from before? Because maybe no one recognized the authority in those voices before. The old process for creating information shut out lots of voices while prioritizing others. It's important for us to recognize these nuances when we see what information is available to us and important for us to ask, "whose voice isn't here? why? am I looking hard enough for those voices? can I help amplify them?" And it's important for us to ask, "why is the loudest voice being so loud? what motivates them? why should I trust them over others?" When we think critically about the information we access and the information we create and share, we're engaging as citizens in one big global conversation. Making sure voices are heard, including your own voice, is what moves us all towards a more intelligent and understanding society. Of course, part of thinking critically about information means thinking critically about both this Guide and the Framework. Lots of people have criticized the Framework for including too much library jargon. Other folks think the Framework needs to be rewritten to explicitly address how information seeking systems and publishing platforms have arisen from racist, sexist institutions. We won’t get into the criticisms here, but they're important to think about. You can learn more about the criticism of the Framework in a blog post by Ian Beilin, or you can do your own search for criticism on the Framework to see what else is out there and form your own opinions.
The Final Takeaway Ask questions, find information, and ask questions about that information.
-
-
alraziuni.edu.ye alraziuni.edu.ye
-
fluke
fluke
English Explanation
The term "fluke" has several meanings based on its context. Here are the most common interpretations:
-
Scientific Definition: In biology, a "fluke" refers to a type of flatworm belonging to the class Trematoda. These organisms often live as parasites in the bodies of various hosts, including humans, where they can cause diseases.
-
Colloquial Use: In everyday language, "fluke" is used to describe a fortunate occurrence that happens by chance. For example, if someone wins a game by an unexpected move or score, it might be considered a "fluke," implying that the outcome was not expected or was somewhat accidental.
-
Fishing Term: In fishing terminology, a "fluke" can refer to a type of fish, specifically the summer flounder, which is known for its flat body and is a popular target for sport fishermen.
Overall, the word "fluke" captures the idea of an unexpected or unusual occurrence, whether in nature, conversation, or specific hobbies like fishing.
中文解释
“fluke”这个词根据不同的语境有几种含义。以下是最常见的解释:
-
科学定义:在生物学中,“fluke”指的是一种属于扁形虫门(Trematoda)的扁虫。这些生物通常作为寄生虫寄居在各种宿主的体内,包括人类,并且可能引起疾病。
-
口语用法:在日常语言中,“fluke”用来形容偶然发生的幸运事件。例如,如果某人通过意外的举动或得分赢得了比赛,这可能被视为一个“fluke”,暗示结果并不是预期的,或者说有些意外。
-
钓鱼术语:在钓鱼术语中,“fluke”可以指一种鱼,具体来说,就是夏季比目鱼(summer flounder),这种鱼以其扁平的身体而闻名,是运动钓鱼者的热门目标。
总的来说,“fluke”这个词传达了一个意外或不寻常发生的事件的概念,无论是在自然界、日常对话,还是特定的爱好(如钓鱼)中。
-
-
Tit for Tat,
Tit for Tat,
Excerpt Explanation in English
"Tit for Tat" is a strategy often discussed in the context of game theory, particularly in the study of cooperation and conflict. It describes a situation in which one individual responds to the actions of another with a similar action. If the other party cooperates, the response will also be cooperative. Conversely, if the other party acts in an uncooperative or hostile manner, the response will mirror that behavior.
This strategy is simple but effective in encouraging mutual cooperation, especially in repeated interactions. It fosters a sense of fairness because it rewards cooperation and punishes betrayal. "Tit for Tat" has been shown to be successful in various scenarios, including economic transactions, social dilemmas, and biological contexts.
Excerpt Explanation in Chinese
“以牙还牙”是一种通常在博弈论的背景下讨论的策略,特别是在合作与冲突的研究中。它描述了一种情况,即一个个体以类似的行为回应另一个个体的行动。如果对方合作,回应也将是合作性的。反之,如果对方表现出非合作或敌对的行为,回应也会反映出这种行为。
这种策略简单却有效,能够在反复的交互中鼓励相互合作。它培养了一种公平感,因为它奖励合作并惩罚背叛。“以牙还牙”在各种场景中被证明是成功的,包括经济交易、社会困境和生物学背景。
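To make the strategy described above concrete, here is a minimal Python sketch of Tit for Tat in a repeated prisoner's-dilemma-style interaction; the payoff numbers and the sample opponent sequence are illustrative assumptions, not taken from the excerpt.

# Minimal sketch of the Tit for Tat strategy in a repeated game.
# Payoff values below are illustrative assumptions, not from the excerpt.

PAYOFFS = {  # (my move, opponent's move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, opponent defects
    ("D", "C"): 5,  # I defect, opponent cooperates
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    if not opponent_history:
        return "C"
    return opponent_history[-1]  # mirror whatever the opponent did last

def play(opponent_moves):
    """Play Tit for Tat against a fixed sequence of opponent moves."""
    opponent_history = []
    total = 0
    for their_move in opponent_moves:
        my_move = tit_for_tat(opponent_history)
        total += PAYOFFS[(my_move, their_move)]
        opponent_history.append(their_move)
    return total

if __name__ == "__main__":
    # Against steady cooperation Tit for Tat keeps cooperating;
    # after a defection it retaliates exactly once, then forgives.
    print(play(["C", "C", "D", "C", "C"]))  # total payoff: 14

The design choice that makes the strategy work is visible in tit_for_tat: it is nice (never defects first), retaliatory, and forgiving, which is why it rewards cooperation and punishes betrayal as the explanation above says.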
-
jackpot
jackpot
English Explanation:
The term "jackpot" commonly refers to a large prize or sum of money that is won in a game of chance, particularly in gambling contexts such as slot machines or lottery games. When someone hits the jackpot, it often means they have won a significant amount of money, usually the highest possible prize in that particular game. The term can also be used metaphorically to indicate a major success or a sudden stroke of good luck in any context outside of gambling. For example, someone might say they hit the jackpot in their career after receiving a prestigious job offer or a promotion.
Chinese Explanation:
“jackpot”这个词通常指的是在机会游戏(特别是在赌博场合,如老虎机或彩票游戏)中赢得的大笔奖金或金额。当有人“中头奖”时,通常意味着他们赢得了大量的钱,通常是该游戏中可能获得的最高奖金。这个词也可以用作隐喻,表示在赌博之外的任何情况下的重大成功或突如其来的好运。例如,有人可能会说自己在职业生涯中“中头奖”,因为他们获得了一个有声望的工作机会或晋升。
-
the ramifications of this beguiling game.
the ramifications of this beguiling game.
Certainly! Let's break down the excerpt "the ramifications of this beguiling game."
English Explanation
-
Ramifications: This term refers to the consequences or outcomes that result from a particular action or decision. It suggests that there are multiple layers to consider, indicating that the results may be complex and far-reaching.
-
Beguiling: This adjective is used to describe something that is charming or enchanting, often in a deceptive way. It indicates that the game in question possesses qualities that are alluring and may draw people in, but there could be underlying issues or complexities that are not immediately apparent.
-
Game: In this context, "game" could refer to a literal game, such as a board game or video game, or it could be a metaphorical game, representing strategies, interactions, or dynamics in social contexts, politics, economics, etc.
Putting it all together, the phrase suggests a need to carefully consider the complex consequences stemming from a captivating (but possibly deceptive) activity or scenario. The intriguing nature of the game catches attention, but it also implies that there are significant and perhaps unforeseen effects that need to be acknowledged.
Chinese Explanation
-
Ramifications (后果): 这个词指的是某个行动或决策所带来的结果或影响。它暗示了需要考虑多个层面的后果,这些结果可能是复杂并且影响深远的。
-
Beguiling (迷人的): 这个形容词用来描述某种迷人或令人着迷的事物,往往带有欺骗的意味。它表明这个游戏具有吸引人的特质,可能使人们沉迷其中,但可能也存在一些不易察觉的潜在问题或复杂性。
-
Game (游戏): 在这个上下文中,“游戏”可以指一个字面上的游戏,比如桌面游戏或视频游戏,也可以是一个比喻游戏,代表社交、政治、经济等领域的策略、互动或动态。
综合来说,这句话暗示需要仔细考虑因一个迷人的(但可能具欺骗性的)活动或情境所产生的复杂后果。这个游戏的吸引力引发了关注,但同时也意味着需要承认可能存在重大的、甚至是意想不到的影响。
-
-
consolation
consolation
Certainly! Let's break down the term "consolation" thoroughly in both English and Chinese.
English Explanation
Consolation refers to the comfort or relief provided to someone who is experiencing sorrow, grief, or disappointment. It can take various forms, such as words of comfort, gestures of kindness, or even physical presence. Consolation serves to alleviate emotional pain and can come from family, friends, or even strangers.
There are different contexts in which the term can be used:
-
Emotional Support: When someone is feeling sad due to a loss (like the death of a loved one), the support from friends and family can provide consolation.
-
Encouragement: In situations where a person has faced failure (like losing a game), offering words of encouragement can serve as consolation, helping them to move past the disappointment.
-
Consolation Prizes: In competitive contexts, a consolation prize may be given to participants who do not win, providing recognition and validating their effort.
The concept of consolation is important in human relationships as it emphasizes empathy, compassion, and the bond between individuals. It signifies that although hardships are part of life, support and understanding can help individuals cope with their challenges.
Chinese Explanation
安慰 (ān wèi) 指的是给予正在经历悲伤、痛苦或失落的人们的情感支持和舒缓。安慰可以表现为安慰的话语、善意的举动,或者甚至是肢体上的陪伴。安慰的目的是帮助减轻情感上的痛苦,这种支持可以来自家人、朋友,甚至陌生人。
以下是该词的不同上下文使用:
-
情感支持:当一个人因失去(比如亲人的去世)而感到悲伤时,朋友和家人的支持能起到安慰的作用。
-
鼓励:在一个人面临失败(例如输掉比赛)的情况下,提供鼓励的话语可以作为安慰,帮助他们度过失望。
-
安慰奖:在竞争场合中,未获胜的参与者可能会得到一种安慰奖,以此认可他们的努力,并给予他们价值感。
安慰的概念在人与人之间的关系中非常重要,因为它强调了同理心、同情心和人际之间的连结。它表明虽然生活中充满了困难,但支持和理解可以帮助人们应对挑战。
Summary
In summary, "consolation" or "安慰" plays a vital role in fostering connections among people facing difficult times, focusing on the importance of support and understanding in overcoming emotional distress.
-
-
more devious cheating
more devious cheating
English Explanation
The phrase "more devious cheating" refers to a form of cheating that is particularly cunning, sly, or deceitful. The word "devious" implies a level of craftiness or indirectness, suggesting that the cheating is not only dishonest but also executed in a way that is clever or sneaky. This type of cheating might involve intricate plans or hidden strategies that make it harder to detect. It often suggests that the person cheating has gone above standard dishonesty to employ more complex and subtle means to achieve their goals.
For example, in a game setting, "more devious cheating" might involve using hidden devices or collaborating with others in secret, rather than just obvious cheating like looking at someone else's paper. It indicates a greater level of premeditation and intelligence in the maneuvering involved in the cheating process.
Chinese Explanation
“更狡猾的作弊”指的是一种特别狡诈、阴险或具有欺骗性的作弊行为。“狡猾”这个词暗示了一种巧妙或间接的程度,表明这种作弊不仅不诚实,而且以聪明或偷偷摸摸的方式进行。这种类型的作弊可能涉及复杂的计划或隐藏的策略,使其更难被发现。它通常暗示作弊者在标准的不诚实之上,以更复杂和微妙的手段来实现他们的目标。
例如,在游戏环境中,“更狡猾的作弊”可能涉及使用隐藏的设备或与他人秘密合作,而不仅仅是像抄袭别人试卷那样明显的作弊。这表明在作弊过程中涉及更高程度的预谋和智能。
-
But the cheats still have the grudgers to reckon with.
But the cheats still have the grudgers to reckon with.
English Explanation
The excerpt "But the cheats still have the grudgers to reckon with" implies a situation where individuals who cheat (likely in a game or competition) are faced with the consequences of their actions. The word "grudgers" refers to those who hold a grudge, meaning they feel resentment or bitterness towards someone else, often due to perceived wrongs or unfairness. The phrase "to reckon with" means to confront or deal with a particular issue or challenge.
In this context, it suggests that the cheats not only face the results of their dishonesty but also have to deal with people who are upset with them for their cheating. This adds an additional layer of conflict or tension in the situation. Ultimately, it conveys the idea that cheating may provide short-term gains, but it can lead to long-term repercussions in terms of relationships and reputations.
Chinese Explanation
这段摘录“但作弊者仍然需要面对那些怀恨在心的人”暗示了一种情况,其中作弊的人(可能是在游戏或比赛中)需要面对他们行为的后果。“怀恨在心的人”(grudgers)指那些心中有怨恨的人,意味着他们对其他人感到愤恨或苦涩,通常是由于感到被不公正对待或受到伤害。“面对”(to reckon with)的意思是要面对或处理某个特定的问题或挑战。
在这种情况下,它表明作弊者不仅会面临不诚实带来的后果,而且还要与那些因他们的作弊而感到不满的人打交道。这给情况增加了一层额外的冲突或紧张感。总的来说,这传达了一个观点:作弊可能带来短期的利益,但长期来看,可能会导致人际关系和声誉方面的后果。
-
a cunning blow struck by queens against worker
a cunning blow struck by queens against worker
English Explanation
The excerpt "a cunning blow struck by queens against worker" can be interpreted in several ways, depending on the context in which it is used.
-
Literal Interpretation in Chess: In the game of chess, "queens" refer to the powerful pieces that can move in any direction. A "worker" could be interpreted as a pawn, which is a weaker piece. The phrase might describe a strategic move where a queen captures a pawn or puts it in a vulnerable position, showcasing the tactical nature of the game.
-
Metaphorical Interpretation: Beyond the chessboard, this phrase could symbolize the strategies employed by those in positions of power (the "queens") against those who are more vulnerable or in a subservient role (the "worker"). It might reflect themes of social inequality, where powerful individuals or entities use their influence to undermine or exploit the less powerful.
-
Cunning Nature: The word "cunning" suggests a level of deception or cleverness involved in the action, indicating that the strike was not merely a direct attack but rather a tactful maneuver aimed at achieving a specific goal.
Chinese Explanation
这段文字“女王对工人发出的狡诈一击”可以根据使用的上下文进行多种解读。
-
象棋中的字面解读:在象棋中,“女王”指的是可以在任意方向移动的强力棋子,而“工人”可以理解为士兵(Pawn),一种相对较弱的棋子。这句话可能描绘了女王捕捉士兵或将其置于脆弱位置的战略动作,展示了游戏的战术性。
-
隐喻解读:超越棋盘,这句话可以象征那些掌握权力的个体(“女王”)对更脆弱或处于从属地位的人的策略(“工人”)。它可能反映了社会不平等的主题,强大个人或实体利用其影响力来削弱或剥削弱势群体。
-
狡诈的性质:“狡诈”一词暗示这一举动涉及一定的欺骗或聪明才智,表明这一击并不是简单的直接攻击,而是一种技巧性的操作,旨在实现特定目标。
-
-
-
d6n4gv1yf26qb8.archive.ph d6n4gv1yf26qb8.archive.ph
-
. He claimed that at the time he had been discombobulated by a fire-bomb attack on his home (an incident which credible figures in government have linked to the Russians, who will be delighted to know it put the PM off his game
lol
-
-
jembendell.com jembendell.com
-
we don’t restrict bridge building because there aren’t enough metres, the availability of a currency should not restrict how much we trust and collaborate with each other
Very good point. The denomination or quantification (e.g. measurement) of something should not affect its exchangeability-- except that is exactly how we practice the "currency game"
-
-
socialsci.libretexts.org socialsci.libretexts.org
-
Early findings revealed that participants who made their contributions faster gave more to the public good (greater cooperation). These results were consistent in several replications (Cone & Rand 2014); when forced to make a quick decision, participants cooperated more than when asked to reflect on their decision. It seems that under certain circumstances, social contexts and social norms, ‘going with your gut’ leads to increased cooperation (Henrich, 2016).
game theory baby!!
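As a rough illustration of the kind of public goods game behind these findings, here is a minimal Python sketch of a linear public goods game; the endowment, group size, and multiplier are common textbook assumptions, not parameters reported in the excerpt.

# Illustrative linear public goods game, the kind of cooperation setup the
# excerpt describes. The endowment and multiplier below are assumptions
# chosen for illustration, not values from the study.

ENDOWMENT = 10      # tokens each player starts with
MULTIPLIER = 1.6    # pooled contributions are multiplied, then split evenly

def payoffs(contributions):
    """Return each player's payoff given their contributions to the pool."""
    pool = sum(contributions) * MULTIPLIER
    share = pool / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

if __name__ == "__main__":
    # Four players, everyone contributes fully: each earns 16.
    print(payoffs([10, 10, 10, 10]))
    # One free-rider earns 22 while the full contributors earn 12,
    # which is the tension between self-interest and cooperation
    # that the quick-versus-reflective decision studies probe.
    print(payoffs([0, 10, 10, 10]))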
-
-
crypto.news crypto.news
-
Unpacking "The Convergence Stack” - with Outlier Ventures’ CEO Jamie Burke -
🔎 https://hypothes.is/users/gyuri?q=outlier+ventures+stack
-
- Jun 2025
-
alraziuni.edu.ye alraziuni.edu.ye
-
hindsight
hindsight
English Explanation:
The term "hindsight" refers to the understanding of an event or situation only after it has happened. It implies a kind of wisdom or insight that comes too late to influence the outcome. Hindsight allows individuals to analyze the decisions made or the actions taken, often leading to reflections on how things could have been different if they had known then what they know now.
For example, after a sports game, fans often critique the coach's decisions with the benefit of hindsight, having seen the results of those decisions. The phrase "hindsight is 20/20" conveys the idea that it is easy to see what should have been done after the fact, but in the moment, it isn't always clear.
Chinese Explanation (中文翻译):
“hindsight”这个词指的是在事件或情况发生后,才理解其意义的能力。这意味着一种智慧或洞察力,通常是在结果已成定局后才产生的。回头看使个人能够分析他们所做的决策或采取的行动,常常导致反思,如果他们当时知道现在所知道的事情,事情会如何不同。
例如,在一场体育比赛后,球迷们常常会利用“事后诸葛亮”的眼光批评教练的决定,因为他们已经看到了这些决定的结果。“hindsight is 20/20”这个短语传达了一个观点,即在事后很容易看到应该做什么,但在当下,事情往往并不那么清晰。
-
thrashing
thrashing
English Explanation:
The term "thrashing" can have different meanings depending on the context in which it is used. Here are the most common interpretations:
-
In Computing: Thrashing refers to a condition in which a computer's virtual memory system is overwhelmed. When there is insufficient physical memory (RAM) available, the operating system spends more time swapping data between RAM and disk storage than executing actual processes. This can lead to a severe decrease in performance, as the system is continually trying to free up memory. It is often caused by running too many applications simultaneously or by using applications that require more memory than is available.
-
In General Use: Thrashing can also refer to a physical action or behavior, such as flailing or moving violently. For example, in a physical altercation, one might say that a person is "thrashing" about if they are struggling or fighting vigorously. This term evokes an image of chaotic movement.
-
In Sports or Competitive Contexts: Thrashing can refer to defeating an opponent decisively, often with a significant score difference. For example, in a game or match, if one team wins overwhelmingly against another, one might say they "thrashed" their opponent.
Overall, the concept of thrashing often conveys a sense of chaos, inefficiency, or overpowering defeat, depending on its application.
中文解释:
“thrashing”这个术语在不同的上下文中有不同的含义。以下是最常见的解释:
-
在计算机领域: Thrashing指的是一种计算机虚拟内存系统被淹没的状态。当物理内存(RAM)不足时,操作系统花费更多的时间在RAM和磁盘存储之间交换数据,而不是实际执行进程。这可能导致性能严重下降,因为系统不断尝试释放内存。通常,thrashing是由同时运行过多应用程序或使用需求超出可用内存的应用程序所造成的。
-
在一般用法中: Thrashing也可以指一种身体动作或行为,比如挥舞或剧烈移动。例如,在身体冲突中,可以说一个人正在“thrashing”,如果他们正在奋力挣扎或激烈打斗。这个术语带有混乱运动的形象。
-
在体育或竞争背景中: Thrashing可以指以压倒性优势击败对手,通常是有显著的分数差距。比如在比赛中,如果一支队伍以绝对优势赢得另一支队伍的比赛,可以说他们“thrash”了对手。
总体而言,thrashing这个概念常常传达一种混乱、低效或压倒性失败的感觉,具体取决于其应用场景。
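For the computing sense described above, here is a small, self-contained Python sketch (my own illustration, not anything from the excerpt) showing how the page-fault rate of a fixed number of memory frames jumps from near zero to nearly one once a program's working set no longer fits, which is the signature of thrashing.

# Toy illustration of thrashing in the virtual-memory sense: a fixed number
# of page frames managed with LRU replacement. Once the pages touched per
# pass (the working set) exceed the frame count, nearly every access faults.

from collections import OrderedDict

def fault_rate(working_set_size, frames, passes=100):
    """Fraction of accesses that miss an LRU cache of `frames` page slots."""
    cache = OrderedDict()
    faults = accesses = 0
    for _ in range(passes):
        for page in range(working_set_size):   # cycle through the working set
            accesses += 1
            if page in cache:
                cache.move_to_end(page)        # mark page as recently used
            else:
                faults += 1                    # page fault: load the page
                if len(cache) >= frames:
                    cache.popitem(last=False)  # evict least recently used page
                cache[page] = True
    return faults / accesses

if __name__ == "__main__":
    # With 8 frames, a working set of 8 pages barely faults; 9 pages
    # accessed cyclically makes every access a fault, i.e. thrashing.
    for ws in (4, 8, 9, 16):
        print(f"working set {ws:>2} pages, 8 frames -> "
              f"fault rate {fault_rate(ws, frames=8):.2f}")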
-
-
darts
darts
Certainly! The excerpt "darts" can refer to several different concepts, depending on the context. Below, I will explain its meanings in detail in both English and Chinese.
English Explanation:
-
Darts as a Sport: Darts is a popular game typically played in pubs or recreational settings where players throw small, pointed missiles (called darts) at a circular board (the dartboard) that is divided into numbered sections. The objective is to score points by hitting specific areas of the board. The game can be played in various formats, including singles and doubles, and often involves strategies regarding which areas to target to maximize scores. The most well-known type of play is “501,” where players start with 501 points and aim to reduce their score to zero.
-
Darts as Projectiles: Darts can also refer to the actual projectiles used in the game. These typically consist of a metal tip, a barrel (which players grip) made of materials like tungsten or brass, and often have a flight attached to the back to stabilize the throw.
-
Figurative Use: The term "darts" can also be used metaphorically to describe piercing or rapid movements, or to indicate that something hurtful has been said, much like someone might describe words as "darting" towards someone.
-
Dart as a Verb: The verb "to dart" means to move quickly and suddenly. For example, one might say "the cat darted across the street" to indicate a fast, sudden movement.
Chinese Explanation (中文解释):
-
飞镖作为一种运动: 飞镖是一项受欢迎的游戏,通常在酒吧或休闲场所进行,玩家将小型尖头的飞镖(称为飞镖)投射向一个圆形的靶子(飞镖盘),飞镖盘被分成多个编号的区域。目标是通过击中特定区域来得分。这项游戏有多种玩法形式,包括单人和双人,通常涉及有关目标选择的策略,以最大化得分。最常见的玩法是“501”,玩家从501分开始,目标是将分数减少到零。
-
飞镖作为投射物: 飞镖也可以指在游戏中使用的实际投射物。这些通常由一个金属尖端、一个用钨或黄铜等材料制成的瓶身(玩家握持的部分)以及通常在背部附带的一个飞行器组成,以稳定投掷。
-
比喻用法: “飞镖”一词还可以用于比喻,描述刺痛或快速的动作,或表示某些伤人的话就像飞镖一样朝某人刺去。
-
动词“dart”: 动词“dart”意味着迅速而突然地移动。例如,人们可能会说“猫突然冲过街道”,以表示快速的、突发的移动。
This dual explanation aims to provide a comprehensive understanding of the term "darts" in both English and Chinese.
-
-
gleefully
gleefully
The word "gleefully" is an adverb derived from the adjective "gleeful," which means to express joy, happiness, or delight. When someone does something gleefully, they do it in a way that shows they are very happy and are enjoying what they are doing. This word often implies a sense of playfulness or childlike joy, suggesting that the person is not only happy but also perhaps a bit carefree or exuberant.
For example, if a child is playing a game and is laughing and jumping around, you might say that the child is playing gleefully. It conveys a sense of joy that is unmistakable and infectious, often making others around them feel happy as well.
“gleefully” 这个词是一个副词,源于形容词“gleeful”,意思是表达喜悦、幸福或快乐。当某人以“gleefully”的方式做某事时,他们是在以一种显示极其开心和享受所做事情的方式进行。这种词通常暗示一种游戏性或儿童般的快乐,暗示这个人不仅快乐,而且可能有一些无忧无虑或充满活力。
例如,如果一个孩子在玩游戏,欢笑着和跳跃,你可能会说这个孩子在“gleefully”地玩耍。这传达了一种不可否认和有感染力的快乐感,通常也让他们身边的其他人感到快乐。
-
engaged in a tug ofwar over a worm
engaged in a tug of war over a worm
English Explanation:
The phrase "engaged in a tug of war over a worm" describes a situation where two or more parties are competing or fighting over a single object or advantage, represented metaphorically by the "worm."
-
Tug of War: This is a game or competition where two teams pull on opposite ends of a rope, trying to pull the other team across a central line. It symbolizes struggle, competition, and the effort to gain control over something.
-
Worm: The worm in this context symbolizes a prize or something of value that is contested. It may not have significant value in real life, but it represents the obsession or determination of the parties involved in the competition.
So, overall, this excerpt could illustrate a scene where two individuals (or groups) are stubbornly trying to gain possession of something, suggesting themes of rivalry, competition, or even absurdity in their fixation on the object of their struggle.
Chinese Explanation:
“为了一个虫子进行拔河”这个短语描述了两个或多个方在为一个共同的对象或优势而竞争或争斗的情景,这个对象在隐喻中被称为“虫子”。
-
拔河:拔河是一种游戏或比赛,两个团队在绳子的两端用力拉,试图把对方拉过中间的线。这象征着斗争、竞争,以及争取对某物控制权的努力。
-
虫子:在这个语境中,虫子象征着一个有价值的奖品或者物品,尽管在现实生活中它可能并没有太大价值,但它代表了参与竞争的各方对该对象的痴迷或执着。
因此,整体来看,这段摘录可能描绘了两个个人(或团体)固执地努力争夺某物的场景,暗示了竞争、对抗,甚至在对斗争对象的执着中体现的荒谬感。
-
-
knights and pawns
knights and pawns
Certainly! Let's delve into the excerpt "knights and pawns."
English Explanation:
In the context of chess, "knights" and "pawns" refer to two different types of chess pieces, each with unique movements and roles in the game:
- Knights:
- Knights are considered one of the most versatile pieces in chess. They are represented by the horse figure and can "jump" over other pieces, which gives them unique movement capabilities. A knight moves in an "L" shape: two squares in one direction (horizontally or vertically) and then one square at a right angle, or one square in one direction and then two squares at a right angle. This ability allows knights to reach positions on the board that other pieces may not easily access.
-
Knights are often valued for their ability to control the center of the board and their effectiveness in both offensive attacks and defensive strategies.
-
Pawns:
- Pawns are the most numerous pieces on the chessboard but are generally considered the least powerful. They are typically the smallest pieces on the board. Pawns move forward one square at a time, but they capture diagonally. On their first move, they have the option to move forward two squares.
- Pawns also have the unique ability to promote upon reaching the opponent's back rank (the farthest row from their starting position). They can be promoted to any other piece (except a king), typically a queen, which greatly increases their power.
- Despite their initial limitations, pawns are crucial for maintaining control over the board and can be transformed into powerful pieces if they reach the opponent's side.
In a broader metaphorical sense, "knights and pawns" can also symbolize different roles or statuses in society, with knights representing those with power or higher status and pawns representing those who may be less powerful or influential.
Chinese Explanation:
在国际象棋的背景下,“骑士”和“兵”指的是两种不同类型的棋子,它们在游戏中各有独特的移动方式和角色:
- 骑士:
- 骑士被认为是国际象棋中最灵活的棋子之一。它们通常用马的形象表示,可以“跳过”其他棋子,因此拥有独特的移动能力。骑士以“L”形移动:向一个方向移动两个正方格(横向或纵向)后再转90度移动一个正方格,或者先向一个方向移动一个正方格,再以90度移动两个正方格。这种能力使骑士能够到达其他棋子可能难以到达的棋盘位置。
-
骑士通常被重视,因为它们能够控制棋盘中心,并且在进攻和防守策略上都非常有效。
-
兵:
- 兵是棋盘上数量最多的棋子,但通常被认为是最弱的。它们通常是棋盘上最小的棋子。兵每次只能向前移动一格,但可以斜着吃子。第一次移动时,它们可以选择向前移动两格。
- 当兵到达对方的底线(远离其起始位置的最后一排)时,有机会升变为其他棋子(除了国王)。它们可以被升变为任何其他棋子,通常是后,这大大增加了它们的力量。
- 尽管兵在初期有局限性,但它们在控制棋盘上至关重要,并且如果它们到达对方一侧,能够转变为强大的棋子。
在更广泛的隐喻意义上,“骑士与兵”也可以象征社会中的不同角色或地位,骑士代表掌权者或高地位的人,而兵则代表那些权力较小或影响力较弱的人。
-
'forking'with the knight.
'forking' with the knight.
English Explanation
The term 'forking' in the context of chess refers to a tactical maneuver where a single piece attacks two or more of the opponent's pieces simultaneously. When the piece executing the fork is a knight, it is often called "forking with the knight."
Knights are unique in that they move in an 'L' shape: two squares in one direction and then one square perpendicular to that direction. This movement allows them to reach squares that are not easily accessible for many other pieces, enabling them to create forks in various positions on the board.
When a knight forks two pieces, it puts both of them in jeopardy, often forcing the opponent to choose which piece to save while the other can be captured on the next turn. This tactic is particularly effective when the knight forks the king and another valuable piece (such as the queen): because the check must be dealt with first, the other piece is usually lost, leading to significant strategic advantages in the game.
Chinese Explanation
在国际象棋中,“叉击(forking)”这个术语指的是一种战术手法,其中一个棋子同时攻击对手的两个或多个棋子。当执行这个叉击动作的棋子是骑士时,它通常被称为“骑士叉击(forking with the knight)”。
骑士在移动时具有独特性:它的移动方式是一个“L”形:在一个方向上移动两个格子,然后再在与其方向垂直的方向上移动一个格子。这种移动方式使它能够到达很多其他棋子难以达到的格子,从而在棋盘上在各种位置创建叉击。
当骑士叉击了两个棋子时,它让这两个棋子都处于危险之中,通常迫使对手选择要保护哪个棋子,而另一个棋子则可以在下一回合被吃掉。当骑士同时叉击国王和另一个有价值的棋子(例如皇后)时,这种战术尤其有效:对手必须先应对将军,另一个棋子往往因此被吃掉,从而在对局中获得明显的战略优势。
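To make the 'L'-shaped jump and the fork idea concrete, here is a small illustrative Python sketch (a toy example with made-up coordinates and a hypothetical position, not code from any chess engine or library) that enumerates a knight's moves on an 8x8 board and reports destination squares from which it would attack two enemy pieces at once:

```python
# Toy illustration of knight movement and forks; squares are (file, rank)
# pairs in 0-7 coordinates. Occupancy, pins, and defenders are ignored.
KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(square):
    """All squares a knight on `square` can jump to (the 'L' shape)."""
    f, r = square
    return [(f + df, r + dr) for df, dr in KNIGHT_OFFSETS
            if 0 <= f + df < 8 and 0 <= r + dr < 8]

def forking_squares(knight_square, enemy_squares):
    """Destinations from which the knight would attack two or more enemy pieces."""
    forks = []
    for dest in knight_moves(knight_square):
        attacked = set(knight_moves(dest)) & set(enemy_squares)
        if len(attacked) >= 2:
            forks.append((dest, sorted(attacked)))
    return forks

# Hypothetical position: enemy king on e8 = (4, 7), rook on a8 = (0, 7);
# a knight on d5 = (3, 4) can jump to c7 = (2, 6) and attack both.
print(forking_squares((3, 4), [(4, 7), (0, 7)]))
```

For the hypothetical position in the last line, the sketch reports c7 as a forking square, which is exactly the kind of simultaneous double attack described above.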
-
bishops move in a diagonal
bishops move in a diagonal
English Explanation
The phrase "bishops move in a diagonal" refers to a specific rule in the game of chess concerning the movement of one of the pieces, which is called a bishop. In chess, each type of piece has its own rules regarding how it can move across the board.
-
Movement Pattern: Bishops can only move diagonally, meaning they can travel any number of squares along a diagonal line on the chessboard. This allows them to control long lines of squares but restricts them to one color. For example, a bishop that starts on a white square will always remain on white squares throughout the game.
-
Strategic Implications: The diagonal movement of bishops allows players to create strong defensive and offensive positions. They can attack opponents' pieces that are on the same diagonal line without obstruction.
-
Game Strategy: Understanding the movement of bishops is crucial for developing effective strategies during a game. Players often aim to position their bishops where they can exert maximum pressure on the board, potentially targeting vulnerable opponent pieces.
Chinese Explanation
“主教沿对角线移动”这一短语是指国际象棋游戏中一个特定规则,涉及到一种棋子,即主教(bishop)。在国际象棋中,每种棋子的移动规则都是不同的。
-
移动模式:主教只能沿对角线移动,也就是说,它可以在棋盘的对角线上移动任意数量的格子。这使得主教能够控制长线的格子,但限制了它只能在一种颜色的格子上活动。例如,一个起始于白色格子的主教将在整个游戏中始终保持在白色格子上。
-
战略意义:主教的对角线移动使玩家能够创建强大的防御和进攻位置。它们可以攻击在同一对角线上的对方棋子,而不会被阻挡。
-
棋局策略:理解主教的移动对于在游戏中制定有效的策略至关重要。玩家通常会努力将主教放置在可以最大限度施加压力的位置,以便有可能攻击到对手的处于脆弱位置的棋子。
This thorough explanation covers the movement and significance of bishops in a chess game, as well as their strategic implications in both English and Chinese.
-
-
-
www.fulcrum.org www.fulcrum.org
-
A student once summarized the accommodation process as being like the game Battleship—you can’t perceive what’s on the other side of the board, because there is a barrier there, and so you have to just keep trying to guess where the other player’s ships are—or where the relevant accommodations are, if they exist. You throw your diagnosis over, and hope that it will land on something that will actually help you. But you cannot sense the full range of what may be on the other side, and thus you cannot directly ask for what you need.
This is why it's so important to have open, ongoing dialogue with individual students about what is and is not working for them before assignments are due, as assignments are due, and after they've submitted an assignment. Simple things like, "If you are willing, would you take about 5 minutes to jot down some things that made completing this assignment difficult for you? Then, I'll ask you to shift gears and list or explain what made it possible for you to do the assignment." I'm really writing this for myself as a reminder of how this type of intervention can help all students.
-
-
en.wikipedia.org en.wikipedia.org
-
In
Play a game of football with Brains, Alan and Tin Tin against the rest of the Tracy brothers.
-
-
en.wikipedia.org en.wikipedia.org
-
in
Playing a game of rugby.
-
in
Playing a game of jungle golf
-
in
A game of baseball
-
-
en.wikipedia.org en.wikipedia.org
-
in
A game of American football in the garden
-
of Mukundilal Gupta, in
A good game of backyard cricket on an Australian Summer's day
-
in
A good game of basketball
-
longest
Enjoying fried chicken in your bedroom on the night the other people in your family watch the Big Game
-
in
Popcorn, drinks and snacks during a baseball game.
-
-
bmcpublichealth.biomedcentral.com bmcpublichealth.biomedcentral.com
-
potential ‘gaming’
This seems to be the main concern for a lot of policymakers: they have seen so many people game things and work around their policies on technicalities that they are more cautious about such schemes, especially when they don't stand to actually benefit from them in most cases.
-
-
fershad.com fershad.com
-
It's similar to the discussion around flying. As an individual, me not choosing to take a flight doesn't change much. That flight is still going ahead. It's the same with data transfer and network energy use. Me sending a few less kilobytes over the network isn't going to signal to the network operator that capacity can be reduced. Like flying, though, we can collectively signal to airline operators that certain routes are less valuable if a sufficiently large number of people stop flying them. But that's a long game, with a lot of collective action required. We can get there, especially in places with suitable alternatives to flying, but we can't completely remove flying from our life. I'd say the same applies for the network. It's not a lost cause, but rather a long game that we can play alongside realising shorter term wins.
When I read this, I had a hard time: I do understand the argument from the POV of there being an airline route in the first place, but the current framing focuses so much on an individual case that you can miss that, at the scale of hundreds of people, doubling the number of people flying will very likely double the emissions.
This is because the key driver of emissions is burning the fuel, and because airlines scale the frequency of flights up and down much more quickly than anyone lays new cable and network infrastructure.
-
-
www.radiofrance.fr www.radiofrance.fr
-
Compte rendu détaillé : "Y a-t-il une culture de l'inceste en France ?" (France Culture, 12.10.2022)
- Ce compte rendu explore les thèmes principaux et les idées essentielles abordées lors du débat sur France Culture, en s'appuyant sur les propos des intervenants.
L'émission, en se basant sur le rapport de la CIVISE (Commission Indépendante sur l'Inceste et les Violences Sexuelles), questionne l'existence d'une "culture de l'inceste" en France, remettant en cause le mythe du tabou anthropologique et soulignant la réalité omniprésente de ces violences.
1. L'ampleur et la sous-estimation de l'inceste en France
Le débat s'ouvre sur un chiffre choc issu du rapport de la CIVISE : "160 000 enfants subissent des violences sexuelles chaque année en France."
Ce chiffre, longtemps "sous-estimé voire complètement négligé", contredit l'idée reçue d'un tabou anthropologique sur ce crime.
Au contraire, les enquêtes récentes montrent que "ce crime touche un français ou une française sur 10 et est présent dans toutes les classes sociales."
- Juliette Drouard, thérapeute et co-directrice de l'ouvrage collectif "La culture de l'inceste", cite cette phrase percutante : "s'il est tabou de dire l'inceste, il n'est pas tabou de le faire."
Cette affirmation résume la dissonance entre la perception publique de l'inceste comme un interdit absolu et sa réalité fréquente et dissimulée.
Édouard Duran, juge des enfants et co-président de la CIVISE, confirme que "16 500 personnes sont venues [à la CIVISE], nous ont fait confiance [...] toutes nous disent cela, que les violences commencent quelques jours après la naissance ou qu'elles durent jusqu'à la majorité ou au-delà de la majorité." Il insiste sur le fait que "la maison est pour beaucoup le lieu du danger, de la confrontation à la terreur et à la mort même."
2. Le mythe du tabou anthropologique et la réalité de la "culture de l'inceste"
- Juliette Drouard et les autres auteurs de "La culture de l'inceste" remettent en question la notion de tabou suprême héritée de l'anthropologie classique (notamment Claude Lévi-Strauss).
Ils affirment que cette idée, véhiculée par des "anthropologues depuis leur position située, c'est-à-dire d'hommes blancs qui sont arrivés sans vouloir parler de violence sexuelle mais simplement en voulant étudier les règles du mariage", a conforté le silence autour de l'inceste.
Pour eux, le concept de tabou du mariage "n'a rien à voir avec les pratiques d'inceste. Marier ou pas marier avec certaines personnes, ça n'empêche pas d'incester ces certaines personnes."
Le terme de "culture de l'inceste" est utilisé dans plusieurs sens par Juliette Drouard :
- Un phénomène propre à l'espèce humaine : contrairement aux animaux, les humains utilisent la sexualité pour la domination.
- Une culture spécifique au sein des cultures humaines : cela se produit dans certaines sociétés, mais "il n'est pas nécessaire pour les êtres humains pour vivre et pour exister ou pour fonder une culture d'agresser sexuellement d'autres personnes."
- Les productions culturelles : celles-ci "vont soutenir la systématicité de l'inceste en permettant de ne pas le parler en tant que violence sexuelle."
Elles peuvent "romantiser l'inceste comme dans Game of Thrones avec le frère, la sœur" ou, comme dans le porno, où le "stepmom" est un hashtag très recherché.
D'autres œuvres "n'adoptent pas le point de vue de la victime" ou reprennent des mythes comme celui de Lolita, où "ce serait la personne victime qui vient séduire l'agresseur."
3. L'évolution historique et juridique de la perception de l'inceste
Julie Doyon, historienne, apporte un éclairage diachronique sur la question. Elle souligne que l'inceste, dans l'Ancien Régime, était "beaucoup dit, montré, écrit" dans la littérature et était un crime considéré comme tel dans la doctrine pénale.
Cependant, il n'était "pas du tout la même signification qu'aujourd'hui.
C'est-à-dire qu'il n'est pas indexé à une forme de violence ni spécifiquement à la catégorie de l'enfance." L'inceste était alors un "crime sans victime.
Un crime avec deux coupables", considéré comme un crime de mœurs et de péché entre personnes apparentées.
Le "point de bascule" se situe entre le 18e et le 19e siècle, où l'inceste passe d'une conception de "couple incestueux" à celle d'"acte d'agression sexuelle commis par un adulte sur un enfant dans le cadre familial."
La Révolution française, en voulant séculariser le droit pénal, a supprimé le crime d'inceste, le considérant comme relevant de la sphère religieuse et de la "vie privée".
Aujourd'hui, Édouard Duran déplore cette persistance de l'idée que "la maison est éminemment essentiellement le lieu du privé."
Il insiste sur la nécessité que "ce qui doit régner dans la maison, c'est la loi commune et pas la loi d'un seul, pas la loi du dominant."
4. La spécificité de la violence incestueuse et la vulnérabilité des enfants
Édouard Duran insiste sur la vulnérabilité des enfants : "les agresseurs recherchent toujours une proie en raison de sa vulnérabilité.
Et l'enfant parmi les êtres vulnérables dans la société est le plus vulnérable et parmi les enfants vulnérables, il y a les enfants handicapés, plus vulnérables et plus invisibilisés encore."
-
Il récuse l'argument souvent avancé par les agresseurs : "Je n'ai jamais entendu en audience, en cours d'assise, au tribunal correctionnel ou au tribunal pour enfants un agresseur dire autre chose que c'est l'enfant qui m'a séduit." Édouard Duran refuse de "chercher à comprendre" dans le sens de "chercher dans la psychologie de l'agresseur ce qui pourrait l'excuser." Pour lui, l'impératif moral est de "mettre en sécurité les enfants victimes d'inceste et les personnes victimes de violence."
-
Juliette Drouard souligne l'importance de parler de "pédocriminalité de manière générale", car "les adultes qui commettent des agressions sur des enfants, les commettent aussi bien sur leurs enfants que sur les enfants des autres."
Elle met en évidence une "communauté de traumatisme" et de destruction pour toutes les victimes, avec seulement une "différence de degré dans l'échelle de la trahison éthologique" selon Sortnaf.
Édouard Duran, citant Christine Ang, décrit l'inceste comme un "crime absolument spécifique, un crime contre l'humanité du sujet, un crime généalogique."
Il explique que "en venant à elle sexuellement, il se refuse à elle comme père.
C'est une humiliation sociale avant tout par laquelle l'enfant n'a plus de place dans l'histoire des humains."
Il n'y a "pas d'amour dans l'inceste," comme le souligne Juliette Drouard : "L'excitant ça n'est pas l'amour mais le pouvoir et les fractions."
5. Le silence, la prescription et la difficile écoute de la parole des victimes
Le silence est présenté comme un facteur mortifère : "Ce qui tue c'est le silence. C'est de ne pas parler. C'est de ne pas dire, de ne pas pouvoir dire."
L'extrait du documentaire "Inceste, le dire et l'entendre" illustre le ressenti des victimes : "On t'a juste dit que l'agression sexuelle c'est dehors que ça se passe.
C'est des étrangers qui peuvent t'attaquer. C'est des étrangers. C'est jamais dedans la famille. Et que tu pressens, tu ressens que quand il t'arrive un truc à l'intérieur de la famille, il faut fermer sa gueule."
La question de la prescription est abordée. Le ministre de la Justice, Éric Dupond-Moretti, évoque l'allongement du délai de 20 à 30 ans à compter de la majorité depuis 2018.
Édouard Duran souligne l'importance de cet allongement, car les traumatismes générés par ces violences "ne sont pas cachés dans un passé lointain.
C'est un présent perpétuel qui s'immisce dans toutes les sphères de l'existence, des plus sociales au plus intimes." Il insiste sur "l'aspiration profonde à ce que justice soit rendue."
L'expression d'Iris Brey, "Mon corps est une archive vivante de mon inceste," résonne avec cette idée de persistance du traumatisme.
Malgré une apparente "libération de la parole" dans l'espace public, Juliette Drouard et Édouard Duran soulignent que le tabou reste "absolu" là où l'inceste a lieu.
Seulement "1000 condamnations" pour "160 000 enfants victimes de violence sexuelle chaque année" révèlent un "système d'impunité des agresseurs."
Les enfants n'ont pas les outils pour décrire ce qui leur arrive et sont souvent "tués ou resilenciés" lorsqu'ils parlent.
Édouard Duran révèle que "dans 9 cas sur 10, le confident de l'enfant ne fait rien." Le processus de "silenciation" est au cœur de la stratégie de l'agresseur, qui vise à "imposer le silence à l'enfant victime" et à "contaminer le groupe."
Julie Doyon nuance l'idée d'un silence absolu en soulignant l'existence de moments passés où l'inceste a été discuté publiquement, comme la fin des années 1980 avec les "dossiers de l'écran."
Elle insiste sur le fait que le vrai problème n'est peut-être "pas tant de le parler que de l'entendre."
Elle met en lumière les dynamiques complexes au sein des familles, où le "silence familial n'est pas un bloc monolithique" et où les rôles et statuts des individus influencent la manière dont la parole circule ou est étouffée.
Conclusion
Le débat met en lumière une réalité complexe et souvent douloureuse de l'inceste en France.
Loin d'être un tabou universellement respecté, il est une violence omniprésente, souvent dissimulée par des mécanismes de silence, d'impunité et une certaine "culture" qui minimise ou romantise la souffrance des victimes.
Les intervenants appellent à une meilleure compréhension historique, juridique et sociétale de l'inceste, une protection accrue des enfants victimes, et une capacité collective à écouter et croire la parole de ceux qui osent briser le silence.
Numéro de téléphone Inceste : 0805 802 804 (anonyme et gratuit)
-
-
www.radiofrance.fr www.radiofrance.fr
-
Compte Rendu Détaillé : "La culture de l'inceste" sur France Inter
- Ce document de synthèse analyse les thèmes principaux et les idées clés abordées lors de l'émission de France Inter intitulée "France inter Iris Brey et Juliet Drouar: existe-t-il une culture de l'inceste", diffusée le 8 septembre 2022. L'émission présente l'ouvrage collectif "La culture de l'inceste", un livre rouge décrit comme un "traité, un manifeste, un brûlot", co-écrit par Iris Brey, Juliette Drouar, Sokhna Fall, Wendy Delorme, Dorothée Dussy, Tal Piterbraut-Merx et Ovidie.
1. L'Inceste : Non une Déviance Individuelle, mais un Phénomène Culturel et Systémique
Le thème central de l'émission et de l'ouvrage est la remise en question de la vision traditionnelle de l'inceste comme une "déviance, d'exception pathologiques, de monstres à la marge".
Au contraire, les invitées soutiennent que l'inceste est un phénomène "massif" et "systématique" ancré au "cœur même de notre organisation sociale".
- Statistiques Éffarantes et Témoignages Affluents : Les statistiques sont présentées comme "effarantes", avec "une personne sur 10 en France" victime d'inceste, ce qui représente "7 millions de victimes". Cette ampleur remet en question la notion d'exception.
- De l'Individu au Système : Iris Brey et Juliette Drouar insistent sur le fait que "les monstres, ça n'existe pas.
C'est notre société, c'est nous, c'est nos amis, c'est nos pères. C'est ça qu'on doit regarder." La responsabilité est ainsi déplacée de l'individu "monstrueux" vers le collectif et le système social.
- Continuité avec la Culture du Viol : Juliette Drouar explique que le terme "culture de l'inceste" est décalqué de l'expression "culture du viol", visant à souligner un aspect "culturel" et non une "exception, une pathologie, une monstruosité".
- L'Inceste comme Outil de Domination Patriarcale : L'ouvrage postule que l'inceste est "une expression, c'est une reconduction d'un fonctionnement social qui s'appuie sur l'idée de domination". Iris Brey affirme que c'est un "système qui est mis en place pour que le corps des enfants et que le corps des femmes continue à être dominé par le patriarcat et par les hommes". Les agresseurs, à 76% des hommes, se sentent "autorisé[s] partout et depuis toujours" à agresser le corps "le plus faible".
2. Le Tabou de l'Inceste : Ne pas Parler, Plutôt que Ne pas Exister
Les auteures déconstruisent l'idée reçue selon laquelle l'inceste serait un interdit social fondamental.
Elles affirment que l'inceste n'est pas un tabou dans sa pratique, mais plutôt un tabou dans sa discussion et sa reconnaissance.
- Critique de Lévi-Strauss : Le célèbre anthropologue Claude Lévi-Strauss est cité pour sa conception de l'interdit de l'inceste comme "socle du contrat social". Cependant, Juliette Drouar rectifie que Lévi-Strauss parlait de l'interdit du mariage avec certains membres de la famille, et non de l'interdit des violences sexuelles. Elle ajoute : "on peut tout à fait ne pas se marier avec certains membres de sa famille et les violer."
- L'Omerta et l'Inaccessibilité de la Pensée : Iris Brey souligne l' "omerta et une impossibilité de penser ça" qui rend même les textes de chercheurs sur l'inceste "pas disponibles".
-
Les Enfants Parlent, les Parents n'Entendent pas : Le véritable tabou n'est pas le silence des enfants victimes – "les enfants en parlent" – mais plutôt l'incapacité des adultes à les entendre : "C'est que les parents ne veulent pas entendre ou ne peuvent pas entendre."
-
3. La Condition de l'Enfant et le Mythe de la Famille Protectrice
L'ouvrage met en lumière la vulnérabilité intrinsèque des enfants dans le système social et familial, où leur dépendance est naturalisée et leurs droits sont "déprivés".
- L'Enfant comme Catégorie Sociale Constituée : Le livre, notamment à travers l'article de Tal Piterbraut-Merx, introduit l'idée que les "enfants sont pas une catégorie qui est les mineurs en tout cas sont pas une catégorie naturelle. C'est une catégorie qui a été constituée". Cette catégorie est "complètement déprivée de droit et posé dans une position d'absolue dépendance par rapport aux adultes et leur famille."
- Privation de Droits et de Crédibilité : Les enfants sont "privé[s] de paroles, privé[s] de crédibilité, privé[s] d'individualité, privé[s] de légitimité". Cette condition de "dépendance matérielle", d'absence de droit de vote ou de représentation, crée les "meilleures conditions pour pouvoir disposer des corps de l'autre".
- La Famille, Lieu de Risque et non de Protection Naturelle : Contrairement au "mythe qui entoure la famille qui serait extrêmement bienveillante, extrêmement chaleureuse" et "naturellement protect[rice]", les études concordent : "c'est très majoritairement au sein de la famille qu'ont lieu ces violences et ces abus sexuels."
4. Représentations Médiatiques et Culturelles de l'Inceste : Banalisation et Distorsion
Une part importante de la discussion est consacrée à la manière dont l'inceste est représenté ou non représenté dans la culture populaire, contribuant à sa banalisation et à la culpabilisation des victimes.
- Le "Séisme Médiatique" et le #MeToo Inceste : Les auteures reviennent sur l'impact des témoignages de personnalités comme Vanessa Springora, Camille Kouchner et Adèle Haenel. Le #MeToo Inceste en France a commencé par des récits de violences sexuelles dans l'enfance, impliquant des "femmes mais aussi d'hommes et aussi de lesbienne, de personnes gay, de personnes trans", comme Mathieu Fouchet avec le #MeToo gay.
- L'Héritage de "Lolita" : Iris Brey analyse le film de Kubrick, "Lolita", comme une "bascule" culturelle. Le terme "Lolita" est passé dans l'imaginaire collectif, rendant la "jeune fille ... responsable du fait que son beau-père ait envie de coucher avec elle". L'image iconique de Lolita avec ses lunettes en cœur, bien que non issue du film, a contribué à "infuser dans toute la culture populaire" l'idée que l'inceste est "érisé" et souvent imputé à la victime.
Elle rappelle que la Lolita de Nabokov était "une enfant violée par son beau-père".
- Distorsion des Représentations de l'Inceste :
- Inceste père-fille : Souvent présenté avec la culpabilisation de la jeune fille.
- Inceste mère-fils : Souvent "montré comme une démarche d'émancipation, comme une relecture de l'Œdipe".
- Inceste frère-sœur : Bien que les plus rares dans la réalité, ils sont "montrés beaucoup dans les séries et notamment dans Game of Thrones comme quelque chose d'érotisé et de normal".
- L'Inceste et la Pornographie Grand Public : Ovidie (co-auteure) et Juliette Drouar abordent l'infiltration de l'inceste dans le porno grand public, notamment via le mythe de la "MILF" (Mother I'd Like to F***) qui a évolué vers la "Stepmom" (belle-mère) comme hashtag principal.
Ce phénomène, initialement américain, s'est "très largement diffusé", banalisant une "représentation érotisée de l'inceste" où les violences sont déniées au profit d'une sexualisation "sexy" et "fun".
Les auteures déplorent que ces représentations "ne représente[nt] jamais l'inceste comme un acte de violence et de domination".
5. Une Lutte Collective pour une Pensée Collective
L'écriture de ce livre est présentée comme une "lutte, un combat", rendue possible uniquement par un effort collectif.
- Nécessité du Collectif : Iris Brey a eu l'idée du livre en lisant un article de Juliette Drouar sur "la culture de l'inceste" mais ne voulait pas "déplier" ce terme seule. Le collectif était essentiel pour "pousser nos propres réflexions" et pour "se soutenir" face à un sujet "difficile". La "pensée collective est pour moi la seule solution pour qu'on mette les mains un peu dans le camboui et qu'on réfléchisse à qu'est-ce qu'on fait maintenant".
- Implication Personnelle des Auteures : Iris Brey ouvre l'ouvrage en se présentant comme victime d'inceste, soulignant l'importance de comprendre "d'où je parle" pour les lecteurs.
Le suicide de Tal Piterbraut-Merx pendant l'écriture du livre témoigne de l'épreuve que représente l'engagement sur ce sujet, même dans une approche théorique.
En conclusion, "La culture de l'inceste" est un ouvrage politique et théorique qui vise à déconstruire les mythes entourant l'inceste, le présentant non pas comme un fait divers isolé, mais comme un symptôme d'un système de domination patriarcale et d'une invisibilisation de la vulnérabilité et des droits des enfants.
L'émission met en lumière la nécessité d'une prise de conscience collective et d'une relecture critique des représentations culturelles pour démanteler ce système.
-
-
www.edutopia.org www.edutopia.org
-
You can also create your own instructional videos for students to view at their own pace.
At first, I did not think this concept would be beneficial at the Elementary level. But, the more I thought about it, I do think it could be utilized within small group activities. Recording a short video explaining an extension activity, or a game, could allow you to focus on the group you are working with while the other students become familiarized with online platforms.
-
-
journals.plos.org journals.plos.org
-
perception that they reward ‘bad’ behaviour, are socially divisive and ineffective, and that they are too easy for participants to manipulate or ‘game’
This is a valid viewpoint as it could be seen as another way for lazy, unhealthy people to profit off of their bad habits, but at the end of the day, taking the vaccine is better than not taking it regardless of the method
-
-
app.podscribe.com app.podscribe.com
-
Good of you to say.
Good of you to say.
English Explanation
The phrase "Good of you to say" is a polite acknowledgment, indicating appreciation for a compliment or positive remark made by someone else. In the given context, this expression follows a tense moment in a cricket game, where players and commentators are engaged in a tense discussion about the game. The speaker is responding to a previous comment, showing gratitude while also possibly downplaying their own contribution or significance in the ongoing situation. This reflects a sense of camaraderie and sportsmanship.
Chinese Explanation
“Good of you to say”(你能这么说,真是太好了)是一种礼貌的认可,表示对他人所说的赞美或积极评论的感谢。在这个上下文中,这句话出现在一个紧张的板球比赛时刻,选手和评论员们正在进行紧张的讨论。说话者在回应之前的评论,表达感激,同时也可能在谦虚地贬低自己在当前情境中的贡献或重要性。这反映出一种友谊和体育精神的感觉。
-
cricket tea
The excerpt "cricket tea" refers to a traditional social gathering associated with cricket matches, usually held between innings or after a game. During cricket tea, players and spectators enjoy refreshments like sandwiches, cakes, and tea, fostering camaraderie and conversation. The context suggests a discussion among characters, possibly about organizing a cricket match or dealing with a related issue, where "cricket tea" symbolizes a jovial and communal atmosphere.
在这段摘录中,“cricket tea”指的是与板球比赛相关的传统社交聚会,通常在局间或比赛结束后举行。在“cricket tea”期间,球员和观众享用三明治、蛋糕和茶,增进友谊和交流。上下文暗示角色们在讨论组织板球比赛或处理相关问题,而“cricket tea”象征着一种愉快和社团的氛围。
-
positively confident.
positively confident.
English Explanation
The phrase "positively confident" expresses a strong sense of assurance and certainty about a situation. In the context of a sports game, it suggests that the speaker feels very hopeful and optimistic about their team's chances to succeed. This mindset is vital in competitive settings, as it can boost morale and performance.
Chinese Explanation
“积极自信”这个短语表示对某种情况有强烈的信心和确定性。在运动比赛的背景下,它暗示说话者对自己球队的成功非常有希望和乐观。这种心态在竞争环境中至关重要,因为它可以提升士气和表现。
-
-
www.biorxiv.org www.biorxiv.org
-
Author response:
Public Review
Joint Public Review:
This manuscript presents an algorithm for identifying network topologies that exhibit a desired qualitative behaviour, with a particular focus on oscillations. The approach is first demonstrated on 3-node networks, where results can be validated through exhaustive search, and then extended to 5-node networks, where the search space becomes intractable. Network topologies are represented as directed graphs, and their dynamical behaviour is classified using stochastic simulations based on the Gillespie algorithm. To efficiently explore the large design space, the authors employ reinforcement learning via Monte Carlo Tree Search (MCTS), framing circuit design as a sequential decision-making process.
This work meaningfully extends the range of systems that can be explored in silico to uncover non-linear dynamics and represents a valuable methodological advance for the fields of systems and synthetic biology.
Strengths
The evidence presented is strong and compelling. The authors validate their results for 3-node networks through exhaustive search, and the findings for 5-node networks are consistent with previously reported motifs, lending credibility to the approach. The use of reinforcement learning to navigate the vast space of possible topologies is both original and effective, and represents a novel contribution to the field. The algorithm demonstrates convincing efficiency, and the ability to identify robust oscillatory topologies is particularly valuable. Expanding the scale of systems that can be systematically explored in silico marks a significant advance for the study of complex gene regulatory networks.
Weaknesses
The principal weakness of the manuscript lies in the interpretation of biological robustness. The authors identify network topologies that sustain oscillatory behaviour despite perturbations to the system or parameters. However, in many cases, this persistence is due to the presence of partially redundant oscillatory motifs within the network. While this observation is interesting and of clear value for circuit design, framing it as evidence of evolutionary robustness may be misleading. The "mutant" systems frequently exhibit altered oscillatory properties, such as changes in frequency or amplitude. From a functional cellular perspective, mere oscillation is insufficient - preservation of specific oscillation characteristics is often essential. This is particularly true in systems like circadian clocks, where misalignment with environmental cycles can have deleterious effects. Robustness, from an evolutionary standpoint, should therefore be framed as the capacity to maintain the functional phenotype, not merely the qualitative behaviour.
A secondary limitation is that, despite the methodological advances, the scale of the systems explored remains modest. While moving from 3- to 5-node systems is non-trivial, five elements still represent a relatively small network. It is somewhat surprising that the algorithm does not scale further, particularly when considering the performance of MCTS in other domains - for instance, modern chess engines routinely explore far larger decision trees. A discussion on current performance bottlenecks and potential avenues for improving scalability would be valuable.
Finally, it is worth noting that the emergence of oscillations in a model often depends not only on the topology but also critically on parameter choices and the nature of the nonlinearities. The use of Hill functions and high Hill coefficients is a common strategy to induce oscillatory dynamics. Thus, the reported results should be interpreted within the context of the modelling assumptions and parameter regimes employed in the simulations.
We thank the reviewers for their careful consideration of our work and for the interesting feedback and scientific discussion. We are working on a revised text based on their recommendations, which will include some of the discussion below.
This work meaningfully extends the range of systems that can be explored in silico to uncover non-linear dynamics and represents a valuable methodological advance for the fields of systems and synthetic biology.
We thank the reviewers for their positive assessment of our work’s impact!
The use of reinforcement learning to navigate the vast space of possible topologies is both original and effective, and represents a novel contribution to the field. The algorithm demonstrates convincing efficiency, and the ability to identify robust oscillatory topologies is particularly valuable. Expanding the scale of systems that can be systematically explored in silico marks a significant advance for the study of complex gene regulatory networks.
We appreciate these kind comments about our work’s merits. We are excited to share our reinforcement learning (RL) based method with the fields of systems and synthetic biology, and we consider it a valuable tool for the systematic analysis and design of larger-scale regulatory networks!
The principal weakness of the manuscript lies in the interpretation of biological robustness. The authors identify network topologies that sustain oscillatory behaviour despite perturbations to the system or parameters… [However, these] "mutant" systems frequently exhibit altered oscillatory properties, such as changes in frequency or amplitude. From a functional cellular perspective, mere oscillation is insufficient - preservation of specific oscillation characteristics is often essential. This is particularly true in systems like circadian clocks, where misalignment with environmental cycles can have deleterious effects. Robustness, from an evolutionary standpoint, should therefore be framed as the capacity to maintain the functional phenotype, not merely the qualitative behaviour.
We thank the reviewers for their attention to this point. In the large-scale circuit search, summarized in Figures 4A and 4B, we ran a search for 5-component oscillators that can spontaneously oscillate even when subjected to the deletion of a random gene. Some of the best performing circuits under these conditions exhibited a design feature we call “motif multiplexing,” in which multiple smaller motifs are interleaved in a way that makes oscillation possible under many different mutational scenarios. Interestingly, despite not selecting for preservation of frequency, the 3Ai+3Rep circuit (a 5-gene circuit highlighted in Figure 5) anecdotally appears to have a natural frequency that is robust to partial gene knockdowns, although not to complete gene deletions. As shown in Figure 5C, this circuit has a natural frequency of 6 cycles/hr (with one particular parameterization), and it can sustain a knockdown of any of its 5 genes to 50% of the wild-type transcription rate without altering the natural frequency by more than 20%.
However, we agree that there are salient differences between this training scenario and natural evolution. The revised text will clarify that these differences limit what conclusions can be drawn about biological evolution by analogy. As the reviewers point out, we use the presence of spontaneous oscillations (with or without the deletion) as a measure of fitness, regardless of frequency, so as to screen for designs with promising behavior. Also, the deletion mutations introduced during training likely represent larger perturbations to the system than a typical mutation encountered during genome replication (for example, a point mutation in a response element leading to a moderate change in binding affinity). Finally, we do not introduce any entrainment. Real circadian oscillators are aligned to a 24-hour period (“entrained”) by environmental inputs such as light and temperature. For this reason, natural circadian clocks may have natural frequencies that are slightly shorter or longer than 24 hours, although a close proximity to the 24-hour period does seem to be an important selective factor [1].
...despite the methodological advances, the scale of the systems explored remains modest. While moving from 3- to 5-node systems is non-trivial, five elements still represent a relatively small network. It is somewhat surprising that the algorithm does not scale further, particularly when considering the performance of MCTS in other domains - for instance, modern chess engines routinely explore far larger decision trees. A discussion on current performance bottlenecks and potential avenues for improving scalability would be valuable.
We thank the reviewers for their attention to this point. The main limitation we encountered to exploring circuits with more than 5 nodes in this work was the poor computational scaling of the Gillespie stochastic simulation algorithm, rather than a limitation of MCTS itself. While the average runtime of a 3-node circuit simulation was roughly 7 seconds, this number increased to 18-20 seconds with 5-node circuits. For this reason, we limited the search to topologies with ≤15 interaction arrows (15 sec/simulation). In general, the simulation time was proportional to the square of the number of transcription factors (TFs). We will revise the text to include the reason for stopping at 5 nodes, which is significant for understanding CircuiTree’s scaling properties.
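For readers unfamiliar with why the stochastic simulations dominate the cost, the sketch below shows the core loop of a generic direct-method Gillespie simulation in Python. It is an illustration of the algorithm, not our implementation, and the toy birth-death reactions and rate constants are invented; the point is that every step recomputes all reaction propensities and advances only one reaction event at a time, which is why runtime grows quickly as circuits gain TFs and reaction channels.

```python
import numpy as np

def gillespie_direct(x0, stoich, propensities, t_max, rng=None):
    """Generic direct-method Gillespie SSA (illustrative, not CircuiTree's code).
    x0: initial copy numbers; stoich: (n_reactions, n_species) state changes;
    propensities: function mapping the state to an array of reaction rates."""
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_max:
        a = propensities(x)
        a_total = a.sum()
        if a_total <= 0:                       # no reaction can fire
            break
        t += rng.exponential(1.0 / a_total)    # waiting time to the next event
        j = rng.choice(len(a), p=a / a_total)  # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Toy birth-death example (rates are illustrative, not the paper's parameters):
k_birth, k_death = 10.0, 0.1
stoich = np.array([[+1], [-1]])
props = lambda x: np.array([k_birth, k_death * x[0]])
t, traj = gillespie_direct([0], stoich, props, t_max=100.0)
```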
With regards to scaling, an important advantage of CircuiTree is its ability to generate useful candidate designs after exploring only a portion of the search space. Like exhaustive search, given enough time, MCTS will comprehensively explore the search space and find all possible solutions. However, for large search spaces, RL-based agents are generally given a finite number of simulations (or time) to learn as much as possible.
Across machine learning (ML) applications [2] and particularly with RL models [3], this training time tends to obey a power law with respect to the underlying complexity of the problem. We can therefore use the complexity of the 3-node and 5-node searches to infer the current scaling limits of CircuiTree. The first oscillator topology was discovered after 2,280 simulations in the 3-node search, and the first oscillator using all 5 nodes appeared after ~8e5 simulations in the 5-node search, yielding a power law of Y ~ 84.4 X<sup>0.333</sup>, where X is the number of topologies in the search space and Y is the number of simulations needed to find the first oscillator. Extrapolating, useful candidate designs may be found for 6-node and 7-node searches after 4.5e7 and 5.26e9 simulations, respectively, even though these spaces contain 1.5e17 and 2.5e23 topologies, respectively. Running a 7-node search with the current implementation of CircuiTree would therefore require resources close to the current boundaries of computation: roughly 1.8 million CPU-hours, or 2 weeks on 5,000 CPUs, assuming a 1-second simulation. These points will be incorporated into both the results and discussion sections in our revised text.
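As a sanity check, the extrapolated numbers follow directly from the fitted power law; a few lines of Python reproduce them (the search-space sizes are the ones quoted in this response):

```python
# Simulations to the first oscillator from the fitted power law
# Y ~ 84.4 * X**0.333, with X the number of topologies in the search space.
def sims_to_first_oscillator(n_topologies, a=84.4, b=0.333):
    return a * n_topologies ** b

for label, size in [("6-node", 1.5e17), ("7-node", 2.5e23)]:
    print(label, f"{sims_to_first_oscillator(size):.2e}")
# Prints roughly 4.4e7 and 5.2e9; the small differences from the 4.5e7 and
# 5.26e9 quoted above come from rounding the fitted exponent.
```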
However, we are optimistic about CircuiTree’s potential to scale to much larger circuits with modifications to its algorithm. CircuiTree uses the original (so-called “vanilla”) implementation of MCTS, which has not been used in professional game-playing AIs in over a decade. Contemporary RL-based game-playing engines leverage deep neural networks to dramatically reduce the training time, using value networks to identify game-winning positions and policy networks to find game-winning moves. AlphaZero, developed by Google DeepMind to learn games by self-play and without domain knowledge, outperformed all other chess AIs after 44 million training games, much smaller than the 10<sup>43</sup> possible chess states [4]. Similarly, the game of Go has 10<sup>170</sup> possible states, but AlphaZero outperformed other AIs after only 140 million games [4]. Large circuits live in similarly large search spaces; for example, 19-node and 20-node circuits represent spaces of 10<sup>172</sup> and 10<sup>190</sup> possible topologies. The revised text will include this discussion and identify value and policy networks, as well as more scalable simulation paradigms such as ODEs and neural ODEs, as our future directions for improving CircuiTree’s scalability.
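For context, "vanilla" MCTS selects which branch of the tree to descend using only accumulated visit statistics, most commonly via the UCT rule (a standard textbook formulation, not a quotation from our manuscript):

$$a^{*} = \arg\max_{a}\left[\bar{Q}(s,a) + c\sqrt{\frac{\ln N(s)}{N(s,a)}}\right],$$

where $\bar{Q}(s,a)$ is the mean reward observed after taking action $a$ in state $s$, $N(\cdot)$ are visit counts, and $c$ balances exploitation against exploration. Neural-guided variants such as AlphaZero bias this selection with a learned policy prior and evaluate leaf positions with a learned value network instead of random rollouts, which is a large part of why they can search such enormous spaces efficiently.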
Finally, our revised discussion will note some important differences between game-playing and biological circuit design. Unlike deterministic games like chess, the final value of a circuit topology is determined stochastically, by running a simulation whose fitness depends on the parameter set and initial conditions. Thus, state-for-state, it is possible that training an agent for circuit design may inherently require more simulations to achieve the same level of certainty compared to classical games. Additionally, while we often possess a priori knowledge about a game such as its overall difficulty or certain known strategies, we lack this frame of reference when searching for circuit designs. Thus, it remains challenging to know if and when a large space of designs has been “satisfactorily” or “comprehensively” searched, since the answer depends on data that are unknown, namely the quantity, quality, and location of solutions residing in the search space.
Finally, it is worth noting that the emergence of oscillations in a model often depends not only on the topology but also critically on parameter choices and the nature of the nonlinearities. The use of Hill functions and high Hill coefficients is a common strategy to induce oscillatory dynamics. Thus, the reported results should be interpreted within the context of the modelling assumptions and parameter regimes employed in the simulations.
In our dynamical modeling of transcription factor (TF) networks, we do not rely on continuum assumptions about promoter occupancy such as Hill functions. Rather, we model each reaction - transcription, translation, TF binding/unbinding, and degradation - explicitly, and individual molecules appear and disappear via stochastic birth and death events. Many natural TFs are homodimers that bind cooperatively to regulate transcription; similarly, we assume that pairs of TFs bind more stably to their response element than individual TFs. Thus, our model has similar cooperativity to a Hill function, and it can be shown that in the continuum limit, the effective Hill coefficient is always ≤2. Our revision will clarify this aspect of the modeling and include a derivation of this property. Currently, the parameter values used in the figures are shown in Table 2. In the revised text, these will be displayed in the body of the text as well for clarity.
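As a minimal sketch of why this bound arises, assume (for illustration only; our explicit reaction scheme also includes transcription, translation, and degradation steps) that a promoter drives expression only when occupied by two TF copies, bound with dissociation constants $K_{1}$ and $K_{2}$. In the continuum limit the active fraction is

$$f(x) = \frac{x^{2}/(K_{1}K_{2})}{1 + x/K_{1} + x^{2}/(K_{1}K_{2})},$$

where $x$ is the TF concentration, and the effective Hill coefficient is

$$n_{H} = \frac{d\ln\bigl(f/(1-f)\bigr)}{d\ln x} = 2 - \frac{x}{K_{1}+x},$$

which always lies between 1 and 2 and approaches 2 only when half-maximal activation occurs at concentrations well below $K_{1}$, i.e., when the second binding event is much tighter than the first (strong cooperativity).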
Bibliography
(1) Spoelstra, K., Wikelski, M., Daan, S., Loudon, A. S. I., & Hau, M. (2015). Natural selection against a circadian clock gene mutation in mice. PNAS, 113(3), 686–691. https://doi.org/10.1073/pnas.1516442113
(2) Neumann, O., & Gros, C. (2023). Scaling Laws for a Multi-Agent Reinforcement Learning Model. The Eleventh International Conference on Learning Representations. https://openreview.net/forum?id=ZrEbzL9eQ3W
(3) Jones, A. L. (2021). Scaling Scaling Laws with Board Games. arXiv [cs.LG]. http://arxiv.org/abs/2104.03113
(4) Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., & Hassabis, D. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419), 1140–1144. https://doi.org/10.1126/science.aar6404
-
-
cutedressup.com cutedressup.com
-
Play Ragdoll Archer Unblocked online for free! Aim, shoot, and defeat your enemies in this fun, physics-based archery game. Test your skills, enjoy endless action, and challenge yourself with ragdoll archers anytime, anywhere — no download needed!
-
-
cutedressup.com cutedressup.com
-
Play Urban Basketball online for free! Experience fast-paced street basketball action in this unblocked game. Show off your skills, shoot hoops, and compete for victory in exciting urban courts anytime, anywhere — no downloads needed!
-
-
cutedressup.com cutedressup.com
-
Aim, shoot, and crack your way to victory in this fun-filled Egg Shooting Game! Test your precision as you target colorful eggs, clear levels, and beat high scores. Simple controls, addictive gameplay — perfect for players of all ages!
-
-
cutedressup.com cutedressup.com
-
Experience the thrill of the Table Tennis World Tour, where top players from around the globe compete for glory! Witness fast-paced rallies, powerful smashes, and incredible skill in this electrifying international table tennis showdown online game.
-
-
cutedressup.com cutedressup.com
-
Low Adventure 2 is a fun-packed indie game where you explore quirky lands, solve simple puzzles, and dodge cheeky enemies. A lighthearted quest perfect for casual gamers seeking charm, humor, and easygoing action in a pixelated world.
-
-
www.reddit.com www.reddit.com
-
reply to u/TypewriterJustice at https://old.reddit.com/r/typewriters/comments/1lbjr5f/sorry_to_say/mxunsb6/
I think the real crime here was the quote of $200 for all this work. $200 should just barely cover the recovered platen, rollers, and new feet with any margin. The full clean, oil, and adjust is a few more hours at $40-75/hour and that's not even getting to the parts or labor on the tougher troubleshooting and repair portions. With this rough diagnosis and potential issues, I (and many others I'm sure) would be quoting closer to $500-600 for a refurbish job at this level.
Living in the LA area, I'm blessed to have 7 shops within a reasonable drive, but if I put a machine into the queue at most of them it'd be a two or three months' wait time at the very best. Most of them have been at the game for decades, to say nothing of also being in the midst of recently setting up a brick and mortar shop.
As a point of comparison, Lucas Dul publishes his wait list on his website (currently 84 people) where he states "Average repair cost is $300-350 for general cleaning, service, and minor repairs. Average turnaround time is 2-3 weeks." Perhaps Charlie might benefit from creating a wait list and not taking machines into the shop until his time and attention can turn directly to them?
It's not often addressed here in this forum how much one should expect to either pay or wait for repair services, which aren't evenly distributed across the United States and are likely even less so in many other countries. In the broader scheme of things, I think that you get a far better deal at professional shops than you're going to find within the broader public of so-called typewriter sellers (antique shops, thrift stores, etc.)
As a point of reference, I'm an advanced hobbyist with my own garage-based shop for my personal collection and even I get one or two queries a week about repairing or restoring the machines of others, so I'm at least reasonably aware of what some of the wait times can look like. I wish I had the time or stock of parts machines to do more than a handful of friends and family repairs on top of my own personal repair work.
Sadly, at the end of the day, it sounds like both sides were potentially not good at communicating expectations about how long repairs would take. If nothing else we should all be sharing more details about these issues to help level set how this all works for the broader typewriter community.
-
-
medium.com medium.com
-
research is about the long game
Indeed—which should really cast the norms around scientific computing in stark light as being clearly the wrong way to go about doing things. (Which itself isn't to say that there's anything special about scientific computing here—there are plenty of programmers (working on open source or otherwise) that get things just as wrong. Most of them, even. It's virtually everyone.)
-
-
cutedressup.com cutedressup.com
-
Play Basketball Legends 2020 Unblocked, the ultimate 2-player sports game where you team up with iconic basketball stars. Show off your skills, dunk, block, and score big in thrilling matches you can enjoy anytime, anywhere — no restrictions!
-
-
cutedressup.com cutedressup.com
-
Subway Surfers Classic is an endless runner game where you dash through vibrant city subways, dodging trains and obstacles. Collect coins, power-ups, and unlock characters as you race to escape the grumpy inspector and his dog. Non-stop, thrilling fun!
-
-
cutedressup.com cutedressup.com
-
Sandtrix is a captivating block-dropping puzzle game where players arrange falling sand blocks to clear lines and score points. With vibrant visuals and addictive gameplay, it challenges your reflexes and strategy skills in every fast-paced round.
-
-
www.reddit.com www.reddit.com
-
Example of using a typewriter and a roll of cash register paper to score a baseball game.
-
-
cutedressup.com cutedressup.com
-
Play Football Bros Unblocked, an exciting online multiplayer game where you team up with your friends and score epic goals! Fast-paced matches, smooth controls, and endless fun await — challenge your bros anytime, anywhere, unblocked!
-
-
cutedressup.com cutedressup.com
-
Enjoy fun, creative girl games! Play dress-up, makeup, cooking, and adventure games. Design fashion, decorate rooms, or solve puzzles. Perfect for kids who love cute, colorful, and exciting challenges
-
-
cutedressup.com cutedressup.com
-
Cross Stitch Masters is a relaxing and creative embroidery game where you craft beautiful patterns by stitching colorful threads. Unwind, follow designs, and bring intricate images to life one stitch at a time in this soothing crafting experience.
-
-
-
Play Backyard Baseball Unblocked online — relive the classic childhood game with Pablo Sanchez and the gang! Swing for the fences, build your dream team, and enjoy nostalgic baseball fun right from your browser. No downloads, no limits! ⚾
-
-
www.biorxiv.org www.biorxiv.org
-
Reviewer #3 (Public review):
Summary:
The study investigates the development of reinforcement learning across the lifespan with a large sample of participants recruited for an online game. It finds that children gradually develop their abilities to learn reward probability, possibly hindered by their immature spatial processing and probabilistic reasoning abilities. Motor noise and exploration after a failure both contribute to children's subpar performance.
Strengths:
Experimental manipulations of both the continuity of movement options and the probabilistic nature of the reward function enable the inference of what cognitive factors differ between age groups.
A large sample of participants is studied.
The model-based analysis provides further insights into the development of reinforcement learning ability.
Weaknesses:
The conclusion that immature spatial processing and probabilistic reasoning abilities limit reinforcement learning here still needs more direct evidence.
-
Author response:
The following is the authors’ response to the original reviews
Overview of changes in the revision
We thank the reviewers for the very helpful comments and have extensively revised the paper. We provide point-by-point responses below and here briefly highlight the major changes:
(1) We expanded the discussion of the relevant literature in children and adults.
(2) We improved the contextualization of our experimental design within previous reinforcement studies in both cognitive and motor domains highlighting the interplay between the two.
(3) We reorganized the primary and supplementary results to better communicate the findings of the studies.
(4) The modeling has been significantly revised and extended. We now formally compare 31 noise-based models and one value-based model and this led to a different model from the original being the preferred model. This has to a large extent cleaned up the modeling results. The preferred model is a special case (with no exploration after success) of the model proposed in Therrien et al. (2018). We also provide examples of individual fits of the model, fit all four tasks and show group fits for all, examine fits vs. data for the clamp phases by age, provide measures of relative and absolute goodness of fit, and examine how the optimal level of exploration varies with motor noise.
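For readers unfamiliar with this class of model, the sketch below illustrates the trial-to-trial structure of the preferred model as described above (exploration noise added only after a failed trial, with the aim updated toward rewarded reaches). It is an illustrative paraphrase with invented parameter values, not our fitted code; see Therrien et al. (2018) for the full formulation.

```python
import numpy as np

def simulate_reach_learning(n_trials, reward_zone, sigma_motor, sigma_explore,
                            learning_rate=1.0, aim0=0.0, rng=None):
    """Toy trial-by-trial simulation of reward-based reach adjustment.
    Assumed structure (illustrative only): exploration noise is added only
    after a failed trial, and the aim moves toward the executed reach only
    after a rewarded trial."""
    rng = rng or np.random.default_rng()
    aim, rewarded_last = aim0, True
    reaches, rewards = [], []
    for _ in range(n_trials):
        explore = 0.0 if rewarded_last else rng.normal(0.0, sigma_explore)
        reach = aim + explore + rng.normal(0.0, sigma_motor)
        rewarded = reward_zone[0] <= reach <= reward_zone[1]
        if rewarded:
            # keep what worked: shift the aim toward this successful reach
            aim += learning_rate * (reach - aim)
        reaches.append(reach)
        rewards.append(rewarded)
        rewarded_last = rewarded
    return np.array(reaches), np.array(rewards)

# Example: a reward zone displaced from the initial aim, as in a learning block
reaches, rewards = simulate_reach_learning(
    n_trials=200, reward_zone=(2.0, 4.0), sigma_motor=0.5, sigma_explore=1.0)
```

Under this structure, trials in a success clamp (every reach treated as rewarded) reflect motor noise alone, whereas a failure clamp adds exploration variability on every trial, which is the logic behind the clamp phases discussed below.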
Reviewer #1 (Public review):
Summary:
Here the authors address how reinforcement-based sensorimotor adaptation changes throughout development. To address this question, they collected data from many participants whose ages ranged from young children (3 years old) to adulthood (18+ years old). The authors used four experiments to manipulate whether binary, positive reinforcement was provided probabilistically (e.g., 30 or 50%) versus deterministically (e.g., 100%), and whether the movement options were continuous (infinite possible locations) versus discrete (binned possible locations) when the probability of reinforcement varied along the span of a large redundant target. The authors found that both movement variability and the extent of adaptation changed with age.
Thank you for reviewing our work. One note of clarification: this work focuses on reinforcement-based learning throughout development but does not evaluate sensorimotor adaptation. The four tasks presented in this work are completed with veridical trajectory feedback (no perturbation).
The goal is to understand how children at different ages adjust their movements in response to reward feedback; the tasks do not evaluate sensorimotor adaptation. We now explain this distinction on line 35.
Strengths:
The major strength of the paper is the number of participants collected (n = 385). The authors also answer their primary question, that reinforcement-based sensorimotor adaptation changes throughout development, which was shown by utilizing established experimental designs and computational modelling.
Thank you.
Weaknesses:
Potential concerns involve inconsistent findings with secondary analyses, current assumptions that impact both interpretation and computational modelling, and a lack of clearly stated hypotheses.
(1) Multiple regression and Mediation Analyses.
The challenge with these secondary analyses is that:
(a) The results are inconsistent between Experiments 1 and 2, and the analysis was not performed for Experiments 3 and 4,
(b) The authors used a two-stage procedure of using multiple regression to determine what variables to use for the mediation analysis, and
(c) The authors already have a trial-by-trial model that is arguably more insightful.
Given this, some suggested changes are to:
(a) Perform the mediation analysis with all the possible variables (i.e., not informed by multiple regression) to see if the results are consistent.
(b) Move the regression/mediation analysis to Supplementary, since it is slightly distracting given current inconsistencies and that the trial-by-trial model is arguably more insightful.
Based on these comments, we have chosen to remove the multiple regression and mediation analyses. We agree that they were distracting and that the trial-by-trial model allows for differentiation of motor noise from exploration variability in the learning block.
(2) Variability for different phases and model assumptions:
A nice feature of the experimental design is the use of success and failure clamps. These clamped phases, along with baseline, are useful because they can provide insights into the partitioning of motor and exploratory noise. Based on the assumptions of the model, the success clamp would only reflect variability due to motor noise (excludes variability due to exploratory noise and any variability due to updates in reach aim). Thus, it is reasonable to expect that the success clamps would have lower variability than the failure clamps (which it obviously does in Figure 6), and presumably baseline (which provides success and failure feedback, thus would contain motor noise and likely some exploratory noise).
However, in Figure 6, one visually observes greater variability during the success clamp (where it is assumed variability only comes from motor noise) compared to baseline, where variability would come from:
(a) motor noise,
(b) likely some exploratory noise since there were some failures, and
(c) updates in reach aim.
Thanks for this comment. It made us realize that some of our terminology was unintentionally misleading. Reaching to discrete targets in the Baseline block was done to a) determine if participants could move successfully to targets that are the same width as the 100% reward zone in the continuous targets and b) determine if there are age dependent changes in movement precision. We now realize that the term Baseline Variability was misleading and should really be called Baseline Precision.
This is an important distinction that bears on this reviewer's comment. In clamp trials, participants move to continuous targets. In baseline, participants move to discrete targets presented at different locations. Clamp Variability cannot be directly compared to Baseline Precision because they are qualitatively different. Since the target changes on each baseline trial, we would not expect updating of desired reach (the target is the desired reach) and there is therefore no updating of reach based on success or failure. The SD we calculate over baseline trials is the endpoint variability of the reach locations relative to the target centers. In success clamp, there are no targets so the task is qualitatively different.
We have updated the text to clarify terminology, expand upon our operational definitions, and motivate the distinct role of the baseline block in our task paradigm (line 674).
Given the comment above, can the authors please:
(a) Statistically compare movement variability between the baseline, success clamp, and failure clamp phases.
Given our explanation in the previous point we don't think that comparing baseline to the clamp makes sense as the trials are qualitatively different.
(b) The authors have examined how their model predicts variability during success clamps and failure clamps, but can they also please show predictions for baseline (similar to that of Cashaback et al., 2019; Supplementary B, which alternatively used a no feedback baseline)?
Again, we do not think it makes sense to predict the baseline which as we mention above has discrete targets compared to the continuous targets in the learning phase.
(c) Can the authors show whether participants updated their aim towards their last successful reach during the success clamp? This would be a particularly insightful analysis of model assumptions.
We have now compared 31 models (see full details in the next response), which include the 7 models in Roth et al. (2023). Several of these model variants have updating even after success (with so-called planning noise). We also now fit the model to the data that includes the clamp phases (we can't easily fit to the success clamp alone as there are only 10 trials). We find that the preferred model is one that does not include updating after success.
(d) Different sources of movement variability have been proposed in the literature, as have different related models. One possibility is that the nervous system has knowledge of 'planned (noise)' movement variability that is always present, irrespective of success (van Beers, R.J. (2009). Motor learning is optimally tuned to the properties of motor noise. Neuron, 63(3), 406-417). The authors have used slightly different variations of their model in the past. Roth et al. (2023) directly compared several different plausible models with various combinations of motor, planned, and exploratory noise (Roth A, 2023, "Reinforcement-based processes actively regulate motor exploration along redundant solution manifolds." Proceedings of the Royal Society B 290: 20231475: see Supplemental). Their best-fit model seems similar to the one the authors propose here, but the current paper has the added benefit of the success and failure clamps to tease the different potential models apart. In light of the results of a), b), and c), the authors are encouraged to provide a paragraph on how their model relates to the various sources of movement variability and other models proposed in the literature.
Thank you for this. We realized that the models presented in Roth et al. (2023) as well as in other papers, are all special cases of a more general model. Moreover, in total there are 30 possible variants of the full model so we have now fit all 31 models to our larger datasets and performed model selection (Results and Methods). All the models can be efficiently fit by Kalman smoother to the actual data (rather than to summary statistics which has sometimes been done). For model selection, we fit only the 100 learning trials and chose the preferred model based on BIC on the children's data (Figure 5—figure Supplement 1). After selecting the preferred model we then refit this model to all trials including the clamps so as to obtain the best parameter estimates.
The preferred model was the same whether we combined the continuous and discrete probabilistic data or examined each task separately, either for only the children or for the children and adults combined. The preferred model is a special case (no exploration after success) of the one proposed in Therrien et al. (2018): it has exploration variability (after failure) and motor noise, with full updating of the desired reach after success. This model differs from the model in the original submission, which included a partial update of the desired reach after exploration; this partial update was considered the learning rate. The current model suggests a unity learning rate.
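To make the structure of this preferred model concrete, below is a minimal simulation sketch. It is our own illustrative reconstruction rather than the fitting code: a desired reach is corrupted by Gaussian motor noise on every trial, zero-mean Gaussian exploration is added only after a failed trial, and the desired reach is fully updated (unity learning rate) to the last rewarded reach. The parameter values, workspace units, and reward rule are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_trials=100, sigma_motor=0.05, sigma_explore=0.10,
             target=0.7, reward_zone=0.05, p_reward=1.0):
    """Simulate one participant; reach positions are in normalized workspace units (assumed)."""
    desired = 0.5                  # start by reaching to the center of the workspace
    rewarded_last = True           # no exploration before the first trial
    reaches, rewards = [], []
    for _ in range(n_trials):
        # exploration perturbs the planned reach only after a failed trial
        explore = 0.0 if rewarded_last else rng.normal(0.0, sigma_explore)
        # motor noise corrupts every reach
        reach = desired + explore + rng.normal(0.0, sigma_motor)
        hit = abs(reach - target) < reward_zone and rng.random() < p_reward
        if hit:
            desired = reach        # full update toward the rewarded reach (learning rate = 1)
        rewarded_last = hit
        reaches.append(reach)
        rewards.append(hit)
    return np.array(reaches), np.array(rewards)

reaches, rewards = simulate()
print(f"final distance from target: {abs(reaches[-1] - 0.7):.3f}, reward rate: {rewards.mean():.2f}")
```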
In addition, as suggested by another reviewer, we also fit a value-based model which we adapted from the model described in Giron et al. (2023). This model was not preferred.
We have added a paragraph to the Discussion highlighting different sources of variability and links to our model comparison.
(e) line 155. Why would the success clamp be composed of both motor and exploratory noise? Please clarify in the text.
This sentence was written to refer to clamps in general and not just success clamps. However, in the revision this sentence seemed unnecessary so we have removed it.
(3) Hypotheses:
The introduction did not have any hypotheses about development and reinforcement, despite the discussion above setting up potential hypotheses. Did the authors have any hypotheses related to why they might expect age to change motor noise, exploratory noise, and learning rates? If so, what would the experimental behaviour look like to confirm these hypotheses? Currently, the manuscript reads more as an exploratory study, which is certainly fine if true; it should just be explicitly stated in the introduction. Note: on line 144, this is a prediction, not a hypothesis. Line 225: this idea could be sharpened. I believe the authors are speaking to the idea of having more explicit knowledge of action-target pairings changing behaviour.
We have included our hypotheses and predictions at two points in the paper. In the introduction we modified the text to:
"We hypothesized that children's reinforcement learning abilities would improve with age, and depend on the developmental trajectory of exploration variability, learning rate (how much people adjust their reach after success), and motor noise (here defined as all sources of noise associated with movement, including sensory noise, memory noise, and motor noise). We think that these factors depend on the developmental progression of neural circuits that contribute to reinforcement learning abilities (Raznahan et al., 2014; Nelson et al., 2000; Schultz, 1998)."
In results we modified the sentence to:
"We predicted that discrete targets could increase exploration by encouraging children to move to a different target after failure.”
Reviewer #2 (Public review):
Summary:
In this study, Hill and colleagues use a novel reinforcement-based motor learning task ("RML"), asking how aspects of RML change over the course of development from toddler years through adolescence. Multiple versions of the RML task were used in different samples, which varied on two dimensions: whether the reward probability of a given hand movement direction was deterministic or probabilistic, and whether the solution space had continuous reach targets or discrete reach targets. Using analyses of both raw behavioral data and model fits, the authors report four main results: First, developmental improvements reflected 3 clear changes, including increases in exploration, an increase in the RL learning rate, and a reduction of intrinsic motor noise. Second, changes to the task that made it discrete and/or deterministic both rescued performance in the youngest age groups, suggesting that observed deficits could be linked to continuous/probabilistic learning settings. Overall, the results shed light on how RML changes throughout human development, and the modeling characterizes the specific learning deficits seen in the youngest ages.
Strengths:
(1) This impressive work addresses an understudied subfield of motor control/psychology - the developmental trajectory of motor learning. It is thus timely and will interest many researchers.
(2) The task, analysis, and modeling methods are very strong. The empirical findings are rather clear and compelling, and the analysis approaches are convincing. Thus, at the empirical level, this study has very few weaknesses.
(3) The large sample sizes and in-lab replications further reflect the laudable rigor of the study.
(4) The main and supplemental figures are clear and concise.
Thank you.
Weaknesses:
(1) Framing.
One weakness of the current paper is the framing, namely w/r/t what can be considered "cognitive" versus "non-cognitive" ("procedural?") here. In the Intro, for example, it is stated that there are specific features of RML tasks that deviate from cognitive tasks. This is of course true in terms of having a continuous choice space and motor noise, but spatially correlated reward functions are not a unique feature of motor learning (see e.g. Giron et al., 2023, NHB). Given the result here that simplifying the spatial memory demands of the task greatly improved learning for the youngest cohort, it is hard to say whether the task is truly getting at a motor learning process or more generic cognitive capacities for spatial learning, working memory, and hypothesis testing. This is not a logical problem with the design, as spatial reasoning and working memory are intrinsically tied to motor learning. However, I think the framing of the study could be revised to focus in on what the authors truly think is motor about the task versus more general psychological mechanisms. Indeed, it may be the case that deficits in motor learning in young children are mostly about cognitive factors, which is still an interesting result!
Thank you for these comments on the framing of our study. We now clearly acknowledge that all motor tasks have cognitive components (new paragraph at line 65). We also explain why we think our task has features not present in typical cognitive tasks.
(2) Links to other scholarship.
If I'm not mistaken, a common observation in studies of the development of reinforcement learning is a decrease in exploration over development (e.g., Nussenbaum and Hartley, 2019; Giron et al., 2023; Schulz et al., 2019); this contrasts with the current results which instead show an increase. It would be nice to see a more direct discussion of previous findings showing decreases in exploration over development, and why the current study deviates from that. It could also be useful for the authors to bring in concepts of different types of exploration (e.g. "directed" vs "random"), in their interpretations and potentially in their modeling.
We recognize that our results differ from prior work. The optimal exploration pattern differs from task to task. We now discuss that exploration is not one-size-fits-all; its benefits vary depending upon the task. We have added the following paragraphs to the Discussion section:
"One major finding from this study is that exploration variability increases with age. Some other studies of development have shown that exploration can decrease with age indicating that adults explore less compared to children (Schulz et al., 2019; Meder et al., 2021; Giron et al., 2023). We believe the divergence between our work and these previous findings is largely due to the experimental design of our study and the role of motor noise. In the paradigm used initially by Schulz et al. (2019) and replicated in different age groups by Meder et al. (2021) and Giron et al. (2023), participants push buttons on a two-dimensional grid to reveal continuous-valued rewards that are spatially correlated. Participants are unaware that there is a maximum reward available and therefore children may continue to explore to reduce uncertainty if they have difficulty evaluating whether they have reached a maxima. In our task by contrast, participants are given binary reward and told that there is a region in which reaches will always be rewarded. Motor noise is an additional factor which plays a key role in our reaching task but minimal if any role in the discretized grid task. As we show in simulations of our task, as motor noise goes down (as it is known to do through development) the optimal amount of exploration goes up (see Figure 7—figure Supplement 2 and Appendix 1). Therefore, the behavior of our participants is rational in terms of R230 increasing exploration as motor noise decreases.
A key result in our study is that exploration in our task reflects sensitivity to failure. Older children make larger adjustments after failure compared to younger children to find the highly rewarded zone more quickly. Dhawale et al. (2017) discuss the different contexts in which a participant may explore versus exploit (i.e., stick at the same position). Exploration is beneficial when reward is low as this indicates that the current solution is no longer ideal, and the participant should search for a better solution. Konrad et al. (2025) have recently shown this behavior in a real-world throwing task where 6 to 12 year old children increased throwing variability after missed trials and minimized variability after successful trials. This has also been shown in a postural motor control task where participants were more variable after non-rewarded trials compared to rewarded trials (Van Mastrigt et al., 2020). In general, these studies suggest that the optimal amount of exploration is dependent on the specifics of the task."
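As a rough illustration of the simulation-based argument above (that the optimal amount of exploration depends on motor noise and the specifics of the task), here is a toy parameter sweep using the same trial-by-trial rule as the earlier sketch. This is not the Appendix 1 code; the grid, noise levels, trial counts, and reward rule are assumptions, and the sweep only shows how such an analysis could be set up.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_reward(sigma_motor, sigma_explore, n_sims=200, n_trials=100,
                target=0.7, reward_zone=0.05):
    """Average reward rate for one (motor noise, exploration) setting, estimated by simulation."""
    hits_total = 0
    for _ in range(n_sims):
        desired, rewarded_last = 0.5, True
        for _ in range(n_trials):
            explore = 0.0 if rewarded_last else rng.normal(0.0, sigma_explore)
            reach = desired + explore + rng.normal(0.0, sigma_motor)
            rewarded_last = abs(reach - target) < reward_zone
            if rewarded_last:
                desired = reach      # full update after success, as in the preferred model
                hits_total += 1
    return hits_total / (n_sims * n_trials)

explore_grid = np.linspace(0.02, 0.30, 8)
for sigma_motor in (0.02, 0.08):     # illustrative low vs high motor noise levels
    reward_rates = [mean_reward(sigma_motor, s) for s in explore_grid]
    best = explore_grid[int(np.argmax(reward_rates))]
    print(f"motor noise {sigma_motor:.2f}: best exploration SD in this sweep = {best:.2f}")
```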
(3) Modeling.
First, I may have missed something, but it is unclear to me if the model is actually accounting for the gradient of rewards (e.g., if I get a probabilistic reward moving at 45°, but then don't get one at 40°, I should be more likely to try 50° next then 35°). I couldn't tell from the current equations if this was the case, or if exploration was essentially "unsigned," nor if the multiple-trials-back regression analysis would truly capture signed behavior. If the model is sensitive to the gradient, it would be nice if this was more clear in the Methods. If not, it would be interesting to have a model that does "function approximation" of the task space, and see if that improves the fit or explains developmental changes.
The model we use (similar to Roth et al. (2023) and Therrien et al. (2016, 2018)) does not model the gradient. Exploration is always zero-mean Gaussian. As suggested by the reviewer, we now also fit a value-based model (described starting at line 810) which we adapted from the model presented in Giron et al. (2023). We show that the exploration and noise-based model is preferred over the value-based model.
The multiple-trials-back regression was unsigned as the intent was to look at the magnitude and not the direction of the change in movement. We have decided to remove this analysis from the manuscript as it was a source of confusion and secondary analysis that did not add substantially to the findings of these studies.
Second, I am curious if the current modeling approach could incorporate a kind of "action hysteresis" (aka perseveration), such that regardless of previous outcomes, the same action is biased to be repeated (or, based on parameter settings, avoided).
In some sense, the learning rate in the model in the original submission is highly related to perseveration. For example if the learning rate is 0, then there is complete perseveration as you simply repeat the same desired movement. If the rate is 1, there is no perseveration and values in between reflect different amounts of perseveration. Therefore, it is not easy to separate learning rate from perseveration. Adding perseveration as another parameter would likely make it and the learning unidentifiable. However, we now compare 31 models and those that have a non-unity learning rate are not preferred suggesting there is little perseveration.
(4) Psychological mechanisms. There is a line of work that shows that when children and adults perform RL tasks they use a combination of working memory and trial-by-trial incremental learning processes (e.g., Master et al., 2020; Collins and Frank 2012). Thus, the observed increase in the learning rate over development could in theory reflect improvements in instrumental learning, working memory, or both. Could it be that older participants are better at remembering their recent movements in short-term memory (Hadjiosif et al., 2023; Hillman et al., 2024)?
We agree that cognitive processes, such as working memory or visuospatial processing, play a role in our task and describe cognitive elements of our task in the introduction (new paragraph at line 65). However, the sensorimotor model we fit to the data does a good job of explaining the variation across age, which suggests that age-dependent cognitive processes probably play a smaller role.
Reviewer #3 (Public review):
Summary:
The study investigates reinforcement learning across the lifespan with a large sample of participants recruited for an online game. It finds that children gradually develop their abilities to learn reward probability, possibly hindered by their immature spatial processing and probabilistic reasoning abilities. Motor noise, reinforcement learning rate, and exploration after a failure all contribute to children's subpar performance.
Strengths:
(1) The paradigm is novel because it requires continuous movement to indicate people's choices, as opposed to discrete actions in previous studies.
(2) A large sample of participants were recruited.
(3) The model-based analysis provides further insights into the development of reinforcement learning ability.
Thank you.
Weaknesses:
(1) The adequacy of model-based analysis is questionable, given the current presentation and some inconsistency in the results.
Thank you for raising this concern. We have substantially revised the model from our first submission. We now compare 31 noise-based models and 1 value-based model and fit all of the tasks with the preferred model. We perform model selection using the two tasks with the largest datasets to identify the preferred model. From the preferred model, we found the parameter fits for each individual dataset and simulated the trial-by-trial behavior, allowing comparison between all four tasks. We now show examples of individual fits as well as provide a measure of goodness of fit. The expansion of our modeling approach has resolved inconsistencies and sharpened the conclusions drawn from our model.
(2) The task should not be labeled as reinforcement motor learning, as it is not about learning a motor skill or adapting to sensorimotor perturbations. It is a classical reinforcement learning paradigm.
We now make it clear that our reinforcement learning task has both motor and cognitive demands, but does not fall entirely within one of these domains. We use the term motor learning because it captures the fact that participants maximize reward by making different movements, corrupted by motor noise, to unmarked locations on a continuous target zone. When we look at previous publications, it is clear that our task is similar to those that also refer to this as reinforcement motor learning: Cashaback et al. (2019) (reaching task using a robotic arm in adults), Van Mastrigt et al. (2020) (weight shifting task in adults), and Konrad et al. (2025) (real-world throwing task in children). All of these tasks involve trial-by-trial learning through reinforcement to make the movement that is most effective for a given situation. We feel it is important to link our work to these previous studies and prefer to preserve the terminology of reinforcement motor learning.
Recommendations for the authors:
Reviewing Editor Comments:
Thank you for this summary. Rather than repeat the extended text from the responses to the reviewers here, we point the Editor to the appropriate reviewer responses for each issue raised.
The reviewers and editors have rated the significance of the findings in your manuscript as "Valuable" and the strength of evidence as "Solid" (see eLife evaluation). A consultancy discussion session to integrate the public reviews and recommendations per reviewer (listed below) has resulted in key recommendations for increasing the significance and strength of evidence:
To increase the Significance of the findings, please consider the following:
(1) Address and reframe the paper around whether the task is truly getting at a motor learning process or more generic cognitive decision-making capacities such as spatial memory, reward processing, and hypothesis testing.
We have revised the paper to address the comments on the framing of our work. Please see responses to the public review comments of Reviewers #2 and #3.
(2) It would be beneficial to specify the differences between traditional reinforcement algorithms (i.e., using softmax functions to explore, and build representations of state-action-reward) and the reinforcement learning models used here (i.e., explore with movement variability, update reach aim towards the last successful action), and compare present findings to previous cognitive reinforcement learning studies in children.
Please see response to the public review comments of Reviewer #1 in which we explain the expansion of our modeling approach to fit a value-based model as well as 31 other noise-based models. In our response to the public review comments of Reviewer #2, we comment on our expanded discussion of how our findings compare with previous cognitive reinforcement learning studies.
To move the "Strength of Evidence" to "Convincing", please consider doing the following:
(1) Address some apparently inconsistent and unrealistic values of motor noise, exploration noise, and learning rate shown for individual participants (e.g., Figure 5b; see reviewer comments), and take the following additional steps: plotting r-squared values for individual participants, discussing whether individual values of the fitted parameters are plausible, and whether model parameters in each age group can extrapolate to the two clamp conditions and baselines.
We have substantially updated our modeling approach. Now that we compare 31 noise-based models, the preferred model does not show any inconsistent or unrealistic values (see response to Reviewer #3). Additionally, we now show example individual fits and provide both relative and absolute goodness of fit (see response to Reviewer #3).
(2) Relatedly, to further justify if model assumptions are met, it would be valuable to show that the current learning model fits the data better than alternative models presented in the literature by the authors themselves and by others (reviewer 1). This could include alternative development models that formalise the proposed explanations for age-related change: poor spatial memory, reward/outcome processing, and exploration strategies (reviewer 2).
Please see response to public review comments of Reviewer #1 in which we explain that we have now fit a value-based model as well as 31 other noise-based models providing a comparison of previous models as well as novel models. This led to a slightly different model being preferred over the model in the original submission (updated model has a learning rate of unity). These models span many of the processes previously proposed for such tasks. We feel that 32 models span a reasonable amount of space and do not believe we have the power to include memory issues or heuristic exploration strategies in the model.
(3) Perform the mediation analysis with all the possible variables (i.e., not informed by multiple regression) to see if the results are more consistent across studies and with the current approach (see comments reviewer 1).
Please see response to public review comments of Reviewer #1. We chose to focus only on the model based analysis because it allowed us to distinguish between exploration variability and motor noise.
Please see below for further specific recommendations from each reviewer.
Reviewer #1 (Recommendations for the author):
(1) In general, there should be more discussion and contextualization of other binary reinforcement tasks used in the motor literature. For example, work from Jeroen Smeets, Katinka van der Kooij, and Joseph Galea.
Thank you for this comment. We have edited the Introduction to better contextualize our work within the reinforcement motor learning literature (see line 67 and line 83).
(2) Line 32. Very minor. This sentence is fine, but perhaps could be slightly improved. “select a location along a continuous and infinite set of possible options (anywhere along the span of the bridge)"
Thank you for this comment. We have edited the sentence to reflect this suggestion.
(3) Line 57. To avoid some confusion in successive sentences: Perhaps, "Both children over 12 and adolescents...".
Thank you for this comment. We have edited the sentence to reflect this suggestion.
(4) Line 80. This is arguably not a mechanistic model, since it is likely not capturing the reward/reinforcement machinery used by the nervous system, such as updating the expected value using reward prediction errors/dopamine. That said, this phenomenological model, and other similar models in the field, do very well to capture behaviour with a very simple set of explore and update rules.
We use mechanistic in the standard use in modeling, as in Levenstein et al. (2023), for example. The contrast is not with neural modeling, but with normative modeling, in which one develops a model to optimize a function (or descriptive models as to what a system is trying to achieve). In mechanistic modeling one proposes a mechanism, and this can be at a state-space level (as in our case) or a neural level (as suggested by the reviewer), but both are considered mechanistic, just at different levels. Quoting Levenstein: "... mechanistic models, in which complex processes are summarized in schematic or conceptual structures that represent general properties of components and their interactions, are also commonly used." We now reference the Levenstein paper to clarify what we mean by mechanistic.
(5) Figure 1. It would be useful to state that the x-axis in Figure 1 is in normalized units, depending on the device.
Thank you for this comment. We have added a description of the x-axis units to the Fig. 1 caption.
(6) Were there differences in behaviour for these different devices? e.g., how different was motor noise for the mouse, trackpad, and touchscreen?
Thank you for this question. We did not find a significant effect of device on learning or precision in the baseline block. We have added these one way ANOVA results for each task in Supplementary Table 1.
(7) Line 98. Please state that participants received reinforcement feedback during baseline.
Thank you for this comment. We have updated the text to specify that participants receive reward feedback during the baseline block.
(8) Line 99. Did the distance from the last baseline trial influence whether the participant learned or did not learn? For example, would it place them too far from the peak success location such that it impacted learning?
Thank you for this question. We looked at whether the position of movement on the last baseline block trial was correlated with the first movement position in the learning block. We did not find any correlations between these positions for any of the tasks. Interestingly, we found that the majority of participants move to the center of the workspace on the first trial of the learning block for all tasks (either in the presence of the novel continuous target scene or the presentation of 7 targets all at once). We do not think that the last movement in the baseline block "primed" the participant for the location of the success zone in the learning block. We have added the following sentence to the Results section:
"Note that the reach location for the first learning trial was not affected by (correlated with) the target position on the last baseline trial (p > 0.3 for both children and adults, separately)."
(9) The term learning distance could be improved. Perhaps use distance from target.
Thank you for this comment. We appreciate that learning distance defined with 0 as the best value is counterintuitive. We have changed the language to be "distance from target" as the learning metric.
(10) Line 188. This equation is correct, but to estimate what the standard deviation by the distribution of changes in reach position is more involved. Not sure if the authors carried out this full procedure, which is described in Cashaback et al., 2019; Supplemental 2.
There appears to be no Supplemental 2 in the referenced paper, so we assume the reviewer is referring to Supplemental B, which deals with a shuffling procedure to examine lag-1 correlations.
In our tasks, we are limited to only 9 trials to analyze in each clamp phase so do not feel a shuffling analysis is warranted. In these blocks, we are not trying to 'estimate what the standard deviation by the distribution of changes in reach position' but instead are calculating the standard deviation of the reach locations and comparing the model fit (for which the reviewer says the formula is correct) with the data. We are unclear what additional steps the reviewer is suggesting. In our updated model analysis, we fit the data including the clamp phases for better parameter estimation. We use simulations to estimate s.d. in the clamp phase (as we ensure in simulations the data does not fall outside the workspace) making the previous analytic formulas an approximation that are no longer used.
(11) Line 197-199. Having done the demo task, it is somewhat surprising that a 3-year-old could understand these instructions (whose comprehension can be very different from even a 5-year old).
Thank you for raising this concern. We recognize that the younger participants likely have different comprehension levels compared to older participants. However, we believe that the majority of even the youngest participants were able to sufficiently understand the goal of the task to move in a way to get the video clip to play. We intentionally designed the tasks to be simple such that the only instructions the child needed to understand were that the goal was to get the video clip to play as much as possible and the video clip played based on their movement. Though the majority of younger children struggled to learn well on the probabilistic tasks, they were able to learn well on the deterministic tasks where the task instructions were virtually identical with the exception of how many places in the workspace could gain reward. On the continuous probabilistic task, we did have a small number (n = 3) of 3 to 5 year olds who exhibited more mature learning ability which gives us confidence that the younger children were able to understand the task goal.
(12) Line 497: Can the authors please report the F-score and p-value separately for each of these one-way ANOVA (the device is of particular interest here).
Thank you for this request. We have added a supplementary table (Supplementary Table 1) with the results of these ANOVAs.
(13) Past work has discussed how motivation influences learning, which is a function of success rate (van der Kooij, K., in 't Veld, L., & Hennink, T. (2021). Motivation as a function of success frequency. Motivation and Emotion, 45, 759-768.). Can the authors please discuss how that may change throughout development?
Thank you for this comment. While motivation most probably plays a role in learning, in particular in a game environment, this was out of the scope of the direct focus of this work and not something that our studies were designed to test. We have added the following sentence to the discussion section to address this comment:
"We also recognize that other processes, such as memory and motivation, could affect performance on these tasks however our study was not designed to test these processes directly and future work would benefit from exploring these other components more explicitly."
(14) Supplement 6. This analysis is somewhat incomplete because it does not consider success.
Pekny and colleagues (2015) looked at 3 trials back but considered both success and reward. However, their analysis has issues since successive time points are not i.i.d., and spurious relationships can arise. This issue is brought up by Dhawale (Dhawale, A. K., Miyamoto, Y. R., Smith, M. A., & Ölveczky, B. P. (2019). Adaptive regulation of motor variability. Current Biology, 29(21), 3551-3562.). Perhaps it is best to remove this analysis from the paper.
Thank you for this comment. We have decided to remove this secondary analysis from the paper as it was a source of confusion and did not add to the understanding and interpretation of our behavioral results.
Reviewer #2 (Recommendations for the author):
(1) The path length ratio analyses in the supplemental are interesting but are not mentioned in the main paper. I think it would be helpful to mention these as they are somewhat dramatic effects.
Thank you for this comment. Path length ratios are defined in the Methods and results are briefly summarized in the Results section with a point to the supplementary figures. We have updated the text to more explicitly report the age related differences in path length ratios.
(2) The second-to-last paragraph of the intro could use a sentence motivating the use of the different task features (deterministic/probabilistic and discrete/continuous).
Thank you for this comment. We have added an additional motivating sentence to the introduction.
Reviewer #3 (Recommendations for the author):
The paper labeled the task as one for reinforcement motor learning, which is not quite appropriate in my opinion. Motor learning typically refers to either skill learning or motor adaptation, the former for improving speed-accuracy tradeoffs in a certain (often new) motor skill task and the latter for accommodating some sensorimotor perturbations for an existing motor skill task. The gaming task here is for neither. It is more like a decision-making task with a slight contribution to motor execution, i.e., motor noise. I would recommend the authors label the learning as reinforcement learning instead of reinforcement motor learning.
Thank you for this comment. As noted in the response to the public review comments, we agree that this task has components of classical reinforcement learning (i.e. responding to a binary reward) but we specifically designed it to require the learning of a movement within a novel game environment. We have added a new paragraph to the introduction where we acknowledge the interplay between cognitive and motor mechanisms while also underscoring the features in our task that we think are not present in typical cognitive tasks.
My major concern is whether the model adequately captures subjects' behavior and whether we can conclude with confidence from model fitting. Motor noise, exploration noise, and learning rate, which fit individual learning patterns (Figure 5b), show some quite unrealistic values. For example, some subjects have nearly zero motor noise and a 100% learning rate.
We have now compared 31 models and the preferred model is different from the one in the first submission. The parameter fits of the new model do not saturate in any way and appear reasonable to us. The updates to the model analysis have addressed the concern of previously seen unrealistic values in the prior draft.
Currently, the paper does not report the fitting quality for individual subjects. It is good to have an exemplary subject's fit shown, too. My guess is that the r-squared would be quite low for this type of data. Still, given that the children's data is noisier, it might be good to use the adult data to show how good the fitting can be (individual fits, r squares, whether the fitted parameters make sense, whether it can extrapolate to the two clamp phases). Indeed, the reliability of model fitting affects how we should view the age effect of these model parameters.
We now show fits to individual subjects. However, since this is a Kalman smoother, it fits the data perfectly by generating its best estimate of motor noise and exploration variability on each trial to fully account for the data; in that sense R<sup>2</sup> is always 1, which is not helpful.
While the BIC analysis with the other model variants provides a relative goodness of fit, it is not straightforward to provide an absolute goodness of fit such as standard R<sup>2</sup> for a feedforward simulation of the model given the parameters (rather than the output of the Kalman smoother). There are two problems. First, there is no single model output. Each time the model is simulated with the fit parameters it produces a different output (due to motor noise, exploration variability and reward stochasticity). Second, the model is not meant to reproduce the actual motor noise, exploration variability and reward stochasticity of a trial. For example, the model could fit pure Gaussian motor noise across trials (for a poor learner) by accurately fitting the standard deviation of motor noise but would not be expected to actually match each data point so would have a traditional R<sup>2</sup> of 0.
To provide an overall goodness of fit, we have to reduce the noise component; to do so, we examined the traditional R<sup>2</sup> between the average of all the children's data and the average simulation of the model (from the median of 1000 simulations per participant) so as to reduce the stochastic variation. The results for the continuous probabilistic and discrete probabilistic task are R<sup>2</sup> of 0.41 and 0.72, respectively.
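For readers who want to see the shape of this calculation, here is a minimal sketch of the averaging-then-R<sup>2</sup> procedure described above. The arrays are random placeholders, so the printed value is meaningless; only the structure of the computation (median over simulations per participant, averaging across participants, then a standard R<sup>2</sup>) reflects the description, and the array sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_participants, n_trials, n_sims = 50, 100, 1000   # assumed sizes for illustration

observed = rng.normal(size=(n_participants, n_trials))            # placeholder for real data
simulated = rng.normal(size=(n_participants, n_sims, n_trials))   # placeholder for model simulations

per_participant_median = np.median(simulated, axis=1)   # median of the simulations for each participant
data_mean = observed.mean(axis=0)                        # average observed curve across participants
model_mean = per_participant_median.mean(axis=0)         # average simulated curve across participants

ss_res = np.sum((data_mean - model_mean) ** 2)
ss_tot = np.sum((data_mean - data_mean.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 between averaged data and averaged simulation: {r_squared:.2f}")
```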
Note that variability in the "success clamp" does not change across ages (Figure 4C) and does not contribute to the learning effect (Figure 4F). However, it is regarded as reflecting motor noise (Figure 5C), which then decreases over age from the model fitting (Figure 5B). How do we reconcile these contradictions? Again, this calls the model fitting into question.
For the success clamp, we only have 9 trials to calculate variability, which limits our power to detect significance with age. In contrast, the model uses all 120 trials to estimate motor noise. There is a downward trend with age in the behavioral data, which we now show overlaid on the fits of the model for both probabilistic conditions (Figure 5—figure Supplement 4 and Figure 6—figure Supplement 4). These show a reasonable match and, although the variance explained is 16 and 56% (we limit to 9 trials so as to match the fail clamp), the correlations are 0.52 and 0.78, suggesting a reasonable relation, although there may be other small sources of variability not captured in the model.
Figure 5C: it appears one bivariate outlier contributes a lot to the overall significant correlation here for the "success clamp".
Recalculating after removing that point, the correlation in the original Fig. 5C was still significant, and we feel the plots mentioned in the previous point add useful information on this issue. With the new model this figure has changed.
It is still a concern that the young children did not understand the instructions. Nine of the 3-to-8-year-old children (out of 48) were better explained by the noisy-only model than the full model. In contrast, ten of the rest of the participants (out of 98) were better explained by the noisy-only model. It appears that there is a higher percentage of the "young" children who didn't get the instructions than the older ones.
Thank you for this comment. We did take participant comprehension of the task into consideration during the task design. We specifically designed it so that the instructions were simple and straightforward. The child simply needs to understand that the underlying goal is to make the video clip play as often as possible and that they must move the penguin to certain positions to get it to play. By having a very simple task goal, we are able to test a naturalistic response to reinforcement in the absence of an explicit strategy in a task suited even for young children.
We used the updated reinforcement learning model to assess whether an individual's performance is consistent with understanding the task. In the case of a child who does not understand the task, we expect that they simply have motor noise on their reach, and crucially, that they would not explore more after failure, nor update their reach after success. Therefore, we used a likelihood ratio test to examine whether the preferred model was significantly better at explaining each participant's data compared to the model variant which had only motor noise (Model 1). Focusing on only the youngest children (age 3-5), this analysis showed that 43, 59, 65 and 86% of children (out of N = 21, 22, 20 and 21) for the continuous probabilistic, discrete probabilistic, continuous deterministic, and discrete deterministic conditions, respectively, were better fit with the preferred model, indicating non-zero exploration after failure. In the 3-5 year old group for the discrete deterministic condition, 18 out of 21 had performance better fit by the preferred model, suggesting this age group understands the basic task of moving in different directions to find a rewarding location.
The reduced numbers fit by the preferred model for the other conditions likely reflects differences in the task conditions (continuous and/or probabilistic) rather than a lack of understanding of the goal of the task. We include this analysis as a new subsection at the end of the Results.
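A minimal sketch of the per-participant likelihood ratio test described above, assuming scipy is available: the two log-likelihood values and the difference in free parameters are invented for illustration and would in practice come from fitting the nested motor-noise-only model and the preferred model to one participant's reaches.

```python
from scipy.stats import chi2

loglik_noise_only = -152.3   # hypothetical fit of the motor-noise-only model (Model 1)
loglik_preferred = -148.9    # hypothetical fit of the preferred model
extra_params = 1             # assumed difference in number of free parameters

lr_stat = 2.0 * (loglik_preferred - loglik_noise_only)   # likelihood ratio statistic
p_value = chi2.sf(lr_stat, df=extra_params)              # chi-square reference for nested models

print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.3f}, "
      f"preferred model better: {p_value < 0.05}")
```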
Supplementary Figure 2: the first panel should belong to a 3-year-old not a 5-year-old? How are these panels organized? This is kind of confusing.
Thank you for this comment. Figure 2—figure Supplement 1 and Figure 2—figure Supplement 2 are arranged with devices in the columns and a sample from each age bin in the rows. For example, in Figure 2—figure Supplement 1, column 1, row 1 is a mouse-using participant aged 3 to 5 years old, while column 3, row 2 is a touchscreen-using participant aged 6 to 8 years old. We have edited the labeling on both figures to make the arrangement of the data clearer.
Line 222: make this a complete sentence.
This sentence has been edited to a complete sentence.
Line 331: grammar.
This sentence has been edited for grammar.
-
-
www.cs.toronto.edu www.cs.toronto.edu dqn.pdf
-
Playing Atari with Deep Reinforcement Learning 19 Dec 2013 · Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller
The paper from 2013 that introduced the DQN algorithm for using Deep Learning with Reinforcement Learning to play Atari games.
-
-
mutabit.com mutabit.com
-
Pokemon,

The first generation of Pokémon, which began with the Game Boy games (Red, Green, and Blue in Japan), features 151 different species. These 151 Pokémon all originate from the Kanto region. The last Pokémon of this generation is Mew, number 151.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Another change was that as computers became small enough for people to buy them for their homes, they became seen as toys for boys and not girls. The same transition is seen in video game consoles from being for the whole family to being for boys only [s64
While computers have definitely become more compact with laptops becoming more advanced over time, an industry like gaming is still dominated by men even today. While females are slowly gaining more traction with streamers and developers alike, games are viewed as much more neutral than they were 20 years ago in my opinion. I have plenty of female friends who play games and are in the Informatics or CS fields.
-
-
vsblog.netlify.app vsblog.netlify.app
-
Integrative modeling: by combining evolutionary game theory with agent-based simulations, the research provides a mechanistic account of how epistemic conditional strategies could have evolved naturally from simpler reactive behaviors, filling a gap in existing literature.
I would remove this.
-
Evolutionary game-theoretic modeling:
This is also a strong claim about the modeling.
-
Conceptual analysis: The dissertation begins by clarifying key concepts such as social conventions, game-theoretic equilibria, conditional strategies and social norms. This involves critical engagement with existing philosophical literature to establish a coherent conceptual framework that guides the subsequent modeling work.
I would remove this; it is more a set of tools than a method.
-
The entire process can be modeled using evolutionary game theory, demonstrating the plausibility of a naturalistic emergence of epistemic correlation devices and proto-normativity without presupposing more advanced cognitive constructs like collective intentionality or explicit sanctioning mechanisms. This transition is not merely conceptual but can be computationally investigated through agent-based simulations that capture the dynamics of the cognitive evolution of correlation devices represented as ecological cues of different layers of abstraction
These are also strong claims; I would not include them in the defense, and I would remove all the models altogether.
-
-
github.com github.com
-
It supports offline login and viewing of any locally cached files.
offline-first server with an installer
Game Changer
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Do you think this game was realistic?
The game was semi-realistic. I thought some of the questions and issues raised were accurate to real world problems. But, there are some aspects that obviously oversimplify issues for the sake of the game.
-
- May 2025
-
www.platformatichq.com www.platformatichq.com
-
For larger projects with multiple interconnected components, monorepos can be a game-changer, providing efficient dependency management, atomic commits, simplified code sharing, and an improved developer experience.
-
-
fakepixels.substack.com fakepixels.substack.com
-
This may be the good news for those that didn’t dare to fully lean into what they love and want to do. What if the most game-optimal play in the new system is actually to become relentlessly, unapologetically you?
Be you
-
Leisure's opportunity cost skyrockets. When an hour of work generates what once took days, rest becomes luxury taxed by your own conscience. Every pause carries an invisible price tag that flickers in your peripheral vision. Productivity breeds new demand. Like efficient engines creating new energy uses, AI can create entirely new work categories and expectations. Competition intensifies. The game theory is unforgiving: when everyone can produce 10x more, the baseline resets, leaving us all running faster just to stay in place.
Consequences
-
-
www.sahilbloom.com www.sahilbloom.com
-
Life is a game of awareness and action: Awareness to understand something's importance and action to execute on that importance.
Awareness and action are key
-
-
snooplyrics.com snooplyrics.com
-
I’ve been hot for five years, ab karun paanch saal aur (Yeah)
- "Hot" here means successful, popular, or at the top of his game.
- "Ab karun paanch saal aur" is Hindi for "now I'll do five more years."
Literal meaning: So, he's saying he's been consistently successful for five years and he plans to keep that momentum going for at least another five years. It's a statement of ambition and confidence in his longevity.
-
-
snooplyrics.com snooplyrics.com
-
Pehla rap daala jab DHH wasn’t a thing
"Pehla rap daala jab DHH wasn’t a thing"
-
Raftaar is saying, "I've been in the game since before Desi Hip Hop was even popular, I was rapping when it was still new and underground."
-
With this line, Raftaar is asserting himself as a pioneer and veteran in the Indian hip-hop scene. He's making a strong statement about his longevity and his contribution to the genre's growth.
Source: 1. LINK 2. LINK 3. [LINK](https://en.wikipedia.org/wiki/Raftaar#:~:text=Dilin%20Nair%20(born%2016%20November,into%20the%20mainstream%20music%20industry.)
-
-
Got the game from Kane, Wayne se bling
"Got the game from Kane, Wayne से bling"
-
"Kane" = likely refers to Big Daddy Kane, a pioneer of 80s–90s lyrical rap — Raftaar salutes OG lyrical technique here.

-
"Wayne से bling" = Lil Wayne, the bling era, punchlines, and swagger-filled wordplay — he took style from Wayne, substance from Kane.

-
Duality shown here: Knowledge from Kane, flash from Wayne — bars + buzz.
-
Subliminal flex: He didn’t just mimic desi rappers — he studied the game from the source.
-
-
Banna chahta tha main baller, jaise rappers in the game
"Banna chahta tha main baller, jaise rappers in the game"
Here, "Baller" means a wealthy and successful person who lives lavishly.
KR$NA desired the feeling that comes with fame and wanted to be a "baller".
-
Have the game on lock aur yahi tha plan mera (Plan mera)
"Have the game on lock aur yahi tha plan mera (Plan mera)"
-
KR$NA completely control and dominate the rap industry, and achieving this level of power was always his deliberate intention and strategy.
-
Name-dropping "The Game", is a prominent American rapper from Compton, California. He rose to fame in the mid-2000s and is largely credited with helping to revitalize the West Coast hip-hop scene.

-
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Index on Censorship. Interview with a troll. Index on Censorship, September 2011. URL: https://www.indexoncensorship.org/2011/09/interview-with-a-troll/
The interview with the anonymous troll really stood out to me. What surprised me most was how casually he talked about causing distress online—almost like it was a game. He admitted to targeting people not out of deep personal hatred, but just to provoke a reaction or gain attention. This made me think about how anonymity can remove a sense of responsibility, and how moderation online has to deal with behavior that’s intentionally disruptive but not always illegal.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Fold-It [p19] is a game that lets players attempt to fold proteins. At the time, researchers were having trouble getting computers to do this task for complex proteins, so they made a game for humans to try it. Researchers analyzed the best players’ results for their research and were able to publish scientific discoveries based on the contributions of players.
I remember playing a game very similar to this on xbox as a kid
-
Researchers analyzed the best players’ results for their research and were able to publish scientific discoveries based on the contributions of players.
I think it's really interesting that a video game like Fold-It helped scientists make real discoveries. It shows how powerful crowdsourcing can be, even for serious scientific problems. I’ve never thought of games being useful in science before, but now I wonder if more scientific research could be turned into fun challenges for regular people to help with.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
[p19] Foldit. September 2023. Page Version ID: 1175905648. URL: https://en.wikipedia.org/w/index.php?title=Foldit&oldid=1175905648 (visited on 2023-12-08).
Foldit is a game where players fold protein structures as best as possible, and the answers they create are checked against real-world proteins, as it's not possible to automatically create solutions. This allowed solutions to protein structures to be crowdsourced, and many proteins were actually solved through this. It's a great example of crowdsourcing scientific knowledge in a way that makes use of things humans can do that computers couldn't. Now I believe that there is AI that can fold protein structures.
-
Mike Gavin. Canucks' staffer uses social media to find fan who saved his life.
I saw this a while ago when it initially came out. At a game, a fan pointed out a mole on the neck of a Canucks staff member and saved his life. He went to the doctor to check it out and it was cancer. The doctor told him he would've only had 4-5 years left if it had gone unchecked. The staff member ended up finding the fan through social media when his story went viral.
-
Kickstarter. URL: https://www.kickstarter.com/ (visited on 2023-12-08).
I am not too familiar with Kickstarter, but I do know that it is a great platform to fund various projects through crowdfunding. Some examples probably include technology, art, or film production. There have been some pretty cool projects that have come from Kickstarter too, such as the game "Exploding Kittens".
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
In what ways do you think you’ve participated in any crowdsourcing online?
I've participated in crowdsourcing on a Fandom Wiki for a game called Warframe. For context, Warframe is a game similar to Halo but more fast-paced and consisting of its own complex lore. The game still gets consistent updates to this day, updates that add to both the gameplay and lore. I've added a few of my own entries into the wiki involving the lore to help other people get a better idea of what-means-what, mainly due to how complicated and extensive the devs make it out to be.
-
-
dynomight.net dynomight.net
-
We all play the game we think we can do better at.
Is that actually true? Surely there are examples where people play the game that they're less suited for—where the decision is driven by desire?
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Spamming. December 2023. Page Version ID: 1187995774. URL: https://en.wikipedia.org/w/index.php?title=Spamming&oldid=1187995774 (visited on 2023-12-08).
One of the sources mentioned in the Wikipedia article is Spam Kings by Brian S. McWilliams (2005), which deals with the growth of spam operations and the individuals involved in them. Something that caught my attention among review and summary descriptions of this book is how spam is not merely a technical problem—it’s a human one. McWilliams tracks real-life spammers and anti-spammers alike, demonstrating the cat-and-mouse game that developed in the early 2000s. Something that surprised me in reading descriptions of this book is that a number of spammers lived with a sense of pride and even regarded themselves as entrepreneurs, not scammers. It challenged me to consider how ethics of online practice might differ profoundly based on perspective and how some individuals might justify nefarious digital practices as innovative or innocuous business tactics. It also relates back to coursewide topics of online regulation and the fuzzy line between “free enterprise” and exploitative practice online.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Have you ever reported a post/comment for violating social media platform rules? Have you ever faced consequences for breaking social media rules (or for being accused of it)? In unmoderated online spaces who has the most power and ability to speak and be heard? Who has the least power and ability to speak and be heard?
I have reported posts/comments for violating social media platform rules. I assume video games are a form of social media, and I have reported a lot of players for being toxic, racist, or harassing others. I never got banned for saying inappropriate things, but I did get banned for quitting in the middle of a game, which technically counts as a violation of the rules. Whoever has the ability to ban others has the most power; everyone else has the least.
-
Have you ever reported a post/comment for violating social media platform rules?
Yes, I always do this in League of Legends, and I always get report feedback. Recently Riot has been getting serious about any rude comments. I have been banned from talking and typing in game because I lent my account to other people; they are less fluent in English and communicate more aggressively.
-
-
www.biorxiv.org www.biorxiv.org
-
Author response:
We will revise the statements of novelty in the introduction by more clearly emphasizing how our model addresses gaps in the existing literature. In addition, we will clarify the description of the dispersal process. Briefly, we use the same dispersal gene β to represent the likelihood that an individual will either leave or join a group, thereby quantifying both dispersal and immigration using the same parameter. Specifically, individuals with higher β are more likely to remain as floaters (i.e., disperse from their natal group to become a breeder elsewhere), whereas those with lower β are more likely either to remain in their natal group as subordinates (i.e., queue in a group for the breeding position) or to join another group if they dispersed. Immigrants that join a group as a subordinate help and queue for a breeding position, as does any natal subordinate born into the group. To follow the suggestion of the referee and more fully explore the impact of competition between subordinates born in the group and subordinate immigrants, we will explore extending our model to allow dispersers to leave their natal group and join another as subordinates, by incorporating a reaction norm based on their age or rank (D = 1 / (1 + exp(β<sub>t</sub> * t – β<sub>0</sub>))). This approach will allow individuals to also adjust their dispersal strategy to their competitiveness and to avoid kin competition by remaining as a subordinate in another group.
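A minimal sketch of this reaction norm is shown below (an illustration only, not the authors' implementation: t stands for age or dominance rank, β<sub>t</sub> and β<sub>0</sub> for the evolving gene values, and the numeric values are hypothetical):

```python
import math

def dispersal_propensity(t, beta_t, beta_0):
    """Logistic reaction norm from the response: D = 1 / (1 + exp(beta_t * t - beta_0)).

    Here t stands for the individual's age (or dominance rank), and beta_t and
    beta_0 are the evolving gene values."""
    return 1.0 / (1.0 + math.exp(beta_t * t - beta_0))

# Hypothetical parameter values, for illustration only: with beta_t > 0,
# the propensity to disperse declines as t increases.
for t in range(1, 6):
    print(t, round(dispersal_propensity(t, beta_t=1.0, beta_0=2.0), 3))
```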
We apologize that there was some confusion with terminology. We use the term “disperser” to describe individuals that disperse from their natal group. Dispersers can assume one of three roles: (1) they can migrate to another group as "subordinates"; (2) they can join another group as "breeders" if they successfully outcompete other candidates; or (3) they can remain as "floaters" if they fail to join a group. "Floaters" are individuals who persist in a transient state without access to a breeding territory, waiting for opportunities to join a group in an established territory. Therefore, dispersers do not work when they are floaters, but they may later help if they immigrate to a group as a subordinate. Consequently, immigrant subordinates have no inherent competitive advantage over natal subordinates (as step 2.2. “Join a group” is followed by step 3. “Help”, which occurs before step 5. “Become a breeder”). Nevertheless, floaters can potentially outcompete subordinates of the same age if they attempt to breed without first queuing as a subordinate (step 5) when subordinates are engaged in work tasks. We believe that this assumption is realistic and constitutes part of the costs associated with work tasks. However, floaters are at a disadvantage for becoming a breeder because: (1) floaters incur higher mortality than individuals within groups (eq. 3); and (2) floaters may only attempt to become breeders in some breeding cycles (versus subordinate group members, who are automatically candidates for an open breeding position in the group in each cycle). Therefore, due to their higher mortality, floaters are rarely older than individuals within groups, which heavily influences dominance value and competitiveness. Additionally, any competitive advantage that floaters might have over other subordinate group members is unlikely to drive the kin selection-only results because subordinates would preferably choose defense tasks instead of work tasks so as not to be at a competitive disadvantage compared to floaters.
We note that reviewers also mention that floaters are often not high resource holding potential (RHP) individuals and, therefore, our assumptions might be unrealistic. As we explain above, floaters are not inherently at a competitive advantage in our model. In any case, empirical work in a number of species has shown that dispersers are not necessarily those of lower RHP or of lower quality. In fact, according to the ecological constraints hypothesis, one might predict that high quality individuals are the ones that disperse because only individuals in good condition (e.g., larger body size, better energy reserves) can afford the costs associated with dispersal (Cote et al., 2022). By adding a reaction norm approach to explore the role of age or rank in the revised version, we can also determine whether higher or lower quality individuals are the ones dispersing. We will address the issues of terminology and clarity of the relative competitive advantage of floaters versus subordinates, and also include more information in the Supplementary Tables (e.g., the number of floaters). As a side note, the “scramble context” we mention was an additional implementation that we decided to remove from the final manuscript but forgot to remove from Table 1 before submission.
The reviewers also raised a question about asexual reproduction and relatedness more generally. As we showed in the Supplementary Tables and the section on relatedness in the SI (“Kin selection and the evolution of division of labor"), high relatedness does not appear to explain our results. In evolutionary biology generally and in game theory specifically (with the exception of models on sexual selection or sex-specific traits), asexual reproduction is often modelled because it reduces unnecessary complexity. To further study the effect of relatedness on kin structures more closely resembling those of vertebrates, however, we will create an additional “relatedness structure level”, where we will shuffle half of the philopatric offspring using the same method used to remove relatedness completely. This approach will effectively reduce relatedness structure by half and overcome the concerns with our decision to model asexual reproduction.
Briefly, we will elaborate on the concept of division of labor and the tasks that cooperative breeders perform. In nature, multiple tasks are often necessary to successfully rear offspring. For example, in many cooperatively breeding birds, the primary reasons that individuals fail to produce offspring are (1) starvation, which is mitigated by the feeding of offspring, and (2) nest depredation, which is countered by defensive behavior. Consequently, both types of tasks are necessary to successfully produce offspring, and focusing solely on one while neglecting the other is likely to result in lower reproductive success than if both tasks are performed by individuals within the group. We simplify this principle in the model by maximizing reproductive output when both tasks are carried out to a similar extent, allowing for some flexibility from the mean. In response to the reviewer's suggestion to make fecundity a function of work tasks and offspring survival a function of defensive tasks, these are actually equivalent in model terms: it is the same whether breeders produce three offspring and two die, or whether they only produce one. This represents, of course, a simplification of the natural context, where breeding unsuccessfully is more costly (in terms of time and energy investment) than not breeding at all, but this approach is typically used in models of this sort.
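One possible way to picture "reproductive output is maximal when both task types are performed to a similar extent" is a bell-shaped fecundity curve centered on an even split; the Gaussian form and parameter values below are assumptions made only for this sketch and are not the function used in the authors' model:

```python
import math

def expected_fecundity(T, f_max=4.0, sigma=0.2):
    """Illustrative fecundity curve that peaks when the proportion of effort
    devoted to work tasks (T) is close to 0.5, i.e. when work and defense are
    performed to a similar extent. The Gaussian form and the values of f_max
    and sigma are hypothetical choices for this sketch only."""
    return f_max * math.exp(-((T - 0.5) ** 2) / (2 * sigma ** 2))

for T in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(T, round(expected_fecundity(T), 2))
```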
The scope of this paper was to study division of labor in cooperatively breeding species with fertile workers, in which help is exclusively directed towards breeders to enhance offspring production (i.e., alloparental care). Our focus is in line with previous work in most other social animals, including eusocial insects and humans, which emphasizes how division of labor maximizes group productivity. Other forms of “general” help are not considered in the paper, and such forms of help are rarely considered in cooperatively breeding vertebrates or in the division of labor literature, as they do not result in task partitioning to enhance productivity.
How do we model help? Help provided is an interaction between H (total effort) and T (proportion of total effort invested in each type of task). We will make this definition clearer in the revised manuscript. Thank you for pointing out an error in Eq. 1. This inequality was indeed written incorrectly in the paper (but is correct in the model code); it is dominance rank instead of age (see code in Individual.cpp lines 99-119). We will correct this mistake in the revision.
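A minimal sketch of one plausible reading of "help as an interaction between H and T" is given below; the function and variable names are illustrative only and do not correspond to the authors' Individual.cpp code:

```python
def help_allocation(H, T):
    """Split total helping effort H between the two task types according to T,
    the proportion of effort invested in work tasks (illustrative names only;
    not the variables used in the authors' model code)."""
    work_effort = H * T            # effort spent on work tasks (e.g., feeding offspring)
    defense_effort = H * (1 - T)   # effort spent on defensive tasks
    return work_effort, defense_effort

print(help_allocation(H=2.0, T=0.3))  # -> (0.6, 1.4)
```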
There was also a question about bounded and unbounded helping costs. The difference in costs is inherent to the nature of the different tasks (work or defense): while survival is naturally bounded, with death as the lower bound, dominance costs are potentially unbounded, as they are influenced by dynamic social contexts and potential competitors. Therefore, we believe that the model’s cost structure is not too different from that in nature.
Thank you for your comments about the parameter landscape. It is important to point out that variations in the mutation rate do not qualitatively affect our results, as this is something we explored in previous versions of the model (not shown). Briefly, we find that variations in the mutation rates only alter the time required to reach equilibrium. Increasing the step size of mutation diminishes the strength of selection by adding stochasticity and reducing the genetic correlation between offspring and their parents. Population size could, in theory, affect our results, as small populations are more prone to extinction. Since this was not something we planned to explore in the paper directly, we specifically chose a large population size or, more precisely, a large number of territories (i.e., 5,000) that can potentially host a large population.
During the exploratory phase of the model development, various parameters and values were also assessed. However, the manuscript only details the ranges of values and parameters where changes in the behaviors of interest were observed, enhancing clarity and conciseness. For instance, variation in y<sub>h</sub> (the cost of help on dominance when performing “work tasks”) led to behavioral changes similar to those caused by changes in x<sub>h</sub> (the cost of help in survival when performing “defensive tasks”), as both are proportional to each other. Specifically, since an increase in defense costs raises the proportion of work relative to defense tasks, while an increase in the costs of work task has the opposite effect, only results for the variation of x<sub>h</sub> were included in the manuscript to avoid redundancy. We will make this clearer in the revision.
Finally, following the advice from the reviewers, we will add the symbols of the variables to the figure axes and clarify whether the values shown represent a genetic or phenotypic trait. In Figure 2, the x-axis is H and the y-axis is T. In Figure 3A, the subscript t on the x-axis is incorrect; it should be subscript R (reaction norm to dominance rank instead of age), and the y-axis is T. In Figure 3B, the x-axis is R, and the y-axis is T. All values of T, H and R are phenotypically expressed values (see Table 1). For instance, T values are the phenotypically expressed values of the individuals in the population, according to their genetic gamma values and their current dominance rank at a given time point.
References
Cote, J., Dahirel, M., Schtickzelle, N., Altermatt, F., Ansart, A., Blanchet, S., Chaine, A. S., De Laender, F., De Raedt, J., & Haegeman, B. (2022). Dispersal syndromes in challenging environments: A cross‐species experiment. Ecology Letters, 25(12), 2675–2687.
-
-
pressbooks.library.torontomu.ca pressbooks.library.torontomu.ca
-
“Oh, you needs tuh learn how. ’Tain’t no need uh you not knowin’ how tuh handle shootin’ tools. Even if you didn’t never find no game, it’s always some trashy rascal dat needs uh good killin’,”
If you know you know
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Mike Masnick, Randy Lubin, and Leigh Beadon. Moderator Mayhem: A Content Moderation Game. URL: https://moderatormayhem.engine.is/ (visited on 2023-12-17).
This game is very interesting and gives an insight into what content moderation might be like. The time limit kept me stressed while I tried to quickly read through the card prompts and waited a few seconds to find out more info. Seeing how many requests were coming in also added to the stress and forced me to be even sloppier at my job. I can see why it would be stressful to be a content moderator, aside from the NSFW/NSFL material they may have to come across. The speed and accuracy that are needed are stressful and not something that can be perfected.
-
-
easternpeak.com easternpeak.com
-
The way these platforms adjust content in real time based on each student’s progress feels like a real game-changer for education
-
-
www.biorxiv.org www.biorxiv.org
-
Reviewer #3 (Public review):
Summary:
A very thorough technical report of a new standalone, open-source software for microscopy image processing and analysis (MorphoNet 2.0), with a particular emphasis on automated segmentation and its curation to obtain accurate results even with very complex 3D stacks, including timelapse experiments.
Strengths:
The authors did a good job of explaining the advantages of MorphoNet 2.0, as compared to its previous web-based version and to other software with similar capabilities. What I particularly found more useful to actually envisage these claimed advantages is the five examples used to illustrate the power of the software (based on a combination of Python scripting and the 3D game engine Unity). These examples, from published research, are very varied in both types of information and image quality, and all have their complexities, making them inherently difficult to segment. I strongly recommend the readers to carefully watch the accompanying videos, which show (although not thoroughly) how the software is actually used in these examples.
Weaknesses:
Being a technical article, the only possible comments are on how methods are presented, which is generally adequate, as mentioned above. In this regard, and in spite of the presented examples (chosen by the authors, who clearly gave them a deep thought before showing them), the only way in which the presented software will prove valuable is through its use by as many researchers as possible. This is not a weakness per se, of course, but just what is usual in this sort of report. Hence, I encourage readers to download the software and give it time to test it on their own data (which I will also do myself).
In conclusion, I believe that this report is fundamental because it will be the major way of initially promoting the use of MorphoNet 2.0 by the objective public. The software itself holds the promise of being very impactful for the microscopists' community.
-
Author response:
eLife Assessment
This work presents an important technical advancement with the release of MorphoNet 2.0, a user-friendly, standalone platform for 3D+T segmentation and analysis in biological imaging. The authors provide convincing evidence of the tool's capabilities through illustrative use cases, though broader validation against current state-of-the-art tools would strengthen its position. The software's accessibility and versatility make it a resource that will be of value for the bioimaging community, particularly in specialized subfields.
We would like to thank the editors and reviewers for their careful and constructive evaluation of our manuscript “MorphoNet 2.0: An innovative approach for qualitative assessment and segmentation curation of large-scale 3D time-lapse imaging datasets”. We are grateful for the positive assessment of MorphoNet 2.0 as a valuable and accessible tool for the bioimaging community, and for the recognition of its technical advancements, particularly in the context of complex 3D+t segmentation tasks.
The reviewers have highlighted several important points that we will address in the revised manuscript. These include:
- The need for a clearer demonstration that improvements in unsupervised quality metrics correspond to actual improvements in segmentation quality. In response, we will provide comparisons with gold standard annotations where available and clarify how to interpret metric distributions.
- The potential risk of circular logic when using unsupervised metrics to guide model training. We now explicitly discuss this limitation and emphasize the importance of external validation and expert input.
- The value of comparing MorphoNet 2.0 to other tools such as FIJI and napari. We will include a comparative table to help readers understand MorphoNet’s positioning and complementarity.
- The importance of clearer documentation and terminology. We will overhaul the help pages, standardize plugin naming, and add a glossary-style table to the manuscript.
- Suggestions for future developments, such as mesh export and interoperability with napari, which we will explore for the revision.
We appreciate the detailed feedback on both scientific and editorial aspects, including corrections to figures and text, and we will integrate all suggested revisions to improve the manuscript’s clarity and impact. We are confident that these changes will strengthen the manuscript and enhance the utility of MorphoNet 2.0 for the community.
Public Reviews:
Reviewer #1 (Public review):
The authors present a substantial improvement to their existing tool, MorphoNet, intended to facilitate assessment of 3D+t cell segmentation and tracking results, and curation of high-quality analysis for scientific discovery and data sharing. These tools are provided through a user-friendly GUI, making them accessible to biologists who are not experienced coders. Further, the authors have re-developed this tool to be a locally installed piece of software instead of a web interface, making the analysis and rendering of large 3D+t datasets more computationally efficient. The authors evidence the value of this tool with a series of use cases, in which they apply different features of the software to existing datasets and show the improvement to the segmentation and tracking achieved.
While the computational tools packaged in this software are familiar to readers (e.g., cellpose), the novel contribution of this work is the focus on error correction. The MorphoNet 2.0 software helps users identify where their candidate segmentation and/or tracking may be incorrect. The authors then provide existing tools in a single user-friendly package, lowering the threshold of skill required for users to get maximal value from these existing tools. To help users apply these tools effectively, the authors introduce a number of unsupervised quality metrics that can be applied to a segmentation candidate to identify masks and regions where the segmentation results are noticeably different from the majority of the image.
This work is valuable to researchers who are working with cell microscopy data that requires high-quality segmentation and tracking, particularly if their data are 3D time-lapse and thus challenging to segment and assess. The MorphoNet 2.0 tool that the authors present is intended to make the iterative process of segmentation, quality assessment, and re-processing easier and more streamlined, combining commonly used tools into a single user interface.
We sincerely thank the reviewer for their thorough and encouraging evaluation of our work. We are grateful that they highlighted both the technical improvements of MorphoNet 2.0 and its potential impact for the broader community working with complex 3D+t microscopy datasets. We particularly appreciate the recognition of our efforts to make advanced segmentation and tracking tools accessible to non-expert users through a user-friendly and locally installable interface, and for pointing out the importance of error detection and correction in the iterative analysis workflow. The reviewer’s appreciation of the value of integrating unsupervised quality metrics to support this process is especially meaningful to us, as this was a central motivation behind the development of MorphoNet 2.0. We hope the tool will indeed facilitate more rigorous and reproducible analyses, and we are encouraged by the reviewer’s positive assessment of its utility for the community.
One of the key contributions of the work is the unsupervised metrics that MorphoNet 2.0 offers for segmentation quality assessment. These metrics are used in the use cases to identify low-quality instances of segmentation in the provided datasets, so that they can be improved with plugins directly in MorphoNet 2.0. However, not enough consideration is given to demonstrating that optimizing these metrics leads to an improvement in segmentation quality. For example, in Use Case 1, the authors report their metrics of interest (Intensity offset, Intensity border variation, and Nuclei volume) for the uncurated silver truth, the partially curated and fully curated datasets, but this does not evidence an improvement in the results. Additional plotting of the distribution of these metrics on the Gold Truth data could help confirm that the distribution of these metrics now better matches the expected distribution.
Similarly, in Use Case 2, visual inspection leads us to believe that the segmentation generated by the Cellpose + Deli pipeline (shown in Figure 4d) is an improvement, but a direct comparison of agreement between segmented masks and masks in the published data (where the segmentations overlap) would further evidence this.
We agree that demonstrating the correlation between metric optimization and real segmentation improvement is essential. We will add new analysis comparing the distributions of the unsupervised metrics with the gold truth data before and after curation. Additionally, we will provide overlap scores where ground truth annotations are available, confirming the improvement. We will also explicitly discuss the limitation of relying solely on unsupervised metrics without complementary validation.
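As one concrete illustration of what such an overlap score could look like (a sketch under our own assumptions, not the exact metric or API used in MorphoNet 2.0), agreement between a curated mask and a ground-truth mask can be summarized by intersection over union:

```python
import numpy as np

def mask_iou(pred, truth):
    """Intersection-over-union between a predicted and a ground-truth binary mask.
    Illustrative only; not the MorphoNet 2.0 implementation."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

# Toy 2x3 masks: the overlap is 2 pixels out of a union of 4, i.e. IoU = 0.5
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(mask_iou(a, b))
```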
We would appreciate the authors addressing the risk of decreasing the quality of the segmentations by applying circular logic with their tool; MorphoNet 2.0 uses unsupervised metrics to identify masks that do not fit the typical distribution. A model such as StarDist can be trained on the "good" masks to generate more masks that match the most common type. This leads to a more homogeneous segmentation quality, without consideration for whether these metrics actually optimize the segmentation.
We thank the reviewer for this important and insightful comment. It raises a crucial point regarding the risk of circular logic in our segmentation pipeline. Indeed, relying on unsupervised metrics to select “good” masks and using them to train a model like StarDist could lead to reinforcing a particular distribution of shapes or sizes, potentially filtering out biologically relevant variability. This homogenization may improve consistency with the chosen metrics, but not necessarily with the true underlying structures.
We fully agree that this is a key limitation to be aware of. We will revise the manuscript to explicitly discuss this risk, emphasizing that while our approach may help improve segmentation quality according to specific criteria, it should be complemented with biological validation and, when possible, expert input to ensure that important but rare phenotypes are not excluded.
In Use case 5, the authors include details that the errors were corrected by "264 MorphoNet plugin actions ... in 8 hours actions [sic]". The work would benefit from explaining whether this is 8 hours of human work, trying plugins and iteratively improving, or 8 hours of compute time to apply the selected plugins.
We will clarify that the “8 hours” refer to human interaction time, including exploration, testing, and iterative correction using plugins.
Reviewer #2 (Public review):
Summary:
This article presents Morphonet 2.0, a software designed to visualise and curate segmentations of 3D and 3D+t data. The authors demonstrate their capabilities on five published datasets, showcasing how even small segmentation errors can be automatically detected, easily assessed, and corrected by the user. This allows for more reliable ground truths, which will in turn be very valuable for analysis and training deep learning models. Morphonet 2.0 offers intuitive 3D inspection and functionalities accessible to a non-coding audience, thereby broadening its impact.
Strengths:
The work proposed in this article is expected to be of great interest to the community by enabling easy visualisation and correction of complex 3D(+t) datasets. Moreover, the article is clear and well written, making MorphoNet more likely to be used. The goals are clearly defined, addressing an undeniable need in the bioimage analysis community. The authors use a diverse range of datasets, successfully demonstrating the versatility of the software.
We would also like to highlight the great effort that was made to clearly explain which type of computer configurations are necessary to run the different datasets and how to find the appropriate documentation according to your needs. The authors clearly carefully thought about these two important problems and came up with very satisfactory solutions.
We would like to sincerely thank the reviewer for their positive and thoughtful feedback. We are especially grateful that they acknowledged the clarity of the manuscript and the potential value of MorphoNet 2.0 for the community, particularly in facilitating the visualization and correction of complex 3D(+t) datasets. We also appreciate the reviewer’s recognition of our efforts to provide detailed guidance on hardware requirements and access to documentation—two aspects we consider crucial to ensuring the tool is both usable and widely adopted. Their comments are very encouraging and reinforce our commitment to making MorphoNet 2.0 as accessible and practical as possible for a broad range of users in the bioimage analysis community.
Weaknesses:
There is still one concern: the quantification of the improvement of the segmentations in the use cases and, therefore, the quantification of the potential impact of the software. While it appears hard to quantify the quality of the correction, the proposed work would be significantly improved if such metrics could be provided.
The authors show some distributions of metrics before and after segmentations to highlight the changes. This is a great start, but there seem to be two shortcomings: first, the comparison and interpretation of the different distributions does not appear to be trivial. It is therefore difficult to judge the quality of the improvement from these. Maybe an explanation in the text of how to interpret the differences between the distributions could help. A second shortcoming is that the before/after metrics displayed are the metrics used to guide the correction, so, by design, the scores will improve, but does that accurately represent the improvement of the segmentation? It seems to be the case, but it would be nice to maybe have a better assessment of the improvement of the quality.
We thank the reviewer for this constructive and important comment. We fully agree that assessing the true quality improvement of segmentation after correction is a central and challenging issue. While we initially focused on changes in the unsupervised quality metrics to illustrate the effect of the correction, we acknowledge that interpreting these distributions may not be straightforward, and that relying solely on the metrics used to guide the correction introduces an inherent bias in the evaluation.
To address the first point, we will revise the manuscript to provide clearer guidance on how to interpret the changes in metric distributions before and after correction, with additional examples to make this interpretation more intuitive.
Regarding the second point, we agree that using independent, external validation is necessary to confirm that the segmentation has genuinely improved. To this end, we will include additional assessments using complementary evaluation strategies on selected datasets where ground truth is accessible, to compare pre- and post-correction segmentations with an independent reference. These results reinforce the idea that the corrections guided by unsupervised metrics generally lead to more accurate segmentations, but we also emphasize their limitations and the need for biological validation in real-world cases.
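One simple way to quantify whether a metric distribution moves closer to a reference (gold-truth) distribution after curation, sketched here under our own assumptions rather than as the authors' planned analysis, is to compare two-sample Kolmogorov-Smirnov distances before and after:

```python
import numpy as np
from scipy.stats import ks_2samp

def distance_to_reference(before, after, reference):
    """Two-sample KS distances of a per-cell quality metric (e.g., cell volume)
    to a reference distribution, before and after curation; a smaller distance
    after curation suggests the segmentation moved toward the reference."""
    return ks_2samp(before, reference).statistic, ks_2samp(after, reference).statistic

# Simulated metric values, for illustration only
rng = np.random.default_rng(0)
reference = rng.normal(100, 10, 500)   # gold-truth cell volumes
before = rng.normal(80, 25, 500)       # uncurated segmentation
after = rng.normal(97, 12, 500)        # curated segmentation
print(distance_to_reference(before, after, reference))
```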
Reviewer #3 (Public review):
Summary:
A very thorough technical report of a new standalone, open-source software for microscopy image processing and analysis (MorphoNet 2.0), with a particular emphasis on automated segmentation and its curation to obtain accurate results even with very complex 3D stacks, including timelapse experiments.
Strengths:
The authors did a good job of explaining the advantages of MorphoNet 2.0, as compared to its previous web-based version and to other software with similar capabilities. What I particularly found more useful to actually envisage these claimed advantages is the five examples used to illustrate the power of the software (based on a combination of Python scripting and the 3D game engine Unity). These examples, from published research, are very varied in both types of information and image quality, and all have their complexities, making them inherently difficult to segment. I strongly recommend the readers to carefully watch the accompanying videos, which show (although not thoroughly) how the software is actually used in these examples.
We sincerely thank the reviewer for their thoughtful and encouraging feedback. We are particularly pleased that the reviewer appreciated the comparative analysis of MorphoNet 2.0 with both its earlier version and existing tools, as well as the relevance of the five diverse and complex use cases we selected. Demonstrating the software’s versatility and robustness across a variety of challenging datasets was a key goal of this work, and we are glad that this aspect came through clearly. We also appreciate the reviewer’s recommendation to watch the accompanying videos, which we designed to provide a practical sense of how the tool is used in real-world scenarios. Their positive assessment is highly motivating and reinforces the value of combining scripting flexibility with an interactive 3D interface.
Weaknesses:
Being a technical article, the only possible comments are on how methods are presented, which is generally adequate, as mentioned above. In this regard, and in spite of the presented examples (chosen by the authors, who clearly gave them a deep thought before showing them), the only way in which the presented software will prove valuable is through its use by as many researchers as possible. This is not a weakness per se, of course, but just what is usual in this sort of report. Hence, I encourage readers to download the software and give it time to test it on their own data (which I will also do myself).
We fully agree that the true value of MorphoNet 2.0 will be demonstrated through its practical use by a wide range of researchers working with complex 3D and 3D+t datasets. In this regard, we will improve the user documentation and provide a set of example datasets to help new users quickly familiarize themselves with the platform. We are also committed to maintaining and updating MorphoNet 2.0 based on user feedback to further support its usability and impact.
In conclusion, I believe that this report is fundamental because it will be the major way of initially promoting the use of MorphoNet 2.0 by the objective public. The software itself holds the promise of being very impactful for the microscopists' community.
-
-
faculty.washington.edu faculty.washington.edu
-
There are two adjacent commands that do very different things.
I have observed this design problem in several instances in different applications. Recently, I was playing a video game where I tried purchasing a new car and was one click away from destroying my previous car that I had worked hard for (you're only allowed to have one). There was just one warning that this would happen, which looked like any other generic warning in the game, and I almost pressed yes, which would've been pitiful. It's really important when designing interfaces to pay attention to these little things that can make or break the user experience.
-
-
inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
-
n schools where dance and cheer is given less status than football and basketball and young women’s bodies are perceived as fair game for commentary
There truly is a blueprint for how women are expected to act and look, more so than for men, because in my opinion society still suffers from seeing women as "decor" next to a man to help him look better. Muscular or "bigger"-sized women should be normalized not just in sports or similar groups but also in the media and society as a whole.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
What experiences do you have of social media sites making particularly good recommendations for you?
One algorithm that I really enjoy is the daily music recommendations on Netease Music, and I know its development trajectory really well. The recommendation algorithm was first introduced into the music app about 6-7 years ago and only had a "daily recommendation" feature that contained 30 songs based on the songs users had listened to and added to playlists before. It wasn't a big feature and was not so advanced back then, but the algorithm rapidly evolved and started to hit more and more people's sweet spots by assessing both short-term data (like the songs the user listened to the day before) and long-term data (like the song genres the user has been interested in this month), and by adding more accurate and precise labels to songs. Sometimes the labels are so precise that the algorithm can deduce which game the songs I'm listening to come from and recommend other pieces from the same game. About two years ago, the algorithm received another great upgrade and started to offer sub-category recommendations. For example, there are classical and J-pop recommendations that only provide these kinds of pieces based on my interests, yet the effectiveness of the recommendations is still top-tier. It's not an exaggeration to say that the recommendation algorithm of Netease Music perfectly addresses the pain of finding new songs that fit my taste. I also really appreciate the data privacy of the algorithm, since it does not ask for private information like location, contacts, etc. Even a newly created account with no info can use the algorithm.
-
-
learnenglish.britishcouncil.org learnenglish.britishcouncil.org
-
quiz
a game or competition in which you answer questions 問答比賽,智力競賽
- UK A lot of pubs have quiz nights once or twice a week. 許多酒吧每週都會有一兩次的智力競賽之夜。
-
-
-
Mangione took a software programming internship after high school at Maryland-based video game studio Firaxis, where he fixed bugs on the hit strategy game Civilization 6, according to a LinkedIn profile. Firaxis’ parent company, Take-Two Interactive, said it would not comment on former employees.
I fail to see how this is relevant to the discussion. I feel like his software programming internship doesn’t explain why he ended up "killing" Brian Thompson.
-
-
hiphopdx.com hiphopdx.com
-
You going against Compton????
Kendrick automatically has the whole west coast on his side
-
loyalty to his hometown
loyalty is seemingly very important in Compton culture
-
-
www.theringer.com www.theringer.com
-
Jay and Nas needed to overcome each other to elevate themselves to the top of the pile.
Drake and Kendrick are undoubtedly among the top of the game right now
-
height of Drake’s and Kendrick’s respective careers?
Very debatable; both have grown a ton since 2013, only getting better. Kendrick has been putting out music less consistently and has been experimenting more, but that doesn't mean he's not at the top of his game now.
-
-
news.northeastern.edu news.northeastern.edu
-
“sneak dissing.”
An extremely popular tactic in today's rap game.
-
Competition is “intrinsic to hip-hop” culture, Forman says.
it has been said before that hip hop is a game, aka the rap "game"
-
-
people.com people.com
-
Lamar denounced the notion of himself and Drake being on the same level.
Kendrick rapped, "motherfuck the big 3, it's just big me," the big 3 being Drake, Kendrick, and J. Cole; all of them are considered to be the best rappers in the game by a long shot.
-
-
www.rollingstone.com www.rollingstone.com
-
evoking other people in a beef, especially a dead person, is fair game.
The article said earlier that "there are no rules in rap beef," which is a contradiction.
-
why artists are coming out against him
Drake has made a lot of enemies in his time in the rap game; his song "Fake Love" says, "i got fake people showing fake love to me," implying that he knows some of his friends aren't really his friends.
-
Drake has long been accused of being a culture vulture
A longstanding diss of Drake, who is light-skinned; in one of his songs he says, "i used to get bullied for being black and when i get here i'm not black enough," with "here" being the rap game.
-
-
www.biorxiv.org www.biorxiv.org
-
Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.
Learn more at Review Commons
Reply to the reviewers
Reviewer #1 (Evidence, reproducibility and clarity)
*This study examines the reorganization of the microtubule (MT) cytoskeleton during early neuronal development, specifically focusing on the establishment of axonal and dendritic polarity. Utilizing advanced microscopy techniques, the authors demonstrate that stable microtubules in early neurites initially exhibit a plus-end-out orientation, attributed to their connection with centrioles. Subsequently, these microtubules are released and undergo sliding, resulting in a mixed-polarity orientation in early neurites. Furthermore, the study elegantly illustrates the spatial segregation of microtubules in dendrites based on polarity and stability. The experiments are rigorously executed, and the microscopy data are presented with exceptional clarity. The following are my primary concerns that warrant further consideration by the authors. *
-
Potential Bias in the MotorPAINT Assay: Kinesin-1 and kinesin-3 motors exhibit distinct preferences for post-translationally modified (PTM) microtubules. Given that kinesin-1 preferentially binds to acetylated microtubules over tyrosinated microtubules in the MotorPAINT assay, the potential for bias in the results arises. Have the authors explored the use of kinesin-3, which favors tyrosinated microtubules, to corroborate the observed microtubule polarity?
We thank the reviewer for the careful assessment of our manuscript. As the reviewer noted, it has indeed been demonstrated that kinesin-1 prefers microtubules marked by acetylation (Cai et al., PLoS Biol 2009; Reed et al., Curr Biol 2006) and kinesin-3 prefers microtubules marked by tyrosination in cells (Guedes-Dias et al., Curr Biol 2019; Tas et al., Neuron 2017); however, these preferences are limited in vitro, as demonstrated for example in Sirajuddin et al. (Nat Cell Biol 2014). When motor-PAINT was introduced, it was verified that purified kinesin-1 moves over both acetylated and tyrosinated microtubules with no apparent preference in this assay (Tas et al., Neuron 2017). This could be due to the more in vitro-like nature of the motor-PAINT assay (e.g. some MAPs may be washed away) and/or because of the addition of Taxol during the gentle fixation step, which converts all microtubules into those preferred by kinesin-1. We will clarify this in the text.
Planned revisions:
- We will clarify the lack of kinesin-1 selectivity in motor-PAINT assays by adding the following sentence to the main text when introducing motor-PAINT: Importantly, while kinesin-1 has been shown to selectively move on stable, highly-modified microtubules in cells (Cai et al., PLoS Biol 2009; Reed et al., Curr Biol 2006), this is not the case after motor-PAINT sample preparation (Tas et al., Neuron 2017).
Axon-Like Neurites in Stage 2b Neurons: The observation of axon-like neurites in Stage 2b neurons, characterized by an (almost) uniformly plus-end-out microtubule organization, is noteworthy. Have the authors confirmed this polarity using end-binding (EB) protein tracking (e.g., EB1, EB3) in Stage 2b neurons? Do these neurites display distinct morphological features, such as variations in width? Furthermore, do they consistently differentiate into axons when tracked over time using live-cell EB imaging, rather than the MotorPAINT assay? Could stable microtubule anchoring impede free sliding in these neurites or restrict sliding into them? Investigating microtubule sliding dynamics in these axon-like neurites would provide valuable insights.
We thank the reviewer for highlighting this finding. Early in development, cultured neurons are known to transiently polarize and have axon-like neurites that may or may not develop into the future axon (Burute et al., Sci Adv 2022; Schelski & Bradke, Sci Adv 2022; Jacobson et al., Neuron 2006). In the absence of certain molecular or physical factors (e.g. Burute et al., Sci Adv 2022; Randlett et al., Neuron 2011), this transient polarization is seemingly random and as such, we do not expect the axon-like neurites in stage 2b neurons to necessarily become the axon. Interestingly, anchoring stable microtubules in a specific neurite using cortically-anchored StableMARK (Burute et al., Sci Adv 2022) or stabilizing microtubules in a specific neurite using Taxol (Witte et al., JCB 2008) has been shown to promote axon formation, but these stable microtubules have slower turnover (perhaps necessitating the use of laser severing as in Yau et al., J Neurosci 2016) and may not always bear EB comets given that EB comets are less commonly seen at the ends of stable microtubules (Jansen et al., JCB 2023).
Planned revision:
- We will add additional details to the text to clarify the likely transient nature of this polarization in agreement with previous literature and specify that they are otherwise not morphologically distinct.
- We will perform additional EB3 tracking experiments in Stage 2b neurons to examine potential differences between neurites.
*Taxol and Microtubule Sliding: Taxol-induced microtubule stabilization is known to induce the formation of multiple axons. Does taxol treatment diminish microtubule sliding and prevent polarity reversal in minor neurites, thereby facilitating their development into axons? *
We thank the reviewer for this interesting suggestion. Taxol converts all microtubules into stable microtubules. Given that the initial neurites tend to be of mixed polarity, having stable microtubules pointing the "wrong" way may impede sliding and polarity sorting. Alternatively, since it is precisely the stable microtubules that we see sliding between and within neurites using StableMARK, Taxol may also increase the fraction of microtubules undergoing sliding. Because of this, it is not straightforward to predict how Taxol affects microtubule (re-)orientation and sliding. Preliminary motor-PAINT experiments do suggest that the multiple axons induced by Taxol treatment all contain predominantly plus-end-out microtubules, as expected, and that this is the case from early in development. We will further develop these findings to include them in our manuscript.
Planned revision:
- We have already performed some experiments in which we treat neurons with 10 nM Taxol and verify that we observe the formation of multiple axons by motor-PAINT. We will perform additional experiments in which we add this low dose of Taxol to the cells and determine its effect on microtubule sliding dynamics.
*Sorting of Minus-End-Out Microtubules (MTs) in Developing Axons: Traces of minus-end-out MTs are observed proximal to the soma in both Stage 2b axon-like neurites and Stage 3 developing axons (Figure S4). Does this indicate a clearance mechanism for misoriented MTs during development? If so, is this sorting mechanism specific to axons? Could dynein be involved? Pharmacological inhibition of dynein (e.g., ciliobrevin-D or dynarrestin) could assess whether blocking dynein disrupts uniform MT polarity and axon formation. *
We indeed think that a clearance mechanism is involved for removing misoriented microtubules in the axon after axon specification. Many motor proteins have been implicated in the polarity sorting of microtubules in neurons and for axons, dynein is believed to play a role (Rao et al., Cell Rep 2017; del Castillo et al., eLife 2015; Schelski & Bradke, Sci Adv 2022). A few of these studies already employed ciliobrevin, noting that it increases the fraction of minus-end-out microtubules in axons (Rao et al., Cell Rep 2017) and reduces the rate of retrograde flow of microtubules in immature neurites (Schelski & Bradke, Sci Adv 2022). These findings are in line with the suggestion of the reviewer. Interestingly, however, as we highlight in the discussion, the motility we observe for polarity reversal is extremely slow on average (~60 nm/minute) because the microtubule end undergoes bursts of motility and periods in which it appears to be tethered and rather immobile. Given that most neurites are non-axon-like, we assume these sliding events are mostly not taking place in axons or axon-like neurites. These events may thus be orchestrated by other motor proteins (e.g. kinesin-1, kinesin-2, kinesin-5, kinesin-6, and kinesin-12) that have been implicated in microtubule polarity sorting in neurons. We do observe retrograde sliding of stable microtubules in these neurites at a median speed of ~150 nm/minute, which is again much slower than typical motor speeds and occurs in almost all neurites and not specifically in one or two axon-like neurites. It is thus unclear which motors may be involved, and it is difficult to predict how any drug treatments would affect microtubule polarity.
Dissecting the mechanisms of microtubule sliding will require many more experiments and will first require the recruitment and training of a new PhD student or postdoc. Therefore, we feel this falls outside the scope of the current work, which carefully maps the microtubule organization during neuronal development and demonstrates the active polarity reversal of stable microtubules during this process.
Planned revision:
- We will expand our discussion of the potential mechanisms facilitating polarity sorting in axons and axon-like neurites in the discussion.
Impact of Kinesin-1 Rigor Mutants on MT Polarity and Dynamics: Would the expression of kinesin-1 rigor mutants alter MT dynamics and polarity? Validation with alternative methods, such as microtubule photoconversion, would be beneficial.
It is important to note that StableMARK and its effects on microtubule stability have been extensively verified in the paper in which it was introduced (Jansen et al., JCB 2023). At low expression levels (where StableMARK has a speckled distribution along microtubules), StableMARK does not alter the stability of microtubules (e.g., they are still disassembled in response to serum starvation), alter their post-translational modification status or their distribution in the cell, or impede the transport of cargoes along them. Given that we chose to image neurons with very low expression levels of StableMARK (as inferred by the speckled distribution along microtubules), we expect its effects on the microtubule cytoskeleton to be minimal.
Planned revision:
- We will clarify the potential effects of StableMARK in the manuscript. We will perform experiments with photoactivatable tubulin to examine whether we still see microtubules that live for over 2 hours. We will furthermore examine whether it allows us to see microtubule sliding between neurites similar to work performed in the Gelfand lab (Lu et al., Curr Biol 2013).
*Molecular Motors Driving MT Sliding: Which specific motors drive MT sliding in the soma and neurites? If a motor drives minus-end-out MTs into neurites, it must be plus-end-directed. The discussion should clarify the polarity of the involved motors to strengthen the conclusions. *
We thank the reviewer for highlighting this point and will improve our discussion to clarify the polarity of the involved motors.
Planned revision:
- We will expand our discussion of the motors potentially involved in sliding microtubules when revising the manuscript.
Stability of Centriole-Derived Microtubules: Microtubules emanating from centrioles are typically young and dynamic. How do they acquire acetylation and stability at an early stage? Do centrioles exhibit active EB1/EB3 comets in Stage 1/2a neurons? If these microtubules are severed from centrioles, could knockdown of MT-severing proteins (e.g., Katanin, Spastin, Fidgetin) alter microtubule polarity during neuronal development? A brief discussion would be valuable.
We thank the reviewer for raising these interesting questions and suggestions. As suggested, we will include a brief discussion of these issues. What is known about the properties of stable microtubules is limited, so it is currently unclear how they are made. For example, we do not know if they are converted from labile microtubules or nucleated by a distinct pathway. If they are nucleated by a distinct pathway, do these microtubules grow in a similar manner as labile microtubules and do they have EB comets at their plus-ends (given that EB compacts the lattice (Zhang et al., Cell 2015, PNAS 2018) and stable microtubules have an expanded lattice in cells (de Jager et al., JCB 2025))? If they are converted, does something first cap their plus-end to limit further growth (given that EB comets are rarely observed at the ends of stable microtubules (Jansen et al., JCB 2023))?
We also do not know how the activity of the tubulin acetyltransferase αTAT1 is regulated. Is its access to the microtubule lumen regulated or is its enzymatic activity stimulated by some means (e.g., microtubule lattice conformation or a molecular factor)?
We find the possibility that microtubule severing enzymes release these stable microtubules from the centrioles very exciting and hope to test the effects of their absence on microtubule polarity in the future. We will discuss this in the manuscript as suggested.
Planned revision:
-
We will expand our discussion about the centriole-associated stable microtubules in the revised manuscript.

Minor Points
-
In Movies 3 and 4, please use arrowheads or pseudo-coloring to highlight microtubules detaching from specific points. In Movie 5, please mark the stable microtubule that rotates within the neurite. These annotations would enhance clarity.
Planned revision:
- We will add arrowheads/traces to the movies to enhance clarity.
The title states: 'Stable microtubules predominantly oriented minus-end-out in the minor neurites of Stage 2b and 3 neurons.' However, given that the minus-end-out percentage increases after nocodazole treatment but only reaches a median of 0.48, 'predominantly' may be an overstatement. Please consider rewording.
We thank the reviewer for catching this mistake and will adjust the statement to better reflect the median value.
Planned revision:
- We will reword this statement in the revised text.
*Please compare the StableMARK system with the K560Rigor-SunTag approach described by Tanenbaum et al. (2014). What are the advantages of StableMARK over the SunTag method? *
While the SunTag is certainly a powerful tool to visualize molecules at low copy number, we believe that StableMARK is more appropriate than the K560Rigor-SunTag tool for our assays due to two main reasons. Firstly, K560Rigor-SunTag is based on the E236A kinesin-1 mutation, while StableMARK is based on the G234A mutation. These are both rigor mutations of kinesin-1 but behave differently; the E236A mutant is strongly bound to the microtubule in an ATP-like state (neck linker docked), while the G234A mutant is also strongly bound, but not in an ATP-like state (Rice et al., Nature 1999). This means that they may have different effects on or preferences of the microtubule lattice. Indeed, while StableMARK (G234A) has been shown to preferentially bind microtubules with an expanded lattice (Jansen et al., JCB 2023; de Jager et al., JCB 2025), this may not be the case for the E236A mutant. In support of this, it has been shown that, while nucleotide free kinesin-1 can expand the lattice of GDP-microtubules at high concentrations (>10% lattice occupancy) in vitro (Peet et al., Nat Nanotechnol 2018; Shima et al., JCB 2018), kinesin-1 in the ATP-bound state does not maintain this expanded lattice (Shima et al., JCB 2018). Thus, we expect the kinesin-1 rigor used by Tanenbaum et al. (Cell 2014) to not be specific for stable microtubules (with an expanded lattice) in cells. In addition, given the dense packing of microtubules in neurites (not well-established in developing neurites, but with an inter-microtubule distance of ~25 nm in axons and ~65 nm in dendrites (Chen et al., Nature 1992)), the very large size of the SunTag could be problematic. The K560Rigor-SunTag tool from Tanenbaum et al. (Cell 2014) is bound by up to 24 copies of GFP (each ~3 nm in size), meaning that it may obstruct or be obstructed by the dense microtubule network in neurites.
Planned revision:
- Given that, unlike the K560Rigor-SunTag construct, StableMARK has been carefully validated as a live-cell marker for stable microtubules, we believe that the above discussion goes beyond the scope of the manuscript.
Microscopy data (Movies 2, 3, and 4) show microtubule bundling with StableMARK labeling, which is absent in tubulin immunostaining. Could this be an artifact of ectopic StableMARK expression? If so, a brief note addressing this potential effect would be beneficial.
As with any overexpression, there is a risk of artifacts. We feel that in the cells presented, the risk of artifacts is limited because we have chosen neurons expressing StableMARK at very low levels. Prior work has demonstrated that in cells where StableMARK has a speckled appearance on microtubules, it has limited undesired effects on stable microtubules or the cargoes moving along them (Jansen et al., JCB 2023). Perhaps some of the apparent difference in the amount of bundling can be explained by the improved z-resolution, and thus better optical sectioning, of the expansion microscopy images. Any z-slice imaged using expansion microscopy will contain fewer microtubules, so bundling may be less obvious. If we compare the amount of bundling seen in StableMARK-expressing cells with the amount of bundling of acetylated microtubules (a marker for stable microtubules) in DMSO/nocodazole-treated (non-electroporated) cells imaged by confocal microscopy in Figure S7, we feel that the difference is not so large. Nonetheless, we can briefly address this potential effect in the text.
Planned revision:
- We will improve the transparency of the manuscript by briefly mentioning this in the text.
Reviewer #1 (Significance)
It is an important paper challenging established ideas of microtubule organization in neurons, and it is important to the wide audience of cell biologists and neurobiologists.
Reviewer #2 (Evidence, reproducibility and clarity)
*The manuscript uses state-of-the-art microscopy (e.g., expansion microscopy, motorPAINT) to observe microtubule organization during early events of differentiation of cultured rat hippocampal neurons. The authors confirm previous work showing that microtubules in neurites and dendrites are of mixed polarity whereas they are of uniform plus-end-out polarity in axons. They show that stable microtubules (labeled with antibody against acetylated tubulin) are located in the central region of neurite cross-section across all differentiation stages. They show that acetylated microtubules are associated with centrioles early in differentiation but less so at later stages. And they show that stable microtubules can move from one neurite to another, presumably by microtubule sliding. *
Comments
-
*I found the manuscript difficult to read. There are lots of "segregations" of microtubules occurring over these stages of neuronal differentiation: segregation between the center of a neurite and the outer edge with respect to neurite cross-section, segregation between the region proximal to the cell body and the region distal to the cell body, and segregation over time (stages). The authors don't do a good job of distinguishing these and reporting the major findings in a way that is clear and straightforward. *
We thank the reviewer for their feedback and will go over the text to make it easier to read. Within neurites, we use the word 'segregated' in the manuscript to mean that the microtubules form two spatially separate populations across the width of the neurites (i.e., their cross-section if viewed in 3D). Because of variability seen in the neurites of this stage, this segregation does not always present as a peripheral vs. central enrichment of the different populations of microtubules as we sometimes observed two side-by-side populations instead. We will make sure that we properly define this in the manuscript to avoid any confusion.
When discussing other types of segregation, we tried to use different wording such as when discussing the proximal-distal distribution of microtubules with different orientations in axon-like neurites in this excerpt:
Sometimes these axons and axon-like neurites had a small bundle of minus-end-out microtubules proximal to the soma (Figure S4). This suggests that plus-end-out uniformity emerges distally first in these neurites, perhaps by retrograde sliding of these minus-end-out microtubules (see Discussion).
When discussing changes related to a particular stage, we instead aimed to list which stage we were talking about, such as seen in the discussion:
Emerging neurites of early stage 2 neurons already contain microtubules of both orientations and these are typically segregated. These emerging neurites also contain segregated networks of acetylated (stable) and tyrosinated (labile) microtubules. In later stage 2, stage 3, and stage 4 neurons, stable (nocodazole-resistant) microtubules are oriented more minus-end-out compared to the total (untreated) population of microtubules; however, in early stage 2 neurons, stable microtubules are preferentially oriented plus-end-out, likely because their minus-ends are still anchored at the centrioles at this stage. The fraction of anchored stable microtubules decreases during development, while the appearance of short stumps of microtubules attached to the centrioles suggests that these microtubules may be released by severing.
We appreciate the reviewer's concerns and will review the text carefully for clarity.
Planned revision:
- We will carefully go through the text when revising the manuscript to ensure that these distinctions are clear and consider using synonyms or other descriptors where they would enhance clarity.
*The major focus is on microtubule changes between stages 2a and 2b. This is introduced in the text and in the methods but not reflected in Figure 1A which should serve as an orientation of what is to come. It would be helpful to move the information about stages to the main text and/or Figure 1A. *
We thank the reviewer for pointing this out and will be more explicit about the distinction between stages 2a and 2b in the main text and make the suggested change to Figure 1A.
Planned revision:
- We will incorporate the suggested changes in the revised manuscript.
For Figure 1, the conclusions are generally supported by the data with the exception of the data for stage 2b in 1D and 1H. The images in D and the line scan in H suggest that for stage 2b, minus-end-out are on one edge whereas the plus-end-out are on the other edge of the neurite cross-section. But this is only true for one region along this example neurite. If the white line in D was moved proximal or distal along the neurite, the line scan for stage 2b would look like those of stages 2a and 3.
We thank the reviewer for noting this in the figure. For these earlier stages in neuronal development, the distribution of different types of microtubules within the neurite is more variable and does not always adhere to the central-peripheral distribution described for more mature neurons (Tas et al., Neuron 2017). We did not intend to suggest that neurites of stage 2b neurons consistently have a different radial distribution of microtubules of opposite orientation, but rather that microtubules of the same orientation tend to bundle together. Sometimes this bundling produces a central or peripheral enrichment, as described for mature neurons (Tas et al., Neuron 2017) and as seen in Figure 1D-F at certain points along the length of the neurites, and sometimes the bundling simply produces two side-by-side populations. To reflect this diversity, we chose two different examples in the figure. The line scans presented in Figure 1H were taken approximately at the midpoint of the presented ROIs. In addition, as our imaging in this case is two-dimensional, we do not want to make explicit claims about the radial distribution of the different populations of microtubules.
Planned revision:
- We will adjust our description of this figure in the main text to be more explicit about how we interpret these results. We will ensure that it is apparent that we do not think there is a specific radial distribution of microtubules depending on the developmental stage.
*For Figure 2, I found it difficult to relate panels A-F to panels G-J. I recommend combining 2G-J with 3A-B for a separate figure focused on the orientation of stable microtubules across different stages. *
We thank the reviewer for this suggestion and will take it into consideration when preparing the revised manuscript, making sure that our figure organization is well justified.
For Figure 3, it is difficult to reconcile the traces with the corresponding images - that is, there are many acetylated microtubules in the top view image that appear to contact centrioles but are not in the tracing. Perhaps the tracings would more accurately reflect the localization of the acetylated microtubules in the top view images if a stack of images was shown rather than the max projections. Or if the authors were to stain for CAMSAPs to identify non-centrosomal microtubules. I find the data unconvincing but I do believe their conclusion because it is consistent with published data in the field. The data need to be quantified.
We thank the reviewer for noting this. Importantly, the tracing was done on a three-dimensional stack of images, whereas we present maximum projections of a few slices in Figure 3C for easy visualization. Projection artifacts indeed make it look as though some additional microtubules are attached to the centrioles, whereas in the three-dimensional stacks it is apparent that they are not. We can include the z-stacks as supplementary material so that readers can also verify this themselves. We will additionally clarify that this is the case in the text related to Figure 3C.
Planned revision:
- We will better explain how the tracing was done in the methods section and make a brief note of the projection artifacts in the main text.
- We will also include the z-stacks as supplementary data.
*I have a major concern with the conclusions of Figure 4. Here the authors use StableMARK to argue that microtubules do not depolymerize in one neurite and then repolymerize in another neurite but rather can be moved (presumably by sliding) from one neurite to another. The problem is that StableMARK-decorated microtubules do not depolymerize. So yes, StableMARK-decorated microtubules can move from one neurite to another but that does not say anything about what normally happens to microtubules during neuronal differentiation. In addition, the text says that the focus on Figure 4 is on how microtubules change between stages 2a and 2b but data is only shown for stage 2b. *
As noted by the reviewer, StableMARK can indeed hyperstabilize microtubules when over-expressed; however, it is important to note that this strongly depends on the level of overexpression of the marker. This is discussed in detail in the paper introducing StableMARK, where it is described that at low expression levels, StableMARK does not alter the stability of microtubules (i.e., StableMARK decorated microtubules can still depolymerize/disassemble and they are disassembled in response to serum starvation), alter their post-translational modification status or their distribution in the cell, or impede the transport of cargoes along them (Jansen et al. JCB 2023). Despite this, we agree that it is important to validate these findings in our experimental system (primary rat hippocampal neurons) and so we plan to perform experiments with photoactivatable tubulin to verify the long lifetime of stable microtubules and aim to also observe microtubule sliding (similar to assays performed in the Gelfand lab (Lu et al., Curr Biol 2013)) in the absence of StableMARK.
Planned revision:
- We will confirm our findings using photoactivatable tubulin. We hope to demonstrate the long lifetime of the microtubules in this case and observe the sliding of microtubules by another means.
- We will also revise the text to better explain the potential impacts of StableMARK and that we chose the lowest expressing cells we could find so early after electroporation.
*The data are largely descriptive and it is of course important to first describe things before one can dive into mechanism. But most of the findings confirm previous work and new findings are limited to showing that e.g. microtubule segregation appears earlier than previously observed. *
Our study is the first to use Motor-PAINT to carefully map changes in microtubule orientations during neuronal development. Furthermore, it is the first to use the recently introduced live-cell marker for stable microtubules to directly demonstrate the active polarity reversal of stable microtubules during this process.
Optional: It would be nice if the authors could investigate some potential mechanisms. For example, does knockdown or knockout of severing enzymes prevent the loss of centriolar microtubules shown in Figure 3? Does knockdown or knockout of kinesin-2 or EB1 prevent the reorientation of microtubules (Chen et al 2014)?
We agree with the reviewer that these are exciting experiments to perform, and we hope to unravel the mechanisms underlying microtubule reorganization in future work. However, this will require many more experiments, as well as the recruitment and training of a new PhD student or postdoc, given that the first author has left the lab. Therefore, we feel that this falls outside the scope of the current work, which carefully maps the microtubule organization during neuronal development and demonstrates the active polarity reversal of stable microtubules during this process.
*Overall, the methods are presented in such a way that they can be reproduced. One exception is in the motor paint sample prep section: is it three washes for 1 min each or three washes over 1 min? *
We thank the reviewer for pointing out this mistake and will adjust this step in the methods section accordingly.
Planned revision:
- We will revise the methods section to read 'washed three times for 1 minute each'.
*No statistical analysis is provided. The spread of the data in the violin plots is very large and it is difficult to ascertain how strongly one should make conclusions based on different data spreads between different conditions. *
We thank the reviewer for noting this and will add statistical tests to the graphs showing the fraction of minus-end-out microtubules in different stages/conditions.
Planned revision:
- We will include statistical tests in the specified graphs.
For Figure S5, the excluded data (axons and axon-like neurites) should also be shown.
We thank the reviewer for this suggestion and will include this data.
Planned revision:
- We will adjust this supplemental figure to also include the specified data.
*For the movies, it would be helpful to have the microtubule moving from one neurite to another identified in some way as it is difficult to tell what is going on. *
We thank the reviewer for pointing this out.
Planned revision:
- We will trace the microtubule in this movie to enhance clarity.
Reviewer #2 (Significance)
A strength of the study is the state-of-the-art microscopy (e.g., expansion microscopy, motorPAINT) and its application to a classic experimental model (rat hippocampal neurons). The information will be useful to those interested in the details of neuronal differentiation. A limitation of the study is that it appears to mostly confirm previous findings in the field (microtubule segregation, loss of centriolar anchoring, microtubule sliding). The advance to the field is that the manuscript shows that these events occur earlier in differentiation than previously known.
-
Reviewer #3 (Evidence, reproducibility and clarity)
*The study by Iwanski and colleagues explores the establishment of the specific organisation of the neuronal microtubule cytoskeleton during neuronal differentiation. They use cultures of dissociated primary hippocampal rat neurons as a model system, and apply the optimised motor-PAINT technology, expansion microscopy/immunofluorescence and live cell imaging to investigate the polarity establishment and the distribution of differentially modified microtubules during early development. *
They show that in young neurons microtubules are of mixed polarity, but at this stage already the stable (acetylated) microtubules are preferentially oriented plus-end-out, and are connected to the centrioles. In later stages, the stable microtubules are released from the centrioles and reverse their orientation by moving around inside the cell body and the neurites.
*Overall, the conclusions are well supported by the presented data. The experiments are conducted thoroughly, the figures are clearly presented (for minor comments, see below) and the manuscript is well and clearly written. *
Major comments
-
What is the proportion of neurons with different types of neurites (axon-like, non-axon-like) in stage 2b? (middle paragraph page 5 and Fig 1E). Please provide a quantification. How was the quantification in Fig 2B-D-F done? Why do the curves all start at 0? Please provide a scheme explaining these measurements. Furthermore, the data in Fig 2B do not reflect the statement "the segregation (...) was less evident" than in later stages (top of page 6): while it is less evident than in stage 2b, it is extremely similar to stage 3. Please revise accordingly.
We thank the reviewer for pointing out these important details. We will make the suggested changes in the text, adding the proportion of neurons with different types of neurites and adjusting the statement mentioned.
The radial intensity distributions were quantified as described in Katrukha et al. (eLife 2021). In the methods section, we describe the process in brief:
To analyze the radial distribution of acetylated and tyrosinated microtubules in expanded neurites, deconvolved image stacks were processed using custom scripts in ImageJ (v1.54f) and MATLAB (R2024b) as described in detail elsewhere (Katrukha et al., 2021). Briefly, on maximum intensity projections (XY plane), we drew polylines of sufficient thickness (300 px) to segment out neurite portions 44 µm (10 µm when corrected for expansion factor) in length proximal to the cell soma. Using Selection > Straighten on the corresponding z-stacks generated straightened B-spline interpolated stacks of the neurite sections. These z-stacks were then resliced perpendicularly to the neurite axis (YZ-plane) to visualize the neurite cross-section. From this, we could semi-automatically find the boundary of the neurite in each slice using first a bounding rectangle that encompasses the neurite (per slice) and then a smooth closed spline (approximately oval). To build a radial intensity distribution from neurite border to center, closed spline contours were then shrunken pixel by pixel in each YZ-slice while measuring ROI area and integrated fluorescence intensity. From this, we could ascertain the average fluorescence intensity per contour iteration, allowing us to calculate a radial intensity distribution by calculating the radius corresponding to each area (assuming the neurite cross-section is circular).
The curves thus all start at 0 because no intensity "fits" into a circle of radius 0 and then gradually increase because very few microtubules "fit" into circles with the smallest radii.
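For readers who want to follow the measurement computationally, a minimal Python sketch of this contour-shrinking idea is shown below. It is an illustration only, not the ImageJ/MATLAB scripts used in the study, and it substitutes iterative binary erosion of a binary neurite mask for the smooth spline contours described above.

```python
# Simplified sketch of the radial intensity measurement described above.
# This is NOT the published ImageJ/MATLAB pipeline: it approximates the
# shrinking spline contour with iterative binary erosion of a neurite mask.
import numpy as np
from scipy import ndimage


def radial_intensity_profile(cross_section, mask):
    """cross_section: 2D array (YZ slice); mask: boolean neurite outline."""
    areas, totals = [], []
    current = mask.copy()
    while current.any():
        areas.append(current.sum())                  # ROI area in pixels
        totals.append(cross_section[current].sum())  # integrated intensity
        current = ndimage.binary_erosion(current)    # shrink contour by ~1 px

    areas = np.array(areas, dtype=float)
    totals = np.array(totals, dtype=float)

    # Intensity contained in the ring between successive contours
    ring_intensity = totals[:-1] - totals[1:]
    ring_area = areas[:-1] - areas[1:]
    mean_ring_intensity = np.divide(ring_intensity, ring_area,
                                    out=np.zeros_like(ring_intensity),
                                    where=ring_area > 0)

    # Radius corresponding to each contour, assuming a circular cross-section
    radii = np.sqrt(areas / np.pi)
    return radii[:-1], mean_ring_intensity
```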
Planned revision:
- We will revise the text to include the suggested changes and add a brief statement to the methods section to explain why the curves start at 0.
*It should be stressed in the text, that the modification-specific antibodies only detect modified microtubules. Thus, in figure 3, in the absence of total tubulin staining, it is possible that there are more microtubules than revealed with the anti-acetylated tubulin antibody. A possible explanation should be discussed. *
We thank the reviewer for highlighting this point and will adjust the text accordingly.
Planned revision:
- We will clarify this in the revised text by adding the following sentence: In addition, given that we specifically stained for acetylated tubulin (a marker for stable microtubules), it is possible that other non-acetylated and thus perhaps dynamic microtubules are also associated with the centrioles.
*OPTIONAL: As discussed in the manuscript's discussion, testing some of the proposed mechanisms regulating microtubule cytoskeleton architecture in development (motors, crosslinkers, severing enzymes) would significantly increase the impact of this study. Exploring these phenomena in a more complex system (3D culture, brain explants) closer to the intricate character of the brain than the 2D dissociated neurons would be a real game-changer. *
We agree that sorting out the mechanisms driving microtubule reorganization would be very exciting. However, this will require many more experiments, as well as the recruitment and training of a new PhD student or postdoc, given that the first author has left the lab. Therefore, we feel this falls outside the scope of the current work, which carefully maps the microtubule organization during neuronal development and demonstrates the active polarity reversal of stable microtubules during this process.
Minor comments
-
*It could be useful to write on each panel whether the images were obtained with expansion or motor-PAINT technique: the rendering of the figures is very similar, and despite the different colour scheme can be confusing. *
We thank the reviewer for pointing this out.
Planned revision:
- We will incorporate this suggestion when revising our manuscript.
Reviewer #3 (Significance)
This manuscript provides insights into the establishment of the microtubule cytoskeleton architecture specific to highly polarised neurons. The imaging techniques used, improved from the ones published before (motor-PAINT: Kapitein lab in 2017, U-ExM: Hamel/Guichard lab in 2019), yield beautiful and convincing data, marking an improvement compared to previous studies.
*However, the novelty of some of the findings is relatively limited. Indeed, a mixed microtubule orientation in very young neurites has already been shown (Yau et al, 2016, co-authored by Kapitein), as has the separate distribution of acetylated and tyrosinated / stable and labile / plus-end-out and plus-end-in microtubules in dendrites (Tas, ..., Kapitein, 2017). *
*On the other hand, observation of the live movement of microtubules with the resolution allowing to see single (stable) microtubules is new and important. It provides an exciting setup to explore the mechanisms of polarity reversal of microtubules in neuronal development and it is regrettable that these mechanisms have not been explored further. *
*The association of (stable) microtubules with the centrioles is also a technically challenging analysis. Despite not being able to visualise all microtubules, but only acetylated ones, these data are novel and exciting. *
*This work will be of interest for neuronal cell biologists, developmental neurobiologists. The impact would be larger if the mechanistic questions were addressed using these sophisticated methodologies. *
*This reviewer's expertise is the regulation of the microtubule cytoskeleton and its impact on molecular, cellular and organism levels. *
-
-
-
-
-
-
blog.sens-public.org blog.sens-public.org
-
an unambiguous definition of what thinking is
Not necessarily of "thinking", but of "playing"; Turing abandons the former in his text ("The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion.", p. 442):
“We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’” (p. 434).
What matters in practice (not just for Turing: for us today as well) is whether the machine can do what we want it to do (play, talk, write without mistakes; in short, live up to our expectations of intelligence).
May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection. (p. 435)
-
- Apr 2025
-
www.reddit.com www.reddit.com
-
embodyingcyberspace.com embodyingcyberspace.com
-
One moment you’re flying high and everything is as clear as it can be. You’re experiencing a surge of unadulterated pleasure, a burst of sheer delight. But then, all of a sudden, the situation flips and you’re down in the dumps feeling the nagging necessity for another game, another sweet, another hit, another shot.
for - adjacency - cyber ghosts - hungry ghosts - the hunger is temporarily satisfied, but the hunger pangs start again - cycling in samsara - consumerism - David Loy - inability for consumerism to fill our sense of lack - https://hyp.is/WuaFQCKZEfCFA-eSTwduzg/www.youtube.com/watch?v=yWRA4cUCid8
-
-
www.linkedin.com www.linkedin.com
-
navigating the inner terrain of long-game work.
long game work
-
-
dev.to dev.to
-
How I designed an abuse-resistant, fault-tolerant, zero cost, multiplayer online game
for trystero

-
How I designed an abuse-resistant, fault-tolerant, zero cost, multiplayer online game

-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
What was accurate, inaccurate, or surprising about your ad profile?
My ad profile was pretty accurate except that it thinks I'm in a relationship, yet I'm currently not. I went to the page about my recent ad topics, and this part is more inaccurate. It has been providing me with ads about gaming, yet I don't usually game. I'm just curious how the system would assume that I'm in a relationship and how it thinks that I'm into gaming. I would hope that Google could provide more information about what data they have collected to reach this conclusion.
-
What was accurate, inaccurate, or surprising about your ad profile? How comfortable are you with Google knowing (whether correctly or not) those things about you?
It is mostly inaccurate, because I'm an international student whose online shopping needs are fairly random. Sometimes I choose jewelry for a friend's birthday gift, and sometimes I buy internet stuff to make my PC games run smoother. I feel comfortable with Google knowing this. I'm generally not interested in most of the advertisements I see while studying abroad. I usually shop based on my own needs, so I find most ads quite meaningless to me.
-
-
-
Coming to grips with the nature of asynchronicity can prove very demanding for conference and forum participants. All new online learners and e-moderators have some problems with it during their training (or if you allow them to work untrained directly with participants). There is no quick and easy way around this problem. They really do need to experience it for themselves. For instance, participants 'post' contributions to one conference then immediately read messages from others, or vice versa. A participant might read all his or her unread messages in several conferences and then post several responses and perhaps post some topics to start a new theme. In any conference, this reading and posting of messages by a number of individuals can make the sequencing difficult to follow. All the messages are available for any participant (or researcher) to view online, so the sequencing of messages, when viewed after an e-tivity is completed, looks rather more ordered than during the build-up. Yet trying to understand them afterwards is rather like following the moves of a chess or bridge game after it is over. When participants start using e-tivities, this apparent confusion causes a wide range of responses. The twists of time and complexity can elicit quite uncomfortable, confused reactions from participants and severe anxiety in a few. Although many people are now familiar with email, they are not used to the complexity of online conferences, bulletin boards or forums. I suggest that good structure, pacing and clear expectations of participants should be provided, not only for the scaffolding process as a whole but for each e-tivity. In addition, the e-moderator, or his or her delegate, should summarize after 10 or 20 messages.
This part of the text seems fundamental to me and relates to what was covered earlier, if I am not mistaken, in the first week of this course. It relates directly to the critical need for students to acclimatize to digital environments, stressing that practical experience of asynchronicity is essential for overcoming initial difficulties. The excerpt shows how the complexity inherent in online interactions can cause confusion, discomfort, or even anxiety, especially in users unfamiliar with synchronous and asynchronous digital dynamics.
In this sense, it reinforces the importance of issuing structuring weekly pedagogical guides (GPS) in advance (as indicated in the article we were given to read earlier), which guide students explicitly and in detail on how to navigate and participate in these teaching-learning contexts. These guides should clearly state the expectations regarding participation, the pace of interaction, and the type of contributions expected, so that students feel secure, oriented, and able to manage their learning autonomously and effectively in the digital environment. The recommendation expressed in the text, that the e-moderator periodically summarize the messages (every 10 or 20 interventions), seems to me a practical and effective example of structuring guidance that facilitates understanding and following the content discussed, mitigating difficulties arising from the complexity and asynchronicity characteristic of these digital environments. However, I agree with what a colleague said in the synchronous session about very large teaching populations. For the e-moderator, and unless they can use artificial intelligence agents to help them in this context, it will be complex to manage all the information generated by the students.
António Lista
-
-
theanarchistlibrary.org theanarchistlibrary.org
-
It is, presumably, to preserve the possibility of winning the game that intellectuals insist, in discussing each other, on continuing to employ just the sort of Great Man theories of history they would scoff at in just about any other context: Foucault’s ideas, like Trotsky’s, are never treated as primarily the products of a certain intellectual milieu, as something that emerged from endless conversations and arguments involving hundreds of people, but always, as if they emerged from the genius of a single man (or, very occasionally, woman).
Marxism and the academy seem to operate by Carlyle's "great man" theory of history.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from.
I think this is really interesting coming from someone with a lot of gaming background. I think nowadays, trolling has more negative implications than it originally did. Most people who are considered to be "trolling" are doing it to spark a reaction out of other players. It usually involves griefing other players and ruining the game experience, rather than something like forum posts with naive questions. It makes me wonder when this change happened and why.
-
These were the precursors to more modern Massively multiplayer online role-playing games (MMORPGS [g15]).
Reading about how MUDs evolved into MMORPGs made me think about how much online gaming has changed over time. I remember playing games like World of Warcraft when I was younger, and it’s interesting to realize that those games came from such simple text-based beginnings. It’s kind of mind-blowing how far we've come in terms of game design and online interaction.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back, and in this video, I want to cover the high-level architecture of the Amazon Translate product. This is another machine learning product available within AWS, and if you need any other knowledge beyond architecture, there will be additional videos following this one. If you only see this video, don't worry, it just means that this is the only knowledge that you need. Now, let's just jump in and get started straight away.
Amazon Translate, as the name suggests, is a text translation service based on machine learning. It translates text from a native language to other languages, one word at a time. The translation process is actually two parts. First, we have the encoder, which reads the source text and then outputs a semantic representation, which you can think of as the meaning of that source text. Remember that the way certain points are conveyed between languages differs. It's not always about direct translation of the same words between two different languages. So, the encoder takes the native source text and outputs a semantic representation or meaning, and then the decoder reads in that meaning and writes it to the target language.
Now, there's something called an attention mechanism, and Amazon Translate uses this to understand context. It helps decide which words in the source text are the most relevant for generating the target output, ensuring that the whole process correctly translates any ambiguous words or phrases. The product is capable of auto-detecting the source text language. So, you can explicitly state what language the source text is in, or you can allow that to be auto-detected by the product.
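To make that concrete, here is a minimal sketch (not part of the original lesson) of a Translate call using the boto3 Python SDK. The sample text and target language are placeholders, and passing 'auto' as the source language asks the service to auto-detect it.

```python
# Minimal sketch of a text translation call with boto3 (assumed installed
# and configured with credentials); the text and target language are
# illustrative placeholders.
import boto3

translate = boto3.client("translate")

response = translate.translate_text(
    Text="Hello, how are you today?",
    SourceLanguageCode="auto",   # let the service auto-detect the source language
    TargetLanguageCode="de",     # translate into German
)

print(response["TranslatedText"])
print(response["SourceLanguageCode"])  # the detected source language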
In terms of some of the use cases for Amazon Translate, it can offer a multilingual user experience. All the documents within businesses are generally going to be stored in the main language of that business. However, this allows you to offer those same documents, such as meeting notes, posts, communications, and articles, in all the languages that staff within your business speak. This can make it much easier for organizations with offices in different countries to operate more efficiently. This also means that you can offer things like emails, in-game chat, or customer live chat in the native language of the person you're communicating with, which can increase the operational efficiency of your business processes. It also allows you to translate incoming data, such as social media, news, and communications, from the language they're written in into the native language of the staff interpreting those incoming communications.
More commonly, Amazon Translate can also offer language independence for other AWS services. For example, you might have services such as Comprehend, Transcribe, and Polly, which operate on information, and Translate offers the ability for these services to operate in a language-independent way. It can also be used to analyze data stored in S3, RDS, DynamoDB, and other AWS data stores.
Generally, with this product, you're going to find that it's used more commonly as an integration product. So, rather than using it directly, it's more common to see it integrated with other AWS services, other applications (including ones that you develop), and other platforms. So, in the exam, if you see any type of scenario that requires text-to-text translation, think Translate. If you see any scenario that might need text-to-text translation as part of a process, Translate can form part of that process. You might want to translate one language into another and then speak that language, or you might want to take audio in one language, output text, and then translate that to a different textual language.
Keep in mind that Translate is often used as a component of a business process. So, really keep that in mind. It's not always used in isolation. Now, with that being said, that is everything I wanted to cover in this video. Go ahead and complete the video, and when you're ready, I'll look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this video, I want to talk about another really cool AWS product called Amazon Rekognition, spelled with a K. Now let's jump in and get started because I'm actually super excited to step through this product and how it works. Rekognition is a deep learning-based image and video analysis product. Deep learning is a subset of machine learning. So this is to say that Rekognition can look at images or videos and intelligently do or identify things based on those images or videos.
Specifically, it can identify objects in images and videos such as cats, dogs, chickens, or even hot dogs. It can identify specific people, text—for example, license plates—activities, and what people are doing in images and videos. It can help with content moderation, so identifying if something is safe or not. It can detect faces, analyze faces for emotion and other visual indications, and compare faces, checking images and videos for identified individuals. It can do pathing—so identify movements of people in videos—and an example of this might be post-game analysis on sports games, and much, much more. It's actually one of the coolest machine intelligence services that AWS has, and that's saying a lot.
The product is also pay-as-you-use, with per-image pricing or per-minute-of-video pricing. It integrates with applications via APIs and it's event-driven, so it can be invoked, say, when an image is uploaded to an S3 bucket. But one of its coolest features is that it can analyze live video by integrating with Kinesis video streams. This might include doing facial recognition on security camera footage for security-type situations, distinguishing between the owner of a property and somebody who's attempting to commit a crime. All in all, it's a super flexible product.
Now generally, for all of the AWS exams, you will need to have a basic understanding of the architecture. There are some AWS exams—for example, machine learning—where you might need additional understanding. And if you're studying a course where that additional understanding is required, there will be follow-up videos. In general, though, it's only a high-level architecture understanding. And one example architectural flow might look something like this: An image containing Whiskers and Woofy is uploaded to S3. Now we've configured S3 events, and so this invokes a Lambda function. The Lambda function calls Rekognition to analyze the image. It returns the results, and then the Lambda function stores the metadata together with a link to the image into DynamoDB for further business processes.
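As an illustration of that flow, the following is a Python sketch (not from the lesson itself) of what such a Lambda function might look like with boto3; the DynamoDB table name and item attributes are placeholders.

```python
# Sketch of the Lambda function in the example flow: an S3 upload event
# triggers label detection with Rekognition, and the resulting metadata is
# written to DynamoDB. The table name "image-metadata" is a placeholder.
import boto3

rekognition = boto3.client("rekognition")
table = boto3.resource("dynamodb").Table("image-metadata")


def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Ask Rekognition to detect labels (objects, scenes) in the image
        result = rekognition.detect_labels(
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            MaxLabels=10,
            MinConfidence=80,
        )

        labels = [label["Name"] for label in result["Labels"]]

        # Store the labels together with a link back to the source image
        table.put_item(Item={
            "image_key": key,
            "s3_uri": f"s3://{bucket}/{key}",
            "labels": labels,
        })
```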
To give some context as to the other things that Rekognition can do, let's just take an entirely random selection of images from the internet. Rekognition can identify celebrities such as Ironman. It can also identify Mike Chambers, although I think that, as a machine learning service, it might be slightly biased there. It can identify text in images or videos, such as license plates on cars or other internet memes. It can even identify objects, animals, or people in those same memes. For faces specifically, it can identify emotions or other attributes. So, for example, identifying that this random doctor is male, currently has his eyes open, and is looking very, very serious rather than being happy in any way.
So that's Rekognition. If you have questions in the exam which need general analysis performing on images or videos for content, emotion, text, activities—anything I've mentioned in this lesson—then you should default to picking Rekognition. It's probably going to be the correct answer. Now with that being said, that is everything I wanted to cover in this video. Go ahead and complete this video, and when you're ready, I look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back, and in this lesson I want to cover the ElastiCache product. This is one which features relatively often in all of the associate AWS exams and fairly often at a professional level. It’s a product that you’ll need to understand if you’re delivering high performance applications. It’s one of a small number of products which allows your applications to scale to truly high-end levels of performance. So let’s jump in and take a look.
So what is ElastiCache? Well, at a high level it’s an in-memory database for applications which have high-end performance requirements. If you think about RDS, that’s a managed product which delivers database servers as a service. Databases generally store data persistently on disk. Because of this, they have a certain level of performance. No matter how fast the disk is, it’s always going to be subject to performance limits. An in-memory cache holds data in memory, which is orders of magnitude faster, both in terms of throughput and latency. But it’s not persistent and so it’s used for temporary data. ElastiCache provides two different engines, Redis and Memcached, and both of them are delivered as a service.
Now, in terms of what you’d use the product for—well, if you need to cache data for workloads which are read heavy (read heavy being the key term that you need to remember at this point), or if you have low latency requirements, then using ElastiCache is a viable option. For read-heavy workloads, ElastiCache can reduce the workloads on a database. And this is important because databases aren’t the best at scaling, especially relational databases. Databases are also expensive relative to the data that they store and the performance that they deliver. So for heavy reads, offloading these to a cache can really help reduce costs—so it’s cost-effective. Remember that for the exam.
ElastiCache can also be used as a place to store session data for users of your application, which can help to make your application servers stateless. This is used in most highly available and elastic environments—those that use load balancers and auto scaling groups. But for any systems which need to be fault tolerant, where users can’t notice if components fail, then generally everything needs to be stateless, and so ElastiCache can help with this type of architecture.
Now, one really important thing to understand for the exam is that using ElastiCache means that you need to make application changes. It’s not something that you can just use. Your application needs to understand a caching architecture. It needs to know to use a cache to check for data, and if data isn’t in the cache, then it needs to check the underlying database. Applications need to be able to write data and understand cache invalidation. This functionality doesn’t come for free, and so if you’re answering any exam questions which state no application changes, then ElastiCache probably won’t be a suitable solution.
So let’s have a look visually at how some of these architectures work. Architecturally, let’s say that you have an application—obviously the Categorum application—and this application is being accessed by a customer, in this case, Bob. The application uses Aurora as its back-end database engine and it’s been adjusted to use ElastiCache. The first time that Bob queries the application, the Categorum application will check the cache for any data. It won’t be there though, because it’s the first time it’s been accessed, and so this will be a cache miss. That means the application will need to go to the database for the data, which is slower and more expensive. When it’s accessed this data for the first time, the application will write the data it just queried the database for into the cache. If Bob queries the same data again, then it will be retrieved directly from ElastiCache and no database reads are required. This is known as a cache hit. It will be faster and cheaper because the database won’t be used for the query.
Now with this small-scale interaction, it’s hard to see the immediate architectural benefit of using ElastiCache. But what if there are more users? What if instead of one Bob, we have many Bobs? Assuming the patterns of data access are the same or similar, then we’ll have a lot more cache hits and a much smaller increase in the number of database reads. This will allow us to scale our application and accept more customers. If the application data access patterns of our user base is similar at scale, then it will mean that most of the increased load placed on the application will go directly onto ElastiCache. We won’t have a proportional increase in the number of direct accesses against our database. This will allow us to scale the architecture in a much more cost-effective way than if everything used direct database access. We can deliver much higher levels of read workload and offer performance improvements at scale. This is a caching architecture and a very typical architecture that ElastiCache will be used for.
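To make the cache-hit and cache-miss flow concrete, here is a minimal Python sketch (not from the original lesson) of the cache-aside pattern using the redis-py client; the endpoint, key scheme, and the database helper are illustrative placeholders.

```python
# Sketch of the cache-aside read path described above, using redis-py.
# The endpoint, key naming and fetch_from_database() are placeholders;
# a real application would use its own data access layer.
import json
import redis

cache = redis.Redis(host="my-cluster.xxxxxx.cache.amazonaws.com", port=6379)


def fetch_from_database(item_id):
    # Placeholder for the (slower, more expensive) Aurora query
    return {"id": item_id, "name": "limited edition cat print"}


def get_item(item_id, ttl_seconds=300):
    key = f"item:{item_id}"

    cached = cache.get(key)
    if cached is not None:
        # Cache hit: no database read required
        return json.loads(cached)

    # Cache miss: read from the database, then populate the cache
    item = fetch_from_database(item_id)
    cache.setex(key, ttl_seconds, json.dumps(item))
    return item
```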
Let’s take a look at another use case, and this is using the product to help us with session state data for our users. Let’s say again that we’re looking at our Categorum application, but now it’s running within an auto scaling group with three EC2 instances and a load balancer. It’s using Aurora for the persistent data layer. Again, we have a user of our application—Bob. The application I’m demoing in this part of the lesson is actually the fault tolerant extreme edition of Categorum. Even when components of the system fail, the application can continue operating without disrupting our user Bob. The way it does this is by using ElastiCache to store session data. This means that when Bob first connects to any of the application instances, his session data is written by that instance into ElastiCache. It’s kept updated if Bob purchases any limited edition cat prints. So the first time Bob connects to any of the instances, that instance writes and maintains the session data for Bob using ElastiCache.
If at any point the application needs to deal with the failure of an instance—where previously the session data would be lost and the application functionality disrupted—the Categorum extreme edition can tolerate this. If this occurs, Bob’s connection is moved to another instance by the load balancer, and his experience continues uninterrupted because the session data is loaded by the new instance from ElastiCache. This is another common use case for ElastiCache: storing user session data externally to application instances, allowing the application to be built in a stateless way. This in turn allows it to go beyond simple high availability and move towards being a fault tolerant application.
ElastiCache commonly helps with either read-heavy performance improvements, cost reductions, or session state storage for users. What’s also important for the exam is that ElastiCache actually provides access to two different engines: Redis and Memcache D. It’s important that you understand the differences between these two engines at a high level. So let’s look at that next.
The differences between Memcached and Redis start with the fact that both engines offer sub-millisecond access to data. They both support a range of programming languages, so no matter what your application uses, you can use either engine. But they diverge when it comes to the data structures each supports. Memcached supports simple data structures only, such as strings. Redis, on the other hand, supports much more advanced types of data, including lists, sets, sorted sets, hashes, bit arrays, and many more. For example, an application could use Redis to store data related to a game leaderboard and keep a list sorted by rank. Redis can store both the data and the order of the data, significantly improving application performance.
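As a quick illustration of that leaderboard example, here is a small redis-py sketch (not from the lesson) using a Redis sorted set; the key name, players, and scores are made up for illustration.

```python
# Sketch of the leaderboard example: a Redis sorted set keeps entries
# ordered by score, so the top players can be read back without sorting
# in the application. Player names and scores are illustrative.
import redis

r = redis.Redis(host="my-cluster.xxxxxx.cache.amazonaws.com", port=6379)

# Add or update player scores; the sorted set stays ordered by score
r.zadd("leaderboard", {"whiskers": 4200, "woofy": 3100, "bob": 5600})

# Read the top three players, highest score first
top_three = r.zrevrange("leaderboard", 0, 2, withscores=True)
for rank, (player, score) in enumerate(top_three, start=1):
    print(rank, player.decode(), int(score))
```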
Another difference is that Redis supports replication of data across multiple availability zones, making it highly available by design. This can also be used to scale reads using those replicas. Memcached doesn’t support replication in the same way. While you can create multiple nodes and manually shard your data—such as storing certain usernames in one node and others in another—Redis supports true replication across instances for scalability. So in the exam, if you see questions about multi-availability zones or high availability and resilience, Redis should be your likely answer.
Additionally, Redis supports backup and restores, meaning that a cache can be restored to a previous state after a failure. Memcached does not support that; it lacks persistence. So if the exam question asks about recovery of a cache without data loss, Redis is your best choice. Memcached does have an advantage in that it’s multi-threaded by design and can better utilize multi-core CPUs, offering significantly more performance on that front. A notable Redis-only feature is transactions—this is where multiple operations are treated as a single unit, meaning either all succeed or none do. This is useful when strict consistency is required.
Both of these engine types can use a range of instance types and sizes. I’ve included a link in the lesson description that provides an overview of the different resources that can be allocated to both caching engines. You don’t need to know the specifics for the exam, but architecturally, be aware that instance types with more or faster memory will offer an advantage when running ElastiCache.
For the exam, focus on recognizing the types of architectures that benefit from an in-memory cache. These include systems with read-heavy workloads, needs for cost reduction when accessing databases, sub-millisecond access requirements, or systems that require external session state storage. Just remember—it doesn’t come for free. You need to make application changes. This is not a plug-and-play solution for apps that can’t be modified. With that being said, that’s everything I wanted to cover from a theory perspective in this lesson. Go ahead, complete the lesson, and when you’re ready, I’ll look forward to you joining me in the next.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
What do you think is the best way to deal with trolling?
While I partially agree with Film Crit Hulk that "skilled moderation" should be utilized in the disempowerment of online trolls, I remain of the persuasion that the best way someone can deal with a troll is to report them (or directly contact an administrator), block them, and not respond to them. I've had my fair share of experiences with trolls in the past and it took me a minute to figure out that it's easy to preserve your own mental well-being when you detach yourself from online interactions and not engage in the mind game the troll wishes to play. The amusement they derive from the reaction you give them is intoxicating and addictive; you wouldn't give an alcoholic more liquor, don't give a troll your time.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Trolling is when an Internet user posts inauthentically (often false, upsetting, or strange) with the goal of causing disruption or provoking an emotional reaction. When the goal is provoking an emotional reaction, it is often for a negative emotion
This section reminds me of times when I’ve seen people post obviously fake or mean comments just to make others upset, especially in livestream chats or game forums. I used to think they were just joking, but now I realize it was trolling. It’s interesting to learn that some people do it to feel powerful or smart. I’ve even seen people try to “troll the newbies,” which I didn’t know was an actual term before reading this.
-
-
www.reddit.com www.reddit.com
-
https://www.reddit.com/r/typewriters/comments/1k2rus1/typewriter_in_singularity/
Dirty/rusted typewriter seen in the video game Singularity
-
-
www.reddit.com www.reddit.com
-
https://www.reddit.com/r/typewriters/comments/1k322dc/mafia_2002/
Typewriter in video game Mafia (2002).
-
-
learn.cantrill.io learn.cantrill.ioCloudHSM1
-
Welcome back and in this lesson I want to talk about CloudHSM. Now this is a product which is similar to KMS in terms of the functionality which it provides, in that it's an appliance which creates, manages and secures cryptographic material or keys. Now there are a few key differences and you need to know these differences because it will help you decide on when to use KMS and when to use CloudHSM. And you might face an exam question where you need to select between these two. So let's jump in and get started.
Now I promised you at the start of the course I wouldn't use facts and figures in lessons unless absolutely required. You shouldn't have to remember lots of different facts and figures unless they influence the architecture. Now this unfortunately is going to be one of the lessons where I do have to introduce some keywords that you simply need to remember. Because in this lesson the detail, the difference between CloudHSM and KMS really matters.
Now let's start by quickly talking about KMS. KMS is the key management service within AWS. So it's used essentially for encryption within AWS and it integrates with other AWS products. So it can generate keys, it can manage keys, other AWS services integrate with it for their encryption. But it has one security concern, at least if you operate in a really demanding security environment. And that's that it's a shared service. While your part of KMS is isolated, under the covers you're using a service which other accounts within AWS also use. What's more, while the permissions within AWS are strict, AWS do have a certain level of access to the KMS product. They manage the hardware and the software of the systems which provide the KMS product to you as a customer.
Now behind the scenes KMS uses what's called an HSM, which stands for Hardware Security Module. These are industry standard pieces of hardware which are designed to manage keys and perform cryptographic operations. Now you can actually run your own HSM on-premises. Cloud HSM is essentially a true single-tenant HSM that's hosted within the AWS cloud. So if you hear the term HSM mentioned, it could refer either to Cloud HSM, which is hosted by AWS, or to an on-premises HSM device.
Now specifically focusing on Cloud HSM, AWS provision it and they're responsible for hardware maintenance. But they have no access to the part of the unit where the keys are stored and managed. It's actually a physically tamper resistant piece of hardware. So it's not something that they can gain access to. Generally if you as the customer lose access to a HSM, that's it, game over. You can reprovision them but there's no easy way to recover data.
Now there's actually a well-known standard for these cryptographic modules. It's called the Federal Information Processing Standard Publication 140-2. You can easily determine the capability of any HSM modules based on their compliance with this standard. And I've included a link in the lesson description with additional information. But Cloud HSM is FIPS 140-2 Level 3 compliant and it's the Level 3 which really matters in the context of this lesson. KMS in comparison is overall 140-2 Level 2 compliant and some of the areas of the KMS product are also compliant with Level 3.
Now this matters. This is really important. If you see an exam question or if you're in a real world production situation which requires 140-2 Level 3 overall, then you have to use Cloud HSM or your own on-premises HSM device. And that's a fact that you really need to remember for the exam.
Another important distinction between KMS and Cloud HSM is how you access the product. With KMS, all operations are performed with AWS standard APIs and all permissions are also controlled with IAM permissions. Now Cloud HSM isn't so integrated with AWS and this is by design. With Cloud HSM, you access it with industry standard APIs. Now examples of this are PKCS 11, the JCE extensions or the CryptoNG extensions. And I've highlighted the keywords that you should try to build up an association with Cloud HSM. So if you see any of these keywords listed in the exam or in production situations, then you know you need a HSM appliance, either on-premise or Cloud HSM hosted by AWS.
Now it used to be that there was no real overlap between Cloud HSM and KMS. They were completely different. But more recently, you can use a feature of KMS called a custom key store. And this custom key store can actually use Cloud HSM to provide this functionality, which means that you get many of the benefits with Cloud HSM together with the integration with AWS. So when you're facing any exam questions, you still should be able to look for these keywords to distinguish between situations when you use KMS versus Cloud HSM.
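As a rough illustration of that overlap, the following boto3 sketch shows a KMS custom key store backed by a CloudHSM cluster; the cluster ID, trust anchor certificate and password are placeholders, so treat this as an outline rather than a working recipe.

import boto3

kms = boto3.client("kms")

# Placeholder values -- a real CloudHSM cluster ID, its trust anchor certificate
# and the kmsuser password are required for this call to succeed.
store = kms.create_custom_key_store(
    CustomKeyStoreName="demo-hsm-backed-store",
    CloudHsmClusterId="cluster-EXAMPLE",
    TrustAnchorCertificate=open("customerCA.crt").read(),
    KeyStorePassword="kmsuser-password",
)

# Keys created with Origin=AWS_CLOUDHSM have their key material generated and
# stored in the CloudHSM cluster, but are still used via the normal KMS APIs.
kms.create_key(
    CustomKeyStoreId=store["CustomKeyStoreId"],
    Origin="AWS_CLOUDHSM",
    Description="key backed by CloudHSM",
)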
Now just to summarize before we move on from this screen, I want you to focus on doing your best to remember all of the three key points that are highlighted with the exam power-up icon. If you can remember those, then you should be in a really good position to determine whether to use KMS or Cloud HSM within exam questions. Now I want to look at the architecture of Cloud HSM as a product, and I think it's best that we do that visually.
Now architecturally, Cloud HSMs are not actually deployed inside a VPC that you control. They're deployed into an AWS managed Cloud HSM VPC that you have no visibility of. So architecturally, this is how that looks. So on the left, we've got a customer managed VPC. On the right, we've got the Cloud HSM VPC that's managed by AWS. We're using two availability zones, and inside the customer managed VPC, we've gone ahead and created two private subnets, one in availability zone A and one in availability zone B.
Now inside the Cloud HSM VPC, to achieve high availability, you need to deploy multiple HSMs and configure them as a cluster. So a HSM by default is not a highly available device. It's a physical network device that runs within one availability zone. So in order to provide a fully highly available system, we need to create a cluster and have at least two HSMs in that cluster, one of them in every availability zone that you use within a VPC.
Now once HSM devices are configured to be in a cluster, then they replicate any keys, any policies, or any other important configuration between all of the HSM devices in that cluster. So that's managed by default, by the appliances themselves. That's not something that you need to configure. So the HSMs operate from this AWS managed VPC, but they're injected into your customer managed VPC via elastic network interfaces. So you get one elastic network interface for every HSM that's inside the cluster injected into your VPC. Once these interfaces have been injected into your customer managed VPC, then any services which are also inside that VPC can utilize the HSM cluster by using these interfaces. And if you want to achieve true high availability, then logically instances will need to be configured to load balance across all of the different interfaces.
Now also, in order to utilize the Cloud HSM devices, a client needs to be installed on the EC2 instances which are going to access the Cloud HSM. This is a background process known as the Cloud HSM client, and it needs to be installed on the EC2 instance in order for it to access the HSM appliances. And then once the Cloud HSM client is installed, you can utilize industry standard APIs such as PKCS#11, JCE and CryptoNG to access the HSM cluster.
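For a feel of what "industry standard APIs" means in practice, here is a hedged sketch using the third-party python-pkcs11 package; the library path, PIN format and token selection are assumptions based on a typical CloudHSM client installation, so check the documentation on your own instances.

import pkcs11

# Assumed path of the PKCS#11 library installed by the CloudHSM client software.
lib = pkcs11.lib("/opt/cloudhsm/lib/libcloudhsm_pkcs11.so")
token = lib.get_token()  # assumes the client exposes a single token

# The crypto user credentials below are illustrative ("user:password" style PIN).
with token.open(user_pin="crypto_user:password") as session:
    # Generate a 256-bit AES key inside the HSM; the key material never leaves it.
    key = session.generate_key(pkcs11.KeyType.AES, 256, label="demo-key")
    iv = session.generate_random(128)
    ciphertext = key.encrypt(b"hello", mechanism_param=iv)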
Now a really important thing to understand about cloud HSM, because this is a distinguishing factor between it and KMS, is that while AWS do provision the HSM, they're actually partitioned and they're tamper resistant. So AWS have no access to the area of the HSM appliances which store the keys. Only you can control these. You manage them, you're responsible for them. Now AWS can perform things like software updates and other maintenance tasks, but these don't take place on the area of the HSM which is used to perform cryptographic operations. Only you as an administrator or anyone that you delegate that to has the ability to interact with the secure area of the HSM devices.
Now before we finish this lesson, there are a few more things that I want to cover. So these are points that I think you should be aware of. So some of these are use cases, some of these are limitations that will help you select between using cloud HSM and using something like KMS.
So first, by default there's no native integration between cloud HSM and any AWS products. So one example of this is that you can't use cloud HSM in conjunction with S3 server-side encryption. That's not a capability that it has. Cloud HSM is not accessed using AWS standard APIs at least by default and so you can't integrate it directly with any AWS services. Now you could, for example, use cloud HSM to perform client-side encryption. So if you've got an encryption library on a particular local machine and you want to encrypt objects before you upload them to S3, then you can use it to perform that encryption on the object before you upload it to the S3 service. But this is not integrated with S3. You're just using it to perform encryption on the objects before you provide them to S3.
Now a cloud HSM can also be used to offload SSL or TLS processing from web servers. And if you do that, then the web servers can benefit from A, not having to perform those cryptographic operations, but also the cloud HSM is a custom designed piece of hardware that accelerates those processes. So it's much more economical and efficient to have a cloud HSM device performing those cryptographic operations versus doing it on a general purpose EC2 instance. So that's something that a cloud HSM can do for you, but KMS natively cannot.
Now other products that you might use inside AWS can also benefit from cloud HSM, products which are able to interact using these industry standard APIs. And this includes products like Oracle databases. So they can utilize cloud HSM for performing transparent data encryption or TDE. So this is a method that Oracle has for encrypting data that it manages on your behalf. And it can utilize a cloud HSM device to perform the encryption operations and to manage the keys. Now this does mean that because a cloud HSM device is something that's entirely managed by you, you're the only entity that initially starts off with access to be able to interact with the encryption materials. So the keys, it means that if you use a cloud HSM and integrate it with an Oracle database, then you're doing so in a way which means that AWS have no ability to decrypt that data. And so if you're operating in a highly restricted regulatory environment where you really need to use strong encryption and verify exactly who has the ability to perform encryption operations, then generally cloud HSM is an ideal product to support that.
And then lastly in a similar way, cloud HSM can also be used to protect the private keys for a certificate authority. So if you're running your own certificate authority, you can utilize cloud HSM to manage the private keys for that certificate authority.
Now just to summarize at this point, the overall theme is that for anything which isn't specific to AWS, for anything which expects to have access to a hardware security module using industry standard APIs, the ideal product is Cloud HSM. For anything that uses industry standards, or anything that has to integrate with products which aren't AWS, Cloud HSM is ideal. For anything which does require native AWS integration, Cloud HSM isn't suitable. If FIPS 140-2 Level 3 is mentioned, then it's Cloud HSM. If integration with AWS is mentioned, then it's probably going to be KMS. If you need to utilize industry standard encryption APIs, then it's likely to be Cloud HSM.
Now that's everything that we need to cover. I just wanted you to be able to handle any curveball HSM style questions that you might encounter in the exam. So thanks for watching, go ahead and complete this video and then when you're ready, I'll look forward to you joining me in the next one.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson I want to cover Amazon Kinesis Data Analytics. This is a real-time data processing product, and it's critical that you understand its features together with when you should and shouldn't use it for the exam. Before I start talking about Kinesis Data Analytics, I want to position the product relative to everything else. Kinesis data streams are used to allow the large-scale ingestion of data into AWS and the consumption of that data by other compute resources known as consumers. Kinesis Data Firehose provides delivery services. It accepts data in and then delivers it to supported destinations in near real-time and it can also use Lambda to perform transformation of that data as it passes through. Kinesis Data Analytics is a service that provides real-time processing of data which flows through it using the structured query language known as SQL. Data inputs at one side, queries run against that data in real-time, and then data is output to destinations at the other.
The product ingests from either Kinesis data streams or Kinesis Firehose and can optionally pull in static reference data from S3, but I'll show you how that works visually in a moment. Now after data is processed, it can be sent on in real-time to destinations, and currently, the supported destinations are Firehose and indirectly any of the destinations which Firehose supports. But keep in mind, if you're using Firehose, then the data becomes near real-time rather than real-time. The product also directly supports AWS Lambda as a destination, as well as Kinesis Data Streams, and in both of those cases, the data delivery is real-time. So you only have near real-time if you choose Firehose or any of those indirect destinations. If you use Lambda or Kinesis Data Streams, then you keep the real-time nature of the data. Conceptually, the product fits between two streams of data: input streams and output streams, and it allows you, in real-time, to use SQL queries to adjust the data from the input to the output.
Now let's look at it visually because it will be easier to see how all of the various components fit together. So on the left, we start with the inputs, the source streams, and this can be Kinesis Streams or Kinesis Firehose. In the middle, we create a Kinesis Analytics application; this is a real-time product, and I'll explain what that means in a second. The Kinesis Analytics application can also take data in from a static reference source, an S3 bucket, and then the Kinesis Analytics application will output to destination streams on the right, so Kinesis Streams or Kinesis Firehose. Remember, all of these are external sources or destinations; they exist outside of Kinesis Data Analytics. Kinesis Data Analytics doesn't actually modify the sources in any way. What actually happens is this: inside the analytics application, you define sources and destinations known as inputs and outputs.
So conceptually, what happens is for the input side, objects called in-application input streams are created based on the inputs. Now you can think of these like normal database tables, but they contain a constantly updated stream of data from the input sources, the actual Kinesis Streams or Firehose. These exist inside the analytics application, but they always match what's happening on the streams which are outside of the application. Now the reference table is a table which matches data contained within an S3 bucket and it can be used to store static data which can enrich the data coming in over the streams. Consider the example of a popular online game where a Kinesis Stream has all of the data about player scores and player activities. In this particular case, the reference table might contain data on player information which can augment the stuff coming in via the stream. So if the stream only contains the raw score and activity data, then the reference data will contain other metadata about those players, so maybe player names, certain items the player has, or awards, and these can all be used to enrich the data that's coming in real-time from Kinesis Streams.
Now the core to the Kinesis Analytics application is the application code, and this is coded using the structured query language, or SQL. It processes inputs and it produces outputs. So in this case, it operates on data in the in-application input stream table and the reference table, and any output from the SQL statement is added to in-application output streams. Again, think of these like tables which exist within the Kinesis Analytics application; only these tables map onto real external streams, so any data that's outputted into those tables by the Kinesis Analytics application is entered onto the Kinesis Stream or Kinesis Firehose, and then these will feed into any consumers of the stream or destinations of the firehose. Additionally, any errors generated by the SQL query can be added to an in-application error stream, and all of this happens in real time. So data is captured from the source streams via the in-application input stream, the virtual tables. It's manipulated by the analytics application using the SQL query, and then stored into the in-application output streams which put that data into either the external Kinesis Stream or external Kinesis Firehose.
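To ground the idea of in-application streams and SQL application code, here is a hedged boto3 sketch against the legacy Kinesis Data Analytics SQL API; the application name, column names and windowing interval are illustrative, and the input/output attachments are omitted for brevity.

import boto3

# Typical shape of the application code: declare an in-application output stream,
# then a pump that continuously inserts the results of a windowed query over the
# in-application input stream (SOURCE_SQL_STREAM_001 by default).
application_code = """
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" ("player_id" VARCHAR(16), "total_score" INTEGER);
CREATE OR REPLACE PUMP "OUTPUT_PUMP" AS
  INSERT INTO "DESTINATION_SQL_STREAM"
  SELECT STREAM "player_id", SUM("score")
  FROM "SOURCE_SQL_STREAM_001"
  GROUP BY "player_id",
           STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '60' SECOND);
"""

kda = boto3.client("kinesisanalytics")  # the original SQL-based service API
kda.create_application(
    ApplicationName="player-scores",     # illustrative name
    ApplicationCode=application_code,
)
# The source Kinesis stream/Firehose (with its schema) and the destination are
# attached separately, e.g. via add_application_input / add_application_output.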
All of this just to stress it again happens in real time, and if the output data is delivered into a Kinesis Stream, then it stays real-time. If the output data is delivered into a Kinesis Firehose, then it becomes near real-time, delivering to all of those supported destinations. Now, you only pay for the data processed by the application, but it is not cheap, so you should only use it for scenarios which really fit this type of need. Before we finish this lesson, let's talk about the scenarios where you might choose to use Kinesis Data Analytics. There are some particular use cases or scenarios which fit using Kinesis Data Analytics. At a high level, this is anything which uses streaming data that needs real-time SQL-based processing, so things like time series analytics, maybe election data and e-sports, things like real-time dashboards for games, high score tables or leaderboards, and even things like real-time metrics for security and response teams. Anything which needs real-time stream-based SQL processing is an ideal candidate for Kinesis Data Analytics.
Now, I mentioned in the previous lesson that Data Firehose can also support transformation of data using Lambda, but remember the key differentiator is that Data Firehose is not a real-time product, and using Lambda you're restricted to relatively simple manipulations of data. Using Kinesis Data Analytics, you can create complex SQL queries and use those queries to manipulate input data into whatever format you want for the output data. So it has a lot more in terms of features than Data Firehose, so if you're dealing with any exam questions which need really complex manipulation of data in real-time, then Kinesis Data Analytics is the product to choose. Okay, so with that being said, that's everything that I wanted to cover in this theory lesson. Go ahead and complete the lesson, and then when you're ready, I look forward to you joining me in the next lesson.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back, and in this lesson, I want to cover application and network load balancers in a little bit more detail. It's critical for the exam that you understand when to pick application load balancers and when to pick network load balancers, as they're used for very different situations. Now, we do have a lot to cover, so let's jump in and get started.
I want to start by talking about consolidation of load balancers. Historically, when using classic load balancers, you connected instances directly to the load balancer or you integrated an auto scaling group directly with that load balancer, an architecture which looked something like this: a single domain name, categor.io, using a single classic load balancer with an attached single SSL certificate for that domain, and then an auto scaling group attached to that, with the classic load balancer distributing connections over those instances.
The problem is that this doesn't scale because classic load balancers don't support SNI, and you can't have multiple SSL certificates per load balancer, meaning every single unique HTTPS application that you have requires its own classic load balancer, which is one of the many reasons that classic load balancers should be avoided. In this example, we have Catergram and Dogagram, both of which are HTTPS applications, and the only way to use these is to have two different classic load balancers.
Compare this to the same application architecture, with both applications—Catergram and Dogagram—this time using a single application load balancer. This one handles both applications, and we can use listener-based rules, which I’ll talk about later in the lesson, where each of these listener-based rules can have an SSL certificate handling HTTPS for both domains. Then we can have host-based rules which direct incoming connections at multiple target groups that forward these on to multiple auto scaling groups, which is a two-to-one consolidation—halving the number of load balancers required to deliver these two different applications.
But imagine how this would look if we had a hundred legacy applications and each of these used a classic load balancer; moving from version one to version two offers significant advantages, and one of those is consolidation. So now I just want to focus on some of the key points about application load balancers—things which are specific to the version two or application load balancer.
First, it's a true layer seven load balancer and it's configured to listen on either HTTP or HTTPS protocols, which are layer seven application protocols that an application load balancer understands and can interpret information carried using both. Now, the flip side is that the application load balancer can't understand any other layer seven protocols—so things such as SMTP, SSH, or any custom gaming protocols are not supported by a layer seven load balancer like the application load balancer, and that's important to understand.
Additionally, the application load balancer has to listen using HTTP or HTTPS listeners; it cannot be configured to directly listen using TCP, UDP, or TLS, and that does have some important limitations and considerations that you need to be aware of, which I’ll talk about later on in this lesson.
Because it's a layer seven load balancer, it can understand layer seven content—so things like the type of the content, any cookies used by your application, custom headers, user location, and application behavior—meaning the application load balancer is able to inspect all of the layer seven application protocol information and make decisions based on that, something that the network load balancer cannot do. It has to be a layer seven load balancer, like the application load balancer, to understand all of these individual components.
An important consideration about the application load balancer is that any incoming connections—HTTP or HTTPS (and remember HTTPS is just HTTP transiting using SSL or TLS)—in all of these cases, whichever type of connection is used, are terminated on the application load balancer. This means that you can't have an unbroken SSL connection from your customer through to your application instances—it’s always terminated on the load balancer, and then a new connection is made from the load balancer through to the application.
This matters to security teams, and if your business operates in a strict security environment, this might be very important and, in some cases, can exclude using an application load balancer. It can't do end-to-end unbroken SSL encryption between a client and your application instances, and it also means that all application load balancers which use HTTPS must have SSL certificates installed on the load balancer, because the connection has to be terminated there and then a new connection made to the instances.
Application load balancers are also slower than network load balancers because additional levels of the networking stack need to be processed, and the more levels involved, the more complexity and the slower the processing. So if you're facing any exam questions that are really strict on performance, you might want to look at network load balancers instead.
A benefit of application load balancers is that, because they're layer seven, they can evaluate the application health at layer seven—in addition to just testing for a successful network connection, they can make an application layer request to the application to ensure that it's functioning correctly.
Application load balancers also have the concept of rules, which direct connections arriving at a listener—so if you make a connection to a load balancer, what it does with that connection is determined by rules, which are processed in priority order. You can have many rules affecting a given set of traffic, and they’re processed in priority order, with the last one being the default catch-all rule, though you can add additional rules, each of which can have conditions.
Conditions inside rules include checking host headers, HTTP headers, HTTP request methods, path patterns, query strings, and even source IP, meaning these rules can take different actions depending on the domain name requested (like categor.io or dogogram.io), the path (such as images or API), the query string, or even the source IP address of any customers connecting to that application load balancer.
Rules can also have actions—these are the things the rules do with traffic: they can forward that traffic to a target group, redirect it to something else (maybe another domain name), provide a fixed HTTP response (like an error or success code), or perform authentication using OpenID or Cognito.
Visually, this is how it looks: a simple application load balancer deployment with a single domain, categor.io, using one host-based rule with an attached SSL certificate. The rule uses host header as a condition and forward as an action, forwarding any connections for categor.io to the target group for the categor application.
If you want additional functionality, let’s say that you want to use the same application load balancer for a corporate client trying to access categor.io—maybe users of Bowtie Incorporated using the 1.3.3.7 IP address are attempting to access it, and you want to present them with an alternative version of the application. You can handle that by defining a listener rule where the condition is the source IP address of 1.3.3.7, and the action forwards traffic to a separate target group—an auto scaling group handling a second set of instances dedicated to this corporate client.
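A hedged boto3 sketch of the two listener rules just described (host-header and source-IP); the ARNs are placeholders and the priorities are arbitrary.

import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:region:account:listener/app/example/..."   # placeholder
TG_MAIN = "arn:aws:elasticloadbalancing:region:account:targetgroup/main/..."            # placeholder
TG_CORPORATE = "arn:aws:elasticloadbalancing:region:account:targetgroup/corporate/..."  # placeholder

# Host-header rule: forward requests for categor.io to the main target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "host-header", "HostHeaderConfig": {"Values": ["categor.io"]}}],
    Actions=[{"Type": "forward", "TargetGroupArn": TG_MAIN}],
)

# Source-IP rule: send the corporate client's traffic to a dedicated target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=5,
    Conditions=[{"Field": "source-ip", "SourceIpConfig": {"Values": ["1.3.3.7/32"]}}],
    Actions=[{"Type": "forward", "TargetGroupArn": TG_CORPORATE}],
)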
Because the application load balancer is a layer seven device, it can see inside the HTTP protocol and make decisions based on anything in that protocol or up to layer seven. Also, the connection from the load balancer to the instances for target group two will be a separate set of connections—highlighted by a slightly different color—because HTTP connections from enterprise users are terminated on the load balancer, with a separate connection to the application instances. There’s no option to pass through encrypted connections to the instances—it must be terminated—so if you need unbroken encrypted connections, you must use a network load balancer.
Since it’s a layer seven load balancer, you can use rules that work on layer seven protocol elements, like routing based on paths or headers, or redirecting traffic at the HTTP level. For example, if this ALB also handles traffic for dogogram.io, you could define a rule that matches dogogram.io and, as an action, configure a redirect toward categor.io—the obviously superior website. These are just a small subset of features available within the application load balancer, and because it's layer seven, you can perform routing decisions based on anything observable at that level, making it a really flexible product.
Before finishing, let’s take a look at network load balancers. They function at layer four, meaning they can interpret TCP, TLS, and UDP protocols, but have no visibility or understanding of HTTP or HTTPS. They can't interpret headers, see cookies, or handle session stickiness from an HTTP perspective, as that requires cookie awareness—which a layer four device doesn’t have.
Network load balancers are incredibly fast, capable of handling millions of requests per second with about 25% of the latency of application load balancers, since they don't deal with upper layers of the stack. They’re ideal for non-HTTP or HTTPS protocols—such as SMTP (email), SSH, game servers, or financial applications that don’t use web protocols.
If exam questions refer to non-web or non-secure web traffic that doesn’t use HTTP/HTTPS, default to network load balancers. One downside is that health checks only verify ICMP and basic TCP handshaking, not application awareness, so no detailed health checking is possible.
A key benefit is that they can be allocated static IPs, which is useful for white-listing—corporate clients can white-list NLB IPs to let them pass through firewalls, which is great for strict security environments. They can also forward TCP directly to instances, and because network layers build on top of each other, the network load balancer doesn’t interrupt any layers above TCP, allowing unbroken encrypted channels from clients to application instances.
This is essential to remember for the exam—using network load balancers with TCP listeners is how you achieve end-to-end encryption. They're also used for PrivateLink to provide services to other VPCs—another crucial exam point.
To wrap up, let’s do a quick comparison. I find it easier to remember when to use a network load balancer, and if it’s not one of those cases, default to application load balancers for their added flexibility. Use network load balancers if you need unbroken encryption between client and instance, static IPs for white-listing, the best performance (millions of RPS and low latency), non-HTTP/HTTPS protocols, or PrivateLink.
For everything else, use application load balancers—their functionality is often valuable in most scenarios. That’s everything I wanted to cover about application and network load balancers for the exam. Go ahead and complete this video, and when you're ready, I'll look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson I want to spend a few minutes covering the evolution of the Elastic Load Balancer product; it's important for the exam and real world usage that you understand its heritage and its current state. Now this is going to be a super quick lesson because most of the detail I'm going to be covering in dedicated lessons which are coming up next in this section of the course, so let's jump in and take a look.
Now there are currently three different types of Elastic Load Balancers available within AWS; if you see the term ELB or Elastic Load Balancers then it refers to the whole family, all three of them. Now the load balancers are split between version 1 and version 2; you should avoid using the version 1 load balancer at this point and aim to migrate off them onto version 2 products which should be preferred for any new deployments, and there are no scenarios at this point where you would choose to use a version 1 load balancer versus one of the version 2 types.
Now the load balancer product started with the classic load balancer known as CLB which is the only version 1 load balancer and this was introduced in 2009, so it's one of the older AWS products. Now classic load balancers can load balance HTTP and HTTPS as well as lower level protocols but they aren't really layer 7 devices, they don't really understand HTTP and they can't make decisions based on HTTP protocol features; they lack much of the advanced functionality of the version 2 load balancers and they can be significantly more expensive to use.
One common limitation is that classic load balancers only support one SSL certificate per load balancer which means for larger deployments you might need hundreds or thousands of classic load balancers and these could be consolidated down to a single version 2 load balancer, so I can't stress this enough for any questions or any real world situations you should default to not using classic load balancers.
Now this brings me on to the new version 2 load balancers; the first is the application load balancer or ALB and these are truly layer 7 devices so application layer devices, they support HTTP, HTTPS and the web socket protocols, and they're generally the type of load balancer that you'd pick for any scenarios which use any of these protocols.
There's also network load balancers or NLBs which are also version 2 devices but these support TCP, TLS which is a secure form of TCP and UDP protocols, so network load balancers are the type of load balancer that you would pick for any applications which don't use HTTP or HTTPS; for example if you wanted to load balance email servers or SSH servers or a game which used a custom protocol so didn't use HTTP or HTTPS then you would use a network load balancer.
In general version 2 load balancers are faster and support target groups and rules which allow you to use a single load balancer for multiple things or handle the load balancing different based on which customers are using it. Now I'm going to be covering the capabilities of each of the version 2 load balancers separately as well as talking about rules but I wanted to introduce them now as a feature.
Now for the exam you really need to be able to pick between network load balancers and application load balancers for a specific situation, so that's what I want to work on over the coming lessons; for now though, this has just been an introduction lesson that talks about the evolution of these products, and that's everything that I wanted to cover. So go ahead and complete this lesson and, when you're ready, I'll look forward to you joining me in the next.
-
-
iskconeducation.org iskconeducation.org
-
WITH YUDHISHTHIRA FOR A HUSBAND I WILL NEVER BE FREE OF GRIEF. THE INSULT AFTER THE GAME OF DICE STILL RANKLES IN M
Enough is enough. Draupadi had to go through a lot because of the cowardice of her husbands, Yudhishthira in particular. Thanks to Bheema, the problem of Keechaka was resolved. Had it not been for him, Yudhishthira might have just watched her being disrobed again and again without even trying to help her.
-
O KURU ELDERS, I CANNOT BEAR THIS PERSECUTION ANY LONGER. AM I WON OR NOT? I SHALL ABIDE BY YOUR VERDICT
I would say that she was the bravest among the braves present in the room. Nobody thought it best to interrupt the game when a woman was being objectified, and nobody questioned Duryodhana and Shakuni's play. They were all there for their petty entertainment. And when Dhritarashtra realized that it was wrong, it was too late, and he wanted to cover up the incident by fulfilling three of her wishes.
-
I SHALL, IN THE BATTLEFIELD, TEAR OPEN THE BREAST OF THIS VILLAIN OF THE BHARATA RACE, AND DRINK HIS LIFEBLOOD
What he said eventually came true. However, I believe that he had the power to stop all this from happening by simply advising his brother that it was enough when he bet Draupadi in the game. Where did his conscience and bravery go when Draupadi needed them the most?
-
Draupadi was the total woman; complex and yet femi
Draupadi was far more intelligent than her husbands. When Yudhishthira messed up in the dice game, she had to take matters into her own hands. She questioned her husbands, their cousins, uncles and everybody who witnessed the game about their morality and humanity. She vowed not to tend to her hair so that her husbands would be reminded of the injustice that she had to go through just because of them. In a sense, it was her way of getting for herself the justice that her husbands ignored.
-
DO NOT BE IMPETUOUS. IT WOULD BE AGAINST DHARMA, WHICH IS DIVINE AND SUPERIOR TO LIFE ITSELF. I AGREED TO THE STAKES THOUGH I KNEW SHAKUNI
If he knew what was going to come, then why did he even do it? If playing the game of dice was his karma to gain dharma, then it does not make any sense at all. Personally, I would not wish to have a husband who is going to put me and his brothers through a lot of suffering just because he wanted to take a risk. And his advising Bheema to be patient at this moment is very hypocritical. I would say that he failed as a husband and also as a brother the moment he agreed to Shakuni's game knowing that he would be dishonest.
-
-
www.biorxiv.org www.biorxiv.org
-
Author response:
The following is the authors’ response to the original reviews
eLife Assessment
The authors present an algorithm and workflow for the inference of developmental trajectories from single-cell data, including a mathematical approach to increase computational efficiency. While such efforts are in principle useful, the absence of benchmarking against synthetic data and a wide range of different single-cell data sets makes this study incomplete. Based on what is presented, one can neither ultimately judge if this will be an advance over previous work nor whether the approach will be of general applicability.
We thank the eLife editor for the valuable feedback. Both benchmarking against other methods and validation on a synthetic dataset (“dyntoy”) are indeed presented in the Supplementary Note, although this was not sufficiently highlighted in the main text, which has now been improved.
Our manuscript contains benchmarking against a challenging synthetic dataset in Figure 1; furthermore, both the synthetic dataset and the real-world thymus dataset have been analyzed in parallel using currently available TI tools (as detailed in the Supplementary Note). Additional single-cell datasets (single-cell RNA-seq) were added in response to the reviewers' comments.
One of the reviewers correctly points out that tviblindi goes against the philosophy of automated trajectory inference. This is correct; we believe that a new class of methods, complementary to fully automated approaches, is needed to explore datasets with unknown biology. tviblindi is meant to be a representative of this class of methods—a semi-automated framework that builds on features inferred from the data in an unbiased and mathematically well-founded fashion (pseudotime, homology classes, suitable low-dimensional representation), which can be used in concert with expert knowledge to generate hypotheses about the underlying dynamics at an appropriate level of detail for the particular trajectory or biological process.
We would also like to mention that the algorithm and the workflow are not the sole results of the paper. We have thoroughly characterized human thymocyte development, where, in addition to expected biological endpoints, we found and characterized an unexpected activated thymic T-reg endpoint.
Public Reviews:
Reviewer #1 (Public Review):
Summary:
The authors present tviblindi, a computational workflow for trajectory inference from molecular data at single-cell resolution. The method is based on (i) pseudo-time inference via expected hitting time, (ii) sampling of random walks in a directed acyclic k-NN graph where edges are oriented away from a cell of origin w.r.t. the involved nodes' expected hitting times, and (iii) clustering of the random walks via persistent homology. An extended use case on mass cytometry data shows that tviblindi can be used to elucidate the biology of T cell development.
Strengths:
- Overall, the paper is very well written and most (but not all, see below) steps of the tviblindi algorithm are explained well.
- The T cell biology use case is convincing (at least to me: I'm not an immunologist, only a bioinformatician with a strong interest in immunology).
We thank the reviewer for the feedback and suggestions, which we will accommodate; we respond point-by-point below.
Weaknesses:
- The main weakness of the paper is that a systematic comparison of tviblindi against other tools for trajectory inference (there are many) is entirely missing. Even though I really like the algorithmic approach underlying tviblindi, I would therefore not recommend to our wet-lab collaborators that they should use tviblindi to analyze their data. The only validation in the manuscript is the T cell development use case. Although this use case is convincing, it does not suffice for showing that the algorithm's results are systematically trustworthy and more meaningful (at least in some dimension) than trajectories inferred with one of the many existing methods.
We have compared tviblindi to several trajectory inference methods (Supplementary Note section 8.2: Comparison to state-of-the-art methods), namely Monocle3 (v1.3.1) Cao et al. (2019), Stream (v1.1) Chen et al. (2019), Palantir (v1.0.0) Setty et al. (2019), VIA (v0.1.89) Stassen et al. (2021), StaVia (Via 2.0) Stassen et al. (2024), CellRank 2 (v2.06) Weiler et al. (2024) and PAGA (scanpy==1.9.3) Wolf et al. (2019). We added thorough and systematic comparisons to the other algorithms mentioned by the reviewers. We included an extended evaluation on publicly available datasets (Supplementary Note section 10).
Also, in the meantime we have successfully used tviblindi to investigate human B-cell development in primary immunodeficiency (Bakardjieva M, et al. Tviblindi algorithm identifies branching developmental trajectories of human B-cell development and describes abnormalities in RAG-1 and WAS patients. Eur J Immunol. 2024 Dec;54(12):e2451004. doi: 10.1002/eji.202451004.).
- The authors' explanation of the random walk clustering via persistent homology in the Results (subsection "Real-time topological interactive clustering") is not detailed enough, essentially only concept dropping. What does "sparse regions" mean here and what does it mean that "persistent homology" is used? The authors should try to better describe this step such that the reader has a chance to get an intuition how the random walk clustering actually works. This is especially important because the selection of sparse regions is done interactively. Therefore, it's crucial that the users understand how this selection affects the results. For this, the authors must manage to provide a better intuition of the maths behind clustering of random walks via persistent homology.
In order to satisfy both reader types: the biologist and the mathematician, we explain the mathematics in detail in the Supplementary Note, section 4. We improved the Results text to better point the reader to the mathematical foundations in the Supplementary Note.
- To motivate their work, the authors write in the introduction that "TI methods often use multiple steps of dimensionality reduction and/or clustering, inadvertently introducing bias. The choice of hyperparameters also fixes the a priori resolution in a way that is difficult to predict." They claim that tviblindi is better than the original methods because "analysis is performed in the original high-dimensional space, avoiding artifacts of dimensionality reduction." However, in the manuscript, tviblindi is tested only on mass cytometry data which has a much lower dimensionality than scRNA-seq data for which most existing trajectory inference methods are designed. Since tviblindi works on a k-NN graph representation of the input data, it is unclear if it could be run on scRNA-seq data without prior dimensionality reduction. For this, cell-cell distances would have to be computed in the original high-dimensional space, which is problematic due to the very high dimensionality of scRNA-seq data. Of course, the authors could explicitly reduce the scope of tviblindi to data of lower dimensionality, but this would have to be stated explicitly.
In the manuscript we tested the framework on the scRNA-seq data from Park et al. 2020 (DOI: 10.1126/science.aay3224). To illustrate that tviblindi can work directly in the high-dimensional space, we applied the framework successfully to imputed 2000-dimensional data. Furthermore, we successfully used tviblindi to investigate a bone marrow atlas scRNA-seq dataset (Zhang et al., 2024) and an atlas of mouse gastrulation (Pijuan-Sala et al., 2019). The idea behind tviblindi is to be able to work without the necessity of using non-linear dimensionality reduction techniques, which reduce the dimensionality to a very low number of dimensions and whose effects on the data distribution are difficult to predict. On the other hand, the use of (linear) dimensionality reduction techniques which effectively suppress noise in the data, such as PCA, is good practice (see also the response to reviewer 2). We have emphasized this in the revised version and added the results of the corresponding analysis (see Supplementary Note, section 9).
- Also tviblindi has at least one hyper-parameter, the number k used to construct the k-NN graphs (there are probably more hidden in the algorithm's subroutines). I did not find a systematic evaluation of the effect of this hyper-parameter.
A detailed discussion of the topic is presented in the Supplementary Note, section 8.1, where the Spearman correlation coefficient between pseudotime estimated using k=10 and k=50 nearest neighbors was 0.997. The number k does, however, affect the number of candidate endpoints. But even when a larger k causes spurious connections between unrelated cell fates, the topological clustering of random walks allows for the separation of different trajectories. We have expanded the "sensitivity to hyperparameters" section 8.1, also in response to reviewer 2.
Reviewer #2 (Public Review):
Summary:
In Deconstructing Complexity: A Computational Topology Approach to Trajectory Inference in the Human Thymus with tviblindi, Stuchly et al. propose a new trajectory inference algorithm called tviblindi and a visualization algorithm called vaevictis for single-cell data. The paper utilizes novel and exciting ideas from computational topology coupled with random walk simulations to align single cells onto a continuum. The authors validate the utility of their approach largely using simulated data and establish known protein expression dynamics along CD4/CD8 T cell development in thymus using mass cytometry data. The authors also apply their method to track Treg development in single-cell RNA-sequencing data of human thymus.
The technical crux of the method is as follows: The authors provide an interactive tool to align single cells along a continuum axis. The method uses expected hitting time (given a user input start cell) to obtain a pseudotime alignment of cells. The pseudotime gives an orientation/direction for each cell, which is then used to simulate random walks. The random walks are then arranged/clustered based on the sparse region in the data they navigate using persistent homology.
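For readers who want intuition for the hitting-time step, a minimal Python sketch on toy data is given below; it is not the authors' implementation, and the direction convention (expected hitting time of the origin cell, starting from every other cell) may differ from the one used in tviblindi.

import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))   # toy data: 300 "cells" with 10 features
origin = 0                        # user-chosen cell of origin

# k-NN graph, symmetrised, then row-normalised into a random-walk transition matrix.
A = kneighbors_graph(X, n_neighbors=15, mode="connectivity").toarray()
A = np.maximum(A, A.T)
P = A / A.sum(axis=1, keepdims=True)

# Expected hitting times h satisfy h[origin] = 0 and h_i = 1 + sum_j P_ij h_j
# for i != origin; solve the reduced linear system for the remaining cells.
mask = np.arange(len(X)) != origin
Q = P[np.ix_(mask, mask)]
h = np.zeros(len(X))
h[mask] = np.linalg.solve(np.eye(mask.sum()) - Q, np.ones(mask.sum()))
pseudotime = h                    # cells with larger hitting time are "later"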
We thank the reviewer for the feedback and suggestions, which we have accommodated; we respond point-by-point below.
Strengths:
The notion of using persistent homology to group random walks to identify trajectories in the data is novel.
The strength of the method lies in the implementation details that make computationally demanding ideas such as persistent homology more tractable for large scale single-cell data. This enables the authors to make the method more user friendly and interactive allowing real-time user query with the data.
Weaknesses:
The interactive nature of the tool is also a weakness, by allowing for user bias leading to possible overfitting for a specific data.
tviblindi is not designed as a fully automated TI tool (although it implements a fully automated module), but as a data driven framework for exploratory analysis of unknown data. There is always a risk of possible bias in this type of analysis - starting with experimental design, choice of hyperparameters in the downstream analysis, and an expert interpretation of the results. The successful analysis of new biological data involves a great deal of expert knowledge which is difficult to a priori include in the computational models.
tviblindi tries to solve this challenge by intentionally overfitting the data and keeping the level of resolution at a single random walk. In this way we aim to capture all putative local relationships in the data. The on-demand aggregation of the walks using the global topology of the data allows researchers to use their expert knowledge to choose the right level of detail (as demonstrated in Figure 4 of the manuscript) while relying on the topological structure of the high-dimensional point cloud. At all times tviblindi allows the user to inspect the composition of the trajectory to assess the variance in the development, possible hubs on the KNN graph, etc.
The main weakness of the method is the lack of benchmarking on real data and comparison to other methods. Trajectory inference is a very crowded field with many highly successful and widely used algorithms; the two most relevant ones (closest to this manuscript) are not only not benchmarked against, but also not cited. Including those that specifically use persistent homology to discover trajectories (Rizvi et al., published Nat Biotech 2017). Including those that specifically implement the idea of simulating random walks to identify stable states in single-cell data (e.g. CellRank, published in Lange et al., Nat Meth 2022), as well as many trajectory algorithms that take alternative approaches. The paper has much less benchmarking, demonstration on real data and comparison to the very many other previous trajectory algorithms published before it. Generally speaking, in a crowded field of previously published trajectory methods, I do not think this one approach will compete well against prior work (especially due to its inability to handle the noise typical in real-world data, as was even demonstrated in the little bit of application to real-world data provided).
We provided comparisons of tviblindi and vaevictis in the Supplementary Note, section 8.2, where we compare it to Monocle3 (v1.3.1) Cao et al. (2019), Stream (v1.1) Chen et al. (2019), Palantir (v1.0.0) Setty et al. (2019), VIA (v0.1.89) Stassen et al. (2021), StaVia (Via 2.0) Stassen et al. (2024), CellRank 2 (v2.06) Weiler et al. (2024) and PAGA (scanpy==1.9.3) Wolf et al. (2019). We added thorough and systematic comparisons to the other algorithms mentioned by reviewers. We included extended evaluation on publicly available datasets (Supplementary Note section 10).
Beyond the general lack of benchmarking there are two issues that give me particular concern. As previously mentioned, the algorithm is highly susceptible to user bias and overfitting. The paper gives the example (Figure 4) of a trajectory which mistakenly shows that cells may pass from an apoptotic phase to a different developmental stage. To circumvent this mistake, the authors propose the interactive version of tviblindi that allows users to zoom in (increase resolution) and identify that there are in fact two trajectories in one. In this case, the authors show how the user can fix a mistake when the answer is known. However, the point of trajectory inference is to discover the unknown. With so many interactive options for the user to guide the result, the method is more user/bias driven than data-driven. So a rigorous and quantitative discussion of robustness of the method, as well as how to ensure data-driven inference and avoid over-fitting, would be useful.
Local directionality in expression data is a challenge which is not, to our knowledge, solved. And we are not sure it can be solved entirely, even theoretically. The random walks passing “through” the apoptotic phase are biologically infeasible, but it is an (unbiased) representation of what the data look like based on the diffusion model. It is a property of the data (or of the panel design), which has to be interpreted properly rather than a mistake. Of note, except for Monocle3 (which does not provide the directionality) other tested methods did not discover this trajectory at all.
The “zoom in” has in fact nothing to do with “passing through the apoptosis”. We show how the researcher can investigate the suggested trajectory to see if there is an additional structure of interest and/or relevance. This investigation is still data driven (although not fully automated). Anecdotally in this particular case this branching was discovered by a bioinformatician, who knew nothing about the presence of beta-selection in the data.
We show that the trajectory of apoptosis of cortical thymocytes consists of 2 trajectories corresponding to 2 different checkpoints (beta-selection and positive/negative selection). This type of a structure, where 2 (or more) trajectories share the same path for most of the time, then diverge only to be connected at a later moment (immediately from the point of view of the beta-selection failure trajectory) is a challenge for TI algorithms and none of tested methods gave a correct result. More importantly there seems to be no clear way to focus on these kinds of structures (common origin and common fate) in TI methods.
Of note, the “zoom in” is a recommended and convenient method to look for an inner structure, but it does not necessarily mean addition of further homological classes. Indeed, in this case the reason that the structure is not visible directly is the limitation of the dendrogram complexity (only branches containing at least 10% of simulated random walks are shown by default). In summary, tviblindi effectively handled all noise in the data that obscured biologically valid trajectories for other methods. We have improved the discussion of the robustness in the current version.
Second, the paper discusses the benefit of tviblindi operating in the original high dimensions of the data. This is perhaps adequate for mass cytometry data, where there is less of an issue of dropouts and the proteins may be chosen to be largely independent. But in the context of single-cell RNA-sequencing data, the massive undersampling of mRNA, as well as the high degree of noise (e.g. ambient RNA), introduces a very large degree of noise, so that modeling data in the original high dimensions leads to methods being fit to the noise. Therefore ALL other methods for trajectory inference work in a lower dimension, for very good reason; otherwise one is learning noise rather than signal. It would be great to have a discussion on the feasibility of the method as is for such noisy data and provide users with guidance. We note that the example scRNA-seq data included in the paper is denoised using imputation, which will likely result in the trajectory inference being oversmoothed as well.
We agree with the reviewer. In our manuscript we wanted to showcase that tviblindi can directly operate in high-dimensional space (thousands of dimensions), and we used MAGIC imputation for this purpose. This was not ideal. A more standard approach, which uses 30-50 PCs as input to the algorithm, resulted in equivalent trajectories. We have added this analysis to the study (Supplementary Note, section 9).
In summary, the fact that tviblindi scales well with dimensionality of the data and is able to work in the original space does not mean that it is always the best option. We have added a corresponding comment into the Supplementary note.
Reviewer #3 (Public Review):
Summary:
Stuchly et al. proposed a single-cell trajectory inference tool, tviblindi, which was built on a sequential implementation of the k-nearest neighbor graph, random walk, persistent homology and clustering, and interactive visualization. The paper was organized around the detailed illustration of the usage and interpretation of results through the human thymus system.
Strengths:
Overall, I found the paper and method to be practical and needed in the field. Especially the in-depth, step-by-step demonstration of the application of tviblindi in numerous T cell development trajectories and how to interpret and validate the findings can be a template for many basic science and disease-related studies. The videos are also very helpful in showcasing how the tool works.
Weaknesses:
I only have a few minor suggestions that hopefully can make the paper easier to follow and the advantage of the method to be more convincing.
(1) The "Computational method for the TI and interrogation - tviblindi" subsection under the Results is a little hard to follow without having a thorough understanding of the tviblindi algorithm procedures. I would suggest that the authors discuss the uniqueness and advantages of the tool after the detailed introduction of the method (moving it after the "Connectome - a fully automated pipeline".
We thank the reviewer for the suggestion and we have accommodated it to improve readability of the text.
Also, considering it is a computational tool paper, inevitably, readers are curious about how it functions compared to other popular trajectory inference approaches. I did not find any formal discussion until almost the end of the supplementary note (even that is not cited anywhere in the main text). Authors may consider improving the summary of the advantages of tviblindi by incorporating concrete quantitative comparisons with other trajectory tools.
We provided comparisons of tviblindi and vaevictis in the Supplementary Note, section 8.2, where we compare it to Monocle3 (v1.3.1) Cao et al. (2019), Stream (v1.1) Chen et al. (2019), Palantir (v1.0.0) Setty et al. (2019), VIA (v0.1.89) Stassen et al. (2021), StaVia (Via 2.0) Stassen et al. (2024), CellRank 2 (v2.06) Weiler et al. (2024) and PAGA (scanpy==1.9.3) Wolf et al. (2019). We added thorough and systematic comparisons to the other algorithms mentioned by reviewers. We included extended evaluation on publicly available datasets (Supplementary Note section 10).
(2) Regarding the discussion of Figure 4, where the trajectory goes through the apoptotic stage and reconnects back to the canonical trajectory with counterintuitive directionality: it can be a checkpoint, as the authors interpret using their expert knowledge, or maybe a false discovery of the tool. Maybe the authors can consider running other algorithms on those cells and see which tracks they identify and whether the directionality matches that of tviblindi.
We have indeed used the thymus dataset for comparison with all TI algorithms listed above. Except for Monocle 3, they failed to discover the negative selection branch (and Monocle 3 does not offer directionality information). Therefore, a valid topological trajectory with incorrect (expert-corrected) directionality was partly or entirely missed by the other algorithms.
(3) The paper mainly focused on mass cytometry data and had a brief discussion on scRNA-seq. Can the tool be applied to multimodality data such as CITE-seq data that have both protein markers and gene expression? Any suggestions if users want to adapt to scATAC-seq or other epigenomic data?
The analysis of multimodal data is the logical next step and is the topic of our current research. At this moment tviblindi cannot be applied directly to multimodal data. It is possible to use the KNN-graph based on multimodal data (such as weighted nearest neighbor graph implemented in Seurat) for pseudotime calculation and random walk simulation. However, we do not have a fully developed triangulation for the multimodal case yet.
Recommendations for the authors:
Reviewer #1 (Recommendations For The Authors):
Suggestions for improved or additional experiments, data or analyses:
- Benchmark against existing trajectory inference methods.
- Benchmark on scRNA-seq data or an explicit statement that, unlike existing methods, tviblindi is not designed for such data.
We provided comparisons of tviblindi and vaevictis in the Supplementary Note, section 8.2, where we compare it to Monocle3 (v1.3.1) Cao et al. (2019), Stream (v1.1) Chen et al. (2019), Palantir (v1.0.0) Setty et al. (2019), VIA (v0.1.89) Stassen et al. (2021), StaVia (Via 2.0) Stassen et al. (2024), CellRank 2 (v2.06) Weiler et al. (2024) and PAGA (scanpy==1.9.3) Wolf et al. (2019). We added thorough and systematic comparisons to the other algorithms mentioned by reviewers. We included extended evaluation on publicly available datasets (Supplementary Note section 10).
- Systematic evaluation of the effects of hyper-parameters on the performance of tviblindi (as mentioned above, there is at least one hyper-parameter, the number k used to construct the k-NN graphs).
This is described in Supplementary Note, section 8.1.
Recommendations for improving the writing and presentation:
- The GitHub link to the algorithm which is currently hidden in the Methods should be moved to the abstract and/or a dedicated section on code availability.
- The presentation of the persistent homology approach used for random walk clustering should be improved (see public comment above).
This is described extensively in the Supplementary Note.
- A very minor point (can be ignored by the authors): consider renaming the algorithm. At least for me, it's extremely difficult to remember.
We chose to keep the original name.
Minor corrections to the text and figures:
- Labels and legend texts are too small in almost all figures.
Reviewer #2 (Recommendations For The Authors):
(1) On page 3: "(2) Analysis is performed in the original high-dimensional space avoiding artifacts of dimensionality reduction." In mass cytometry data, where there is no issue of dropouts, one may choose proteins such that they are not correlated with each other, making dimensionality reduction techniques less relevant. But in the context of an unbiased assay such as single-cell RNA-sequencing (scRNA-seq), one measures all the genes in a cell, so dimensionality reduction can help resolve the redundancy in the feature space due to correlated/co-regulated gene expression patterns. This assumption forms the basis of most methods in scRNA-seq. More importantly, in scRNA-seq data the dropouts and ambient molecules in mRNA counts result in so much noise that modeling cells in the full gene expression space is highly problematic. So the authors are requested to discuss in detail how they would propose to deal with noise in scRNA-seq data.
On this note, the authors mention in Supplementary Note 9 (Analysis of human thymus single-cell RNA-seq data): "Imputed data are used as the input for the trajectory inference, scaled counts (no imputation) are shown in line plots". The line plots indicate the gene expression trends along the obtained pseudotime. The authors use MAGIC to impute the data, and we request the authors to mention this in the Methods section (currently one must look through the code in Supplementary Note 1.3 to find this). Data imputation in single-cell RNA-seq data is intended to enable quantification of individual gene expression distributions or pairwise gene associations. But when all the genes in an imputed dataset are used for visualization, clustering or trajectory inference, the averaging effect will compound and result in severely smoothed data that misses important differences between cell states. Especially in the case of MAGIC, which uses a transition matrix raised to a power, it is over-smoothing to use transition-matrix-smoothed data to obtain another transition matrix to calculate the hitting time (or simulate random walks). Second, the authors' proposal to use scaled counts to study gene trends cannot be generalized to other settings due to the dropout issue. Given the few genes (and only one branch) that are highlighted in Figure 7D-G and Figure 31 in the Supplementary Note, it is hard to say if scaling raw values would pick up meaningful biology robustly here for other branches.
We recommend that this data be reanalyzed with non-imputed data used for trajectory inference and imputed gene expression used for line plots.
As stated above in the public review, we reanalyzed the scRNA-seq data using a more standard approach (first 50 principal components). We have also analyzed two additional scRNA-seq datasets (Sections 1 and 10 of the Supplementary Note).
On the same note, the authors use Seurat's CellCycleScoring to obtain the cell cycle phase of each cell and later use ScaleData to regress them out. While we agree that it is valuable to remove the cell cycle effect from the data for trajectory inference (as has been done previously in other methods), the regression approach employed in Seurat's ScaleData is not appropriate. It is an aggressive approach that severely changes the expression pattern of many genes and can result in new artifacts (false positives) in the data. We recommend the authors to explore this more and consider using more principled alternatives such as fscLVM (https://genomebiology.biomedcentral.com/articles/10.1186/s13059-017-1334-8).
Cell cycle correction is an open problem (Heumos, Nat Rev Genetics, 2023).
Here we use an (arguably aggressive) approach to make the presentation more straightforward. The cells we are interested in here (end #6) are not dividing, and the regression does not change the conclusions drawn in the paper.
(2) The figures provided are of extremely low resolution, such that it is practically impossible to correctly interpret a lot of the conclusions and references made in the figures (especially Figure 3 in the main text).
The resolution of the figures has been improved.
(3) There are many aspects of the method that make it easy for user biases to be introduced and can lead to substantial overfitting of the data.
a. On page 7: "The topology of the point cloud representing human T-cell development is more complex ... and does not offer a clear cutoff for the choice of significant sparse regions. Interactive selection allows the user to vary the resolution and to investigate specific sparse regions in the data iteratively." This implies that the method enables user biases to be introduced into the data analysis. While perhaps useful for exploration, quantitative trajectory assessment using such an approach can be faulty when the user (A) may not know the underlying dynamics (B) forces preconceived notion of trajectory.
The authors should consider making the trajectory inference approach less dependent on interactive user input and show that the trajectory results are robust to any choices the user may make. It may also help if the authors provide an effective guide and mention clearly what issues could result due to the use of such thresholds.
As explained in the response in public reviews, tviblindi is not designed as a fully automated TI tool, but as a data driven framework for exploratory analysis of unknown data.
There is always a risk of possible bias in this type of analysis - starting with experimental design, choice of hyperparameters in the downstream analysis, and an expert interpretation of the results. The successful analysis of new biological data involves a great deal of expert knowledge which is difficult to a priori include in the computational models. To specifically address the points raised by the reviewer:
“(A) may not know the underlying dynamics” - tviblindi is designed to perform exploratory analysis of the unknown underlying dynamics. We showcase in the study how this can be performed and we highlight possible cases which can be resolved expertly (spurious connections (doublets), different scales of resolution (beta selection)). Crucially, compared to other TI methods, tviblindi offers a clear mechanism for how to discover, focus on and resolve these issues, which would (and do) contaminate the trajectories discovered fully automatically by the tested methods (cf. the beta selection, or the development of plasmacytoid dendritic cells (PDCs); Supplementary Note, section 10.1).
“(B) forces preconceived notion of trajectory” - user interaction in tviblindi does not force a preconceived notion of the trajectory. The random walks are simulated before the interactive step in an unbiased manner. During the interactive step the user adjusts trajectory specific resolution - incorrect choice of the resolution may result in either merging distinct trajectories into one or over separating the trajectories (which is arguably much less serious). However the interactive step is designed to deal with exactly this kind of challenge. We showcase (e.g. beta selection, or PDCs development) how to address the issue - tviblindi allows us to investigate deeper structure in any considered trajectory.
Thus, tviblindi represents a new class of methods that is complementary to fully automated trajectory inference tools. It offers a semi-automated tool that leverages features derived from data in an unbiased and mathematically rigorous manner, including pseudotime, homology classes, and appropriate low-dimensional representations. These can be integrated with expert knowledge to formulate hypotheses regarding the underlying dynamics, tailored to the specific trajectory or biological process under investigation.
b. In Figure 4, the authors discuss the trajectory of cells emanating from the CD3-negative double positive stage and entering the apoptotic phase and mention tviblindi may give "the false impression that cells may pass through an apoptotic phase into a later developmental stage" and propose that the interactive version of tviblindi can help the user zoom into (increase resolution on) this phenomenon and identify that there are in fact two trajectories in one. Given this, how do the other trajectories in the data change if a user manually adjusts the resolution? A quantification of the robustness is important. Also, it appears that a more careful data clean-up could avoid such pitfalls where the algorithm infers a trajectory based on a mixed phenotype, and the user would not have to manually adjust the resolution to obtain a clear biological conclusion. We note that the original publication of this data did such "data clean-up" using simple diffusion map based dimensionality reduction, which the authors boast they avoid. There is a reason for this dimensionality reduction (distinguishing signal from noise), even in CyTOF data, let alone its importance in single cell data.
The reviewer is concerned about two different, but intertwined, issues we wish to untangle here. First, data clean-up is typically done on the premise that dead cells are irrelevant and are a source of false signals. In the case of the thymocytes in the human thymus this premise is not true. Apoptotic cells are a legitimate (actually dominant) fate of the development and thus need to be represented in the TI dataset. Their biological behavior is, however, complex, as they stop expressing proteins and thus lose their surface markers gradually, as dictated by the particular protein degradation kinetics. So can we clean up dead and dying cells better? Yes, but we don't want to, since we would lose cells we want to analyze. Second, do trajectories change when we zoom into the data? No, only the level of detail presented visually changes. Since we calculate 5000 trajectories in the dataset, we need to aggregate them already for the hierarchical clustering visualization. Note that Figure 4, panel A highlights 159 trajectories selected in group V. Zooming in means that the hierarchy of trajectories within group V is revealed (panel D, groups V.a and V.b) and can be interpreted on the vaevictis and line plot graphs (panels E, F).
c. In the discussion, the authors write "[tviblindi] allows the selection and grouping of similar random walks into trajectories based on visual interaction with the data". This counters the idea of automated trajectory inference and can lead to severe overfitting.
As explained in the reply to Q3, our aim was NOT to create a fully automated trajectory inference tool. Moreover, in our experience we have realized that all current tools take this fully automated approach, with a search for an “ideal” set of hyperparameters. This, in our experience, leads to a “black-box” tool that is difficult to interpret for the expert in the biological field. To respond to this need we designed a modular approach where the results of the TI are presented and the expert can interact with them to focus the visualization and to derive an interpretation. Our interactive concept is based on 15 years of experience with data analysis in flow cytometry, where neither manual gating nor full automation is the ultimate solution but smart integration of both approaches eventually wins the game.
Thus, tviblindi represents a new class of methods that is complementary to fully automated trajectory inference tools. It offers a semi-automated tool that leverages features derived from data in an unbiased and mathematically rigorous manner. These features include pseudotime, homology classes, and appropriate low-dimensional representations. These features can be integrated with expert knowledge to formulate hypotheses regarding the underlying dynamics, tailored to the specific trajectory or biological process under investigation.
d. The authors provide some comment on the robustness to the relaxation parameter for witness complex construction in Supplementary Note Section 8.1.2 but it is limited given the importance of this parameter and a more thorough investigation is recommended. We request the authors to provide concrete examples with figures of how changing alpha2 parameter leads to simplicial complexes of different sizes and an assessment of contexts in which the parameter is robust and when not (in both simulated and publicly available real data). Of note, giving the users a proper guide for parameter choice based on these examples and offering them ways to quantify robustness of their results may also be valuable.
Section 8 in Supplementary Note was extended as requested.
e. The authors are requested to provide an assessment of possible short-circuits (e.g. cells of two distantly related phenotypes that get connected erroneously in the trajectory) in the data, and how their approach based on persistent homology deals with them.
If a short circuit results in a (spurious) alternative trajectory, the persistent homology approach allows us to distinguish it from genuine trajectories that do not follow the short circuit. This prevents contamination of the inferred evolution by erroneous connections. The ability to distinguish and separate distinct trajectories with the same fate is a major strength of this approach (e.g., the trajectory through doublets or the trajectories around checkpoints in thymocytes’ evolution).
(4) The authors propose vaevictis as a new visualization tool and show its performance compared to the standard UMAP algorithm on a simulated data set (Figure 1 in the Supplementary Notes). We recommend a more comprehensive comparison between the two algorithms on a wide array of publicly available single-cell datasets, as well as a comparison to other popular dimensionality reduction approaches like force-directed layouts, which are the most widely used tools specifically for visualizing trajectories.
We added Section 10 to Supplementary Note that presents multiple comparisons of this kind. It is important to note that tviblindi works independently of visualization and any preferred visualization can be used in the interactive phase (multiple visualisation methods are implemented).
(5) In Supplementary Note 8.2, the authors compare tviblindi against the other methods. We recommend the authors to quantify the comparison or expand on their assessments in real biological data. For example, in comparison against Palantir and VIA the authors mention "... discovers candidate endpoints in the biological dataset but lacks toolbox to interrogate subtle features such as complex branching" and "fails to discover subtle features (such as Beta selection)" respectively. We recommend the authors to make these comparisons more precise or provide quantification. While the added benefit of interactive sessions of tviblindi may make it more user-friendly, the way tviblindi appears to enable analysis of subtle features (e.g. Figure 1H) should be possible in Palantir or VIA as well.
We extended the comparisons and presented them in Sections 8 and 10 of the Supplementary Note.
(6) The notion of using random walk simulations to identify terminal (and initial states) has been previously used in single-cell data (CellRank algorithm: https://www.nature.com/articles/s41592-021-01346-6). We request the authors to compare their approach to CellRank.
We compared our algorithm to the CellRank successor, CellRank 2 (see section 8.2 of the Supplementary Note).
(7) The notion of using persistent homology to discover trajectories has been previously used in single-cell data (https://pubmed.ncbi.nlm.nih.gov/28459448/). We request a comparison to this approach.
The proposed algorithm was not able to accommodate the large datasets we used.
scTDA (Rizvi, Camara et al. Nat. Biotechnol. 2017) has not been updated for 6 years. It is not suited for complex atlas-sized datasets both in terms of performance and utility, with its limited visualization tools. It also lacks capabilities to analyze individual trajectories.
(8) In Figure 3B, the authors visualize the endpoints and simulated random walks using the connectome. There is no edge from the start to the apoptotic cells here, and it is not clear why. If they are not relevant based on random walks, can the user remove them from the analysis? The same applies to the small group of pink cells below the initial point.
The connectome is a fully automated approach (similar to PAGA) which gives a basic overview of the data. It is not expected to be able to compete with the interactive pipeline of tviblindi for the same reasons as the fully automated methods (difficult to predict the effect of hyperparameters).
(9) In Supplementary Figure 3, in relation to "Variants of trajectories including selection processes" the author mention that there is a spurious connection between CD4 single positive, and the doublet set of cells. The authors mention that the presence of dividing cells makes it difficult to remove the doublets. We request the authors to discuss why. For example, the authors seem to have cell cycle markers (e.g. Ki67, pH3, Cyclin) and one would think that coupled with DNA intercalator 191/193lr one could further clean-up the data. Can the authors employ alternative toolkits such as doublet detection methods?
To address this issue, we do remove doublets with illegitimate cell barcodes (e.g., we remove any event that carries the barcodes of two different samples, i.e., presents with a double barcode). Although there are computational doublet removal approaches for mass cytometry (Bagwell, Cytometry A, 2020), mostly applied to peripheral blood samples (where cell division is not present under steady-state immune system conditions), these are not well suited for situations where dividing cells occur (Rybakowska P, Comput Struct Biotechnol J, 2021), which is the case for our thymocyte samples. Furthermore, there are other situations where doublet formation is not an accident, but rather a biological response (Burel JG, Cytometry A, 2020). Thus, the doublet cell problem is similar to the apoptotic cell problem discussed earlier.
We could remove cells with the double DNA signal, but this would remove not only accidental doublets but also the legitimate (dividing) cells. So the question is how to remove the illegitimate doublets but not the legitimate ones.
Of note, the trajectory going through doublets does not affect the interpretation of other trajectories as it is readily discriminated by persistent homology and thus random walks passing through this (spurious) trajectory do not contaminate the markers’ evolution inferred for legitimate trajectories.
We therefore prefer to remove only the barcode-illegitimate events and keep all others in the analysis, also using the expert analysis step to identify (using the cell cycle markers plus other features) the artificially formed doublets and thus spurious connections.
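As an illustration only, a minimal sketch of such a barcode-based filter, assuming debarcoding has already produced a per-sample barcode intensity for every event; the column names and threshold are hypothetical, not those used in the study.

import pandas as pd

def drop_cross_sample_doublets(events: pd.DataFrame, barcode_cols: list, threshold: float = 0.5) -> pd.DataFrame:
    # Keep only events positive for exactly one sample barcode.
    # Events positive for two different sample barcodes are cross-sample
    # ("illegitimate") doublets and are removed; legitimate dividing cells,
    # which carry a single barcode, are kept.
    positive = events[barcode_cols].to_numpy() > threshold
    n_positive = positive.sum(axis=1)
    return events.loc[n_positive == 1].copy()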
(10) The authors should discuss how the gene expression trend plots are made (e.g. how is the expression averaged? A rolling mean?).
The development of these markers is shown as a line plot connecting the average values of a specific marker within each pseudotime segment. By default, the pseudotime values are divided into uniform segments (each containing the same number of points), whose number can be changed in the GUI. To focus on either early or late stages of the development, the segment division can also be adjusted in the GUI. See section 6 of the Supplementary Note.
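As a rough, illustrative sketch of that segment averaging (not the GUI's actual implementation), the trend of a marker along pseudotime could be computed like this:

import numpy as np

def marker_trend(pseudotime, marker, n_segments=50):
    # Average a marker within equal-occupancy pseudotime segments;
    # each segment holds roughly the same number of cells, and the line
    # plot connects the per-segment mean marker values.
    order = np.argsort(pseudotime)
    pt_sorted = np.asarray(pseudotime)[order]
    marker_sorted = np.asarray(marker)[order]
    segments = np.array_split(np.arange(len(pt_sorted)), n_segments)
    xs = [pt_sorted[idx].mean() for idx in segments if len(idx)]
    ys = [marker_sorted[idx].mean() for idx in segments if len(idx)]
    return np.array(xs), np.array(ys)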
Reviewer #3 (Recommendations For The Authors):
The overall figure quality needs to be improved. For example, I can barely see the text in Figure 3C.
The resolution of the figures has been improved.
-
-
go-gale-com.ezp.idm.oclc.org go-gale-com.ezp.idm.oclc.org
-
"tie back to some of those common themes that we experience at the beginning of Warcraft, when we're playing those original zones."
Shows a genuine intention to invoke nostalgic game elements
-
-
osf.io osf.io
-
Reviewer #3 (Public review):
In this paper, the authors use a three-phase economic game to examine the tendency to engage in prosocial versus competitive exchanges with three anonymous partners. In particular, they consider individual differences in the tendency to infer about others' tendencies based on one's preferences and to update one's preferences based on observations of others' behavior. The study includes a sample of individuals diagnosed with borderline personality disorder and a matched sample of psychiatrically healthy control participants.
On the whole, the experimental design is well-suited to the questions and the computational model analyses are thorough, including modern model-fitting procedures. I particularly appreciated the clear exposition regarding model parameterization and the descriptive Table 2 for qualitative model comparison. In the revised manuscript, the authors now provide a more thorough treatment of examining group differences in computational parameters given that the best-fitting model differed by group. They also examine the connection of their task and findings to related research focusing on self-other representation and mentalization (e.g., Story et al., 2024).
The authors note that the task does not encourage competition and instead captures individual differences in the motivation to allocate rewards to oneself and others in an interdependent setting. The paper could have been strengthened by clarifying how the Social Value Orientation framework can be used to interpret the motivations and behavior of BPD versus CON participants on the task. Although the authors note that their approach makes "clear and transparent a priori predictions," the paper could be improved by providing a clear and consolidated statement of these predictions so that the results could be interpreted vis-a-vis any a priori hypotheses.
Finally, the authors have amended their individual difference analyses to examine psychometric measures such as the CTQ alongside computational model parameter estimate differences. I appreciate that these analyses are described as exploratory. The approach of using a partial correlation network with bootstrapping (and permutation) was interesting, but the logic of the analysis was not clearly stated. In particular, there are large group (Table 1: CON vs. BPD) differences in the measures introduced into this network. As a result, it is hard to understand whether any partial correlations are driven primarily by mean differences in severity (correlations tend to be inflated in extreme groups designs due to the absence of observation in middle of scales forming each bivariate distribution). I would have found these exploratory analyses more revealing if group membership was controlled for.
-
-
templeu.instructure.com templeu.instructure.com
-
Cable has stood ready to supplant broadcasting from the very beginning of both radio and television; its failure so to do is a further vivid example of the operation of the ‘law’ of the suppression of radical potential.
This sentence is saying that cable technology was always good enough to replace traditional radio and TV broadcasts, even from the start. But it didn’t happen—not because the technology wasn’t ready, but because powerful groups didn’t want it to. The author is pointing out how new, game-changing ideas or inventions often get blocked because they could shake up the way things already work or threaten people in charge.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
They also have a history of collectively choosing a target website or community and doing a “raid” where they all try to join and troll and offend the people in that community.
This part was very shocking for me; I found it disquieting that an anonymous group could coordinate attacks to hurt or offend others and treat it like a game. This made me reflect on how online spaces can encourage people to lose empathy when there's no accountability, and on how platforms might prevent this kind of behavior without restricting free speech; similar to our discussions in class about the responsibility of platform design, when anonymous platforms allow cruelty rather than actual freedom.
-
-
gamesfromwithin.com gamesfromwithin.com
-
all a game does is transform some data (assets, inputs, state) into some other data (graphics commands, new game states)
very profound statement.
-
minimize the amount of transformations
does "transformations" here mean, for example, loading some files into the game?
-
-
worldtreasures.org worldtreasures.org
-
10. True: Following the death of her husband, Eliza Hamilton took it upon herself to tell the story of her husband, which also benefited her! Following the Reynolds Pamphlet, Eliza’s story was exploited without her permission. By telling Hamilton’s story after his death, Eliza was able to reclaim her own narrative as well. As a reminder, the Reynolds Pamphlet was published by Alexander Hamilton to clear his name of being involved in political corruption after being blackmailed by James Reynolds. In doing so, he admitted to his romantic involvement with James’ wife, Maria, and thus humiliating Eliza in the process. Another truth to the musical is the depiction of Eliza burning love letters between her and Hamilton, though it cannot be known for certain her reasoning behind this. Finally, in the last song of the musical, Eliza sings of her philanthropy that she became involved in after Hamilton’s death. This was also true! Eliza founded The Hamilton Free School which was the first school in Washington Heights and became heavily involved in helping orphans and widows.
I didn't have much to say about this article, but I like the layout of it and I feel like I could use this as a fun game during the interview. I'm having many ideas from this about things we could do!
-
-
docs.google.com docs.google.com
-
He loved boxing, though. He knew the names of all the Mexican fighters as if they lived here, as if they were Dodgers players, like Steve Sax or Steve Yeager, Dusty Baker, Kenny Landreaux or Mike Marshall, Pedro Guerrero. Roque did know about Fernando Valenzuela, as everyone did, even his mom, which is why she agreed to let Roque take them to a game
Roque, despite not being a huge fan of baseball, takes Erick out, in an attempt to connect with him but also show him that he loves not just Erick's mother, but Erick as well.
-
His mom was saying something, and Roque, too, and then, finally, it was just him and that ball and his stinging hands.
He was so shocked by his catch that everything froze around him, he wasn't just at a game, or just watching his favorite players, he was experiencing the feel, holding a ball they had played with, and he was able to experience this because Roque had shown him more actual care and attention than any of the other men had.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Affordances [e28] are what a user interface lets you do. In particular, it’s what a user interface makes feel natural to do. So for example, an interface might have something that looks like it should be pressed, or an interface might open by scrolling a little so it is clear that if you touch it you can make it scroll more (see a more nuanced explanation here [e29])
I've played a few games, and I think the design of some of the games' user interfaces could be applied to social platforms as well. Different buttons in a game interface lead to different functions, and the most important functions use enlarged fonts and frames, or prominent colors. To achieve the property of affordance, the design should be clean and clear, so that the user can see at a glance where the buttons lead to, and can quickly find the important buttons at any time.
-
-
nautil.us nautil.us
-
Artur Garcez and Luis Lamb wrote a manifesto for hybrid models in 2009, called Neural-Symbolic Cognitive Reasoning. And some of the best-known recent successes in board-game playing (Go, Chess, and so forth, led primarily by work at Alphabet's DeepMind) are hybrids. AlphaGo used symbolic-tree search, an idea from the late 1950s.
example symbols
-
A wakeup call came at the end of 2021, at a major competition, launched in part by a team of Facebook (now Meta), called the NetHack Challenge. NetHack, an extension of an earlier game known as Rogue, and forerunner to Zelda, is a single-user dungeon exploration game that was released in 1987.
example symbols
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson I want to talk about two volume types available within AWS GP2 and GP3. Now GP2 is the default general purpose SSD based storage provided by EBS, and GP3 is a newer storage type which I want to include because I expect it to feature on all of the exams very soon. Now let's just jump in and get started.
General Purpose SSD storage provided by EBS was a game changer when it was first introduced; it's high performance storage for a fairly low price. Now GP2 was the first iteration and it's what I'm going to be covering first because it has a simple but initially difficult to understand architecture, so I want to get this out of the way first because it will help you understand the different storage types.
When you first create a GP2 volume it can be as small as 1 GB or as large as 16 TB, and when you create it the volume is created with an I/O credit allocation. Think of this like a bucket. An I/O is one input/output operation, and an I/O credit pays for one 16 KB chunk of data, so one I/O is one chunk of up to 16 KB in one second. If you're transferring a 160 KB file, that represents 10 I/O blocks of data (10 blocks of 16 KB), and if you do that all in one second that's 10 credits consumed in one second, so 10 IOPS.
When you aren't using the volume much you aren't using many I/Ops and you aren't using many credits, but during periods of high disc load you're going to be pushing a volume hard and because of that it's consuming more credits—for example during system boots or backups or heavy database work. Now if you have no credits in this I/O bucket you can't perform any I/O on the disc.
The I/O bucket has a capacity of 5.4 million I/O credits, and it fills at the baseline performance rate of the volume. So what does this mean? Well, every volume has a baseline performance based on its size, with a minimum: credits stream into the bucket at all times at a refill rate of at least 100 I/O credits per second. This means that as an absolute minimum, regardless of anything else, you can consume 100 I/O credits per second, which is 100 IOPS.
Now the actual baseline rate which you get with GP2 is based on the volume size: you get 3 I/O credits per second per GB of volume size. This means that a 100 GB volume gets 300 I/O credits per second refilling the bucket. Anything below 33.33 recurring GB gets the 100 I/O credits per second minimum, and anything above 33.33 recurring GB gets 3 times the size of the volume as its baseline performance rate.
Now you aren't limited to only consuming at this baseline rate; by default GP2 can burst up to 3,000 IOPS, so you can do up to 3,000 input/output operations of 16 KB in one second, and that's referred to as your burst rate. It means that if you have heavy workloads which aren't constant, you aren't limited by your baseline performance rate of 3 times the GB size of the volume, so you can have a small volume which has periodic heavy workloads and that's OK.
What's even better is that the credit bucket starts off full, with 5.4 million I/O credits, and this means that you could run at 3,000 IOPS for a full 30 minutes, and that assumes that your bucket isn't filling up with new credits, which it always is. So in reality you can run at full burst for much longer, and this is great if your volumes are used initially for any really heavy workloads, because this initial allocation is a great buffer.
The key takeaway at this point is that if you're consuming more I/O credits than the rate at which your bucket is refilling, then you're depleting the bucket; so if you burst up to 3,000 IOPS and your baseline performance is lower, then over time you're decreasing your credit bucket. If you're consuming less than your baseline performance then your bucket is replenishing, and one of the key factors of this type of storage is the requirement that you manage all of the credit buckets of all of your volumes, so you need to ensure that they're staying replenished and not depleting down to zero.
Now because every volume is credited with 3 I/O credits per second for every GB in size, volumes up to 1 TB in size use this I/O credit architecture, but volumes larger than 1 TB have a baseline equal to or exceeding the burst rate of 3,000, and so they always achieve their baseline performance as standard; they don't use the credit system. The maximum IOPS for GP2 is currently 16,000, so any volume above 5.33 recurring TB in size achieves this maximum rate constantly.
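To make those numbers concrete, here is a small, unofficial calculator for the GP2 figures described above (baseline refill rate and how long a full credit bucket lasts under a sustained load); it reflects this lesson's description, not an AWS API.

def gp2_baseline_iops(volume_gb):
    # Baseline refill rate: 3 I/O credits per second per GB,
    # with a floor of 100 and a cap of 16,000.
    return min(max(3 * volume_gb, 100), 16_000)

def gp2_burst_minutes(volume_gb, sustained_iops=3_000, bucket_credits=5_400_000):
    # Minutes a full credit bucket lasts at a sustained IOPS load.
    # Credits drain at (sustained_iops - baseline) per second because the
    # baseline refill keeps topping the bucket up; if the load never
    # exceeds the baseline, the bucket never empties.
    drain_per_second = sustained_iops - gp2_baseline_iops(volume_gb)
    if drain_per_second <= 0:
        return float("inf")
    return bucket_credits / drain_per_second / 60

# A 100 GB volume: 300 IOPS baseline, roughly 33 minutes of 3,000 IOPS burst
print(gp2_baseline_iops(100), round(gp2_burst_minutes(100), 1))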
GP2 is a really flexible type of storage which is good for general usage—at the time of creating this lesson it's the default but I expect that to change over time to GP3 which I'm going to be talking about next. GP2 is great for boot volumes, for low latency interactive applications or for dev and test environments—anything where you don't have a reason to pick something else. It can be used for boot volumes and as I've mentioned previously it is currently the default; again over time I expect GP3 to replace this as it's actually cheaper in most cases but more on this in a second.
You can also use the elastic volume feature to change the storage type between GP2 and all of the others, and I'll be showing you how that works in an upcoming lesson if you're doing the SysOps or Developer Associate courses. If you're doing the architecture stream then this architecture theory is enough.
At this point I want to move on and explain exactly how GP3 is different. GP3 is also SSD based but it removes the credit bucket architecture of GP2 for something much simpler. Every GP3 volume regardless of size starts with a standard 3000 IOPS—so 3000 16 kB operations per second—and it can transfer 125 MB per second. That’s standard regardless of volume size, and just like GP2 volumes can range from 1 GB through to 16 TB.
Now the base price for GP3 at the time of creating this lesson is 20% cheaper than GP2, so if you only intend to use up to 3000 IOPS then it's a no brainer—you should pick GP3 rather than GP2. If you need more performance then you can pay for up to 16000 IOPS and up to 1000 MB per second of throughput, and even with those extras generally it works out to be more economical than GP2.
GP3 offers a higher max throughput as well so you can get up to 1000 MB per second versus the 250 MB per second maximum of GP2—so GP3 is just simpler to understand for most people versus GP2 and I think over time it's going to be the default. For now though at the time of creating this lesson GP2 is still the default.
In summary GP3 is like GP2 and IO1—which I'll cover soon—had a baby; you get some of the benefits of both in a new type of general purpose SSD storage. Now the usage scenarios for GP3 are also much the same as GP2—so virtual desktops, medium sized databases, low latency applications, dev and test environments and boot volumes.
You can safely swap GP2 to GP3 at any point but just be aware that for anything above 3000 IOPS the performance doesn't get added automatically like with GP2 which scales on size. With GP3 you would need to add these extra IOPS which come at an extra cost and that's the same with any additional throughput—beyond the 125 MB per second standard it's an additional extra, but still even including those extras for most things this storage type is more economical than GP2.
At this point that's everything that I wanted to cover about the general purpose SSD volume types in this lesson—go ahead, complete the lesson and then when you're ready, I'll look forward to you joining me in the next.
-
-
inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
-
children from families in the top 20 percent of the income distribution already outscore children from the bottom 20 percent by 106 points in early literacy.
When inequality appears this early, it invalidates the very idea of a level playing field. It makes the American dream feel like a rigged game pretending to be fair.
-
-
docdrop.org docdrop.org
-
Instead of sprawling console games, the team is “starting to realize it would just be better to create small, more impactful experiences.”
Highlights how smaller studios must limit scale—unlike miHoYo, which can build a massive game like Genshin thanks to its resources.
-
-
Local file Local file
-
We hear it, shrill and silver, an echo from a volleyball game of long ago.
A comparison to sport: this vicious and powerful collective. To Aunt Lydia this is nothing but entertainment, but to the handmaids this is a competitive game of volleyball, or killing.
-
-
templeu.instructure.com templeu.instructure.com
-
During the 1970s the game was changing
This quote discusses the shift in the cable industry's structure during the 70s. It explains that the early mom-and-pop cable businesses were being replaced by larger entities called Multiple System Operators, or MSOs. This meant that the industry was becoming more centralized and that the more powerful and wealthy companies were starting to take control. This change in the landscape had significant consequences for the types of things that were then being broadcast to consumers.
-
-
www.rollingstone.com www.rollingstone.com
-
Franchises will probably also get in on the action, delivering targeted ads. Say, for example, you download your favorite basketball team’s app; the app knows when you’ve gone to a game and shares that data with the Walmart app you happen to have on your phone.
This discusses how sports franchises can leverage mobile apps to deliver targeted advertisements by tracking user behavior. For example, data shared between a team’s app and other apps like Walmart allows brands to target fans with personalized ads based on their attendance at games and other activities. This practice uses data integration and real-time tracking to create highly relevant marketing strategies.
-
This is just the tip of the iceberg regarding sports tech — not to mention sports betting, video games and other modes of tech. There are a host of elements that come into play. Technology will continue to influence sports. Those looking to remain relevant and progressive within the industry should embrace the technology being developed. These trends offer countless ways for enterprising sports companies to invest, allowing brands to develop innovative ways to stay ahead of the rest. A competitive imbalance is on the horizon, and brands that embrace these powerful new tools will remain in the game.
In this conclusion, Schreiber considers the more expansive implications of emerging technologies on the sports industry, in that their influence extends well beyond the development of player performance to areas like gaming, sports betting, and brand strategy. The article contends that clubs, as well as other bodies, must adopt and invest in emerging technology to keep pace, with a coming gap between leaders and laggards. The perspective is particularly insightful for researchers and practitioners concerned with the strategic and financial elements of technology adoption within sports.
-
Technology is set to affect how athletes train, companies market, fans follow their favorite teams, games are broadcast and even how players interact with their followers. The stakeholders who get on board will likely see rapid success, while those who miss the boat might find it hard to catch up.
This is the main purpose of the article: technology, whether we like it or not, is going to revolutionize sports, and not only the game but also the management side. We are seeing this change in football specifically, where next-gen stats and broadcasting are slowly evolving the game. On the management side we are seeing major companies using technology that has recently been introduced to sports; with new technology that helps athletes perform at a higher level, more businesses are emerging claiming to have the best recovery tech, comfort for athletes, and even next-gen stats.
-
-
lawenforcementtoday.com lawenforcementtoday.com
-
Destroyer represents a game-changing level of firepower being applied to combat drug and human smuggling operations by the Mexican Cartels.
Shows how the destroyer is going to add significant resources to the fight against gangs like the cartels infiltrating the United States at the southern border.
-
-
www.reddit.com www.reddit.com
-
If you're curious about some of the technical details and how they are affected by the distribution of typefaces and sizes, I laid out some of them the other day: https://www.reddit.com/r/BaseballScorecards/comments/1jn2475/comment/mks9rbc/
Lou also has some great examples of scorekeeping across display sizes and level of data in his offerings at https://thirty81press.com/.
The broader issue for most scorers is the limitation to 8.5 x 11" paper, which is the most common page size for the ubiquitous portable and ultraportable typewriters from the mid-century. While there are some portables with carriages and platens that might accommodate up to 12" wide paper, they're not super common.
To get machines with wider platens to get 11x14 or 11x17, you're going to need the significantly larger standard machines and unless you're rich enough to have a suite that you can securely store one in or a journalist with your own booth, not many baseball fans are going to cart a 35-45+ pound typewriter with them to all their games. Though this wouldn't prevent the fan viewing at home from scoring this way easily. My example above was done on a standard width carriage on a standard machine, but I did have several options to do it on a 12", 14", and even two 16" standard typewriters. Interestingly, most of my larger carriage machines are elite 12/6 (12CPI with 6 lines/inch) formats, and I don't think Lou has designed yet for that standard which would allow for an additional 15 characters to be distributed amidst the columns (while still keeping a minimum of 1/2" margins for some balanced white space). I'll be tinkering around with some of this myself in the coming week or so on 11x14" paper using a 15" wide platen on an elite machine to see how things might look.
Perhaps a modified format at 8.5 x 11 that alternates the teams and splits a 12-inning game format across three sheets, so that the typist can type down a single page without swapping sheets every half inning and realigning their page every time? But this would cause a lot of formatting changes versus traditional layouts.
I've also been tinkering with using small space characters like the - and the _ to indicate data (with or without the use of the variable line spacing mechanism) for things like tracking RBIs. The underline is particularly useful for this in Lou's three space layout.
-
-
meresophistry.substack.com meresophistry.substack.com
-
Suggestive sentences can be interpreted in multiple ways, thereby increasing their opportunities to be cited by scholars. If a sentence is suggestive while remaining ambiguous, then future academics can use it for their own work. It’s the quoting game of modern academic scholarship. A game that is full of citation chains where the more citations in the chain, the better.
See what you want to see
-
-
remiller1450.github.io remiller1450.github.io
-
home_score ~ game_type
home score by game type
-
-
Local file Local file
-
jects which can be processed by distributing the game engine between multiple servers whilst maintaining 60 frames/s
Why does it jump from 1-2-4-6-9?
-
ypical video game engines must limit the number of objects within their game worlds as beyond a certain number, the hardware the game engine is running on prevents it from processing all updates within the 16.66ms time window required to maintain 60FPS.
clunky sentence
-
st, they are especially importan
Again, you don't need to state this. Just state:
In this chapter, we study the empirical performance of the Distribute Worlds system to demonstrate its viability as a novel game engine architecture.
-
game engine
boundary? margin?
-
is permanently disconnected, and the player object can be removed from the game world
Is it safe to remove the player object from this node?
not from the entire game world. I don't think you defined game world in the way you mean here anywhere.
-
game
Perhaps mention that this logic could be abstracted away if it were a sort of game engine, where the engine manages duplicated message sending and the user (game dev) just manages input message sending and position handling.
-
player
Would it not suffice to send input to any nodes which the player is very close to? Rather than EVERY connected node? This distance could be dependent on the maximum velocity in the game
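A minimal sketch of that idea, assuming each node exposes an axis-aligned rectangular boundary and that a reachable radius can be derived from the game's maximum velocity; all names here are illustrative, not part of the thesis' design:

import math

def nodes_needing_input(player_pos, nodes, max_velocity, horizon_seconds):
    # Forward player input only to nodes the player could plausibly reach
    # within the given horizon, instead of broadcasting it to every
    # connected node.
    reach = max_velocity * horizon_seconds
    relevant = []
    for node in nodes:
        # Distance from the player to the node's axis-aligned bounding box.
        (min_x, min_y), (max_x, max_y) = node.bounds
        dx = max(min_x - player_pos[0], 0.0, player_pos[0] - max_x)
        dy = max(min_y - player_pos[1], 0.0, player_pos[1] - max_y)
        if math.hypot(dx, dy) <= reach:
            relevant.append(node)
    return relevant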
-
4.2.1 Node boundaries and margins
Worth having a para on the fact that in 2D systems, you can use congruent squares as in Figure 4.1 but that your architecture generalises to any configuration e.g. hexagons, cubes in 3D or any shapes.
Also worth mentioning that the entire game world must be covered by nodes?
Such that your architecture allows for any collection of nodes whose union of boundaries covers the entire game world. And that the boundaries must be pairwise disjoint.
For example, a single node architecture fits into this conception.
-
To meet the goal of elastic scaling and handling large amounts of traffic from many players, it must be possible to add more compute power to the system to cope with demand. To be able to do this, the game world first needs to be broken down into scalable units called nodes
This isn't strictly true. You could vertically scale. I think you could substantiate the second sentence more precisely to reflect this.
-
The functionality of scaling the world is then entirely contained within the Distributed Worlds system.
This sentence is a bit meaningless to me. Isn't the scaling of any game world contained within its system? I think I see what you're trying to say but could reword this. I think my confusion mostly comes from the term Distributed Worlds system. It isn't well defined to me.
-
-
vsblog.netlify.app vsblog.netlify.app
-
challenges in game theory
Rather in social philosophy; game theory here is just a tool
-
game
strategic game
-
