10,000 Matching Annotations
  1. Oct 2024
    1. Then Campbell arrived. Very visibly. Six minutes into the game, he walked up to his seat with Phyllis King, his secretary and fiancée, who had fielded the angry calls to his office earlier that day.

      His presence is what triggered the crowd.

    2. It is one of those moments when you realize you are part of something special, that this spontaneous moment is taking on a life of its own, and one of those moments that remind you sports can mean so much more than a game. He is giving them one final memory here in the Forum. The crowd begins to chant, “Ree-char, Ree-char!”

      Shows the importance and influence of just one player.

    3. You’ve never seen a hockey player like Maurice Richard. Not Crosby. Not Gretzky. Not Orr, Beliveau, Howe. None of them had the talent, the intensity, the will to take over a game like Richard. And none of them meant to their fans what le Rocket meant to Canadien fans.

      Remarkable player. Interesting that they compare Gretzky to Richard because Gretzky is the greatest of all time.

    4. He swivels and drops Thompson to the ice with a right to the face.

      Hit the official. In a hearing after the game it was stated that he thought Thompson was a Bruins player.

    5. Whether this type of conduct is the product of temperamental instability or willful defiance doesn’t matter. It’s a type of conduct that cannot be tolerated.” He suspended Richard for the final three games of the regular season and the entire playoffs.

      Sometimes I can appreciate when officials in big-game situations make calls like this for the sake of the other players' safety.

    6. He had started playing this game as a 4-year-old on the backyard rink his father Onésime, a machinist at the Canadian Pacific Railway, built for him. It was quickly apparent he could play in ways other boys could not. By the time he reached his teens, his skills were in such high demand he played as often as he could, sometimes four games in a weekend, using aliases to play for multiple teams, often against grown men. The oldest of eight children, he quit school at 16 to work with his father in the factory. He began playing junior hockey the following year.

      This visual impact makes the whole scene more dramatic, as if the audience is not just watching a hockey game, but witnessing a moment of one man against the whole world.

    7. It is one of those moments when you realize you are part of something special, that this spontaneous moment is taking on a life of its own, and one of those moments that remind you sports can mean so much more than a game. He is giving them one final memory here in the Forum. The crowd begins to chant, “Ree-char, Ree-char!”

      It is really cool when one person can get an entire city excited. Living in Cleveland, I see this often, which makes me often overlook how amazing it is. One person playing a sport is able to bring so many people together in the way that Richard did. This is definitely "something special."

    8. Back in Montreal the morning after the Bruins’ game, Richard showed up for practice despite a headache and upset stomach, likely suffering from a concussion. The team doctor sent him to the hospital for X-rays and other tests. Richard stayed overnight but left the next day to attend the hearing at the Sun Life Building.

      He should have been resting and yet he never stopped going.

    9. He had started playing this game as a 4-year-old on the backyard rink his father Onésime, a machinist at the Canadian Pacific Railway, built for him. It was quickly apparent he could play in ways other boys could not. By the time he reached his teens, his skills were in such high demand he played as often as he could, sometimes four games in a weekend, using aliases to play for multiple teams, often against grown men. The oldest of eight children, he quit school at 16 to work with his father in the factory. He began playing junior hockey the following year.

      His life was hockey. This would never be allowed now unless you went to one of those prestigious academies.

    10. “No one can know when the anger of men, whipped indefinitely, becomes sculpted into political revenge. And more, it is not just a matter of hockey.”

      Hockey is not a game, but a lifestyle for Canadians.

    11. “Because I always try so hard to win and had my troubles in Boston, I was suspended. At playoff time, it hurts not to be in the game with the boys. However, I want to do what is good for the people of Montreal and the team. So that no further harm will be done, I would like to ask everyone to get behind the team and to help the boys win from the Rangers and Detroit. I will take my punishment and come back next year to help the club and younger players to win the Cup.” His words had a palliative effect. The next night nobody threw galoshes, nobody broke any more windows, nobody stopped streetcars.

      I'm glad the people listened to his words and that he was so calm about it even after probably being upset that he was suspended.

    12. “Bailey tried to gouge his [Richard’s] eyes out,” Red Storey, who refereed that game, later told a reporter, “Rocket just went berserk.”

      Wow, hockey is a lot more violent than I thought.

    1. In this lesson, I want to introduce Cloud Computing.

      It's a phrase that you've most likely heard, and it's a term you probably think you understand pretty well.

      Cloud Computing is overused, but unlike most technical jargon, Cloud Computing actually has a formal definition, a set of five characteristics that a system needs to have to be considered cloud, and that's what I want to talk about over the next few minutes in this lesson.

      Understanding what makes Cloud, Cloud can help you understand what makes Cloud special and help you design cloud solutions.

      So let's jump in and get started.

      Now because the term Cloud is overused, if you ask 10 people what the term means, you'll likely get 10 different answers.

      What's scary is that if those 10 individuals are technical people who work with Cloud day to day, often some of those answers will be wrong.

      Because unlike you, these people haven't taken the time to fully understand the fundamentals of Cloud Computing.

      To avoid ambiguity, I take my definition of Cloud from a document created by NIST, the National Institute of Standards and Technology, which is part of the US Department of Commerce.

      NIST creates standards documents, and one such document is named Special Publication 800-145, which I've linked in the lesson text.

      The document defines the term Cloud.

      It defines five things, five essential characteristics, which a system needs to meet in order to be cloud.

      So AWS, Azure, and Google Cloud, they all need to meet all five of these characteristics at a minimum.

      They might offer more, but these five are essential.

      Now some of these characteristics are logical, but some of them may surprise you.

      Even though you and other businesses are probably sharing physical hardware, you would never know each other existed, and that's one of the benefits of resource pooling.

      But on to characteristic number four, which is rapid elasticity.

      The NIST document defines this as capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand.

      To the consumer, the capabilities available for provisioning often appear to be unlimited, and can be appropriated in any quantity at any time.

      Now I simplify this again into two points.

      First, capabilities can be elastically provisioned and released to scale rapidly, outward and inward with demand, and in this case, capabilities are just resources.

      And second, to the consumer, the capabilities available for provisioning often appear to be unlimited.

      Now when most people think about scaling in terms of IT systems, they see a system increasing in size based on organic growth.

      Elasticity is just an evolution of that.

      A system can start off small, and when system load increases, the system size increases.

      But, crucially, with elasticity, when system load decreases, the system can reduce in size.

      It means that the cost of a system increases as demand increases and the system scales, and decreases as demand drops.

      Rapid elasticity is this process but automated, so the scaling can occur rapidly in real time with no human interaction.

      Cloud vendors need to offer products and features, which monitor load, and allow automated provisioning and termination as load increases and decreases.
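      As a rough sketch of the decision that automated tooling makes, here is a minimal, hypothetical scaling policy in Python. The function name, target utilization, and limits are invented for illustration; they are not any vendor's actual API.

```python
import math

# Hypothetical sketch of a rapid-elasticity scaling policy. Real cloud
# vendors ship this as managed autoscaling tooling; the names and numbers
# here are made up for illustration only.
def desired_capacity(current_instances, load_per_instance,
                     target_load=0.6, min_instances=1, max_instances=100):
    """How many instances keep the average load near the target?"""
    # Scale the fleet proportionally to how far load is from the target.
    desired = math.ceil(current_instances * load_per_instance / target_load)
    # Never drop below the floor or burst past the ceiling.
    return max(min_instances, min(desired, max_instances))

# Demand doubles: the policy scales outward (4 -> 8 instances).
print(desired_capacity(current_instances=4, load_per_instance=1.2))  # 8
# Demand halves: the policy scales back inward (8 -> 4 instances).
print(desired_capacity(current_instances=8, load_per_instance=0.3))  # 4
```

      The key property is symmetry: the same rule that provisions outward under load also releases capacity inward when demand drops, which is what distinguishes elasticity from plain growth-driven scaling.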

      Now most businesses won't mind increased system costs.

      If, for example, their profits increase during sale periods, then because the system scales along with that increased load and those increased profits, customers are kept happy.

      Elasticity means that you don't have to, and indeed can't, over provision, because over provisioning wastes money.

      It also means that you won't under provision and cause performance issues for your customers.

      It's how a company like Amazon.com or Netflix can easily handle holiday sales, or handle the load generated by the release of the latest episode of Game of Thrones.

      The second part is related to that.

      A cloud environment shouldn't let you see capacity limits.

      If you need 100 virtual machines or 1000, you should be able to get access to them immediately when required.

      In the background, the provider is handling the capacity in a pooled way, but from your perspective, you should never really see any capacity limitations.

      Now this is, in my opinion, the most important benefit of cloud, systems which scale in size in response to load.

      So this is a really important one to make sure that a potential cloud environment offers in order to make sure that it is actually cloud.

      Okay, let's move on to the final characteristic, and that's measured service.

      Now this document defines this as cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service.

      And it says that resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the service.

      Now my simplified version of this is that resource usage can be monitored, controlled, reported, and billed.

      Traditional, non-cloud infrastructure works using capex, or capital expenditure.

      You pay for servers and hardware in advance.

      In the beginning, you had more capacity than you needed, so money was wasted.

      Your demand grew over time, and eventually you purchased more servers to cope with the demand.

      If you did that too slowly, you had performance issues or failures.

      A true cloud environment, by contrast, offers on-demand billing.

      Your usage is monitored on a constant basis.

      You pay for that usage.

      This might be a certain amount per second, per minute, per hour, or per day of usage of a certain service, for example virtual machines.

      Or it could be a certain cost for every gigabyte you store on a storage service for a given month.

      You generally pay nothing in advance if it truly is a cloud platform.

      If you consume one virtual server for a month, but then for 30 minutes of that month you use 100 virtual servers, then you should pay the normal monthly amount for that one server, plus an extra charge for the 100 servers covering just those 30 minutes.
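      To make that arithmetic concrete, here is a back-of-the-envelope sketch of on-demand billing in Python. The hourly rate and function name are invented for the example; real pricing varies by vendor, service, and instance type.

```python
SECONDS_PER_HOUR = 3600
RATE_PER_HOUR = 0.01  # hypothetical price per virtual server, per hour

def on_demand_cost(instances, seconds, rate_per_hour=RATE_PER_HOUR):
    """Pay only for what you use: instances x hours x hourly rate."""
    return instances * (seconds / SECONDS_PER_HOUR) * rate_per_hour

# One virtual server running for a 30-day month...
baseline = on_demand_cost(instances=1, seconds=30 * 24 * 3600)
# ...plus a 30-minute burst of 100 extra servers, billed only while they run.
burst = on_demand_cost(instances=100, seconds=30 * 60)

print(f"month-long server: ${baseline:.2f}")  # $7.20
print(f"30-minute burst:   ${burst:.2f}")     # $0.50
```

      Under a capex model you would instead have paid for 100 servers up front, whether or not the burst ever happened.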

      Legacy vendors will generally want you to pay a fee up front to buy or lease a server.

      If this is the case, they aren't cloud, and they probably don't support some of the massively flexible architectures the cloud allows you to build.

      With that being said, that is everything I wanted to cover, so go ahead, complete this video, and when you're ready, I'll join you in the next.

    1. ‘perfect’ is not even a realistic concept when it comes to how languages work

      This is crucial to understand. Language is constantly developing, with new slang, phrases, and words arriving with each new generation. You can't be perfect at a game when the rules are constantly changing.

    1. Her brain allows one half-formed thought to pass: 'Well now that’s done: and I’m glad it’s over.' When lovely woman stoops to folly and Paces about her room again, alone, She smoothes her hair with automatic hand, And puts a record on the gramophone.

      Lines 240-250 of “The Waste Land” describe the sexual assault of a woman by her “lover” describing how “flushed and decided, he assaults at once; Exploring hands encounter no defense”. An immediate connection can be made with Dr. Goldsmith’s “The Vicar of Wakefield”, where the theme of sexual assault is also brought up, specifically through the story of a girl named Olivia. On page 133, Goldsmith writes, “poor Olivia first met her seducer, and every object served to recall her sadness. But that melancholy … soothes the heart instead of corroding it. Her mother, too, upon this occasion, felt a pleasing distress”. Despite the gravity of Olivia’s experience, her melancholy is perversely received as a comfort by others, even to her own mother. This perversion can be compared to the disturbing association between sexual assault and pleasure, as both harm the victim, but satisfy those around them. Following this description, on page 133-134, Olivia shares a poem regarding her sexual assault, stating, “When lovely woman stoops to folly// And finds too late, that men betray … //The only art her guilt to cover //To hide her shame from ev’ry eye //To give repentance to her lover //And wring his bosom, is — to die”. In Olivia’s poem, she seeks comfort and healing from the trauma of her experience, describing the guilt and shame she felt. As a part of this restoration, or “art”, as it is named interestingly, she mentions repentance to her lover, or a “wringing of his bosom”. The act of wringing is one of significant strain, as it describes the squeezing of fabric to remove liquid, and the mention of the man’s bosom suggests pride or comfort, seen through the stereotypical “puffed out chest”. Thus, as another part of Olivia’s restoration, she also seeks to humble and in some way correct the male hubris and entitlement of her assaulter. Of course, her final state of restoration is revealed to be death. 
Olivia’s art of death evokes the idea of restoration and sacrifice, in line with previous themes of Eliot’s “The Fire Sermon”. In particular, I would like to return to Thomas Middleton’s “A Game of Chess”, where the “fall or prostitution our lust most violently rages for” is described in a letter by the Black King to his pawn. It seems like Olivia’s story falls into the forced prostitution or sexual exploitation of women, and her subsequent death falls into the idea of a woman’s fall. Returning to the text of “The Waste Land”, the aftereffects of the assault on the nameless woman are described by Eliot on lines 251 to 255. Eliot writes, “Her brain allows one half-formed thought to pass: 'Well now that’s done: and I’m glad it’s over.' When lovely woman stoops to folly and paces about her room again, alone, She smoothes her hair with automatic hand". Eliot’s reference to Olivia’s poem suggests an overarching theme of coming to terms with trauma and, of course, death. However, this woman isn’t seen physically dying as Olivia seems to describe. Instead she seems to enter a dissociated state, one in which her brain doesn’t seem to function properly, and where her body starts to pace and adopts automatic movements. Conceptually, this separation of the mind and body can be interpreted as a type of death - a profound disruption of the mind, body, and soul. In response to demand for the “fall or prostitution” of women that some men lust after, Olivia’s art of death is reflected by the nameless woman as a method of survival and sacrifice in which a victim hopes for the restoration of an unjust event.

    1. However, nothing may have happened if Campbell hadn't made a tactical error — he showed up to the game (10 minutes late) with his secretary (future wife) and took his regular place

      Seems to me like he was instigating the problem.

    2. However, nothing may have happened if Campbell hadn't made a tactical error — he showed up to the game (10 minutes late) with his secretary (future wife) and took his regular place.

      Fans cared so much about hockey that French and English were willing to come together and agree on it, which shows how insane it was.

    3. he showed up to the game (10 minutes late) with his secretary (future wife) and took his regular place.

      What would have happened if Campbell had not gone to the game? There was still so much anger towards him that I wonder if the riot would have just been delayed or if anything had happened at all.

    1. the spinning wheel was the addition of a foot treadle that powered the wheel.

      I wasn't aware how much the spinning wheel was a game changer for labor.

    1. "I hope I have enough students to make two hockey teams. Maybe enough to fit the Bell Centre," he joked.

      A neat field trip, if this was successful, would be to have the class play a game of hockey one day.

    1. Smoke from a tear-gas canister had driven thousands of hockey fans into the streets, sparking a four-hour rampage that yielded the requisite fires, shattered windows, looted stores, overturned cars and 137 arrests.

      I was not aware of how violent the riot became. This posed safety concerns for innocent fans trying to watch the game.

    2. There are moments when life gets in the way, when sports and the real world collide at some intersection--

      Sports are a major part of people's lives and society. People can be very big fans, and when a team loses or wins a game, it impacts them as fans more than some people can imagine.

    1. Sanctions are now less a tool of behavioral change than one aimed at economic and technological attrition.

      Article: https://www2.deloitte.com/us/en/insights/economy/global-economic-impact-of-sanctions-on-russia.html Annotation #1: I think the authors are saying that these sanctions aren’t really about stopping Russia from acting right now but more about hurting their economy and tech long term. This connects to our question because it shows how these economic measures are designed to wear Russia down over time, making it harder for them to be a global power in the future. The Deloitte article also highlights this, explaining how sanctions on commodities, technology, and financial institutions are causing disruptions that could lead to higher inflation, supply chain issues, and weakened growth in Russia. It confirms that this strategy is more of a long-term game plan to erode Russia’s global influence.

    1. Mainstream game design has moved toward minimizing these down times, adding mechanics like fast travel or quest markers to get players straight to the next point of interest, another filing away of the adventure game’s rough corners.

      I think the lack of fast travel in Walking Simulators forces players to think more about the environment around them. This reminds me of Red Dead Redemption, where for the early part of the game the player is forced to move through a large world on horseback. The lack of fast travel forces the player to explore the wild west and take time to see the environment around them. Walking Simulators believe in this experience as well, as investigating every part of the environment is how players learn more about the game's narrative.

    2. O’Connor’s review of the tourism Quake

      I don't think Quake is suitable as a game used for shooter --> walker comparison. It is very old and the graphics reflect that. Nowadays, shooters probably have more complex environments than old-school walkers. The limitations of the technology at the time shouldn't be an indication of how well a shooter could be transformed into a walker today, especially as the game development and modding community has exploded.

    3. The most visible difference between adventure games and walking sims is the removal of puzzles

      While I agree there are fewer physical challenges in a walking sim than an adventure game, I think the factor of predetermined choices/options is a big difference. These choices refer to more than typical multiple-choice answers, but also choices to change spaces and to do certain actions. In a walking sim, the player is free to do whatever. There's no need to record one's input in order to build the algorithm and advance in the game. A player can explore as many or as few areas of the game as they like and can still make progress and finish the game.

    4. The most visible difference between adventure games and walking sims is the removal of puzzles, although this evolution has happened across many genres of game, as radically extending play time through mental frustration fell out of fashion (to be replaced, of course, by grinding, cooldown timers, and other more modern mechanisms of inflating playtimes).

      Adventure games and walking simulations vary in their gameplay. I like that walking simulations are more lowkey and don't always have a "purpose or goal," but they allow the player to truly immerse themselves in the simulation. Adventure games are often more violent, which is not something that every player enjoys.

    5. But the dominant genre during this period was the much more frenetic first-person shooter. With many shooter engines increasingly providing tools to build your own levels or otherwise modify game content, it’s no surprise that many gamemakers began using these tools for other purposes.

      I learned that first-person shooter engines led to the development of first-person walking simulators. With new and improved engines for first-person shooters, creators of walking simulators can create more advanced games. I find it interesting how the two help each other, though many who play first-person shooters and other games do not like the walking simulators built on the same platforms.

    6. To call something a “walking simulator” became not just a complaint about pacing but an existential fight for survival, spiraling to include larger and larger questions of who gets to be a gamer and what should be “counted” as a game

      A very important question is raised, which really challenges the definition of games and who gets to be a gamer. I think it's important to question our definition of games in order to better understand the content. Hence, this could initially be considered a good step because it could potentially address our biases and give us some awareness. But too much criticism, and completely closing your mind off from listening, is problematic.

    7. The player’s initial fear that they might need to act quickly to defend themselves from some lurking supernatural horror becomes transmuted, by the end of the story, into the inevitable realization that their character has already lost her chance to act

      I found this quote really interesting, because when I started the game I was afraid and trying to go through everything fast because I thought someone was in trouble or a ghost was going to haunt me. By the end of the game though, the player starts to realize whatever happened is already done, leading them to slow their playing down a lot in order to fully understand what happened.

    8. This is why most walking sims that descend from first-person shooters have been radical reimaginings taking years to produce, not merely removing enemies but crafting whole new environments, often with custom textures, objects, music, and narration: creating not just a new focus of interaction but an entirely different kind of world to support that focus.

      This excerpt is very interesting as it explains that it is not as simple as you may think to turn a first-person shooter into a first-person walker game. In my own experiences playing both types of game I have noticed how much more detail goes into adventure and walking simulators compared to a fast-paced combat game.

    9. Walter Benjamin’s portrait of the flâneur, the urban wanderer who walks without purpose other than keen observation through the city streets, and in whom “the joy of watching is triumphant” (1973): the connection between flâneurs and explorers of games has been noted by many games scholars (Kagen 2015; Carbo-Mascarell 2016). In games, walking connects to the adventure pillar of exploration, as well as the sense of immersive transportation and a focus on environmental storytelling: in adventure games specifically, it provides a space for thinking and reflecting, a necessary precursor to successfully overcoming obstacles.

      I find this first section introducing walking’s purpose in games and specifically as the base of “walking simulators” interesting because I had always viewed walking as a waste of time. I think it was an important thing to note that some people do feel this way, which has caused many games to include a “fast pass” that can be purchased or is a complete replacement for any walking. It’s especially interesting to look at how walking or the lack thereof can affect our “fun”, agency, and a sort of challenge. If we don’t have this break time to think and reflect, then it feels like it would be a lot harder to be able to overcome any obstacles we may face. I never understood the immersive power of walking through an environment for the player, but now that I think about it, having a “fast pass” model for the game feels like it would disconnect the player from the character they’re playing as. If we don’t get to experience the character’s entire journey, are we really in full control of the character? If we aren’t, how are we going to feel like we are the character themselves?

    10. Just as the environments in first-person shooters exist to support action-packed combat, the environments in most walking sims are designed to be platforms for understanding and empathizing with characters. In games like Dear Esther, Virginia (2016), What Remains of Edith Finch (2017), and many others, 3D game worlds come to be understood as metaphorical spaces offering windows into the minds and stories of the people within them.

      This very feature of these digital literature games that not only tells stories but also allows players to actively engage in them is what builds our understanding and empathy for the video game characters. As players, to fully understand the extent of someone else’s struggles, we must put ourselves into their shoes and this form of game literally allows this metaphor to come to life.

    11. most importantly for the way it foregrounds what these games make visible: a certain pace of storytelling, driven by navigation through an environment and without the frustrating challenges of other styles of gaming (including their ancestor, the adventure game)

      A lot of people forget that the point of these games is not to be extreme in any way. It is a closed-minded and stubborn issue: people caring too much about stuff that doesn't affect them. In the closed-minded aspect, people cannot wrap their heads around a different stylistic choice. Not everything needs to be black and white, wet and dry. If you do not allow ambiguity, that separation can create hateful environments. The separation can then carry over into the difficulty of the game and how people wish to play it. Perhaps there are others who don't enjoy the fast play and wish to play for calming enjoyment; the hateful community can move in again with diction like "wuss mode." This creates a harmful, unhappy, and toxic environment. The author should go into the toxicity this environment creates in the gaming community, because it would help explain why Gone Home was hated on so much. It would also be nice if the author included some opposing perspectives, describing someone who doesn't like slow-paced games.

    12. But the most famous and successful walking simulators are best understood as explorations not of environment, but of character. Just as the environments in first-person shooters exist to support action-packed combat, the environments in most walking sims are designed to be platforms for understanding and empathizing with characters.

      The confined, interactive environments of first-person shooter games both contradict and mirror the introspective, open-world nature of walking simulator games. These two different perspectives develop different player approaches: action combat in first-person shooters, and empathy and understanding in walking sims. Gone Home illustrates this: the player is put in a position to piece together, empathize with, and understand the situation and perspectives of the different family members, so an open-world walking-sim concept is best for this approach.

    13. Real games are difficult, goes this argument: you can die in them; you can take “real” actions (i.e., shooting and loot collecting, not walking or investigating). Real game heroes are powerful and effective. An ugly corollary to this argument, advanced by some, was that “real games” shouldn’t be about the disenfranchised.

      With the rise in popularity of walking simulators, new arguments over what should be considered a "real game" have come up. This is a very closed-minded mindset held by gamers who think games should just be first-person shooters, or must have violence to be considered games. What they lack is the realization that many of the mechanics in the games they love are dependent on walking; many actions in high-action games often cannot be performed while running. Walking simulators are just as much games as first-person shooters and the like; they are just on a different part of the spectrum.

    14. outsider genre have reclaimed the term, as have we in this chapter, for its embrace of qualities that would-be insiders despise. These games deemphasize traditional active game verbs to center more passive ones, especially movement, observation, and reflection. Different verbs can tell different kinds of stories, and these games have often told outsider stories about othered identities.

      Interesting to consider insiders and outsiders in gaming and the ways in which games might offer a space for so-called outsiders to express something about their identities. I'm trying to think of other examples in mainstream gaming that are like Gone Home.

    15. These games were originally dubbed “walking simulators” as an insult to exclude them and their creators from being considered “real games” or real game makers.

      People inside the gaming community were upset by this new genre and did not see the value in the games. They proceeded to label these games with a derogatory term.

    16. Myst’s images were largely static because it required offline rendering to create worlds beautiful enough to hold up to long scrutiny, while small bits of spot movement were added via Quicktime video to award attentiveness, creating interesting environmental details such as lapping waves or flittering butterflies. Doom’s engine, by contrast, creates lower-fidelity images in constant motion: the player will look at any one thing for only moments, while constantly racing through the environment fighting enemies who are also in constant motion.

      It's super interesting how the type of engine used creates different technical challenges that are addressed through different development choices. Compared to the past, when every engine was unique and came with its own limits arising from issues with storage and usability on a variety of devices, newer game engines like Unreal Engine 5 and Godot show how these limitations have gradually disappeared. While the issue of storage is still prevalent, many other limitations arising from the engines are gone. This allows new games to be far more robust compared to older ones like Myst and Quake, which were limited by their engines.

    17. In both pieces, the readers or viewers are given agency to understand and gain perspective on the story through their interpretive decisions: about which threads are most interesting to follow, and about how to piece together the events they witness into a meaningful whole.

      I think that this quote is interesting because in most games, there is a specific point that the author/creator of the game is trying to get across and it is up to the reader/player to interpret that meaning.

    18. “the joy of watching is triumphant”

      In a sense, this statement points to a lack of agency throughout many walking simulators. They are not meant to provide a story that the player can affect or change, but instead create an illusion of agency by making it seem like they can. Instead of deriving joy from steering the story the way they want it to go, players derive joy from watching the story play out. Even with this lack, or mere illusion, of agency, the game can still be enjoyable.

    1. The most visible difference between adventure games and walking sims is the removal of puzzles,

      That walking sims can be a part of adventure games is easy to fail to notice. A combination of storytelling and combat can make a game more intricate.

    2. walking connects to the adventure pillar of exploration, as well as the sense of immersive transportation and a focus on environmental storytelling: in adventure games specifically, it provides a space for thinking and reflecting

      Exploration games allow the player to form a more emotional connection to game characters. Walking simulators allow for more personal stories and experiences that can be explained in more detail, since these types of games are slow-paced.

    3. Critics have drawn different conclusions about the role of Gone Home as (and alongside other) queer media. Zachary Harvat describes it as part of the tradition of “queer historical play,” which does not deny the challenges and trauma of queer history but also does not place it as the sole narrative emphasis (2018).

      This is trying to get across the point that the game is not meant to be mostly about LGBTQ representation and revolve around it; at the same time, it does not deny the challenges and trauma of queer history, it just doesn't place them as the main narrative point. I believe the game mostly revolved around the family's unacceptance of Sam because of her relationship, and even though LGBTQ identity was not the main theme, it definitely changed the scope of the game and added to the sense of why Sam had to suffer so much from her family.

    4. Coming to an understanding of a character or environment as a means of gaining control over it is a central tenet of environmental storytelling generally and walking simulators in particular. This kind of understanding connects more to cinematic than ludic traditions,

      The main point of walking simulators is to enable the player to gain control over a character or environment as a whole, building on the theme of environmental storytelling in a more engaging way. In this way walking simulators explore complex themes from the point of view of the character, opening up possibilities for the player to engage with the game in a more cinematic way.

    5. the game create a sense of mystery more frequently associated with survival horror: the abandoned house is cast as unnatural and threatening, with the player invited to explore it suspiciously, suspecting some external danger behind the apparent disappearance of the family. That danger, of course, turns out to be internal, not external.

      The game Gone Home plays with player expectations because it wants us, the players, to think the game is a horror game based on the environment: it's raining outside, the house is dark, lights flicker, random noises go on. The game wants us to think something bad is going to happen; it builds a lot of suspense, keeping us on edge, but in reality nothing happens. The real horror in the game is what the sister discovers about the family.

    6. To call something a “walking simulator” became not just a complaint about pacing but an existential fight for survival, spiraling to include larger and larger questions of who gets to be a gamer and what should be “counted” as a game (Chess and Shaw 2015). Real games are difficult, goes this argument: you can die in them; you can take “real” actions (i.e., shooting and loot collecting, not walking or investigating). Real game heroes are powerful and effective.

      I think this belief arises from the stereotypical belief that games have to have winning situations or violence, while most walking simulators do not have winning situations or violence. I don't really agree with what people are saying about walking simulators because I believe walking simulators are games just like any other video games. As long as the player performs actions with a purpose, it can be a game.

    7. The term didn’t really take off and become weaponized, however, until the growing resentment of “outsiders” and indie games that would culminate in Gamergate, after which it was retroactively applied with vitriol to games released much earlier like Dear Esther (originally 2008) and To the Moon (2011) (Clark 2017)

      Mainstream game companies slammed indie game companies for their format of having walking as an instrumental part of game play. Mainstream companies still use walking as a big part of their games in order to advance the plot even if they have other aspects in gameplay. Mainstream companies did not want to compete with indie games, so they made indie games undesirable to be played because they portrayed them as not games.

    8. In a game without spoken dialogue, these locations become a language for conveying meaning, and the player’s journey one of uncovering, step by literal step, the self of the character whose story is being told.

      While I agree that walking simulators allow players to grasp an understanding of the characters and story from solely exploring a physical space, the inclusion of Sam's dialogue in Gone Home definitely helped create a more immersive experience that was centered towards discovering Sam’s motivations.

    9. An ugly corollary to this argument, advanced by some, was that “real games” shouldn’t be about the disenfranchised. Game stories shouldn’t be about women or queer people—like Dear Esther, like Gone Home, like many of the games the genre would eventually include—nor should such people be included among their creators.

      Certain things supposedly shouldn't be included in games or else they are not considered real games. ----> What are considered real games? It's not as if women and queer people do not exist, so why does including them make a game not "real"? Wouldn't it make a game less realistic if it didn't include them?

    10. Where procedural games differ from walking simulators is in their lack of curation: they let you walk wherever you want, including perhaps into uninteresting places, rather than down a well-prepared path that tells a particular story.

      Even though I am not a huge gamer myself, I have noticed that when I do play games, I often do find myself getting frustrated with parts of games that I deem “unnecessary” to the progression of the story. For example, even during my gameplay of “Gone Home,” I began to get frustrated when I couldn’t find the code to open the safe in the basement, which was making the gameplay take longer than anticipated. Overall, “Gone Home” is an uncomfortable game to play as it goes against many norms and conventions present in more typical online games, such as the existence of a clear and explicit objective or goal that keeps the player interested in the outcome (ex. “winning” the game, defeating a monster, etc.).

    11. The term didn’t really take off and become weaponized, however, until the growing resentment of “outsiders” and indie games that would culminate in Gamergate,

      The walking simulator term was popularized as an offensive against what many "gamers" saw as a threat to the traditional style of games that they had become familiar with and even integrated as a part of their identity. They saw "walking simulators" not as a new alternative form of game but as an outsider threat to what they knew.

    12. making similar demands on the reader/player to construct identities and narratives out of competing (and even conflicting) perspectives.

      I see Gone Home as working similarly to many mystery books, which could be the reason why many people craving the familiarity of structured games may not enjoy the walking game genre. The player is somewhat forced to piece things together for themselves instead of following the game's plot. Even in horror games, the goal is often to "survive" or to "not get scared," yet Gone Home didn't fall into any obvious category at first glance.

    13. These games were originally dubbed “walking simulators” as an insult to exclude them and their creators from being considered “real games” or real game makers.

      The fact that "walking" games were called "walking simulators" in a negative way surprised me, as "walking" games are so common nowadays that I didn't even think they would have faced pushback in the 2010s. I also don't understand why these games would be excluded in the first place, since a new form will attract more gamers and people, bringing games more revenue and attention.

    1. During the first round of this exercise, students inevitably take so many fish that there are none left in the lake. Students then discuss what has happened and what they ought to do differently in the next round. Some students have strong intuitions that everybody should take an equal amount, while others insist that all that matters is that in the end there are enough fish left to repopulate the lake. Not only is this exercise pedagogically engaging, but it leads students to develop proposals and to evaluate them critically.

      In class, another student and I were presented with a pile of money and given two options: we could either steal the money or leave it. If we both chose to steal, we would both end up with nothing. If we both left the money, neither of us would get anything either. However, if one of us stole and the other left the money, the person who stole would get the money, while the other would walk away with nothing. For me, this scenario is a reflection of both philosophical dilemmas and social constructs, particularly in the context of game theory and moral philosophy. It echoes the Prisoner’s Dilemma, a situation where individual self-interest leads to worse outcomes for both parties than cooperation would. This scenario also reflects the role of trust in social interactions. If neither party trusts the other to act cooperatively, both might choose to steal, resulting in mutual loss. Social structures like laws, norms, and ethical guidelines exist to cultivate trust and reduce the risks of selfish behavior, enabling cooperation.
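      The steal/leave scenario described above can be written down as a payoff matrix and compared with the classic Prisoner's Dilemma it invokes. The numbers below are illustrative assumptions (pile worth 1, conventional PD payoffs), not values from the class exercise:

      ```python
      # Payoff matrices as {(my_move, their_move): (my_payoff, their_payoff)}.
      # The steal/leave game described above, with the pile worth 1:
      steal_leave = {
          ("steal", "steal"): (0, 0),
          ("leave", "leave"): (0, 0),
          ("steal", "leave"): (1, 0),
          ("leave", "steal"): (0, 1),
      }

      # The classic Prisoner's Dilemma, with conventional illustrative payoffs:
      prisoners_dilemma = {
          ("cooperate", "cooperate"): (3, 3),
          ("defect", "defect"): (1, 1),
          ("defect", "cooperate"): (5, 0),
          ("cooperate", "defect"): (0, 5),
      }

      def best_response(matrix, their_move):
          """My payoff-maximizing move given the other player's move."""
          return max(
              {me for me, them in matrix if them == their_move},
              key=lambda me: matrix[(me, their_move)][0],
          )

      # In both games, stealing/defecting is a (weakly) dominant strategy:
      print(best_response(steal_leave, "leave"))            # steal
      print(best_response(prisoners_dilemma, "cooperate"))  # defect
      ```

      One difference worth noting: in the game as described, mutual cooperation gains nothing, whereas in the true Prisoner's Dilemma mutual cooperation is the best joint outcome, which is what makes the failure to cooperate tragic.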

  2. Sep 2024
    1. However, nothing may have happened if Campbell hadn't made a tactical error — he showed up to the game (10 minutes late) with his secretary (future wife) and took his regular place.

      I wonder why he showed up. He had to know that something was going to happen if he did show up.

    1. I divide students into groups and ask them to imagine that each group is a family subsisting by fishing from a lake

      This could also be seen as Game Theory to an extent, which is more of an economical idea. Keeping the two fish in the lake is also an interesting concept. It creates an additional layer, where we could accomplish feeding everyone this year only to make everyone suffer the following year.

    2. The imagination allows Plato to crystalize his answer to the question of how we ought to live into a vision that we can subject to critical examination. This is the constructive step that we so often fail to take.

      I personally believe that Plato was able to crystalize his answer on how to live, subject to critical examination, because he saw himself in the fantastic imagination of The Republic from a third-person perspective, as a reader. When I was traveling in Japan I came across a book, whose name I forget, that gave me an inspirational insight: be the person watching your life as a movie in a theater. We often feel sentimental or jealous when we see others living their everyday lives from a third-person perspective, from washing dishes in a dimly lit kitchen to just having a casual family dinner. Seeing yourself and your life as a movie lets you see yourself objectively, and seeing the movie (your life) objectively allows the audience, which is only yourself, to critically analyze your actions, emotions and thoughts. Another method is to see yourself from above, like a drone flying over your head, seeing yourself as one player in a game. This may be more effective than imagining your life as a movie in a theater; however, this method's risk becomes uncorrelated with the reward as you get older, because you carry more responsibility for your actions.

    1. After a successful season with Sporting that brought the young player to the attention of Europe’s biggest football clubs, Ronaldo signed with English powerhouse Manchester United in 2003. He was an instant sensation and soon came to be regarded as one of the best forwards in the game. His finest season with United came in 2007–08, when he scored 42 League and Cup goals and earned the Golden Shoe award as Europe’s leading scorer, with 31 League goals. After helping United to a Champions League title in May 2008, Ronaldo captured Fédération Internationale de Football Association (FIFA) World Player of the Year honors for his stellar 2007–08 season. He also led United to an appearance in the 2009 Champions League final, which they lost to FC Barcelona.

      Ronaldo's transfer to Man United and his achievements.

    1. in

      I see this happening in the game industry and it actually pains me a lot. There is an argument to be made, though, about values. I remember that in one game (Genshin) there are regions essentially representative of real-life cultures, and there was backlash when the African/Latin American-inspired region had zero dark-skinned characters. I think in that situation the backlash is justifiable, since these are real people being represented, even if fictionally. In most cases, though, if the artwork was never good in the first place, it is difficult to justify inclusion.

    1. Welcome back and in this lesson I want to cover a few really important topics which will be super useful as you progress your general IT career, but especially so for anyone who is working with traditional or hybrid networking.

      Now I want to start by covering what a VLAN is and why you need them, then talk a little bit about trunk connections, and finally cover a more advanced version of VLANs called Q in Q.

      Now I've got a lot to cover so let's just jump in and get started straight away.

      Let's start with what I've talked about in my technical fundamentals lesson so far.

      This is a physical network segment.

      It has a total of eight devices, all connected to a single, layer 2 capable device, a switch.

      Each LAN, as I talked about before, is a shared broadcast domain.

      Any frames which are addressed to all Fs will be broadcast on all ports of the switch and reach all devices.

      Now this might be fine with eight devices but it doesn't scale very well.

      Every additional device creates yet more broadcast traffic.

      Because we're using a switch, each port is a different collision domain and so by using a switch rather than a layer 1 hub we do improve performance.

      Now this local network also has three distinct groups of users.

      We've got the game testers in orange, we've got sales in blue and finance in green.

      Now ideally we want to separate the different groups of devices from one another.

      In larger businesses you might have a requirement for different segments of the network from normal devices, for servers and for other infrastructure.

      Different segments for security systems and CCTV and maybe different ones for IoT devices and IP telephony.

      Now if we only had access to physical networks this would be a challenge.

      Let's have a look at why.

      Let's say that we take each of the three groups and split them into either different floors or even different buildings.

      On the left finance, in the middle game testers and on the right sales.

      Each of these buildings would then have its own switch and the switches in those buildings would be connected to devices also in those buildings.

      Which for now is all the finance, all the game tester and all the sales teams and machines.

      Now these switches aren't connected and because of that each one is its own broadcast domain.

      This would be how things would look in the real world if we only had access to physical networking.

      And this is fine if the different groups don't need to communicate with each other, so we don't require cross-domain communication.

      The issue right now is that none of these switches are connected so the switches have no layer 2 communications between them.

      If we wanted to do cross building or cross domain communications then we could connect the switches.

      But this creates one larger broadcast domain which moves us back to the architecture on the previous screen.

      What's perhaps more of a problem in this entirely physical networking world is what happens if a staff member changes role but not building.

      In this case moving from sales to game tester.

      In this case you need to physically run a new cable from the middle switch to the building on the right.

      If this happens often it doesn't scale very well and that is why some form of virtual local area networking is required.

      And that's why VLANs are invaluable.

      Let's have a look at how we support VLANs at layer 2 of the OSI 7-layer model.

      This is a normal Ethernet frame.

      In the context of this lesson what's important is that it has source and destination MAC address fields, together with a payload.

      Now the payload carries the data.

      The source MAC address is the MAC address of the device which is creating and sending the frame.

      The destination MAC address can contain a specific MAC address, which means it's a unicast frame, destined for one other device.

      Or it can contain all F's which is known as a broadcast.

      And it means that all of the devices on the same layer 2 network will see that frame.

      What a standard frame doesn't offer us is any way to isolate devices into different parts, different networks.

      And that's where a new standard comes in handy, known as 802.1Q, also known as .1Q. .1Q changes the format of the standard Ethernet frame by adding a new field, a 32-bit field, in the middle of the frame.

      The maximum size of the frame as a result can be larger to accommodate this new data. 12 bits of this 32-bit field can be used to store values from 0 through to 4095.

      This represents a total of 4096 values.

      This is used for the VLAN ID or VID.

      A 0 in this 12-bit value signifies no VLAN and 1 is generally used to signify the management VLAN.

      The others can be used as desired by the local network admin.

      What this means is that any .1Q frame can be a member of one of over 4,000 VLANs.
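      As a rough illustration of the tag layout just described, here is a minimal Python sketch (the function names are my own, not part of any standard API) that packs and unpacks the 32-bit 802.1Q field: a 16-bit TPID of 0x8100 followed by a 16-bit TCI containing a 3-bit priority (PCP), a drop-eligible bit (DEI), and the 12-bit VLAN ID:

      ```python
      import struct

      TPID_DOT1Q = 0x8100  # EtherType value that marks the next 2 bytes as an 802.1Q tag

      def build_dot1q_tag(vid, pcp=0, dei=0):
          """Pack the 32-bit 802.1Q field: 16-bit TPID + 16-bit TCI (PCP/DEI/VID)."""
          if not 0 <= vid <= 4095:
              raise ValueError("VID must fit in 12 bits (0-4095)")
          tci = (pcp << 13) | (dei << 12) | vid
          return struct.pack("!HH", TPID_DOT1Q, tci)

      def read_vid(tag):
          """Extract the 12-bit VLAN ID from a packed 802.1Q tag."""
          _tpid, tci = struct.unpack("!HH", tag)
          return tci & 0x0FFF

      print(read_vid(build_dot1q_tag(1337)))  # 1337
      ```

      The 12-bit mask (0x0FFF) is why the VID range tops out at 4095, matching the 4,096 values mentioned above.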

      And this means that you can create separate virtual LANs or VLANs in the same layer 2 physical network.

      A broadcast frame, so anything that's addressed to all Fs, would only reach the devices which are in the same VLAN.

      Essentially, it creates over 4,000 different broadcast domains in the same physical network.

      You might have a VLAN for CCTV, a VLAN for servers, a VLAN for game testing, a VLAN for guests and many more.

      Anything that you can think of and can architect can be supported from a networking perspective using VLANs.

      But I want you to imagine even bigger.

      Think about a scenario where you as a business have multiple sites and each site is in a different area of the country.

      Now each site has the same set of VLANs.

      You could connect them using a dedicated wide area network and carry all of those different company specific VLANs and that would be fine.

      But what if you wanted to use a comms provider, a service provider who could provide you with this wide area network capability?

      What if the comms provider also uses VLANs to distinguish between their different clients?

      Well, you might face a situation where you use VLAN 1337 and another client of the comms provider also uses VLAN 1337.

      Now to help with this scenario, another standard comes to the rescue, 802.1AD.

      And this is known as Q in Q, also known as provider bridging or stacked VLANs.

      This adds another space in the frame for another VLAN field.

      So now instead of just the one field for 802.1Q VLANs, now you have two.

      You keep the same customer VLAN field and this is known as the C tag or customer tag.

      But you add another VLAN field called the service tag or the S tag.

      This means that the service provider can use VLANs to isolate their customer traffic while allowing each customer to also use VLANs internally.

      As the customer, you can tag frames with your VLANs and then when those frames move onto the service provider network, they can tag with the VLAN ID which represents you as a customer.

      Once the frame reaches another of your sites over the service provider network, then the S tag is removed and the frame is passed back to you as a standard .1Q frame with your customer VLAN still tagged.
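      To make the S-tag/C-tag stacking concrete, here is a hedged sketch of what the provider edge conceptually does (the helper names are invented for illustration): it pushes a service tag, which uses the 802.1ad TPID of 0x88A8, in front of the customer's existing 0x8100 C-tag, and pops it again at the far site:

      ```python
      import struct

      TPID_STAG = 0x88A8  # 802.1ad service tag (S-tag)
      TPID_CTAG = 0x8100  # 802.1Q customer tag (C-tag)

      def push_s_tag(frame_after_macs, s_vid):
          """Provider edge on ingress: prepend an S-tag in front of the C-tag.

          `frame_after_macs` is the frame body starting at the C-tag
          (the destination/source MAC addresses before it are unchanged).
          """
          return struct.pack("!HH", TPID_STAG, s_vid & 0x0FFF) + frame_after_macs

      def pop_s_tag(frame_after_macs):
          """Provider edge on egress: strip the S-tag, leaving the .1Q frame."""
          tpid, tci = struct.unpack("!HH", frame_after_macs[:4])
          assert tpid == TPID_STAG, "expected a service-tagged frame"
          return tci & 0x0FFF, frame_after_macs[4:]

      # Customer frame tagged with C-VID 1337; the provider wraps it in S-VID 42.
      c_frame = struct.pack("!HH", TPID_CTAG, 1337) + b"payload"
      s_vid, restored = pop_s_tag(push_s_tag(c_frame, 42))
      print(s_vid, restored == c_frame)  # 42 True
      ```

      The round trip mirrors the paragraph above: the customer's C-tag survives untouched while the provider's S-tag isolates that customer's traffic inside the provider network.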

      Q in Q tends to be used for larger, more complex networks and .1Q is used in smaller networks as well as cloud platforms such as AWS.

      For the remainder of this lesson, I'm going to focus on .1Q though if you're taking an advanced networking course of mine, I will be returning to the Q in Q topic in much more detail.

      For now though, let's move on and look visually at how .1Q works.

      This is a cut down version of the previous physical network I talked about, only this time instead of the three groups of devices we have two.

      So on the left we have the finance building and on the right we have game testers.

      Inside these networks we have switches and connected to these switches are two groups of machines.

      These switches have been configured to use 802.1Q and ports have been configured in a very specific way which I'm going to talk about now.

      So what makes .1Q really cool is that I've shown these different device types as separate buildings but they don't have to be.

      Different groupings of devices can operate on the same layer 2 switch and I'll show you how that works in a second.

      With 802.1Q, ports on switches are defined as either access ports or trunk ports, and an access port generally has one specific VLAN ID or VID associated with it.

      A trunk conceptually has all VLAN IDs associated with it.

      So let's say that we allocate the finance team devices to VLAN 20 and the game tester devices to VLAN 10.

      We could easily pick any other numbers, remember we have over 4,000 to choose from, but for this example let's keep it simple and stick with 10 and 20.

      Now right now these buildings are separate broadcast domains because they have separate switches which are not connected and they have devices within them.

      Two laptops connected to switch number one for the finance team and two laptops connected to switch number two for the game tester team.

      Now I mentioned earlier that we have two types of switch ports in a VLAN cable network.

      The first are access ports and the ports which the orange laptops on the right are connected to are examples of access ports.

      Access ports communicate with devices using standard Ethernet which means no VLAN tags are applied to the frames.

      So in this case the laptop at the top right sends a frame to the switch and let's say that this frame is a broadcast frame.

      When the frame exits an access port it's tagged with a VLAN that the access port is assigned to.

      In this case VLAN 10 which is the orange VLAN.

      Now because this is a broadcast frame the switch now has to decide what to do with the frame and the default behaviour for switches is to forward the broadcast out of all ports except the one that it was received on.

      For switches using VLANs this is slightly different.

      First it forwards to any other access ports on the same VLAN but the tagging will be removed.

      This is important because devices connected to access ports won't always understand 802.1Q so they expect normal untagged frames.

      In addition the switch will forward frames over any trunk ports.

      A trunk port in this context is a port between two switches for example this one between switch two and switch one.

      Now a trunk port is a connection between two dot 1Q capable devices.

      It forwards all frames and it includes the VLAN tagging.

      So in this case the frame will also be forwarded over to switch one tagged as VLAN 10, which is the game tester VLAN.

      So tagged .1Q frames only get forwarded to other access ports on the same VLAN, with the tag stripped, or across trunk ports with the VLAN tagging intact.

      And this is how broadcast frames work.
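      The broadcast-forwarding rules just described can be sketched as a small decision function; the port names and layout below are invented for illustration:

      ```python
      def forward_broadcast(in_port, frame_vid, ports):
          """Decide where a broadcast frame tagged with `frame_vid` goes.

          `ports` maps port name -> ("access", vid) or ("trunk", None).
          Returns (port, vid_on_wire) pairs; a vid of None means untagged.
          """
          out = []
          for name, (kind, vid) in sorted(ports.items()):
              if name == in_port:
                  continue  # never send a frame back out of the port it arrived on
              if kind == "access" and vid == frame_vid:
                  out.append((name, None))       # same VLAN: forward, tag stripped
              elif kind == "trunk":
                  out.append((name, frame_vid))  # trunks carry the tag intact
          return out

      ports = {
          "p1": ("access", 10),   # game tester laptop (frame arrives here)
          "p2": ("access", 10),   # another game tester laptop
          "p3": ("access", 20),   # finance laptop: different VLAN, excluded
          "p4": ("trunk", None),  # link to the other switch
      }
      print(forward_broadcast("p1", 10, ports))  # [('p2', None), ('p4', 10)]
      ```

      Note that the finance port never appears in the output, which is exactly the isolation property the lesson keeps coming back to.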

      For unicast frames, which go to a specific single MAC address, these will be forwarded to the access port in the same VLAN that the specific device is on; or, if the switch isn't aware of the MAC address of that device in the same VLAN, it will fall back to broadcasting.

      Now let's say that we have a device on the finance VLAN connected to switch two.

      And let's say that the bottom left laptop sends a broadcast frame on the finance VLAN.

      Can you see what happens to this frame now?

      Well first it will go to any other devices in the same VLAN using access ports meaning the top left laptop and in that case the VLAN tag will be removed.

      It will also be forwarded out of any trunk ports tagged with VLAN 20 so the green finance VLAN.

      It will arrive at switch two with the VLAN tag still there and then it will be forwarded to any access ports on the same VLAN so VLAN 20 on that switch but the VLAN tagging will be removed.

      Using virtual LANs in this way allows you to create multiple virtual LANs or VLANs.

      With this visual you have two different networks.

      The finance network in green, so the two laptops on the left and the one at the middle bottom, and then you have the game testing network, so VLAN 10, meaning the orange one on the right.

      Both of these are isolated.

      Devices cannot communicate between VLANs, which are separate layer 2 networks, without a device operating between them such as a layer 3 router.

      Both of these virtual networks operate over the top of the physical network, and it means that we can now reconfigure this network virtually, using software configuration on the switches.

      Now VLANs are how certain things within AWS, such as public and private VIFs on Direct Connect, work, so keep this lesson in mind when I'm talking about Direct Connect.

      A few summary points though that I do want to cover before I finish up with this lesson.

      First VLANs allow you to create separate layer 2 network segments and these provide isolation so traffic is isolated within these VLANs.

      If you don't configure and deploy a router between different VLANs then frames cannot leave that VLAN boundary so they're virtual networks and these are ideal if you want to configure different virtual networks for different customers or if you want to access different networks for example when you're using direct connect to access VPCs.

      VLANs offer separate broadcast domains and this is important.

      They create completely separate virtual network segments so any broadcast frames within a VLAN won't leave that VLAN boundary.

      If you see any mention of 802.1Q then you know that means VLANs.

      If you see any mention of VLAN stacking or provider bridging or 802.1AD or Q in Q, this means nested VLANs.

      So having a customer tag and a service tag allows you to have VLANs in VLANs, and these are really useful if you want to use VLANs on your internal business network and then use a service provider, who also uses VLANs, to provide wide area network connectivity. And if you are doing any networking exams then you will need to understand Q in Q as well as 802.1Q.

      So with that being said that's everything I wanted to cover.

      Go ahead and complete this video and when you're ready I'll look forward to you joining me in the next.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      Summary:

      This manuscript by Meissner and colleagues described a novel take on a classic social cognition paradigm developed for marmosets. The classic pull task is a powerful paradigm that has been used for many years across numerous species, but its analog approach has several key limitations. As such, it has not been feasible to adopt the task for neuroscience experiments. Here the authors capture the spirit of the classic task but provide several fundamental innovations that modernize the paradigm - technically and conceptually. By developing the paradigm for marmosets, the authors leverage the many advantages of this primate model for studies of social brain functions and their particular amenability to freely-moving naturalistic approaches.

      Strengths:

      The current manuscript describes one of the most exciting paradigms in primate social cognition to be developed in many years. By allowing for freely-moving marmosets to engage in high numbers of trials, while precisely quantifying their visual behavior (e.g. gaze) and recording neural activity this paradigm has the potential to usher in a new wave of research on the cognitive and neural mechanisms underlying primate social cognition and decision-making. This paradigm is an elegant illustration of how naturalistic questions can be adapted to more rigorous experimental paradigms. Overall, I thought the manuscript was well written and provided sufficient details for others to adopt this paradigm. I did have a handful of questions and requests about topics and information that could help to further accelerate its adoption across the field.

      Weaknesses:

      LN 107 - Otters have also been successful at the classic pull task (https://link.springer.com/article/10.1007/s10071-017-1126-2)

      We have added this reference to the manuscript.

      LN 151 - Can you provide a more precise quantification of timing accuracy than the 'sub-second level'. This helps determine synchronization with other devices.

      We have included more precise timing details, noting that data is stored at the millisecond level.

      Using this paradigm, the marmosets achieved more trials than in the conventional task (146 vs 10). While this is impressive, given that only ~50 are successful Mutual Cooperation trials it does present some challenges for potential neurophysiology experiments and particular cognitive questions. The marmosets are only performing the task for 20 minutes, presumably because they become sated and are no longer motivated. This seems a limitation of the task and is something worth discussing in the manuscript. Did the authors try other food rewards, reduce the amount of reward, food/water restrict the animals for more than the stated 1-3 hours? How might this paradigm be incorporated into in-cage approaches that have been successful in marmosets? Any details on this would help guide others seeking to extend the number of trials performed each day.

      We have added a discussion addressing the use of liquid rewards, minimal food and water restriction, and the potential for further optimization to increase task engagement and trial numbers. This is now reflected in the revised manuscript.

      Can you provide more details on the DLC/Anipose procedure? How were the cameras synchronized? What percentage of trials needed to be annotated before the model could be generalized? Did each monkey require its own model, or was a single one applied to all animals?

      We have added more detailed information on the DLC and Anipose tracking which can be found in the Multi-animal 3D tracking section under Materials & Methods.

      Will the schematics and more instructions on building this system be made publicly available? A number of the components listed in Table 1 are custom-designed. Although it is stated that CAD files will be made available upon request, sharing a link to these files in an accessible folder would significantly add to the potential impact of this paradigm by making it easier for others to adopt.

      We have made the SolidWorks CAD files publicly available. They can now be found in the Github repository alongside the apparatus and task code.

      In the Discussion, it would be helpful to have some discussion of how this paradigm might be used more broadly. The classic pulling paradigm typically allows one to ask a specific question about social cognition, but this task has the potential to be more widely applied to other social decision-making questions. For example, how might this task be adopted to ask some of the game-theory-type approaches common in this literature? Given the authors' expertise in this area, this discussion could serve to provide a roadmap for the broader field to adopt.

      Although this paradigm was developed specifically for marmosets, it seems to me that it could readily be adopted in other species with some modifications. Could the authors speak to this and their thoughts on what may need to be changed to be used in other species? This is particularly important because one of the advantages of the classic paradigm is that it has been used in so many species, providing the opportunity to compare how different species approach the same challenge. For example, though both chimps and bonobos are successful, their differences are notably illuminating about the nuances of their respective social cognitive faculties.

      We have expanded the discussion for the broader applications of this apparatus both for other decision-making research questions as well as its adaptability for use in other species.

      Reviewer #2 (Public Review):

      Summary:

      This important work by Meisner et al. developed an automated apparatus (MarmoAAP) to collect a wide array of behavioral data (lever pulling, gaze direction, vocalizations) in marmoset monkeys, with the goal of modernizing collection of behavioral data to coincide with the investigation of neurological mechanisms governing behavioral decision making in an important primate neuroscience model. The authors show a variety of "proof-of-principle" concepts that this apparatus can collect a wide range of behavioral data, with higher behavioral resolution than traditional methods. For example, the authors highlight that typical behavioral experiments on primate cooperation provide around 10 trials per session, while using their approach the authors were able to collect over 100 trials per 20-minute session with the MarmoAAP.

      Overall the authors argue that this approach has a few notable advantages:<br /> (1) it enhances behavioral output which is important for measuring small or nuanced effects/changes in behavior;<br /> (2) allows for more advanced analyses given the higher number of trials per session;<br /> (3) significantly reduces the human labor of manually coding behavioral outcomes and experimenter interventions such as reloading apparatuses for food or position;<br /> (4) allows for more flexibility and experimental rigor in measuring behavior and neural activity simultaneously.

      Strengths:

      The paper is well-written and the MarmoAAP appears to be highly successful at integrating behavioral data across many important contexts (cooperation, gaze, vocalizations), with the ability to measure significantly more behavioral contexts (many of which the authors make suggestions for).

      The authors provide substantive information about the design of the apparatus and how it can be obtained, including a detailed list of apparatus parts, and provide data from a wide range of behavioral and neurological outcomes. The significance of the findings is important for the field of social neuroscience and the strength of evidence is solid in terms of the ability of the apparatus to perform as described, at least in marmoset monkeys. The advantage of collecting neural and freely-behaving behavioral data concurrently is a significant advantage.

      Weaknesses:

      While this paper has many significant strengths, there are a few notable weaknesses in that many of the advantages are not explicitly demonstrated within the evidence presented in the paper. There are data reported (as shown in Figures 2 and 3), but in many cases, it is unclear whether the data are referenced in other published work, as the data analysis is not described or self-contained within the manuscript, which it should be for readers to understand the nature of the data shown in Figures 2 and 3.

      (1) There is no data in the paper or reference demonstrating training performance in the marmosets. For example, how many sessions are required to reach a pre-determined criterion of acceptable demonstration of task competence? The authors reference reliably performing the self-reward task, but this was not objectively stated in terms of what level of reliability was used. Moreover, in the Mutual Cooperation paradigm, while there is data reported on performance between self-reward vs mutual cooperation tasks, it is unclear how the authors measured individual understanding of mutual cooperation in this paradigm (cooperation performance in the mutual cooperation paradigm in the presence or absence of a partner; and how, if at all, this performance varied across social context). What positive or negative control is used to discern gained advantages between deliberate cooperation vs two individuals succeeding at self-reward simultaneously?

      Thank you for your comment. This Tools & Resources paper is focused solely on the development of the apparatus and methods. Future publications will provide more details on training performance, learning behaviors, and include appropriate controls to distinguish deliberate cooperation from simultaneous success in self-reward tasks.

      (2) One of the notable strengths of this approach argued by the authors is the improved ability to utilize trials for data analysis, but this is not presented or supported in the manuscript. For example, the paper would be improved by explicitly showing a significant improvement in the analytical outcome associated with a comparison of cooperation performance in the context of ~150 trials using MarmoAAP vs 10-12 trials using conventional behavioral approaches beyond the general principle of sample size. The authors highlight the dissection of intricacies of behavioral dynamics, but more could be demonstrated to specifically show these intricacies compared to conventional approaches. Given the cost and expertise required to build and operate the MarmoAAP, it is critical to provide an important advantage gained on this front. The addition of data analysis and explicit description(s) of other analytical advantages would likely strengthen this paper and the advantages of MarmoAAP over other behavioral techniques.

      Thank you for the suggestion. While this manuscript focuses on the apparatus and methods, the increase in trial numbers itself provides clear advantages, including greater statistical power and more robust analyses of behavioral dynamics. Future publications will offer more in-depth analyses comparing the performance and cooperation behavior observed with MarmoAAP, further demonstrating these analytical benefits.

      Reviewer #3 (Public Review):

      Summary:

      The authors set out to devise a system for the neural and behavioral study of socially cooperative behaviors in nonhuman primates (common marmosets). They describe instrumentation to allow for a "cooperative pulling" paradigm, the training process, and how both behavioral and neural data can be collected and analyzed. This is a valuable approach to an important topic, as the marmoset stands as a great platform to study primate social cognition. Given that the goals of such a methods paper are to (a) describe the approach and instrumentation, (b) show the feasibility of use, and (c) quantitatively compare to related approaches, the work is easily able to meet those criteria. My specific feedback on both strengths and weaknesses is therefore relatively limited in scope and depth.

      Strengths:

      The device is well-described, and the authors should be commended for their efforts in both designing this system but also in "writing it up" so that others can benefit from their R&D.

      The device appears to generate more repetitions of key behavior than other approaches used in prior work (with other species).

      The device allows for quantitative control and adjustment to control behavior.

      The approach also supports the integration of markerless behavioral analysis as well as neurophysiological data.

      Weaknesses:

      A few ambiguities in the descriptions are flagged below in the "Recommendations for authors".

      The system is well-suited to marmosets, but it is less clear whether it could be generalized for use in other species (in which similar behaviors have been studied with far less elegant approaches). If the system could impact work in other species, the scope of impact would be significantly increased, and would also allow for more direct cross-species comparisons. Regardless, the future work that this system will allow in the marmoset will itself be novel, unique, and likely to support major insights into primate social cognition.

      Thank you for this feedback. We have expanded the discussion to include how the apparatus could be adapted for use in other species, highlighting the potential modifications required, such as adjusting the size and strength of the servo motor and components. These changes would enable broader applications and facilitate cross-species comparisons.

    1. Reviewer #3 (Public review):

      Summary:

      The study investigates reinforcement learning across the lifespan with a large sample of participants recruited for an online game. It finds that children gradually develop their abilities to learn reward probability, possibly hindered by their immature spatial processing and probabilistic reasoning abilities. Motor noise, reinforcement learning rate, and exploration after a failure all contribute to children's subpar performance.

      Strengths:

      (1) The paradigm is novel because it requires continuous movement to indicate people's choices, as opposed to discrete actions in previous studies.

      (2) A large sample of participants were recruited.

      (3) The model-based analysis provides further insights into the development of reinforcement learning ability.

      Weaknesses:

      (1) The adequacy of model-based analysis is questionable, given the current presentation and some inconsistency in the results.

      (2) The task should not be labeled as reinforcement motor learning, as it is not about learning a motor skill or adapting to sensorimotor perturbations. It is a classical reinforcement learning paradigm.

    1. The faculty of re-solution is possibly much invigorated by mathematical study, and especially by that highest branch of it which, unjustly, and merely on account of its retrograde operations, has been called, as if par excellence, analysis. Yet to calculate is not in itself to analyse. A chess-player, for example, does the one without effort at the other. It follows that the game of chess, in its effects upon mental character, is greatly misunderstood. I am not now writing a treatise, but simply prefacing a somewhat peculiar narrative by observations very much at random; I will, therefore, take occasion to assert that the higher powers of the reflective intellect are more decidedly and more usefully tasked by the unostentatious game of draughts than by all the elaborate frivolity of chess. In this latter, where the pieces have different and bizarre motions, with various and variable values, what is only complex is mistaken (a not unusual error) for what is profound. The attention is here called powerfully into play. If it flag for an instant, an oversight is committed, resulting in injury or defeat. The possible moves being not only manifold but involute, the chances of such oversights are multiplied; and in nine cases out of ten it is the more concentrative rather than the more acute player who conquers. In draughts, on the contrary, where the moves are unique and have but little variation, the probabilities of inadvertence are diminished, and the mere attention being left comparatively unemployed, what advantages are obtained by either party are obtained by superior acumen. To be less abstract — Let us suppose a game of draughts where the pieces are reduced to four kings, and where, of course, no oversight is to be expected. It is obvious that here the victory can be decided (the players being at all equal) only by some recherché movement, the result of some strong exertion of the intellect. 
Deprived of ordinary resources, the analyst throws himself into the spirit of his opponent, identifies himself therewith, and not unfrequently sees thus, at a glance, the sole methods (sometimes indeed absurdly simple ones) by which he may seduce into error or hurry into miscalculation.

      Using chess to connect with the concept of analysis at the beginning of the story is innovative; however, I have to admit that this "chess metaphor" doesn't work for me. It neither provides me with any necessary background information nor triggers my interest and curiosity to read on.

    2. A chess-player, for example, does the one without effort at the other. It follows that the game of chess, in its effects upon mental character, is greatly misunderstood. I am not now writing a treatise, but simply prefacing a somewhat peculiar narrative by observations very much at random; I will, therefore, take occasion to assert that the higher powers of the reflective intellect are more decidedly and more usefully tasked by the unostentatious game of draughts than by all the elaborate frivolity of chess. In this latter, where the pieces have different and bizarre motions, with various and variable values, what is only complex is mistaken (a not unusual error) for what is profound. The attention is here called powerfully into play. If it flag for an instant, an oversight is committed, resulting in injury or defeat. The possible moves being not only manifold but involute, the chances of such oversights are multiplied; and in nine cases out of ten it is the more concentrative rather than the more acute player who conquers. In draughts, on the contrary, where the moves are unique and have but little variation, the probabilities of inadvertence are diminished, and the mere attention being left comparatively unemployed, what advantages are obtained by either party are obtained by superior acumen.

      This surprised me, as I initially thought both games should be played with a unique move to mess up the opponent's plan. Instead, because of the lack of possible moves in chess, the moves will not be as unique as in draughts.

    1. The chemist said it would be all right, but I’ve never been the same. You are a proper fool, I said. Well, if Albert won’t leave you alone, there it is, I said, What you get married for if you don’t want children?

      Eliot’s “A Game of Chess” comments on the patriarchy through the board game Chess, especially in terms of gender, sex, and the female body. On lines 145-149 of “A Game of Chess”, the narrator speaks to a woman about her boyfriend/husband. The narrator states, “He said, I swear, I can’t bear to look at you. // And no more can’t I, I said, and think of poor Albert, // He’s been in the army four years, he wants a good time.” This dialogue already comments on the beauty of a woman in relation to the desires of her husband, who is soon to return from war. Eliot’s depiction of the husband as a soldier is interesting: not only does it refer to the Great War, but chess pieces can also be interpreted as types of soldiers, engaged in strategic combat. In “A Game of Chess” by Thomas Middleton, we begin to see some of the roots of such social commentary in terms of chess gameplay. In Middleton’s play, the Jesuit Black Bishop’s pawn receives a letter written by the Black King, both chess pieces. The King’s letter demands the Black Bishop’s capturing of the Virgin White Queen’s person, stating, “These are therefore to require you, by the burning affection I bear to the rape of devotion that speedily upon the surprisal of her by all watchful advantage you make some attempt upon the White Queen’s person, whose fall or prostitution our lust most violently rages for.” The King’s letter to the Bishop mentions the “rape of devotion”, which evokes a sense of abuse and violation in committed relationships, such as marriages. The mention of “our lust” describes a shared masculine desire for “fall or prostitution”. While it is clear what these figures desire in regards to prostitution, “fall” is left seemingly ambiguous. Interpreting the “fall” of the woman, we can assume that these figures desire the destruction or death of the female body. Another noticeable aspect of this letter is that it comes from a King, a figure of male authority, to a pawn, a weaker masculine figure. 
These misogynistic commands reflect how the patriarchy is perpetuated and constantly enforced in our society, a process also shown in the narrator’s scolding of the woman in “A Game of Chess”. The King’s letter also describes how the Virgin White Queen’s person “passed the general rule, the large extent of our prescriptions for obedience”. This idea of obedience fits into the discussion of the patriarchy, as women are expected to be obedient and subservient. This call for obedience evokes an earlier-mentioned notion of women giving their sexuality for men, as well as the disturbing control men have over a woman’s body during pregnancy. In lines 161-165 of “A Game of Chess”, we return to the woman speaking about her abortion, stating “The chemist said it would be all right, but I’ve never been the same// You are a proper fool, I said// Well, if Albert won’t leave you alone, there it is, I said”. Again, the narrator berates the woman because of her choice to terminate her pregnancy, a process she has also struggled with, saying she has never been the same. The woman is called a fool, and the theme of harassment is evoked again, as Albert “won’t leave you [the woman] alone”. Both the Virgin White Queen’s person and the woman in “A Game of Chess” are objectified and subject to patriarchal expectations and harassment, reported through the metaphor of chess.

    2. I think we are in rats’ alley Where the dead men lost their bones.

      Hamlet’s most famous line questions whether “to be, or not to be.” That is indeed the nihilistic question that seems to play upon Eliot’s mind as he writes “I think we are in rats’ alley/Where the dead men lost their bones.” Immediately, this image invokes a sense of urban decay and despair as the alley–presumably a home for humans–is possessed, grammatically and literally, by the “rats.” Additionally, Eliot’s notion of men who “lost their bones” is quite peculiar. In typical instances of human decomposition, flesh is lost immediately while bones persist for an extended period. While death and decomposition are naturally morbid affairs, Eliot seems to stretch their impact on erasing an individual’s vestiges by suggesting that their bones–the enduring structure that had carried them through life and is meant to remain intact for a period after death–are “lost.” Furthermore, the notion that these bones are lost in a rat-possessed alley rids the scenario of any shred of dignity that it could possibly have. So, why is Eliot ridding the process of decomposition–one’s material departure from Earth–of its natural duration, instead suggesting a more instant and final end to life?

      Perhaps he is casting an argument towards the “not to be” side of Hamlet’s ballot by despairingly exaggerating the frailty of human life and the limited control one has over their existence. In the context of a “Game of Chess,” this argument makes much sense. During a chess game, each piece plays a critical role; even an advantage of one pawn can make all the difference in one’s endgame. Yet, even the potent queen is but a pawn in the overall quest to win the game and preserve the king. A lost piece is forever gone–barring the technicality of promotion–with no lingering bones to preserve its memory. Eliot seems to be suggesting that humans, too, are constantly manipulated, with their agency having limited meaning in a rats’ alley (representative of some cabalistic, generally hidden or evil, controlling force) where not even their core structure of bones remains with them.

      The irony of exercising agency despite ultimately being at the mercy of the rats who truly possess control emerges in Pound’s “The Game of Chess.” Pound writes that “these pieces are living in form/Their moves break and reform the pattern.” This is inherently paradoxical because it is impossible for a chess piece to be “living” and exercise agency to “break and reform” while also being pawns in the human player's larger strategic game. The ontology of a chess piece prevents it from having its own independent agency. Then, Pound’s closing phrase, “Renewing of contest,” furthers the notion of the pieces’ existences being transient because, after the King disappears “down in the vortex,” the board is reset and the distinct role of a piece in a specific game is banished to be an irrelevant relic of history. The only force that remains dominant and constant is the agency of the entity “renewing the contest,” the overlord that presides over many chess games and pieces. Thus, the suggestion at the convergence of Eliot and Pound seems to be that humans are but pawns in a larger game, inherently limited in agency and frail in nature.

    3. 'Speak to me. Why do you never speak. Speak. 'What are you thinking of? What thinking? What?

      In the first part of The Game of Chess, the female character is isolated and defined by the lifeless objects surrounding her—perfumes, glass, candle flames, lacquer, and more. Only through these objects do we get a chance to become acquainted with her. These inanimate items symbolize suffocation and entrapment in her loneliness. The readers can only speculate if it’s the notion of rape that Eliot continuously references that led to this isolation. No matter the cause of it, however, the character strives to get herself out of this situation. She strives for human connection. Her plea, “Speak to me,” reflects this need, but the absence of a question mark in “Why do you never speak” suggests that the character already knows the answer. The reader, however, is left to speculate: does she see herself as undesirable because of her trauma? Or is it simply the years of a relationship that deteriorate this connection, referencing Eliot’s own troubled marriage? In either case, this emotional disconnect is further demonstrated by the subsequent question, “What are you thinking of?” This time, the question is marked by a question mark, suggesting an actual attempt to break through this emotional barrier. These attempts, however, are ineffective, as the desperation rises and the questions shorten to “what thinking” and “what.” This fragmented monologue mirrors the fragmentation of her emotional state.

      In contrast, the second character suffers from the destructive excess of human connection. Shamed for her appearance, she faces the reality of her partner’s potential infidelity, reflected in the statement, “And if you don’t give it to him, there’s others.” The references to abortion intensify this degradation. She justifies her loss of beauty and confidence with the line, “It’s them pills I took, to bring it off.” Instead of finding fulfillment in connection, this character’s relationships strip her of her self-worth.

      In these two cases, Eliot presents women trapped at the opposite extremes of human connection: one suffers from its absence, the other from its destructive abundance. Yet, in both cases, external forces define and entrap them. The first woman is reduced to the objects around her, while the second is judged by an external voice—the pronoun “I” suggesting our own judgment as readers projected onto the character. We become not simply the judges, but also the victims of this broken connection. The poem’s fragmented language, which severely affects our understanding of it, mirrors the emotional chaos, invoking feelings rather than rationality, similarly to Ophelia’s “mad” song in Hamlet.

      This pattern mirrors the nature of chess, where a single wrong move can drastically alter the entire game. Just as in chess, life’s unpredictability is highlighted in these women’s lives, as one extreme of human connection can quickly shift to another, with equally devastating outcomes. The title of the section, The Game of Chess, thus reflects this instability: one wrong move leads to extremes of connection. In this case, these characters and we, as readers, are not simply entrapped in this game. Instead, we are playing with an unwinnable position from the very outset: each move only brings the inevitability of loss closer.

    4. The change of Philomel,

      9.25

      On many levels of interpretations, the section “A Game of Chess” depicts the rape of one or several female characters. The meticulous steps of the sport of chess bring to mind a careful, calculated series of sexual manipulation, and the rather disjunct imagery throughout these stanzas seem to be linked together by the metaphorical or literal subjugation of the female body. Of great interest to me here is the portrayal of Philomela’s metamorphosis, framed and “[displayed] above the antique mantel” (97).

      In Ovid’s Metamorphoses, Philomela transforms into a nightingale after taking revenge on King Tereus for the assault and mutilation of her body. In Eliot’s poem, however, she is forever frozen in this frame of incomplete mutation. In a way, her fate-like tragedy endures through this artwork; even from the modern observer’s perspective, “still she cried, and still the world pursues” (98). Philomela’s animalistic freedom is unachieved or at least incomplete. Recalling Sibyl’s male-given “liberation” from the natural laws of mortality and Marie’s temporary freedom sledding with an unnamed man “down [...] in the mountains,” Eliot seems to characterize female autonomy with a sense of ephemerality throughout the poem (14-7).

      As Ira points out, the female nightingale is naturally mute in terms of singing – it is factually impossible for their voices to permeate the desert air. Rather, female nightingales can at best produce a soft collection of sounds resembling the “crying” that Eliot attributes to the pronoun “she” (98). Given that it traverses biological gender boundaries, one could argue that this line is yet another example of the disembodiment of perspectives.

      But I think there’s more to it. We cannot confirm what this intrusive non-female voice has filled the desert with, but we notice that the only sound emphasized in the rest of the stanza is an onomatopoeia “Jug Jug” (103). Having had her tongue cut off, the violated female exemplified by Philomela does not regain her voice through the transformation to a nightingale. Moreover, this transformation – this scene hanging over the ancient mantel, these “withered stumps of time” – can only generate impact through the interpretation of the (presumably male) observer (104). Sadly, all that the “dirty ears” of the viewer can perceive from the image is condensed into the obscene “Jug Jug.” The strength and resolution that Philomela supposedly represents does not resonate in the modern waste land. Only the vulgar, filthy perception seems to live on.

    5. (She’s had five already, and nearly died of young George.).

      The information contained within parentheses is perhaps the most honest and straightforward we receive in this section of devolving poetry. A Game of Chess begins in iambic pentameter, the chosen meter of Shakespeare and other prominent English playwrights, and mixes references to classical poetry which also depends on strict meter. As the section progresses, this meter devolves into jumbled lines and line breaks indicative of the psychosis all of the alluded women themselves devolve into (Dido, Cleopatra, Philomela, Ophelia, all defined by being "crazy"). Anchored amongst the chaos are these parenthetical statements, associated in my mind with asides of truth, as the punctuation typically indicates. From these we gain three kernels of knowledge: Lil is 31, has 5 kids, and nearly died in childbirth. We are also told that her husband has been away at war for 4 years, so she must have had these 5 kids before the age of 27, meaning she most likely had the first one when she was about 20 or 21. Therefore, almost all of Lil’s adult life has been defined by motherhood, which has obviously aged her. Throughout most of history, women are defined by their ability to have children. In Ancient Greece, women were thought of as child-bearing vessels, holding little value beyond what their wombs could produce. So even as Lil time and time again proves her archaic value, at the cost of her own health, she ultimately fails in this job, succumbing to “madness” by taking abortion pills. As the meter devolves with the sanity of its female characters, Lil’s descent into craziness is marked by her long face, lack of beauty, and choice to abort her child. Her biggest failure is trying to avoid death.

    6. Flung their smoke into the laquearia,

      This line recalls the language and setting of the room where Dido is infected by Cupid with love for Aeneas, which inevitably leads to her demise. A moment marked by Dido losing her autonomy and unwillingly being transformed into a pawn is described not by the woman herself, but by the candles that surround her. As she becomes a passive figure, the inorganic flames become the subject, described in the active voice. Even they have autonomy as to where they fling their smoke, but as Dido “burns” with lovesickness, she has no control over where her smoke goes.

      This language depicts Dido as a chess piece in Venus’s cosmic plan, which the goddess describes: “Wherefore I purpose to outwit the queen with guile and encircle her with love’s flame, that so no power may change her, but on my side she may be held fast in strong love for Aeneas” (Aeneid 1). The word “encircle” is evocative of predators circling their prey before going in for the kill. Similarly, a powerful chess piece is typically killed by being surrounded, so that no move will result in a safe square. Furthermore, “on my side” all but reduces Dido to a tool in a belt, meant to be taken out when the user is ready.

      In a game of chess, the player acts as a god, moving pieces around and sacrificing them for the safety of the King; in the same way, Venus plays with Dido. This queen, once a pawn in her brother's quest for power, traveled across the ocean – across the chessboard – to become a queen in a new land, like a pawn promoted on the far rank. Yet in the end her power, her ability as a queen to move anywhere across the board, is reduced once more to that of an agent for the player to use, protecting the King, Aeneas. When the game finishes, Dido is still the pawn she started as.

      The women of A Game of Chess are all similar pawns, victims of divine power used to serve a more powerful figure's needs. This pattern appears in all of the referenced women: Philomela becomes an object of lust for King Tereus, and when she breaks from her role in the game, she is turned into a bird, doomed to sing her song of sorrow with a mute voice. The beheading of Baudelaire's cadaver woman brings a queen back down to the size of a pawn, and Madame Sosostris uses her “divine” powers to force women into her plan.

      And it is in this repetition that these women lose even more of their individuality; they join a long list of examples, an array of chess boards with the same goal. Unless the game itself is rewritten, their fate remains final.

    7. with inviolable voice And still she cried, and still the world pursues,

      Breaking Eliot’s own pattern, the line between life and death is not blurred for the female characters; instead, they are murdered physically, spiritually, and emotionally, with no possibility of redemption.

      In Ovid’s story of Philomela, King Tereus rapes her and, to ensure her silence, cuts out her tongue. Philomela seems to reclaim her control through her transformation into a nightingale, but even this reclamation is deceptive. On the surface, it appears as if she has regained her voice as a bird; yet only male nightingales sing. The perceived revenge is hollow, as she remains voiceless. In a similarly hollow act of revenge, Philomela’s sister, Procne, directs her rage not at Tereus but at their son. Tereus, the primary perpetrator of the violence, thus becomes only a secondary victim of his own brutality, while the raped Philomela and the now childless Procne bear the consequences of his violence.

      Interestingly, Ovid describes Tereus’s violent intents as “the flame of love” (Ovid, 3) that has overtaken him. In this case, love gains a perverse connotation – what tends to be a source of warmth becomes an all-consuming fire, synonymous with lust and destruction.

      Eliot’s reference to Cupid, traditionally a symbol of romantic love, is similarly ironic. On the surface, Cupid represents love, yet his story reveals the same pattern of female disempowerment. Cupid falls in love with a mortal - Psyche. She is passive, with others dictating her fate. In the so-called happy ending, Cupid brings her to Olympus, where Zeus grants her immortality. However, this transformation, similarly to Philomela’s, is controlled entirely by male figures. None of these female characters seem to regain any control over their fates.

      Eliot describes the nightingale’s voice, in reference to Ovid’s myth, as “inviolable,” or unbreakable. The use of this word is once again ironic – the character’s voice is taken from her in both her human and bird existences. Eliot follows up this irony of false hope with the line “And still she cried, and still the world pursues.” The use of the past tense for the verb “cried” signifies a cry that has already ended, further emphasizing her inability to voice her pain. The world, however, “still pursues.” The double emphasis on the word “still” shows the constant persecution and implies absolutely no justice for the female characters. “Pursue” itself can carry violent connotations, meaning “to follow or chase someone or something, especially in order to catch them” (Oxford Dictionary), reinforcing the idea of female victimization and helplessness.

      While Eliot’s vision often blurs the boundaries between life and death, for the women of A Game of Chess and their mythic counterparts, these lines are brutally drawn. Their fate is not ambiguous—it is absolute. They are erased, their voices silenced, leaving them trapped in a world where love is violence, survival is silence, and any hope of redemption is only ironic.

    8. Under the firelight, under the brush, her hair Spread out in fiery points Glowed into words, then would be savagely still.

      At first, I was drawn to (and rather overwhelmed by) the series of physical materials we encounter in the opening. “The Chair” is equated to a “burnished throne” – a more authoritative rendering that signifies polished metal – before seeming to glow “on the marble, where the glass / Held up by standards wrought with fruited vines”. The light (and the reader’s focus) is tossed between shiny marble surfaces and glass refractions and metallic chairs, not once landing on the subject of the opening lines, whom we infer to be a queen. Amidst the influx of visual descriptions, the queen – an allusion to the title chess piece as well as the sitter of “The Chair” – is belittled. She, whether Dido, Cleopatra, or Queen Mary (a WWI/contemporary reference), is overshadowed by the focus on materialistic goods such as the “glitter of her jewels” and the “rich profusion” that poured from “satin cases”. Eliot embellishes this “Game of Chess”’s first key female figure with a plethora of perfumes, jewels, and silks instead of addressing her as a subject. Very few verbs are even connected to her, and all of them are passive, such as “sat” and “lurked”, with the most dramatic being “cried” in line 102. The woman in the scene feels very much secondary to the ornamentation (a plug for capitalism? A critique of mass consumption? Of vanity?).

      True, the faceless-de-individualization argument is not singular to women in The Waste Land. However, the allusions to rape – of “Philomel, by the barbarous king / So rudely forced” as in Ovid’s Metamorphoses, of Dido canonically in the Aeneid, and even the romanticized possibility in Baudelaire’s cadaver “with eyes as provocative as the pose, / Reveals an unwholesome love, / Guilty joys and exotic revelries” – suggest there may be a sexual connotation to objectifying these once-powerful women. The reader intrudes upon a crime scene (a setting deeply romanticized but still the scene of rape, suicide, regicide…) and our eyes flicker to everything but the victim. We can only reconcile with glimmers of light, hovering “under the brush,” observing “her hair / Spread out in fiery points / Glowed into words, then would be savagely still” in the inhuman MO of a guilty perpetrator. Eliot’s readers are not only deeply uncomfortable with the prolonged gawking at unburied women (not deserving of the same solace or privacy soldiers or men receive, but rather publicly displayed as symbols of fleeting beauty) but also guilty of partaking in the crime that ruined them.

    9. The Chair she sat in

      I was particularly interested in the multidimensional and various representations of women in this passage. Just as Angela said last year, in this part of the poem, separate stories converge. However, they share a similar theme—women whose lives have been defined and constrained by their relationships with men. Last year, after studying books one and two of Vergil’s The Aeneid in my Latin class, I focused my final paper on the book as a piece of sophisticated propaganda during the Augustan era to reaffirm traditional Roman values in the aftermath of political turmoil. Specifically, I focused on the female character Dido—how she stood out for embodying qualities typically associated with noteworthy men, yet met her tragic fate at the end of book four. Such was Vergil’s way of illustrating the consequence of women daring to transcend conventional roles. Dido’s transformation from a regal, autonomous ruler to a figure destroyed by unrequited love and abandonment is reflected in the way women are presented in Baudelaire’s and Eliot’s work.

      In a similar vein, Baudelaire’s portrayal of the decapitated woman adorned in luxurious fabrics and jewels exposes the paradox of female beauty and power. The woman, once adorned and admired, now becomes an object of decay: “A headless cadaver pours out, like a river, / On the saturated pillow / Red, living blood.” Here, Baudelaire critiques the way society both glorifies and consumes women, reducing them to passive symbols of desire, even in death.

      The first part of “A Game of Chess” also invokes this tension between female power and subjugation, particularly in its reference to Philomel, a figure from Ovid's Metamorphoses, who was raped by Tereus and transformed into a nightingale. In TWL, Philomel’s story appears as a reminder of female suffering: "Above the antique mantel was displayed / The change of Philomel, by the barbarous king / So rudely forced; yet there the nightingale / Filled all the desert with inviolable voice." Philomel, like Dido and Baudelaire’s martyred woman, is a figure whose pain is immortalized but also aestheticized. Her "inviolable voice" suggests that even in her forced silence, her trauma cannot be erased, much like Dido’s lasting curse on Aeneas and his descendants. Yet, her transformation into a nightingale also represents a form of agency reclaimed in tragedy, as her voice persists despite her violation.

    10. ‘Nam Sibyllam quidem Cumis ego ipse oculis meis vidi in ampulla pendere, et cum illi pueri dicerent: Σίβυλλα τί θέλεις; respondebat illa: ἀποθανεῖν θέλω.’

      The original title of the poem, “He Do The Police In Different Voices,” originated from Dickens’s novel Our Mutual Friend, in which the character Sloppy reads newspaper reports in a variety of tones, mimicking different speakers. While an aspect of this choice echoes Eliot’s technique of blending different languages, perspectives, and cultural and mythic references throughout the poem, another intriguing facet can be unveiled through a deeper analysis of Sloppy as a “beautiful reader of a newspaper” (9). His ability to “do the police in different voices” reflects a talent for embodying the voices of authority figures and others in society, blurring the line between personal identity and social narrative. If Eliot had kept this title, it would have positioned the poet as a kind of ventriloquist, stepping into the roles of the voices he captures, making the poem less about detached observation and more about active engagement with the cacophony of modern life, just as indicated by the original first section of the poem: “The next thing we were out in the street…sopped up some gin, sat in to the cork game…then we thought we’d breeze along and take a walk” (3). However, in the final version of “The Waste Land,” the poet becomes a more detached observer, surveying the desolation of modernity from a distance. This shift is crucial to the tone of the poem, which no longer presents the poet as a mimic of society but rather as someone bearing witness to its fragmentation and decay. The tone of the title changes from playfulness to solemnity, a shift that is clearly reflected in the respective following sections.

      “Preludes” serves as a fitting example of this detached narrator. Throughout the first three sections, the narrator offers a series of fragmented, disconnected snippets of urban life, illustrating the grimy, monotonous existence of people living in industrialized cities. The only time the narrator appears in the first person is in the penultimate stanza, the only complete person amidst images of fragmented people. This approach highlights the alienation and isolation inherent in the modern world, and the poet’s role is merely to record them as part of a decaying, impersonal landscape.

    1. All interaction and connection with the Net is a game of identity-making and community-making, where we choose to show off our position within certain established, acceptable social networks

      This is genuinely fascinating - the idea that communities and social networks form just by participating in certain online forums, chat rooms, or simply following the same news platforms as others. It seems so natural and effortless to find others with a shared interest (a community) on the Net that it is difficult to imagine how these kinds of social groups formed before the Internet.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      In this manuscript, the authors use a large dataset of neuroscience publications to elucidate the nature of self-citation within the neuroscience literature. The authors initially present descriptive measures of self-citation across time and author characteristics; they then produce an inclusive model to tease apart the potential role of various article and author features in shaping self-citation behavior. This is a valuable area of study, and the authors approach it with an appropriate and well-structured dataset.

      The study's descriptive analyses and figures are useful and will be of interest to the neuroscience community. However, with regard to the statistical comparisons and regression models, I believe that there are methodological flaws that may limit the validity of the presented results. These issues primarily affect the uncertainty of estimates and the statistical inference made on comparisons and model estimates - the fundamental direction and magnitude of the results are unlikely to change in most cases. I have included detailed statistical comments below for reference.

      Conceptually, I think this study will be very effective at providing context and empirical evidence for a broader conversation around self-citation. And while I believe that there is room for a deeper quantitative dive into some finer-grained questions, this paper will be a valuable catalyst for new areas of inquiry around citation behavior - e.g., do authors change self-citation behavior when they move to more or less prestigious institutions? do self-citations in neuroscience benefit downstream citation accumulation? do journals' reference list policies increase or decrease self-citation? - that I hope that the authors (or others) consider exploring in future work.

      Thank you for your suggestions and your generally positive view of our work. As described below, we have made the statistical improvements that you suggested.

      Statistical comments:

      (1) Throughout the paper, the nested nature of the data does not seem to be appropriately handled in the bootstrapping, permutation inference, and regression models. This is likely to lead to inappropriately narrow confidence bands and overly generous statistical inference.

      We apologize for this error. We have now included nested bootstrapping and permutation tests. We defined an “exchangeability block” as a co-authorship group of authors. In this dataset, that meant any authors who published together (among the articles in this dataset) as a First Author / Last Author pairing were assigned to the same exchangeability block. It is not realistic to check for overlapping middle authors across all papers because of the collaborative nature of the field. In addition, we believe that self-citations are primarily controlled by first and last authors, so we can assume that middle authors do not control self-citation habits. We then performed bootstrapping and permutation tests within the constraints of the exchangeability blocks.

      We first describe this in the results (page 3, line 110):

      “Importantly, we accounted for the nested structure of the data in bootstrapping and permutation tests by forming co-authorship exchangeability blocks.”

      We also describe this in 4.8 Confidence Intervals (page 21, line 725):

      “Confidence intervals were computed with 1000 iterations of bootstrap resampling at the article level. For example, of the 100,347 articles in the dataset, we resampled articles with replacement and recomputed all results. The 95% confidence interval was reported as the 2.5 and 97.5 percentiles of the bootstrapped values.

      We grouped data into exchangeability blocks to avoid overly narrow confidence intervals or overly optimistic statistical inference. Each exchangeability block comprised any authors who published together as a First Author / Last Author pairing in our dataset. We only considered shared First/Last Author publications because we believe that these authors primarily control self-citations, and otherwise exchangeability blocks would grow too large due to the highly collaborative nature of the field. Furthermore, the exchangeability blocks do not account for co-authorship in other journals or prior to 2000. A distribution of the sizes of exchangeability blocks is presented in Figure S15.”
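      The block-level resampling described above can be sketched in a few lines. This is an illustrative Python sketch (the authors' actual analysis was done in R on the full article-level dataset); the function name and the toy self-citation rates are invented for demonstration. The key point is that whole exchangeability blocks, not individual articles, are drawn with replacement, so within-block dependence is preserved in every bootstrap sample:

```python
import random

def block_bootstrap_ci(blocks, stat, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI that resamples whole exchangeability
    blocks (co-authorship groups) rather than individual articles."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        # Draw blocks with replacement, then pool their articles.
        sample = [a for blk in rng.choices(blocks, k=len(blocks)) for a in blk]
        estimates.append(stat(sample))
    estimates.sort()
    lo = estimates[int((alpha / 2) * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy data: each block holds one co-authorship group's self-citation rates.
blocks = [[0.05, 0.07], [0.10], [0.02, 0.03, 0.04], [0.08]]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = block_bootstrap_ci(blocks, mean)
```

      Resampling blocks rather than articles widens the interval when articles within a block are correlated, which is exactly the "overly narrow confidence intervals" concern the authors address.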

      In describing permutation tests, we also write (page 21, line 739):

      “4.9 P values

      P values were computed with permutation testing using 10,000 permutations, with the exception of regression P values and P values from model coefficients. For comparing different fields (e.g., Neuroscience and Psychiatry) and comparing self-citation rates of men and women, the labels were randomly permuted by exchangeability block to obtain null distributions. For comparing self-citation rates between First and Last Authors, the first and last authorship was swapped in 50% of exchangeability blocks.”
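      The group-comparison case (e.g., comparing fields, or men and women) can be sketched as a block-constrained permutation test. This Python sketch uses invented names and toy data, not the authors' pipeline; the essential constraint is that labels are shuffled between whole exchangeability blocks, so co-published authors always move together:

```python
import random

def block_permutation_p(blocks, labels, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in group means.
    Group labels are permuted across whole exchangeability blocks,
    never across individual articles within a block."""
    rng = random.Random(seed)

    def abs_diff(lbls):
        g0 = [v for blk, l in zip(blocks, lbls) if l == 0 for v in blk]
        g1 = [v for blk, l in zip(blocks, lbls) if l == 1 for v in blk]
        return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

    observed = abs_diff(labels)
    hits = 0
    for _ in range(n_perm):
        perm = labels[:]
        rng.shuffle(perm)  # reassign labels at the block level only
        if abs_diff(perm) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Toy data: each block holds one co-authorship group's self-citation rates;
# labels mark the two groups being compared (e.g., 0 = men, 1 = women).
blocks = [[0.10, 0.12], [0.11], [0.02], [0.03, 0.01]]
labels = [0, 0, 1, 1]
p = block_permutation_p(blocks, labels)
```

      The First/Last Author comparison works analogously, except the permutation swaps first and last authorship within 50% of blocks instead of shuffling labels between them.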

      For modeling, we considered doing a mixed effects model but found difficulties due to computational power. For example, with our previous model, there were hundreds of thousands of levels for the paper random effect, and tens of thousands of levels for the author random effect. Even when subsampling or using packages designed for large datasets (e.g., mgcv’s bam function: https://www.rdocumentation.org/packages/mgcv/versions/1.9-1/topics/bam), we found computational difficulties.

      As a result, we switched to modeling results at the paper level (e.g., self-citation count or rate). We found that results could be unstable when including author-level random effects because in many cases there was only one author per group. Instead, to avoid inappropriately narrow confidence bands, we resampled the dataset such that each author was only represented once. For example, if Author A had five papers in this dataset, then one of their five papers was randomly selected. We updated our description of our models in the Methods section (page 21, line 754):

      “4.10 Exploring effects of covariates with generalized additive models

      For these analyses, we used the full dataset size separately for First and Last Authors (Table S2). This included 115,205 articles and 5,794,926 citations for First Authors, and 114,622 articles and 5,801,367 citations for Last Authors. We modeled self-citation counts, self-citation rates, and number of previous papers for First Authors and Last Authors separately, resulting in six total models.

      We found that models could be computationally intensive and unstable when including author-level random effects because in many cases there was only one author per group. Instead, to avoid inappropriately narrow confidence bands, we resampled the dataset such that each author was only represented once. For example, if Author A had five papers in this dataset, then one of their five papers was randomly selected. The random resampling was repeated 100 times as a sensitivity analysis (Figure S12).

      For our models, we used generalized additive models from mgcv’s “gam” function in R 49. The smooth terms included all the continuous variables: number of previous papers, academic age, year, time lag, number of authors, number of references, and journal impact factor. The linear terms included all the categorical variables: field, gender, affiliation country LMIC status, and document type. We empirically selected a Tweedie distribution 50 with a log link function and p=1.2. The p parameter indicates that the variance is proportional to the mean to the p power 49. The p parameter ranges from 1-2, with p=1 equivalent to the Poisson distribution and p=2 equivalent to the gamma distribution. For all fitted models, we simulated the residuals with the DHARMa package, as standard residual plots may not be appropriate for GAMs 51. DHARMa scales the residuals between 0 and 1 with a simulation-based approach 51. We also tested for deviation from uniformity, dispersion, outliers, and zero inflation with DHARMa. Non-uniformity, dispersion, outliers, and zero inflation were significant due to the large sample size, but small in effect size in most cases. The simulated quantile-quantile plots from DHARMa suggested that the observed and simulated distributions were generally aligned, with the exception of slight misalignment in the models for the number of previous papers. These analyses are presented in Figure S11 and Table S7.

      In addition, we tested for inadequate basis functions using mgcv’s “gam.check()” function 49. Across all smooth predictors and models, we ultimately selected between 10-20 basis functions depending on the variable and outcome measure (counts, rates, papers). We further checked the concurvity of the models and ensured that the worst-case concurvity for all smooth predictors was about 0.8 or less.”
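      The "one paper per author" resampling used in place of author-level random effects can be sketched as follows. This is a Python illustration; the `(author_id, paper_id)` record format and function name are assumptions for demonstration, not the authors' actual data structure:

```python
import random

def one_paper_per_author(records, seed=0):
    """Subsample so each author contributes exactly one paper,
    guarding against repeated-author pseudoreplication.
    `records` is a list of (author_id, paper_id) pairs."""
    rng = random.Random(seed)
    by_author = {}
    for author, paper in records:
        by_author.setdefault(author, []).append(paper)
    # Randomly keep one paper per author.
    return {author: rng.choice(papers) for author, papers in by_author.items()}

# Toy records: Author A has three papers, B has one, C has two.
records = [("A", 1), ("A", 2), ("A", 3), ("B", 4), ("C", 5), ("C", 6)]
picked = one_paper_per_author(records)
```

      Repeating this random subsampling (100 times in the authors' sensitivity analysis, Figure S12) shows how much the fitted estimates depend on which paper happens to be retained for each author.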

      The direction of our results primarily stayed the same, with the exception of gender results. Men tended to self-cite slightly less (or equal self-citation rates) after accounting for numerous covariates. As such, we also modeled the number of previous papers to explain the discrepancy between our raw data and the modeled gender results. Please find the updated results text below (page 11, line 316):

      “2.9 Exploring effects of covariates with generalized additive models

      Investigating the raw trends and group differences in self-citation rates is important, but several confounding factors may explain some of the differences reported in previous sections. For instance, gender differences in self-citation were previously attributed to men having a greater number of prior papers available to self-cite 7,20,21. As such, covarying for various author- and article-level characteristics can improve the interpretability of self-citation rate trends. To allow for inclusion of author-level characteristics, we only consider First Author and Last Author self-citation in these models.

      We used generalized additive models (GAMs) to model the number and rate of self-citations for First Authors and Last Authors separately. The data were randomly subsampled so that each author only appeared in one paper. The terms of the model included several article characteristics (article year, average time lag between article and all cited articles, document type, number of references, field, journal impact factor, and number of authors), as well as author characteristics (academic age, number of previous papers, gender, and whether their affiliated institution is in a low- and middle-income country). Model performance (adjusted R2) and coefficients for parametric predictors are shown in Table 2. Plots of smooth predictors are presented in Figure 6.

      First, we considered several career and temporal variables. Consistent with prior works 20,21, self-citation rates and counts were higher for authors with a greater number of previous papers. Self-citation counts and rates increased rapidly among the first 25 published papers but then more gradually increased. Early in the career, increasing academic age was related to greater self-citation. There was a small peak at about five years, followed by a small decrease and a plateau. We found an inverted U-shaped trend for average time lag and self-citations, with self-citations peaking approximately three years after initial publication. In addition, self-citations have generally been decreasing since 2000. The smooth predictors showed larger decreases in the First Author model relative to the Last Author model (Figure 6).

      Then, we considered whether authors were affiliated with an institution in a low- and middle-income country (LMIC). LMIC status was determined by the Organisation for Economic Co-operation and Development. We opted to use LMIC instead of affiliation country or continent to reduce the number of model terms. We found that papers from LMIC institutions had significantly lower self-citation counts (-0.138 for First Authors, -0.184 for Last Authors) and rates (-12.7% for First Authors, -23.7% for Last Authors) compared to non-LMIC institutions. Additional results with affiliation continent are presented in Table S5. Relative to the reference level of Asia, higher self-citations were associated with Africa (only three of four models), the Americas, Europe, and Oceania.

      Among paper characteristics, a greater number of references was associated with higher self-citation counts and lower self-citation rates (Figure 6). Interestingly, self-citations were greater for papers with a small number of authors, though the effect diminished after about five authors. Review articles were associated with lower self-citation counts and rates. No clear trend emerged between self-citations and journal impact factor. In an analysis by field, despite the raw results suggesting that self-citation rates were lower in Neuroscience, GAM-derived self-citations were greater in Neuroscience than in Psychiatry or Neurology.

      Finally, our results aligned with previous findings of nearly equivalent self-citation rates for men and women after including covariates, even showing slightly higher self-citation rates in women. Since raw data showed evidence of a gender difference in self-citation that emerges early in the career but dissipates with seniority, we incorporated two interaction terms: one between gender and academic age and a second between gender and the number of previous papers. Results remained largely unchanged with the interaction terms (Table S6).

      2.10 Reconciling differences between raw data and models

      The raw and GAM-derived data exhibited some conflicting results, such as for gender and field of research. To further study covariates associated with this discrepancy, we modeled the publication history for each author (at the time of publication) in our dataset (Table 2). The model terms included academic age, article year, journal impact factor, field, LMIC status, gender, and document type. Notably, Neuroscience was associated with the fewest number of papers per author. This explains how authors in Neuroscience could have the lowest raw self-citation rates but the highest self-citation rates after including covariates in a model. In addition, being a man was associated with about 0.25 more papers. Thus, gender differences in self-citation likely emerged from differences in the number of papers, not in any self-citation practices.”
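      The reconciliation described here is essentially a Simpson's-paradox pattern: a raw group gap can reverse once a covariate (here, the number of previous papers) is conditioned on. A toy Python illustration with invented numbers, not the study's data, showing men with a higher raw self-citation rate purely because they have more prior papers:

```python
# Toy records: (gender, n_previous_papers, self_citation_rate).
# Self-citation rises with prior papers; men have more prior papers.
data = [
    ("man", 5, 0.02), ("man", 30, 0.10), ("man", 30, 0.10),
    ("woman", 5, 0.03), ("woman", 5, 0.03), ("woman", 30, 0.11),
]

def mean_rate(rows):
    return sum(r[2] for r in rows) / len(rows)

men = [r for r in data if r[0] == "man"]
women = [r for r in data if r[0] == "woman"]
raw_gap = mean_rate(men) - mean_rate(women)  # raw data: men look higher

# Condition on the number of previous papers (stratify):
def stratum(rows, n_papers):
    return [r for r in rows if r[1] == n_papers]

gap_5 = mean_rate(stratum(men, 5)) - mean_rate(stratum(women, 5))
gap_30 = mean_rate(stratum(men, 30)) - mean_rate(stratum(women, 30))
# Within each stratum, women self-cite slightly more: the gap reverses.
```

      This mirrors the authors' conclusion: the raw gender difference reflects differences in the number of prior papers available to cite, not in self-citation practices themselves.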

      (2) The discussion of the data structure used in the regression models is somewhat opaque, both in the main text and the supplement. From what I gather, these models likely have each citation included in the model at least once (perhaps twice, once for first-author status and one for last-author status), with citations nested within citing papers, cited papers, and authors. Without inclusion of random effects, the interpretation and inference of the estimates may be misleading.

      Please see our response to point (1) to address random effects. We have also switched to GAMs (see point #3 below) and provided more detail in the methods. Notably, we decided against using author-level effects due to poor model stability, as there can be as few as one author per group. Instead, we subsampled the dataset such that only one paper appeared from each author.

      (3) I am concerned that the use of the inverse hyperbolic sine transform is a bit too prescriptive, and may be producing poor fits to the true predictor-outcome relationships. For example, in a figure like Fig S8, it is hard to know to what extent the sharp drop and sign reversal are true reflections of the data, and to what extent they are artifacts of the transformed fit.

      Thank you for raising this point. We have now switched to using generalized additive models (GAMs). GAMs provide a flexible approach to modeling that does not require transformations. We described this in detail in point (1) above and in Methods 4.10 Exploring effects of covariates with generalized additive models (page 21, line 754).

      “4.10 Exploring effects of covariates with generalized additive models

      For these analyses, we used the full dataset size separately for First and Last Authors (Table S2). This included 115,205 articles and 5,794,926 citations for First Authors, and 114,622 articles and 5,801,367 citations for Last Authors. We modeled self-citation counts, self-citation rates, and number of previous papers for First Authors and Last Authors separately, resulting in six total models.

      We found that models could be computationally intensive and unstable when including author-level random effects because in many cases there was only one author per group. Instead, to avoid inappropriately narrow confidence bands, we resampled the dataset such that each author was only represented once. For example, if Author A had five papers in this dataset, then one of their five papers was randomly selected. The random resampling was repeated 100 times as a sensitivity analysis (Figure S12).

      For our models, we used generalized additive models from mgcv’s “gam” function in R 48. The smooth terms included all the continuous variables: number of previous papers, academic age, year, time lag, number of authors, number of references, and journal impact factor. The linear terms included all the categorical variables: field, gender, affiliation country LMIC status, and document type. We empirically selected a Tweedie distribution 49 with a log link function and p=1.2. The p parameter indicates that the variance is proportional to the mean to the p power 48. The p parameter ranges from 1-2, with p=1 equivalent to the Poisson distribution and p=2 equivalent to the gamma distribution. For all fitted models, we simulated the residuals with the DHARMa package, as standard residual plots may not be appropriate for GAMs 50. DHARMa scales the residuals between 0 and 1 with a simulation-based approach 50. We also tested for deviation from uniformity, dispersion, outliers, and zero inflation with DHARMa. Non-uniformity, dispersion, outliers, and zero inflation were significant due to the large sample size, but small in effect size in most cases. The simulated quantile-quantile plots from DHARMa suggested that the observed and simulated distributions were generally aligned, with the exception of slight misalignment in the models for the number of previous papers. These analyses are presented in Figure S11 and Table S7.

      In addition, we tested for inadequate basis functions using mgcv’s “gam.check()” function 48. Across all smooth predictors and models, we ultimately selected between 10 and 20 basis functions, depending on the variable and outcome measure (counts, rates, papers). We further checked the concurvity of the models and ensured that the worst-case concurvity for all smooth predictors was about 0.8 or less.”
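      For intuition about the Tweedie choice, the following sketch (in Python, not the R/mgcv code used in our analysis; all parameter values are illustrative) simulates a Tweedie variable with 1 < p < 2 via its compound Poisson-gamma representation and checks analytically that the mean is mu and the variance is phi * mu**p.

```python
import math
import random

def tweedie_params(mu, phi, p):
    # Compound Poisson-gamma representation of Tweedie(mu, phi, p) for 1 < p < 2:
    # N ~ Poisson(lam), Y = X_1 + ... + X_N, with X_i ~ Gamma(shape=alpha, scale=theta)
    lam = mu ** (2 - p) / (phi * (2 - p))
    alpha = (2 - p) / (p - 1)
    theta = phi * (p - 1) * mu ** (p - 1)
    return lam, alpha, theta

def _poisson(lam, rng):
    # Knuth's multiplication method; adequate for the small lam used here
    limit = math.exp(-lam)
    prod, k = rng.random(), 0
    while prod > limit:
        prod *= rng.random()
        k += 1
    return k

def sample_tweedie(mu, phi, p, rng):
    lam, alpha, theta = tweedie_params(mu, phi, p)
    n = _poisson(lam, rng)
    return sum(rng.gammavariate(alpha, theta) for _ in range(n))

mu, phi, p = 2.0, 1.0, 1.2          # illustrative values; p matches the model above
lam, alpha, theta = tweedie_params(mu, phi, p)
# Analytic sanity checks: mean = mu, variance = phi * mu**p
assert math.isclose(lam * alpha * theta, mu)
assert math.isclose(lam * alpha * theta * theta * (1 + alpha), phi * mu ** p)

rng = random.Random(0)
draws = [sample_tweedie(mu, phi, p, rng) for _ in range(20000)]
# The distribution mixes a point mass at zero with a continuous positive part;
# P(Y = 0) = exp(-lam)
zero_share = sum(d == 0 for d in draws) / len(draws)
```

      The point mass at zero corresponds to papers with no self-citations, which is why a Tweedie with p between 1 and 2 suits zero-inflated, right-skewed outcomes such as self-citation counts.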

      (4) It seems there are several points in the analysis where papers may have been dropped for missing data (e.g., missing author IDs and/or initials, missing affiliations, low-confidence gender assessment). It would be beneficial for the reader to know what % of the data was dropped for each analysis, and for comparisons across countries it would be important for the authors to make sure that there is not differential missing data that could affect the interpretation of the results (e.g., differences in self-citation being due to differences in Scopus ID coverage).

      Thank you for raising this important point. In the methods section, we describe how the data are missing (page 18, line 623):

      “4.3 Data exclusions and missingness

      Data were excluded across several criteria: missing covariates, missing citation data, out-of-range values at the citation pair level, and out-of-range values at the article level (Table 3). After downloading the data, our dataset included 157,287 articles and 8,438,733 citations. We excluded any articles with missing covariates (document type, field, year, number of authors, number of references, academic age, number of previous papers, affiliation country, gender, and journal). Of the remaining articles, we dropped any with missing citation data (e.g., where we could not identify whether a self-citation was present due to lack of data). Then, we removed citations with unrealistic or extreme values. These included an academic age of less than zero or above 38/44 for First/Last Authors (99th percentile); greater than 266/522 papers for First/Last Authors (99th percentile); and a cited year before 1500 or after 2023. Subsequently, we dropped articles with extreme values that could contribute to poor model stability. These included greater than 30 authors; fewer than 10 references or greater than 250 references; and a time lag of greater than 17 years. These values were selected to ensure that GAMs were stable and not influenced by a small number of extreme values.

      In addition, we evaluated whether the data were not missing at random (Table S8). Data were more likely to be missing for reviews relative to articles, for Neurology relative to Neuroscience or Psychiatry, in works from Africa relative to the other continents, and for men relative to women. Scopus ID coverage contributed in part to differential missingness. However, our exclusion criteria also contributed. For example, Last Authors with more than 522 papers were excluded to help stabilize our GAMs. More men met this exclusion criterion than women.”
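      For concreteness, the exclusion criteria quoted above can be expressed as a filter over article records. The Python sketch below is illustrative only: the field names are hypothetical, not our actual data schema, and only the Last Author thresholds are shown.

```python
# Illustrative article-level filters mirroring the quoted exclusion criteria
# (Last Author thresholds). Field names are hypothetical stand-ins.
MAX_ACADEMIC_AGE = 44      # 99th percentile for Last Authors
MAX_PREVIOUS_PAPERS = 522  # 99th percentile for Last Authors
REQUIRED_COVARIATES = (
    "doc_type", "field", "year", "n_authors", "n_references",
    "academic_age", "n_previous_papers", "country", "gender", "journal",
)

def keep_article(article):
    if any(article.get(k) is None for k in REQUIRED_COVARIATES):
        return False                                  # missing covariates
    if not 0 <= article["academic_age"] <= MAX_ACADEMIC_AGE:
        return False                                  # out-of-range academic age
    if article["n_previous_papers"] > MAX_PREVIOUS_PAPERS:
        return False                                  # extreme publication count
    if article["n_authors"] > 30:
        return False                                  # extreme author count
    if not 10 <= article["n_references"] <= 250:
        return False                                  # too few / too many references
    if article.get("time_lag_years", 0) > 17:
        return False                                  # extreme citation time lag
    return True

ok_article = {
    "doc_type": "Article", "field": "Neuroscience", "year": 2015,
    "n_authors": 5, "n_references": 40, "academic_age": 12,
    "n_previous_papers": 30, "country": "CA", "gender": "woman",
    "journal": "Journal X", "time_lag_years": 3,
}
assert keep_article(ok_article)
assert not keep_article(dict(ok_article, n_references=300))
```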

      Due to differential missingness, we wrote in the limitations (page 16, line 529):

      “Ninth, data were differentially missing (Table S8) due to Scopus coverage and gender estimation. Differential missingness could bias certain results in the paper, but we hope that the dataset is large enough to reduce any potential biases.”

      Reviewer #2 (Public Review):

      The authors provide a comprehensive investigation of self-citation rates in the field of Neuroscience, filling a significant gap in existing research. They analyze a large dataset of over 150,000 articles and eight million citations from 63 journals published between 2000 and 2020. The study reveals several findings. First, they state that there is an increasing trend of self-citation rates among first authors compared to last authors, indicating potential strategic manipulation of citation metrics. Second, they find that the Americas show higher odds of self-citation rates compared to other continents, suggesting regional variations in citation practices. Third, they show that there are gender differences in early-career self-citation rates, with men exhibiting higher rates than women. Lastly, they find that self-citation rates vary across different subfields of Neuroscience, highlighting the influence of research specialization. They believe that these findings have implications for the perception of author influence, research focus, and career trajectories in Neuroscience.

      Overall, this paper is well written, and the breadth of analysis conducted by the authors, with various interactions between variables (e.g., gender vs. seniority), shows that the authors have spent a lot of time thinking about different angles. The discussion section is also quite thorough. The authors should also be commended for their efforts in the provision of code for the public to evaluate their own self-citations. That said, here are some concerns and comments that, if addressed, could potentially enhance the paper:

      Thank you for your review and your generally positive view of our work.

      (1) There are concerns regarding the data used in this study, specifically its bias towards top journals in Neuroscience, which limits the generalizability of the findings to the broader field. More specifically, the top 63 journals in neuroscience are based on impact factor (IF), which raises a potential issue of selection bias. While the paper acknowledges this as a limitation, it lacks a clear justification for why the authors made this choice. It is also unclear how the "top" journals were identified: was it based on the top 5% in terms of impact factor? Or 10%? Or some other metric? The authors also do not provide the (computed) impact factors of the journals in the supplementary.

      We apologize for the lack of clarity about our selection of journals. We agree that there are limitations to selecting higher impact journals. However, we needed to apply some form of selection in order to make the analysis manageable. For instance, even these 63 journals include over five million citations. We better describe our rationale behind the approach as follows (page 17, line 578):

      “We collected data from the 25 journals with the highest impact factors, based on Web of Science impact factors, in each of Neurology, Neuroscience, and Psychiatry. Some journals appeared in the top 25 list of multiple fields (e.g., both Neurology and Neuroscience), so 63 journals were ultimately included in our analysis. We recognize that limiting the journals to the top 25 in each field also limits the generalizability of the results. However, there are tradeoffs between breadth of journals and depth of information. For example, by limiting the journals to these 63, we were able to look at 21 years of data (2000-2020). In addition, the definition of fields is somewhat arbitrary. By restricting the journals to a set of 63 well-known journals, we ensured that the journals belonged to Neurology, Neuroscience, or Psychiatry research. It is also important to note that the impact factor of these journals has not necessarily always been high. For example, Acta Neuropathologica had an impact factor of 17.09 in 2020 but 2.45 in 2000. To further recognize the effects of impact factor, we decided to include an impact factor term in our models.”

      In addition, we have now provided the 2020 impact factors in Table S1.

      By exclusively focusing on high impact journals, your analysis may not be representative of the broader landscape of self-citation patterns across the neuroscience literature, which is what the title of the article claims to do.

      We agree that this article is not indicative of all neuroscience literature, but rather the top journals. Thus, we have changed the title to: “Trends in Self-citation Rates in High-impact Neurology, Neuroscience, and Psychiatry Journals”. We would also like to note that compared to previous bibliometrics works in neuroscience (Bertolero et al. 2020; Dworkin et al. 2020; Fulvio et al. 2021), this article includes a wider range of data.

      (2) One other concern pertains to the possibility that a significant number of authors involved in the paper may not be neuroscientists. It is plausible that the paper is a product of interdisciplinary collaboration involving scientists from diverse disciplines. Neuroscientists amongst the authors should be identified.

      In our opinion, neuroscience is a broad, interdisciplinary field. Individuals performing neuroscience research may have a neuroscience background. Yet, they may come from many backgrounds, such as physics, mathematics, biology, chemistry, or engineering. As such, we do not believe that it is feasible to characterize whether each author considers themselves a neuroscientist or not. We have added the following to the limitations section (page 16, line 528):

      “Eighth, authors included in this work may not be neurologists, neuroscientists, or psychiatrists. However, they still publish in journals from these fields.”

      (3) When calculating self-citation rate, it is important to consider the number of papers the authors have published to date. One plausible explanation for the lower self-citation rates among first authors could be attributed to their relatively junior status and short publication record. As such, it would also be beneficial to assess self-citation rate as a percentage relative to the author's publication history. This number would be more accurate if we look at it as a percentage of their publication history. My suspicion is that first authors (who are more junior) might be more likely to self-cite than their senior counterparts. My suspicion was further raised by looking at Figures 2a and 3. Considering the nature of the self-citation metric employed in the study, it is expected that authors with a higher level of seniority would have a greater number of publications. Consequently, these senior authors' papers are more likely to be included in the pool of references cited within the paper, hence the higher rate.

      While the authors acknowledge the importance of the number of past publications in their gender analysis, it is just as important to include the interplay of seniority in (1) their first and last author self-citation rates and (2) their geographic analysis.

      Thank you for this thoughtful comment. We agree that seniority and prior publication history play an important role in self-citation rates.

      For comparing First/Last Author self-citation rates, we have now included a plot similar to Figure 2a, where self-citation as a percentage of prior publication history is plotted.

      (page 4, line 161): “Analyzing self-citations as a fraction of publication history exhibited a similar trend (Figure S3). Notably, First Authors were more likely than Last Authors to self-cite when normalized by prior publication history.”

      For the geographic analysis, we made two new maps: 1) that of the number of previous papers, and 2) that of the journal impact factor (see response to point #4 below).

      (page 5, line 185): “We also investigated the distribution of the number of previous papers and journal impact factor across countries (Figure S4). Self-citation maps by country were highly correlated with maps of the number of previous papers (Spearman’s r=0.576, P=4.1e-4; 0.654, P=1.8e-5 for First and Last Authors). They were significantly correlated with maps of average impact factor for Last Authors (0.428, P=0.014) but not First Authors (Spearman’s r=0.157, P=0.424). Thus, further investigation is necessary with these covariates in a comprehensive model.”
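      The map comparisons above are rank correlations between two per-country vectors. As an illustration, a minimal Spearman implementation (with midranks for ties) looks as follows; in practice a library routine such as scipy.stats.spearmanr performs the same computation.

```python
import math

def spearman(x, y):
    # Spearman's r = Pearson correlation of the rank vectors; midranks for ties
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1                      # extend over a run of tied values
            midrank = (i + j) / 2 + 1       # average rank of the tied run
            for k in range(i, j + 1):
                r[order[k]] = midrank
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Any monotone (even nonlinear) relationship gives r = 1
assert math.isclose(spearman([1, 2, 3, 4], [1, 8, 27, 64]), 1.0)
assert math.isclose(spearman([1, 2, 3, 4], [4, 3, 2, 1]), -1.0)
```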

      Finally, we included a model term for the number of previous papers (Table 2). We analyzed this both for self-citation counts and self-citation rates and found a strong relationship between publication history and self-citations. We also included the following section where we modeled the number of previous papers for each author (page 13, line 384):

      “2.10 Reconciling differences between raw data and models

      The raw and GAM-derived data exhibited some conflicting results, such as for gender and field of research. To further study covariates associated with this discrepancy, we modeled the publication history for each author (at the time of publication) in our dataset (Table 2). The model terms included academic age, article year, journal impact factor, field, LMIC status, gender, and document type. Notably, Neuroscience was associated with the fewest number of papers per author. This explains how authors in Neuroscience could have the lowest raw self-citation rates but the highest self-citation rates after including covariates in a model. In addition, being a man was associated with about 0.25 more papers. Thus, gender differences in self-citation likely emerged from differences in the number of papers, not in any self-citation practices.”

      (4) Because your analysis is limited to high impact journals, it would be beneficial to see the distribution of the impact factors across the different countries. Otherwise, your analysis on geographic differences in self-citation rates is hard to interpret. Are these differences really differences in self-citation rates, or differences in journal impact factor? It would be useful to look at the representation of authors from different countries for different impact factors.

      We made a map of this in Figure S4 (see our response to point #3 above).

      (page 5, line 185): “We also investigated the distribution of the number of previous papers and journal impact factor across countries (Figure S4). Self-citation maps by country were highly correlated with maps of the number of previous papers (Spearman’s r=0.576, P=4.1e-4; 0.654, P=1.8e-5 for First and Last Authors). They were significantly correlated with maps of average impact factor for Last Authors (0.428, P=0.014) but not First Authors (Spearman’s r=0.157, P=0.424). Thus, further investigation is necessary with these covariates in a comprehensive model.”

      We also included impact factor as a term in our model. The results suggest that there are still geographic differences (Table 2, Table S5).

      (5) The presence of self-citations is not inherently problematic, and I appreciate the fact that authors omit any explicit judgment on this matter. That said, without appropriate context, self-citations are also not the best scholarly practice. In the analysis on gender differences in self-citations, it appears that authors imply an expectation of women's self-citation rates to align with those of men. While this is not explicitly stated, use of the word "disparity", and also presentation of self-citation as an example of self-promotion in discussion suggest such a perspective. Without knowing the context in which the self-citation was made, it is hard to ascertain whether women are less inclined to self-promote or that men are more inclined to engage in strategic self-citation practices.

      We agree that on the level of an individual self-citation, our study is not useful for determining how related the papers are. Yet, understanding overall trends in self-citation may help to identify differences. Context is important, but large datasets allow us to investigate broad trends. We added the following text to the limitations section (page 16, line 524):

      “In addition, these models do not account for whether a specific citation is appropriate, as some situations may necessitate higher self-citation rates.”

      Reviewer #3 (Public Review):

      This paper analyses self-citation rates in the field of Neuroscience, comprising in this case, Neurology, Neuroscience and Psychiatry. Based on data from Scopus, the authors identify self-citations, that is, whether references from a paper by some authors cite work that is written by one of the same authors. They separately analyse this in terms of first-author self-citations and last-author self-citations. The analysis is well-executed and the analysis and results are written down clearly. There are some minor methodological clarifications needed, but more importantly, the interpretation of some of the results might prove more challenging. That is, it is not always clear what is being estimated, and more importantly, the extent to which self-citations are "problematic" remains unclear.

      Thank you for your review. We attempted to improve the interpretation of results, as described in the following responses.

      When are self-citations problematic? As the authors themselves also clarify, "self-citations may often be appropriate". Researchers cite their own previous work for perfectly good reasons, similar to reasons of why they would cite work by others. The "problem", in a sense, is that researchers cite their own work, just to increase the citation count, or to promote their own work and make it more visible. This self-promotional behaviour might be incentivised by certain research evaluation procedures (e.g. hiring, promoting) that overly emphasise citation performance. However, the true problem then might not be (self-)citation practices, but instead, the flawed research evaluation procedures that emphasise citation performance too much. So instead of problematising self-citation behaviour, and trying to address it, we might do better to address flawed research evaluation procedures. Of course, we should expect references to be relevant, and we should avoid self-promotional references, but addressing self-citations may just have minimal effects, and would not solve the more fundamental issue.

      We agree that this dataset is not designed to investigate the downstream effects of self-citations. However, self-citation practices are more likely to be problematic when they differ across specific groups. This work can potentially spark more interest in future longitudinal designs to investigate whether differences in self-citation practices lead to differences in career outcomes, for example. We added the following text to clarify (page 17, line 565):

      “Yet, self-citation practices become problematic when they are different across groups or are used to “game the system.” Future work should investigate the downstream effects of self-citation differences to see whether they impact the career trajectories of certain groups. We hope that this work will help to raise awareness about factors influencing self-citation practices to better inform authors, editors, funding agencies, and institutions in Neurology, Neuroscience, and Psychiatry.”

      Some other challenges arise when taking a statistical perspective. For any given paper, we could browse through the references, and determine whether a particular reference would be warranted or not. For instance, we could note that there might be a reference included that is not at all relevant to the paper. Taking a broader perspective, the irrelevant reference might point to work by others, included just for reasons of prestige, so-called perfunctory citations. But it could of course also include self-citations. When we simply start counting all self-citations, we do not see what fraction of those self-citations would be warranted as references. The question then emerges, what level of self-citations should be counted as "high"? How should we determine that? If we observe differences in self-citation rates, what does it tell us?

      Our focus is on cases where self-citation practices differ across groups. We agree that, on a case-by-case basis, there is no exact number for a self-citation rate that is “high.” With a dataset of the current size, evaluating whether each individual self-citation is appropriate is not feasible. If we observe differences in self-citation rate, this may tell us about broad (not individual-level) trends and differences in self-citing practice. If one group self-cites much more than another, even after covarying relevant variables such as prior publication history, then the difference can likely be attributed to differences in self-citation practices and behaviors.

      For example, the authors find that the (any author) self-citation rate in Neuroscience is 10.7% versus 15.9% in Psychiatry. What does this difference mean? Are psychiatrists citing themselves more often than neuroscientists? First author men showed a self-citation rate of 5.12% versus a self-citation rate of 3.34% of women first authors. Do men engage in more problematic citation behaviour? Junior researchers (10-year career) show a self-citation rate of about 5% compared to a self-citation rate of about 10% for senior researchers (30-year career). Are senior researchers therefore engaging in more problematic citation behaviour? The answer is (most likely) "no", because senior authors have simply published more, and will therefore have more opportunities to refer to their own work. To be clear: the authors are aware of this, and also take this into account. In fact, these "raw" various self-citation rates may, as the authors themselves say, "give the illusion" of self-citation rates, but these are somehow "hidden" by, for instance, career seniority.

      We included numerous covariates in our model. In addition, to address the difference between “raw” and “modeled” self-citation rates, we added the following section (page 13, line 384):

      “2.10 Reconciling differences between raw data and models

      The raw and GAM-derived data exhibited some conflicting results, such as for gender and field of research. To further study covariates associated with this discrepancy, we modeled the publication history for each author (at the time of publication) in our dataset (Table 2). The model terms included academic age, article year, journal impact factor, field, LMIC status, gender, and document type. Notably, Neuroscience was associated with the fewest number of papers per author. This explains how authors in Neuroscience could have the lowest raw self-citation rates but the highest self-citation rates after including covariates in a model. In addition, being a man was associated with about 0.25 more papers. Thus, gender differences in self-citation likely emerged from differences in the number of papers, not in any self-citation practices.”

      Again, the authors do consider this, and "control" for career length and number of publications, et cetera, in their regression model. Some of the previous observations then change in the regression model. Neuroscience doesn't seem to be self-citing more, there just seem to be more junior researchers in that field compared to Psychiatry. Similarly, men and women don't seem to show an overall different self-citation behaviour (although the authors find an early-career difference), the men included in the study simply have longer careers and more publications.

      But here's the key issue: what does it then mean to "control" for some variables? This doesn't make any sense, except in the light of causality. That is, we should control for some variable, such as seniority, because we are interested in some causal effect. The field may not "cause" the observed differences in self-citation behaviour, this is mediated by seniority. Or is it confounded by seniority? Are the overall gender differences also mediated by seniority? How would the selection of high-impact journals "bias" estimates of causal effects on self-citation? Can we interpret the coefficients as causal effects of that variable on self-citations? If so, would we try to interpret this as total causal effects, or direct causal effects? If they do not represent causal effects, how should they be interpreted then? In particular, how should it "inform author, editors, funding agencies and institutions", as the authors say? What should they be informed about?

      We apologize for our misuse of causal language. We now state clearly, as in most previous self-citation papers, that our analysis is NOT causal. Causal datasets do have some benefits in citation research, but a limitation is that they may not cover as wide a range of authors. Furthermore, non-causal correlational studies can still be useful in informing authors, editors, funding agencies, and institutions. Association studies are widely used across various fields to draw non-causal conclusions. We made numerous changes to reduce our causal language.

      Before: “We then developed a probability model of self-citation that controls for numerous covariates, which allowed us to obtain significance estimates for each variable of interest.”

      After (page 3, line 113): “We then developed a probability model of self-citation that includes numerous covariates, which allowed us to obtain significance estimates for each variable of interest.”

      Before: “As such, controlling for various author- and article-level characteristics can improve the interpretability of self-citation rate trends.”

      After (page 11, line 321): “As such, covarying various author- and article-level characteristics can improve the interpretability of self-citation rate trends.”

      Before: “Initially, it appeared that self-citation rates in Neuroscience are lower than Neurology and Psychiatry, but after controlling for various confounds, the self-citation rates are higher in Neuroscience.”

      After (page 15, line 468): “Initially, it appeared that self-citation rates in Neuroscience are lower than Neurology and Psychiatry, but after considering several covariates, the self-citation rates are higher in Neuroscience.”

      We also added the following text to the limitations section (page 16, line 526):

      “Seventh, the analysis presented in this work is not causal. Association studies are advantageous for increasing sample size, but future work could investigate causality in curated datasets.”

      The authors also "encourage authors to explore their trends in self-citation rates". It is laudable to be self-critical and review ones own practices. But how should authors interpret their self-citation rate? How useful is it to know whether it is 5%, 10% or 15%? What would be the "reasonable" self-citation rate? How should we go about constructing such a benchmark rate? Again, this would necessitate some causal answer. Instead of looking at the self-citation rate, it would presumably be much more informative to simply ask authors to check whether references are appropriate and relevant to the topic at hand.

      We believe that our tool is valuable for authors to contextualize their own self-citation rates. For instance, if an author has published hundreds of articles, it is not practical to count the number of self-citations in each. We have added two portions of text to the limitations section:

      (page 16, line 524): “In addition, these models do not account for whether a specific citation is appropriate, though some situations may necessitate higher self-citation rates.”

      (page 16, line 535): “Despite these limitations, we found significant differences in self-citation rates for various groups, and thus we encourage authors to explore their trends in self-citation rates. Self-citation rates that are higher than average are not necessarily wrong, but suggest that authors should further reflect on their current self-citation practices.”
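      As a sketch of what such a per-paper check involves: a reference counts as a First (Last) Author self-citation when the citing paper's first (last) author appears in the cited work's author list. The code below is illustrative only, and the author IDs are hypothetical stand-ins for Scopus author IDs.

```python
import math

def self_citation_rate(citing_authors, reference_author_lists, position="first"):
    # Fraction of a paper's references that are self-citations for the
    # first or last author of the citing paper
    target = citing_authors[0] if position == "first" else citing_authors[-1]
    hits = sum(target in ref_authors for ref_authors in reference_author_lists)
    return hits / len(reference_author_lists)

paper_authors = ["id_001", "id_002", "id_003"]   # first, middle, last (hypothetical IDs)
references = [
    ["id_001", "id_090"],   # cites earlier work by the first author
    ["id_077"],
    ["id_003", "id_055"],   # cites earlier work by the last author
]
assert math.isclose(self_citation_rate(paper_authors, references, "first"), 1 / 3)
assert math.isclose(self_citation_rate(paper_authors, references, "last"), 1 / 3)
```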

      In conclusion, the study shows some interesting and relevant differences in self-citation rates. As such, it is a welcome contribution to ongoing discussions of (self) citations. However, without a clear causal framework, it is challenging to interpret the observed differences.

      We agree that causal studies provide many benefits. Yet, association studies also provide many benefits. For example, an association study allowed us to analyze a wider range of articles than a causal study would have.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      Statistical suggestions:

      (1) To improve statistical inference, nesting should be accounted for in all of the analyses. For example, the logistic regression model using citing/cited pairs should include random effects for article, author, and perhaps subfield, in order for independence of observations to be plausible. Similarly, bootstrapping and permutation would ideally occur at the author level rather than (or in addition to) the paper level.

      Detailed updates addressing these points are in the public review. In short, we found computational challenges with many levels of the random effects (>100,000) and millions of observations at the citation-pair level. As such, we decided to model citation rates and counts by paper. In this case, we found that results could be unstable when including author-level random effects because, in many cases, an author-level group contained only a single paper. Instead, to avoid inappropriately narrow confidence bands, we resampled the dataset such that each author was represented only once. For example, if Author A had five papers in this dataset, then one of their five papers was randomly selected. We repeated the random resampling 100 times (Figure S12). We updated our description of our models in the Methods section (page 21, line 754).
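      The author-level resampling can be sketched as follows (in Python, with hypothetical record fields): each repetition keeps one randomly chosen paper per author, so every author contributes a single observation to the fitted model.

```python
import random
from collections import defaultdict

def resample_one_per_author(papers, seed):
    # Keep one randomly chosen paper per author
    rng = random.Random(seed)
    by_author = defaultdict(list)
    for paper in papers:
        by_author[paper["author_id"]].append(paper)
    return [rng.choice(group) for group in by_author.values()]

# Hypothetical toy data: Author A has three papers, Author B has one
demo_papers = [
    {"author_id": "A", "paper": 1}, {"author_id": "A", "paper": 2},
    {"author_id": "A", "paper": 3}, {"author_id": "B", "paper": 4},
]
# 100 repetitions, as in the sensitivity analysis described above
samples = [resample_one_per_author(demo_papers, seed) for seed in range(100)]
assert all(len(s) == 2 for s in samples)
assert all({p["author_id"] for p in s} == {"A", "B"} for s in samples)
```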

      For permutation tests and bootstrapping, we now define an “exchangeability block” as a co-authorship group of authors. In this dataset, that meant any authors who published together (among the articles in this dataset) as a First Author / Last Author pairing were assigned to the same exchangeability block. It is not realistic to check for overlapping middle authors in all papers because of the collaborative nature of the field. In addition, we believe that self-citations are primarily controlled by first and last authors, so we can assume that middle authors do not control self-citation habits. We then performed bootstrapping and permutation tests within the constraints of the exchangeability blocks.
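      The exchangeability blocks can be built as connected components over First/Last author pairings. The sketch below (Python, illustrative only) uses union-find to form the co-authorship groups and then resamples whole blocks rather than individual papers.

```python
import random
from collections import defaultdict

class UnionFind:
    # Merges authors into connected co-authorship components
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def exchangeability_blocks(author_pairs):
    # author_pairs: (first_author_id, last_author_id) per paper; papers whose
    # First/Last authors are connected end up in the same block
    uf = UnionFind()
    for first, last in author_pairs:
        uf.union(first, last)
    blocks = defaultdict(list)
    for idx, (first, _last) in enumerate(author_pairs):
        blocks[uf.find(first)].append(idx)
    return list(blocks.values())

def block_bootstrap(author_pairs, rng):
    # Resample whole blocks with replacement rather than individual papers
    blocks = exchangeability_blocks(author_pairs)
    sample = []
    for _ in range(len(blocks)):
        sample.extend(rng.choice(blocks))
    return sample

# Papers 0 and 1 share author B, so they form one block; paper 2 is separate
pairs = [("A", "B"), ("B", "C"), ("D", "E")]
blocks = sorted(sorted(b) for b in exchangeability_blocks(pairs))
assert blocks == [[0, 1], [2]]
assert all(0 <= i < len(pairs) for i in block_bootstrap(pairs, random.Random(0)))
```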

      (2) In general, I am having trouble understanding the structure of the regression models. My current belief is that rows are composed of individual citations from papers' reference lists, with the outcome representing their status as a self-citation or not, and with various citing article and citing author characteristics as predictors. However, the fact that author type is included in the model as a predictor (rather than having a model for FA self-citations and another for LA self-citations) suggests to me that each citation is entered as two separate rows - once noting whether it was a FA self-citation and once noting whether it was an LA self-citation - and then it is run as a single model.

      (2a) If I am correct, the model is unlikely to be producing valid inference. I would recommend breaking this analysis up into two separate models, and including article-, author-, and subfield-level random effects. You could theoretically include a citation-level random effect and keep it as one model, but each 'group' would only have two observations and the model would be fairly unstable as a result.

      (2b) If I am misunderstanding (and even if not), I would encourage you to provide a more detailed description of the dataset structure and the model - perhaps with a table or diagram

      We split the data into two models and decided to model on the level of a paper (self-citation rate and self-citation count). In addition, we subsampled the dataset such that each author only appears once to avoid misestimation of confidence intervals (see point (1) above). As described in the public review, we included much more detail in our methods section now to improve the clarity of our models.

      (3) I would suggest removing the inverse hyperbolic sine transform and replacing it with a more flexible approach to estimating the relationships' shape, like generalized additive models or other spline-based methods to ensure that the chosen method is appropriate - or at the very least checking that it is producing a realistic fit that reflects the underlying shape of the relationships.

      More details are available in the public review, but we now use GAMs throughout the manuscript.
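      To illustrate the difference in flexibility, here is a minimal synthetic sketch using a scipy smoothing spline as a stand-in for a single GAM smooth term; the data and variable names are invented for illustration and this is not the manuscript's modeling code:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: a saturating relationship between
# academic age and self-citation count, plus noise.
age = np.sort(rng.uniform(1, 40, 300))
count = 5 * age / (age + 10) + rng.normal(0, 0.3, age.size)

# The approach being replaced: a fixed, log-like transform of the predictor.
asinh_age = np.arcsinh(age)

# One GAM building block: a smoothing spline whose shape is learned from the
# data (s is the smoothing budget, here roughly n times the noise variance).
spline = UnivariateSpline(age, count, s=age.size * 0.09)
fitted = spline(age)
```

      The asinh transform imposes one fixed curve shape, while the spline (like a GAM smooth) lets the data determine the shape, which is the substance of the reviewer's suggestion.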

      (4) For the "highly self-citing" analysis, it is unclear why papers in the 15-25% range were dropped rather than including them as their own category in an ordinal model. I might suggest doing the latter, or explaining the decision more fully.

      We previously included this analysis as a paper-level model because our main model was at the level of citation pairs. Now, we removed this analysis because we model self-citation rates and counts by paper.

      (5) It would be beneficial for the reader to know what % of the data was dropped for each analysis, and for your team to make sure that there is not differential missing data that could affect the interpretation of the results (e.g., differences in self-citation being due to differences in Scopus ID coverage).

      Thank you for this suggestion. We added more detailed missingness data to 4.3 Data exclusions and missingness. We did find differential missingness and added it to the limitations section. However, certain aspects of this cannot be corrected because the data are just not available (e.g., Scopus coverage issues). Further details are available in the public review.

      Conceptual thoughts:

      (1) I agree with your decision to focus on the second definition of self-citation (self-cites relative to my citations to others' work) rather than the first (self-cites relative to others' citations to my work). But it does seem that the first definition is relevant in the context of gaming citation metrics. For example, someone who writes one paper per year with a reference list of 30% self-citations will have much less of an impact on their H-index than someone who writes 10 papers per year with 10% self-citations. It could be interesting to see how these definitions interact, and whether people who are high on one measure tend to be high on the other.

      We agree this would be interesting to investigate in the future. Unfortunately, our dataset is organized at the level of the paper and thus does not contain information regarding how many times the authors cite a particular work. We hope that we can explore this interaction in the future.

      (2) This is entirely speculative, but I wonder whether the increasing rate of LA self-citation relative to FA self-citation is partly due to PIs over-citing their own lab to build up their trainees' citation records and help them succeed in an increasingly competitive job market. This sounds more innocuous than doing it to benefit their own reputation, but it would provide another mechanism through which students from large and well-funded labs get a leg-up in the job market. Might be interesting to explore, though I'm not exactly sure how :)

      This is a very interesting point. We do not have any means to investigate this with the current dataset, but we added it to the discussion (page 14, line 421):

      “A third, more optimistic explanation is that principal investigators (typically Last Authors) are increasingly self-citing their lab’s papers to build up their trainees’ citation records for an increasingly competitive job market.”

      Reviewer #2 (Recommendations For The Authors):

      (1) In regards to point 1 in the public review: In the spirit of transparency, the authors would benefit from providing a rationale for their choice of top journals, and the methodology used to identify them. It would also be valuable to include the impact factor of each journal in the S1 table alongside their names.

      Given the availability and executability of code, it would be useful to see how and if the self-citation trends vary amongst the "low impact" journals (as measured by the IF). This could go in any of the three directions:

      a. If it is found that self-citations are not as prevalent in low impact journals, this could be a great starting point for a conversation around the evaluation of journals based on impact factor, and the role of self-citations in it.

      b. If it is found that self-citations are as prevalent in low impact journals as high impact journals, that just strengthens your results further.

      c. If it is found that self-citations are more prevalent in low impact journals, this would mean your current statistics are a lower bound to the actual problem. This is also intuitive in the sense that high impact journals get more external citations (and more exposure) than low impact journals, as such authors (and journals) may be less likely to self-cite.

      Expanding the dataset to include many more journals was not feasible. Instead, we included an impact factor term in our models, as detailed in the public review. We found no strong trends in the association between impact factor and self-citation rate/count. Another important note is that these journals were considered “high impact” in 2020, but many had lower impact factors in earlier years. Thus, our modeling allows us to estimate how impact factor is related to self-citations across a wide range of impact factors.

      It is crucial to consider utilizing such a comprehensive database as Scopus, which provides a more thorough list of all journals in Neuroscience, to obtain a more representative sample. Alternatively, other datasets like Microsoft Academic Graph, and OpenAlex offer information on the field of science associated with each paper, enabling a more comprehensive analysis.

      We agree that certain datasets may offer a wider view of the entire field. However, we included a large number of papers and journals relative to previous studies. In addition, Scopus provides a lot of detailed and valuable author-level information. We had to limit our calls to the Scopus API, so we restricted the set of journals by 2020 impact factor.

      (2) In regards to point 2 in the public review: To enhance the accuracy and specificity of the analysis, it would be beneficial to distinguish neuroscientists among the co-authors. This could be accomplished by examining their publication history leading up to the time of publication of the paper, and identifying each author's level of engagement and specialization within the field of neuroscience.

      Since the field of neuroscience is largely based on collaborations, we find that it might be impossible to determine who is a neuroscientist. For example, a researcher with a publication history in physics may now be focusing on computational neuroscience research. As such, we feel that our current work, which ensures that the papers belong to neuroscience, is representative of what one may expect in terms of neuroscience research and collaboration.

      (3) In regards to point 3 in the public review: I highly recommend plotting self-citation rate as the number of papers in the reference list over the number of total publications to date of paper publication.

      As described in the public review, we have now done this (Figure S3).

      (4) In regards to point 5 in the public review: It would be useful to consider the "quality" of citations to further the discussion on self-citations. For instance, differentiating between self-citations that are perfunctory and superficial from those that are essential for showing developmental work, would be a valuable contribution.

      Other databases may have access to this information, but ours unfortunately does not. We agree that this is an interesting area of work.

      (5) The authors are to be commended for their logistic regression models, as they control for many confounders that were lacking in their earlier descriptive statistics. However, it would be beneficial to rerun the same analysis but on a linear model whereby the outcome variable would be the number of self-citations per author. This would possibly resolve many of the comments mentioned above.

      Thank you for your suggestion. As detailed in the public review, we now model the number of self-citations. This is modeled on the paper level, not the author level, because our dataset was downloaded by paper, not by author.

      Minor suggestions:

      (1) Abstract says one of your findings is: "increasing self-citation rates of First Authors relative to Last Authors". Your results actually show the opposite (see Figure 1b).

      Thank you for catching this error. We corrected it to match the results and discussion in the paper:

      “…increasing self-citation rates of Last Authors relative to First Authors.”

      (2) It might be interesting to compute an average academic age for each paper, and look at self-citation vs average academic age plot.

      We agree that this would be an interesting analysis. However, to limit calls to the API, we collected academic age data only on First and Last Authors.

      (3) It may be interesting to look at the distribution of women in different subfields within neuroscience, and the interaction of those in the context of self-citations.

      Thank you for this interesting suggestion. We added the following analysis (page 9, line 305):

      “Furthermore, we explored topic-by-gender interactions (Figure S10). In short, men and women were relatively equally represented as First Authors, but more men were Last Authors across all topics. Self-citation rates were higher for men across all topics.”

      Reviewer #3 (Recommendations For The Authors):

      - In the abstract, "flaws in citation practices" seems worded rather strongly.

      We respectfully disagree, as previous works have shown significant bias in citation practices. For example, Dworkin et al. (Dworkin et al. 2020) found that neuroscience reference lists tended to under-cite women, even after including various covariates.

      - Links of the references point to (non-accessible) Paperpile references; you would probably want to update these.

      We apologize for the inconvenience and have now removed these links.

      - p 2, l 24: The explanation of ref. (5) seems to be a bit strangely formulated. The point of that article is that citations to work that reinforce a particular belief are more likely to be cited, which *creates* unfounded authority. The unfounded authority itself is hence not part of the citation practices.

      Thank you for catching our misinterpretation. We have now removed this part of the sentence.

      - p 3, l 16: "h indices" or "citations" instead of "h-index".

      We now say “h-indices”.

      - p 5, l 5: how was the manual scoring done?

      We added the following to the caption of Figure S1.

      “Figure S1. Comparison between manual scoring of self-citation rates and self-citation rates estimated from Python scripts in 5 Psychiatry journals: American Journal of Psychiatry, Biological Psychiatry, JAMA Psychiatry, Lancet Psychiatry, and Molecular Psychiatry. 906 articles in total were manually evaluated (10 articles per journal per year from 2000-2020, four articles excluded for very large author list lengths and thus high difficulty of manual scoring). For manual scoring, we downloaded information about all references for a given article and searched for matching author names.”

      - p 5, l 23: Why this specific p-value upper bound of 4e-3? From later in the article, I understand that this stems from the 10000 bootstrap sample, with then taking a Bonferroni correction? Perhaps good to clarify this briefly somewhere.

      Thank you for this suggestion. We now perform Benjamini/Hochberg false discovery rate (FDR) correction, but we added a description of the minimum P value from permutations (page 21, line 748):

      “All P values described in the main text were corrected with the Benjamini/Hochberg (16) false discovery rate (FDR) correction. With 10,000 permutations, the lowest P value after applying FDR correction is P=2.9e-4, which indicates that the true point would be the most extreme in the simulated null distribution.”
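      A compact sketch of the two ingredients mentioned here, the permutation p-value floor and the Benjamini/Hochberg adjustment, implemented from their standard definitions (this is illustrative, not the authors' code; the post-correction floor quoted above additionally depends on how many tests are corrected together):

```python
import numpy as np

def bh_adjust(p):
    """Benjamini/Hochberg adjusted p-values (step-up procedure)."""
    p = np.asarray(p, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest p-value downward.
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0, 1)
    return out

# With n permutations, the smallest attainable permutation p-value is
# 1 / (n + 1), counting the observed statistic among the permutations.
n_perm = 10_000
p_floor = 1 / (n_perm + 1)
```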

      - Fig. 1, caption: The (a) and (b) labelling here is a bit confusing, because the first sentence suggests both figures portray the same, but do so for different time periods. Perhaps rewrite, so that (a) and (b) are both described in a single sentence, instead of having two different references to (a) and (b).

      Thank you for pointing this out. We fixed the labeling of this caption:

      “Figure 1. Visualizing recent self-citation rates and temporal trends. a) Kernel density estimate of the distribution of First Author, Last Author, and Any Author self-citation rates in the last five years. b) Average self-citation rates over every year since 2000, with 95% confidence intervals calculated by bootstrap resampling.”
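      The bootstrap confidence intervals mentioned in the caption can be sketched as follows, on synthetic data (the Beta-distributed rates and all parameters here are invented for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-paper self-citation rates for one year.
rates = rng.beta(2, 20, size=500)

# Percentile bootstrap: resample papers with replacement, recompute the mean,
# and take the 2.5th and 97.5th percentiles of the resampled means.
boot_means = np.array([
    rng.choice(rates, size=rates.size, replace=True).mean()
    for _ in range(2000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```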

      - p7, l 9: Regarding "academic age", note that there might be a difference between "age" effects and "cohort" effects. That is, there might be difference between people with a certain career age who started in 1990 and people with the same career age, but who started in 2000, which would be a "cohort" effect.

      We agree that this is a possible effect and have added it to the limitations (page 16, line 532):

      “Tenth, while we considered academic age, we did not consider cohort effects. Cohort effects would depend on the year in which the individual started their career.”

      - p 7, l 15: "jumps" suggests some sort of sudden or discontinuous transition, I would just say "increases".

      We now say “increases.”

      - Fig. 2: Perhaps it should be made more explicit that this includes only academics with at least 50 papers. Could the authors please clarify whether the same limitation of at least 50 papers also features in other parts of the analysis where academic age is used? This selection could affect the outcomes of the analysis, so its consequences should be carefully considered. One possibility for instance is that it selects people with a short career length who have been exceptionally productive, namely those that have had 50 papers, but only started publishing in 2015 or so. Such exceptionally productive people will feature more highly in the early career part, because they need to be so productive in order to make the cut. For people with a longer career, the 50 papers would be less of a hurdle, and so would select more and less productive people more equally.

      We apologize for the lack of clarity. We did not use this requirement where academic age was used. We mainly applied this requirement when aggregating by country, as we did not want to calculate self-citation rate in a country based on only several papers. We have clarified various data exclusions in our new section 4.3 Data exclusions and missingness.

      - p 8, l 11: The affiliated institution of an author is not static, but rather changes throughout time. Did the authors consider this? If not, please clarify that this refers to only the most recent affiliation (presumably). Authors also often have multiple affiliations. How did the authors deal with this?

      The institution information is at the time of publication for each paper. We added more detail to our description of this on page 19, line 656:

      “For both First and Last Authors, we found the country of their institutional affiliation listed on the publication. In the case of multiple affiliations, the first one listed in Scopus was used.”

      - p 10, l 6: How were these self-citation rates calculated? This is averaged per author (i.e. only considering papers assigned to a particular topic) and then averaged across authors? (Note that in this way, the average of an author with many papers will weigh equally with the average of an author with few papers, which might skew some of the results).

      We calculate it across the entire topic (i.e., do NOT calculate by author first). We updated the description as follows (page 7, line 211):

      “We then computed self-citation rates for each of these topics (Figure 4) as the total number of self-citations in each topic divided by the total number of references in each topic…”
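      The distinction between the pooled rate described here and a per-paper average can be made concrete with a toy example (the counts are invented for illustration):

```python
# Illustrative per-paper counts within one topic: (self_citations, references).
papers = [(2, 40), (0, 25), (10, 60), (1, 35)]

# Pooled topic rate, as described: total self-citations over total references.
pooled = sum(s for s, _ in papers) / sum(r for _, r in papers)

# The per-paper average weights short and long reference lists equally and
# generally gives a different number.
per_paper = sum(s / r for s, r in papers) / len(papers)
```

      Here the pooled rate is 13/160 = 0.081, while the per-paper average is about 0.061, which is the skew the reviewer was asking about.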

      - p 13, l 18: Is the academic age analysis here again limited to authors having at least 50 papers?

      This is not limited to at least 50 papers. To clarify, the previous analysis was not limited to authors with 50 papers either. It was instead limited to ages in our dataset that had at least 50 data points. For example, if an academic age of 70 had only 20 data points in our dataset, it would have been excluded.

      - Fig. 5: Here, comparing Fig. 5(d) and 5(f) suggests that partly, the self-citation rate differences between men and women, might be the result of the differences in number of papers. That is, the somewhat higher self-citation rate at a given academic age, might be the result of the higher number of papers at that academic age. It seems that this is not directly described in this part of the analysis (although this seems to be the case from the later regression analysis).

      We agree with this idea and have added a new section as follows (page 13, line 384):

      “2.10 Reconciling differences between raw data and models

      The raw and GAM-derived data exhibited some conflicting results, such as for gender and field of research. To further study covariates associated with this discrepancy, we modeled the publication history for each author (at the time of publication) in our dataset (Table 2). The model terms included academic age, article year, journal impact factor, field, LMIC status, gender, and document type. Notably, Neuroscience was associated with the fewest papers per author. This explains how authors in Neuroscience could have the lowest raw self-citation rates but the highest self-citation rates after including covariates in a model. In addition, being a man was associated with about 0.25 more papers. Thus, gender differences in self-citation likely emerged from differences in the number of papers, not in any self-citation practices.”

      - Section 2.10. Perhaps the authors could clarify that this analysis takes individual articles as the unit of analysis, not citations.

      We updated all our models to take individual articles and have clarified this with more detailed tables.

      - p 18, l 10: "Articles with between 15-25% self-citation rates were discarded" Why?

      We agree that these should not be discarded. However, we previously included this analysis as a paper-level model because our main model was at the level of citation pairs. Now, we removed this analysis because we model self-citation rates and counts by paper.

      - p 20, l 5: "Thus, early-career researchers may be less incentivized to self-promote (e.g., self-cite) for academic gains compared to 20 years ago." How about the possibility that there was less collaboration, so that first authors would be more likely to cite their own paper, whereas with more collaboration, they will more often not feature as first author?

      This is an interesting point. We feel that more collaboration would generally lead to even more self-citations, if anything. If an author collaborates more, they are more likely to be on some of the references as a middle author (which by our definition counts toward self-citation rates).

      - p 20, l 15: Here the authors call authors to avoid excessive self-citations. Of course, there's nothing wrong with calling for that, but earlier the authors were more careful to not label something directly as excessive self-citations. Here, by stating it like this, the authors suggest that they have looked at excessive self-citations.

      We rephrased this as follows:

      Before: “For example, an author with 30 years of experience cites themselves approximately twice as much as one with 10 years of experience on average. Both authors have plenty of works that they can cite, and likely only a few are necessary. As such, we encourage authors to be cognizant of their citations and to avoid excessive self-citations.”

      After: “For example, an author with 30 years of experience cites themselves approximately twice as much as one with 10 years of experience on average. Both authors have plenty of works that they can cite, and likely only a few are necessary. As such, we encourage authors to be cognizant of their citations and to avoid unnecessary self-citations.”

      - p 22, l 11: Here again, the same critique as p 20, l15 applies.

      We switched “excessively” to “unnecessarily.”

      - p 23, l 12: The authors here critique ref. (21) of ascertainment bias, namely that they are "including only highly-achieving researchers in the life sciences". But do the authors not do exactly the same thing? That is, they also only focus on the top high-impact journals.

      We included 63 high-impact journals with tens of thousands of authors. In addition, some of these journals were not high-impact at the time of publication. For example, Acta Neuropathologica had an impact factor of 17.09 in 2020 but 2.45 in 2000. This still is a limitation of our work, but we do cover a much broader range of works than the listed reference (though their analysis also has many benefits since it included more detailed information).

      - p 26, l 22-26: It seems that the matching is done quite broadly (matching last names + initials at worst) for self-citations, while later (in section 4.9, p 31, l 9), the authors switch to only matching exact Scopus Author IDs. Why not use the same approach throughout? Or compare the two definitions (narrow / broad).

      Thank you for catching this mistake. We now use the approach of matching Scopus Author IDs throughout.

      - S8: it might be nice to explore open alternatives, such as OpenAlex or OpenAIRE, instead of the closed Scopus database, which requires paid access (which not all institutions have, perhaps that could also be corrected in the description in GitHub).

      Thank you for this suggestion. Unfortunately, switching databases would require starting our analysis from the beginning. On our GitHub page, we state: “Please email matthew.rosenblatt@yale.edu if you have trouble running this or do not have institutional access. We can help you run the code and/or run it for you and share your self-citation trends.” We feel that this will allow us to help researchers who may not have institutional access. In addition, we released our aggregated, de-identified (title and paper information removed) data on GitHub for other researchers to use.

    1. 34. "Fun." Last, but not least - games must be fun. No amount of emotional depth will save a game that is boring.

      We talked a lot about this last class, but I'd love to know what he meant by this.

    2. 26. "Connective Tissue" Techniques. These create a feeling of connectivity across the various locales in a game, even if they are distant in terms of space and time.

      I talked about this in relation to how Undertale uses music as part of its storytelling in a class I took last spring!

    1. The track with the next closest number of comments was a freestyle in which Jay-Z allegedly impugned Game. It was released a week later and garnered 589 comments and over 79,000 listens over the same period. In fact, four out of five of the longest message board threads (as of March 20, 2005) were posted on beef tracks. Much like the radio call-in forum discussed above, the postings on these message boards manifest one form of popular participation in beef; additionally, they reveal how meaningful public participation in beef discourse is enmeshed in the network of media broadcast and consumption.

      This shows online platforms' role in shaping the discourse around hip-hop beef. By looking at the engagement data from sites like hiphopgame.com, it becomes clear that public interest in beef is a substantial and structural part of hip-hop culture. The overwhelming response to tracks like 50 Cent’s “Piggybank” highlights how fans actively participate in these dialogues, transforming music into a communal experience. This interplay between media consumption and public commentary underscores the dynamic nature of hip-hop as both an art form and a site of cultural discussion.

    2. Norman Kelly analyzes the rap music industry as an extension of colonial economic structures that exploit African Americans. According to Kelly, the white-owned music industry has agency over the content of hip hop because they control the apparatus of distribution and the means of production. Since blacks failed to develop a viable alternative to corporate music production, when hip hop became commercialized, black artists lost their creative control over hip hop to the marketplace

      I never really thought about the music industry as a strategy for economic dominance and power. After reading this part, I realize that it has become a chain of reactions. Black artists create their art but don't have the resources needed to produce their music, forcing them to rely on the individuals who can produce it and provide connections. The exploitation of artists of color truly has become just another white man's game, and it's interesting to see how art has become a business deal.

    1. Video summary [00:00:12][^1^][1] - [00:26:56][^2^][2]:

      This webinar presents the "Enseigner avec le jeu" ("Teaching with Games") course offered by the UNICAMP collective. Jérôme Le Gris Pages of the Université de Caen Normandie explains the objectives, content, and format of the course.

      Highlights:
      + [00:02:00][^3^][3] Presentation of the course
        * Training offered by UNICAMP
        * Recognized micro-certification
        * 12 hours over 3 weeks
      + [00:03:18][^4^][4] Introduction of Jérôme Le Gris Pages
        * Historian of ideas
        * Vice-president of the Université de Caen Normandie
        * Expert in game-based pedagogy
      + [00:07:02][^5^][5] Target audience
        * Teachers and trainers
        * Pedagogical advisors
        * Game designers and developers
      + [00:09:00][^6^][6] Course objectives
        * Pedagogical scenario design
        * Facilitating game-based learning sessions
        * Evaluating the effectiveness of games
      + [00:17:01][^7^][7] Methods and tools
        * Gamification of the course
        * Forum-based role-playing
        * Peer-learning workshops and motivational tutoring

      Video summary [00:27:00][^1^][1] - [00:44:11][^2^][2]:

      This video presents a webinar on the Université de Caen's "Enseigner avec le jeu" ("Teaching with Games") course. It covers the game formats used, the synchronous and asynchronous course modes, and participants' questions about registration, certification, and adapting the course to different audiences.

      Highlights:
      + [00:27:12][^3^][3] Game formats covered
        * Video games and analog games
        * Role-playing and board games
        * Competitive and non-competitive games
      + [00:28:01][^4^][4] Synchronous and asynchronous modes
        * Asynchronous hours for independent work
        * Synchronous hours for live interaction
        * Importance of interactivity in synchronous sessions
      + [00:30:29][^5^][5] Registration and funding
        * Registration via the dedicated page
        * Professional training contract
        * Possible reimbursement by the employer
      + [00:33:00][^6^][6] Certification and assessment
        * University micro-certification
        * Participation and attendance requirements
        * Assessment via a final quiz
      + [00:35:01][^7^][7] Adapting to different audiences
        * Preschool teachers
        * Supporting seniors
        * Using games for facilitation and consultation

    1. Growing up, she remembers her father watching wrestling on TV. “Because he didn’t have a son, he used to make us, his five daughters, bout with each other at home. I am the oldest of them and the best at the game.”

      Including a personal anecdote here is an excellent choice, as it evokes the reader's emotions. By learning more about her background and circumstances, it becomes clear why she is passionate about pursuing this career.

    1. Learning through Games

      The final and most prescriptive type of play-based learning is learning through games. This type of play was implemented in all nine classrooms to promote the development of discrete math and language skills. In this manner, teachers sought to make the learning of these mandated academic standards more engaging for the students: “There are always some academic things that we have to deal with too. We do try to make it engaging” (Participant 3). One of the ways in which these teachers promoted this engagement was through the playing of games. “Because then it is a game. We are not learning. We are playing” (Participant 7). In these play episodes the teacher directed the outcomes and prescribed the process while the children followed the rules of the games. For instance, the students in Class 5 played Words With Friends, a game that involved using letter tiles to spell words and names on a game board. Students in Classes 6 and 9 played word and letter bingo, whereas in Classes 12 and 14 the students went fishing for letters with their magnetic fishing poles. Math games were also prevalent in several of the classrooms. Class 2 played Go Fish with number cards, whereas in Class 12 students used Play-Doh to make the assigned number of worms and place them in the scene on themed placemats.

      In these nine classrooms, multiple types of play were integrated. These variations of play provided the opportunity for both play and play-based learning. The play episodes were child directed and free from the confines of academic standards. The play-based learning activities involved varying levels of teacher involvement and integrated varying degrees of academic learning.

      Found to be the most structured approach for developing discrete math and language skills, but in a more hands-on, engaging way.

    1. Alternating game play with novelistic components, interactive fictions expand the repertoire of the literary through a variety of techniques, including visual displays, graphics, animations, and clever modifications of traditional literary devices.

      "Visual Novels" are a type of EL and are popular in Japan. They are usually like games, where you can see pictures and the text of the story underneath them. The player can interact with the text by clicking to continue or making choices about the characters' actions in the plot, which changes it.

    1. Instead of speculating on their history, the ubiquity of mancala games is shown to be particularly useful to dispel a long-standing belief that so-called complex societies are more likely to play strategic games.

      This ubiquity shows that even simpler societies engage with games involving strategy, breaking the stereotype of game complexity being tied to societal development

    2. games and gaming have often been used as agents and arenas of restriction of the non-privileged.

      Luxury game sets and exclusive access reinforced social hierarchies, restricting lower classes from full participation in gaming, thereby maintaining societal divisions

    1. In other words, the move from tax equality to tax breaks for the married cannot be Pareto-optimal: the benefit for the married can be achieved only at the expense of the unmarried.116

      another game theory here

    2. The symbolic pressure on women to marry, and the idea that they are worthless if unmarried, means that if marriage exists women are better off married than unmarried.

      This is like game theory, and the foot-binding example from PPE gateway.

    1. That’s how you can consistently exist in the current finite game, and leave yourself open to the surprises (and the possibility of being surprising) in games that don’t yet exist that you don’t know you’re already playing. And that’s how you continue playing.

      This reminds me of Leto II Atreides allowing for surprises within his eugenics program; Paul Atreides dies because he tried to use the power of the spice to articulate an exact future rather than getting hints and adapting to them as they came along.

      In reality it is better to have guide rails on a bowling alley rather than a machine that can get strikes for you. Cause when you go play Croquet or lawn bowling or Batchi ball that machine ain't going to help you but you still got some skill from Bowling and still got a high score from playing bowling.

    2. One way to remember this is to treat the infinite game of evolutionary success as a sort of Zeno’s paradox turned around. You never reach the finish line because when you’re mediocre, you only take a step that’s halfway to the finish, so there’s always more room left to continue the game.
    3. In Douglas Hofstadter’s Metamagical Themas, there is a description of a game (I forget the details) where the goal is not to get the top score, but the average score. The subtlety is that after playing multiple rounds, the overall winner is not the one with the highest total score, but the most average total score. So to illustrate, if Alice, Bob, and Charlie are playing such a game, their scores in a series of 6 games might be:
       Alice: 7 5 3 5 6 2
       Bob: 5 8 2 1 9 7
       Charlie: 3 1 5 4 5 5
       We have the following outcome. Bob wins game 1, Alice wins game 2, Bob wins game 3, Charlie wins games 4, 5, and 6. So Alice gets 1 point, Bob gets 2 points, and Charlie gets 3 points. The overall winner is Bob, not Charlie. Charlie is the most mediocre, but Bob is mediocre at being mediocre. His prize is (perhaps) the highest probability of continuing the game.
    4. There are instances of programs respecting the rules of the game while blatantly violating its spirit.

      Usually most of the work of a task is defining and understanding it rather than actually doing it

    5. For instance, there are instances of programs figuring out how to use tiny rounding errors in simulated game environments to violate the simulated law of conservation of energy, and milking the simulation itself for a winning strategy. Like the characters in The Matrix bend the laws of physics when inside.

      I wonder when AI will start speed running video games?

    6. Every principal-agent game is of this sort. Every sort of moral hazard is marked by the ability of one side to pursue mediocrity rather than excellence. In each case, there is an information asymmetry powering the mediocrity.

      This is the cure to being a perfectionist

    7. This kind of indifference-driven mediocrity is the hallmark of games where one side is playing a finite game and the other side is playing an infinite game that isn’t necessarily evil in the Carse sense of wanting to end the game for the other, but isn’t striving for excellence either.

      Can you compare this to WoW (World of Warcraft) proper and the PvP and PvE parts of the game? The whole is an infinite game, but there are finite games within it that people play.

    8. This is just a different way of playing a finite game. Instead of optimizing (playing to win), you minimize effort to stay in the specific finite game. If you can perform consistently without disqualifying errors, you are satisficing. Most automation and quality control is devoted to raising the floor of this kind of performance.

      This phrasing reminds me of "War Games", the only way to win is not to play

    9. Mediocrity is the functionally embodied and situated form of what Sarah Perry called deep laziness. To be mediocre at something is to be less than excellent at it in order to conserve energy for the indefinitely long haul. Mediocrity is the ethos of perpetual beta at work in a domain you’re not sure what the “product” is even for. Functionally unfixed self-perpetuation.

      Want to talk about being mediocre? Check out Sam Larson, who understood that to win the game show Alone you just need to get really, really fat and then sit around not wasting calories trying to get more calories in an environment where that can't really be done.

      • To draw a card -> to pick up another paper from the pile
      • To move a game piece -> to rotate opportunities to play
      • To take turns -> to advance a token
      • To read instructions -> to learn from the written directions
      • To get points -> to obtain a higher score number
    1. 1- Do you like to play games? Why or why not? I love playing all kinds of games, whether they are board games or video games. Sometimes I take games a bit seriously because I'm a bit competitive, but they usually give me a sense of calm, focus, and fun all at the same time.

      2- What kind of games do you like to play? Right now, I am a fan of StarCraft 2 and RTS games in general, but lately I've become a fan of Dota 2 and I think I'm going to give it a try. Also, when I get together with my friends, we play card games like Uno.

      3- I always have a mate to play duos with, and he's the same mate I play card games with; his name is Bruno. But once a week we play with more than 5 friends.

    1. players can literally write the rules and behavior of decentralized applications, and therefore, any Smart Assembly created in the game

      It seems that the protocol of a smart object is given through the Solidity code.

      Protocol code as contract code.

    1. The final season of Game of Thrones resulted in a petition of more than a million signatures for HBO to remake it. Ridiculous? Yes. But maybe that was the point

      I heard they did this because the last couple of seasons were lacking. I watch a lot of anime, and this is so common inside that community.

    2. The last decade or so has witnessed huge changes in the awareness, perception and tools of fandom. In terms of television and film, the enormous successes of Game of Thrones and the Marvel Cinematic Universe have introduced geek culture – and its brand of participatory fandom – to the mainstream. At the same time, the internet – and more specifically social media – has amplified fans’ voices, while also breaking down the boundaries between them and the artists they love/hate.

      Due to the increasing ways that fans can interact with artists, I feel that shows and movies have grown, because now the artists know exactly what the people want and can give it to them.

    3. And to the 1920s, where fan groups would write thousands of letters to movie studios demanding their favourite actor be given better roles. “It was the same thing,” he says, “as Sonic the Hedgehog having weird teeth and people going, ‘No, that’s not the game I played as a kid, you need to fix it or I am not giving you any money.’”

      I feel this is a great way to look at it: if the fans have to pay money to watch it, then their having some input into how the characters are portrayed isn't necessarily a bad thing.

    1. "People were marching around the building with placards denouncing Campbell. They were shouting and their mood was very ugly. When I got to my seat near the ice, the game had already started and the Canadiens couldn't seem to get untracked, Then Campbell made his grand entrance. I looked up and I could see some fans beginning to menace him. On one hand I felt pleased because I hated him for what he had done to me and on the other hand I didn't want to see harm come to him. Then a tear-gas bomb went off and the arena was getting filled with smoke."

      Retrieved from: https://www.nhl.com/news/voices-from-the-past-maurice-rocket-richard

    1. On March 13, 1955, in a game in Boston, Richard got into a fight with Hal Laycoe after he was high-sticked in the head. Richard needed five stitches to close the cut on his forehead. When the whistle was blown to end the play, Richard skated up to Laycoe and hit him in the face with his stick. A linesman attempted to restrain Richard who repeatedly tried to attack Laycoe. Richard eventually broke the stick over the body of Laycoe. Linesman Cliff Thompson attempted to contain Richard and Richard punched him twice in the face, knocking him out. Richard was given a match penalty and an automatic $100 fine. In the dressing room after the game, Boston police attempted to arrest Richard but were blocked from getting into the dressing room by Canadiens players. Eventually, the Bruins convinced the officers to let the Canadiens leave on condition that the NHL would take care of the issue.

      Retrieved from: https://hyp.is/go?url=https%3A%2F%2Fcanadaehx.com%2F2022%2F04%2F23%2Fthe-richard-riot%2F&group=world

    1. But how is the author supposed to accommodate them? What if the audience runs away with the story? And how do we handle the blur — not just between fiction and fact, but between author and audience, entertainment and advertising, story and game? A lot of smart people — in film, in television, in videogames, in advertising, in technology, even in neuroscience — are trying to sort these questions out. The Art of Immersion is their story.

      This passage discusses the challenges of modern storytelling: how difficult it can become once the audience interacts with the story, since boundaries are blurring and narratives keep changing.

    1. Welcome back, this is part two of this lesson. We're going to continue immediately from the end of part one. So let's get started.

      Now that you know the structure of a segment, let's take a look at how it's used within TCP.

      Let's take a few minutes to look at the architecture of TCP.

      TCP, like IP, is used to allow communications between two devices.

      Let's assume a laptop and a game server.

      TCP is connection-based, so it provides a connection architecture between two devices.

      And let's refer to these as the client and the server.

      Once established, the connection provides what's seen as a reliable communication channel between the client and the server, which is used to exchange data.

      Now let's step through how this actually works, now that you understand TCP segments.

      The actual communication between client and server, this will still use packets at layer three.

      We know now that these are isolated.

      They don't really provide error checking, any ordering, and they're isolated, so there's no association between each other.

      There's no connection as such.

      Because they can be received out of order, and because there are no ports, you can't use them in a situation where there will be multiple applications or multiple clients, because the server has no way of separating what relates to what.

      But now we have layer four, so we can create segments.

      Layer four takes data provided to it and chops that data up into segments, and these segments are encapsulated into IP packets.

      These segments contain a sequence number, which means that the order of segments can be maintained.

      If packets arrive out of order, that's okay, because the segments can be reordered.

      If a packet is damaged or lost in transit, that's okay, because even though that segment will be lost, it can be retransmitted, and segments will just carry on.

      TCP gives you this guaranteed reliable ordered set of segments, and this means that layer four can build on this platform of reliable ordered segments between two devices.
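      The reordering idea can be sketched in a few lines. This is a toy model (not real TCP, and the segment tuples are invented for illustration): the receiver sorts arriving segments by sequence number before handing the reassembled byte stream up.

```python
def reassemble(segments):
    """Toy model of TCP reassembly: sort received segments by their
    sequence number, then join the payloads back into the original data."""
    return b"".join(payload for _, payload in sorted(segments))

# Packets can arrive out of order at layer 3, but the sequence numbers
# carried in each segment let layer 4 restore the original byte stream.
received = [(3, b" world"), (1, b"hel"), (2, b"lo")]
print(reassemble(received))  # b'hello world'
```

      Retransmission fits the same model: a missing sequence number tells the receiver exactly which segment to ask for again.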

      It means that you can create a connection between a client and the server.

      In this example, let's assume segments are being exchanged between the client and the game server.

      The game communicates to a TCP port 443 on the server.

      Now, this might look like this architecturally, so we have a connection from a random port on the client to a well-known port, so 443 on the game server.

      So between these two ports, segments are exchanged.

      When the client communicates to the server, the source port is 23060, and the destination port is 443.

      This architecturally is now a communication channel.

      TCP connections are bi-directional, and this means that the server will send data back to the client, and to do this, it just flips the ports which are in use.

      So then the source port becomes TCP443 on the server, and the destination port on the client is 23060.

      And again, conceptually, you can view this as a channel.

      Now, these two channels you can think of as a single connection between the client and the server.

      Now, these channels technically aren't real, they're created using segments, so they build upon the concept of this reliable ordered delivery that segments provide, and give you this concept at a stream or a channel between these two devices over which data can be exchanged, but understand that this is really just a collection of segments.

      Now, when you communicate with the game server in this example, you use a destination port of 443, and this is known as a well-known port.

      It's the port that the server is running on.

      Now, as part of creating the connection, you also create a port on your local machine, which is temporary, this is known as the ephemeral port.

      This tends to use a higher port range, and it's temporary.

      It's used as a source port for any segments that you send from the client to the server.

      When the server responds, it uses the well-known port number as the source, and the ephemeral port as the destination.

      It reverses the source and destination for any responses.

      Now, this is important to understand, because from a layer 4 perspective, you'll have two sets of segments, one with a source port of 23060 and a destination of 443, and ones which are the reverse, so a source port of 443, and a destination of 23060.

      From a layer 4 perspective, these are different, and it's why you need two sets of rules on a network ACL within AWS.

      One set for the initiating part, so the laptop to the server, and another set for the response part, the server to the laptop.

      When you hear the term ephemeral ports or high ports, this means the port range that the client picks as the source port.

      Often, you'll need to add firewall rules, allowing all of this range back to the client.
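      You can see the ephemeral/well-known split with an ordinary socket. This is a loopback sketch: the actual port numbers are whatever the OS assigns, not the 23060/443 of the example; the client only names the destination port, and the OS picks a temporary high-numbered source port for it.

```python
import socket

def port_demo():
    # A listener standing in for the server; binding to port 0 asks
    # the OS to pick a free port for us.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    well_known = server.getsockname()[1]   # the "well-known" port of this demo

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", well_known))
    ephemeral = client.getsockname()[1]    # source port chosen by the OS

    client.close()
    server.close()
    return ephemeral, well_known

eph, wk = port_demo()
print(f"ephemeral source port {eph} -> well-known destination port {wk}")
```

      Notice the client never chose its own source port; that automatic choice is exactly the ephemeral port the firewall rules have to allow back in.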

      Now, earlier, when I was stepping through TCP segment structure, I mentioned the flags field.

      Now, this field contains, as the name suggests, some actual flags, and these are things which can be set to influence the connection.

      So, FIN will finish a connection, ACK is an acknowledgement, and SYN is used at the start of connections to synchronize sequence numbers.

      With TCP, everything is based on connections.

      You can't send data without first creating a connection.

      Both sides need to agree on some starting parameters, and this is best illustrated visually.

      So, that's what we're going to do.

      So, the start of this process is that we have a client and a server.

      And as I mentioned a moment ago, before any data can be transferred using TCP, a connection needs to be established, and this uses a three-way handshake.

      So, step one is that a client needs to send a segment to the server.

      So, this segment contains a random sequence number from the client to the server.

      So, this is unique in this direction of travel for segments.

      And this sequence number is initially set to a random value known as the ISN or initial sequence number.

      So, you can think of this as the client saying to the server, "Hey, let's talk," and setting this initial sequence number.

      So, the server receives the segment, and it needs to respond.

      So, what it does is it also picks its own random sequence number.

      We're going to refer to this as SS, and it picks this randomly, just as the client did.

      Now, what it wants to do is acknowledge that it's received all of the communications from the client.

      So, it takes the client sequence number, received in the previous segment, and it adds one.

      And it sets the acknowledgement part of the segment that it's going to send to the CS plus one value.

      What this is essentially doing is informing the client that it's received all of the previous transmission, so CS, and it wants it to send the next part of the data, so CS plus one.

      So, it's sending this segment back to the client.

      It's picking its own server sequence, so SS, and it's incrementing the client sequence by one, and it sends this back to the client.

      So, in essence, this is responding with, "Sure, let's talk."

      So, this type of segment is known as a SYN-ACK.

      It's used to synchronize sequence numbers, but also to acknowledge the receipt of the client sequence number.

      So, where the first segment was called a SYN, to synchronize sequence numbers, this next segment is called a SYN-ACK.

      It serves two purposes.

      It's also used to synchronize sequence numbers, but also to acknowledge the segment from the client.

      The client receives the segment from the server.

      It knows the server sequence, and so, to acknowledge to the server that it's received all of that information, it takes the server sequence, so SS, and it adds one to it, and it puts this value as the acknowledgement.

      Then it also increments its own client sequence value by one, and puts that as the sequence, and then sends an acknowledgement segment, containing all this information through to the server.

      Essentially, it's saying, "Awesome, let's go."

      At this point, both the client and server agree on the sequence values.

      The client has acknowledged the initial sequence value decided by the server, and the server has acknowledged the initial value decided by the client.

      So, both of them are synchronized, and at this point, data can flow over this connection between the client and the server.

      Now, from this point on, any time either side sends data, they increment the sequence, and the other side acknowledges the sequence value plus one, and this allows for retransmission when data is lost.

      So, this is a process that you need to be comfortable with, so just make sure that you understand every step of this process.
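      The sequence/acknowledgement arithmetic of those three steps can be sketched as plain data. This is a toy model (dictionaries standing in for real segments); CS and SS are the random initial sequence numbers from the walkthrough.

```python
import random

def three_way_handshake():
    cs = random.randrange(2**32)   # client ISN ("CS")
    ss = random.randrange(2**32)   # server ISN ("SS")

    # Step 1: client -> server, "hey, let's talk"
    syn     = {"flags": "SYN",     "seq": cs}
    # Step 2: server -> client, "sure, let's talk" - acknowledges CS
    syn_ack = {"flags": "SYN-ACK", "seq": ss, "ack": syn["seq"] + 1}
    # Step 3: client -> server - acknowledges SS; connection established
    ack     = {"flags": "ACK",     "seq": cs + 1, "ack": syn_ack["seq"] + 1}
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake()
assert syn_ack["ack"] == syn["seq"] + 1   # server acknowledged the client ISN
assert ack["ack"] == syn_ack["seq"] + 1   # client acknowledged the server ISN
```

      The same "acknowledge seq + 1" rule keeps running after the handshake, which is what makes lost-segment retransmission detectable.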

      Okay, so let's move on, and another concept which I want to cover is sessions, and the state of sessions.

      Now, you've seen this architecture before, a client communicating with the game server.

      The game server is running on a well-known port, so TCP 443, and the client is using an ephemeral port 23060 to connect with port 443 on the game server.

      So, response traffic will come up from the game server, its source port will be 443, and it will be connecting to the client on destination port 23060.

      Now, imagine that you want to add security to the laptop, let's say using a firewall.

      The question is, what rules would you add?

      What types of traffic would you allow from where and to where in order that this connection will function without any issues?

      Now, I'm going to be covering firewalls in more detail in a separate video.

      For now though, let's keep this high level.

      Now, there are two types of capability levels that you'll encounter from a security perspective.

      One of them is called a stateless firewall.

      With a stateless firewall, it doesn't understand the state of a connection.

      So, when you're looking at a layer 4 connection, you've got the initiating traffic, and you've got the response traffic.

      So, you've got the initiating traffic at the bottom, and the response traffic at the top.

      With a stateless firewall, you need two rules.

      A rule allowing the outbound segments, and another rule which allows the response segments coming in the reverse direction.

      So, that means allowing the outbound connection from the laptop's IP, using port 23060, to the server's IP on port 443.

      So, that's the outgoing part.

      And then the inbound response coming from the server's IP on port 443, going to the laptop's IP on the ephemeral port 23060.

      So, the stateless firewall, this is two rules, one outbound rule and one inbound rule.

      So, this is a situation where we're securing an outbound connection.

      So, where the laptop is connecting to the server.

      If we were looking to secure, say, a web server, where connections would be made into our server, then the initial traffic would be inbound, and the response would be outbound.

      There's always initiating traffic, and then the response traffic.

      And you have to understand the directionality to understand what rules you need with a stateless firewall.

      So, that's a stateless firewall.

      And if you have any AWS experience, that's what a network access control list is.

      It's a stateless firewall which needs two rules for each TCP connection, one in both directions.

      Now, a stateful firewall is different.

      This understands the state of the TCP connection.

      So, with this, it sees the initial traffic and the response traffic as one thing.

      So, if you allow the initiating connection, then you automatically allow the response.

      So, in this case, if we allowed the initial outbound connection from the client laptop to the server, then the response traffic, the inbound traffic, would be automatically allowed.

      In AWS, this is how a security group works.

      The difference is that a stateful firewall understands the state of the traffic.

      It's an extension of what a stateless firewall can achieve.
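      A hypothetical sketch of that difference, using a made-up rule format with the example's 23060/443 ports and invented IP addresses: the stateless firewall must be told about both directions, while the stateful one derives the response rule from the request.

```python
# Each rule/segment is (src_ip, src_port, dst_ip, dst_port).
# Addresses are illustrative stand-ins for the laptop and the game server.
REQUEST  = ("10.0.0.5", 23060, "203.0.113.9", 443)   # laptop -> server
RESPONSE = ("203.0.113.9", 443, "10.0.0.5", 23060)   # server -> laptop

def stateless_allows(segment, rules):
    # No notion of state: a segment passes only if some rule matches it.
    return segment in rules

def stateful_allows(segment, allowed_requests):
    # Allowing the request implicitly allows the flipped-around response.
    responses = {(d_ip, d_p, s_ip, s_p)
                 for s_ip, s_p, d_ip, d_p in allowed_requests}
    return segment in allowed_requests or segment in responses

# Stateless: the response is dropped unless a second, inbound rule is added.
print(stateless_allows(RESPONSE, [REQUEST]))            # False
print(stateless_allows(RESPONSE, [REQUEST, RESPONSE]))  # True
# Stateful: one rule covers the whole connection.
print(stateful_allows(RESPONSE, {REQUEST}))             # True
```

      That "flip src and dst" step is the whole trick: it is how a security group in AWS can get away with one rule where a network ACL needs two.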

      Now, this is one of those topics where there is some debate about whether this is layer four or layer five.

      Layer four uses TCP segments and concerns itself with IP addresses and port numbers.

      Strictly speaking, the concept of a session or an ongoing communication between two devices, that is layer five.

      It doesn't matter too much; I tend to lump layer four and layer five together anyway, because it's just easier to explain.

      But you need to remember the term stateless and the term stateful and how they change how you create security rules.

      For this point, that's everything I wanted to cover.

      So, go ahead and complete this video. And when you're ready, I'll look forward to you joining me in the next video of this series.

    1. Welcome back, this is part three of this lesson. We're going to continue immediately from the end of part two. So let's get started.

      The address resolution protocol is used generally when you have a layer three packet and you want to encapsulate it inside a frame and then send that frame to a MAC address.

      You don't initially know the MAC address and you need a protocol which can find the MAC address for a given IP address.

      For example, if you communicate with AWS, AWS will be the destination of the IP packets.

      But you're going to be forwarding via your home router which is the default gateway.

      And so you're going to need the MAC address of that default gateway to send the frame to containing the packet.

      And this is where ARP comes in.

      ARP will give you the MAC address for a given IP address.

      So let's step through how it works.

      For this example, we're going to keep things simple.

      We've got a local network with two laptops, one on the left and one on the right.

      And this is a layer three network which means it has a functional layer two and layer one.

      What we want is the left laptop which is running a game and it wants to send the packets containing game data to the laptop on the right.

      This laptop has an IP address of 133.33.3.10.

      So the laptop on the left takes the game data and passes it to its layer three which creates a packet.

      This packet has its IP address as the source and the right laptop as the destination.

      So 133.33.3.10.

      But now we need a way of being able to generate a frame to put that packet in for transmission.

      We need the MAC address of the right laptop.

      This is what ARP or the address resolution protocol does for us.

      It's a process which runs between layer two and layer three.

      It's important to point out at this point that now you know how devices can determine if two IP addresses are on the same local network.

      In this case, the laptop on the left because it has its subnet mask and IP address as well as the IP address of the laptop on the right.

      It knows that they're both on the same network.

      And so this is a direct local connection.

      Routers aren't required.

      We don't need to use any routers for this type of communication.
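      That "same local network?" decision is just the subnet-mask comparison from earlier. A minimal sketch, assuming a /24 mask and a .7 address for the left laptop (neither is stated in the transcript):

```python
import ipaddress

def same_local_network(my_ip, my_prefix, other_ip):
    """A host masks its own address with its subnet mask and checks
    whether the other address falls inside the resulting network."""
    network = ipaddress.ip_network(f"{my_ip}/{my_prefix}", strict=False)
    return ipaddress.ip_address(other_ip) in network

# Left laptop deciding about the right laptop (133.33.3.10):
print(same_local_network("133.33.3.7", 24, "133.33.3.10"))  # True - no router
print(same_local_network("133.33.3.7", 24, "52.95.36.1"))   # False - route it
```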

      Now ARP broadcasts on layer two.

      It sends an ARP frame to the all-Fs broadcast MAC address (FF:FF:FF:FF:FF:FF).

      And it's asking who has the IP address 133.33.3.10 which is the IP address of the laptop on the right.

      Now the right laptop, because it has a full layer one, two, and three network stack, is also running the Address Resolution Protocol.

      The ARP software sees this broadcast and it responds by saying I'm that IP address.

      I'm 133.33.3.10.

      Here's my MAC address, ending 5B:78.

      So now the left laptop has the MAC address of the right one.

      Now it can use this destination MAC address to build a frame, encapsulate the packet in this frame.

      And then once the frame is ready, it can be given to layer one and sent across the physical network to layer one of the right laptop.

      Layer one of the right laptop receives this physical bit stream and hands it off to the layer two software also on the right laptop.

      Now its layer two software reviews the destination MAC address and sees that it's destined for itself.

      So it strips off the frame and it sends the packet to its layer three software.

      Layer three reviews the packet, sees that it is the intended destination and it de-encapsulates the data.

      So strips away the packet and hands the data back to the game.

      Now it's critical to understand as you move through this lesson series, even if two devices are communicating using layer three, they're going to be using layer two for local communications.

      If the machines are on the same local network, then it will be one layer two frame per packet.

      But as you'll see in a moment, if the two devices are remote, then you can have many different layer two frames which are used along the way.

      And ARP, or the address resolution protocol, is going to be essential to ensure that you can obtain the MAC address for a given IP address.

      This is what facilitates the interaction between layer three and layer two.
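      A toy sketch of the exchange (the MAC values are invented except for the 5B:78 ending mentioned above): the query is broadcast to everyone, and only the owner of the IP replies.

```python
# Every host on the local network sees the broadcast; each compares the
# queried IP with its own, and only the owner answers with its MAC address.
hosts = {
    "133.33.3.7":  "1a:9c:00:4d:22:03",   # left laptop (assumed MAC)
    "133.33.3.10": "3e:22:fb:b9:5b:78",   # right laptop (ends 5B:78)
}

def arp_who_has(ip, network=hosts):
    for host_ip, mac in network.items():
        if host_ip == ip:          # "I'm that IP address - here's my MAC"
            return mac
    return None                    # no reply: the address isn't on this network

print(arp_who_has("133.33.3.10"))  # 3e:22:fb:b9:5b:78
```

      Real implementations cache these answers in an ARP table so the broadcast isn't repeated for every single frame.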

      So now that you know about packets, now that you know about subnet masks, you know about routes and route tables, and you know about the address resolution protocol or ARP, let's bring this all together now and look at a routing example.

      So we're going to go into a little bit more detail now.

      In this example, we have three different networks.

      We've got the orange network on the left, we've got the green network in the middle, and then finally the pink network on the right.

      Now between these networks are some routers.

      Between the orange and green networks is router one, known as R1, and between the green and pink networks is router two, known as R2.

      Each of these routers has a network interface in both of the networks that it touches.

      Routers are layer three devices, which means that they understand layer one, layer two, and layer three.

      So the network interfaces in each of these networks work at layer one, two, and three.

      In addition to this, we have three laptops.

      We've got two in the orange network, so device one at the bottom and device two at the top, and then device three in the pink network on the right.

      Okay, so what I'm going to do now is to step through two different routing scenarios, and all of this is bringing together all of the individual concepts which I've covered at various different parts of this part of the lesson series.

      First, let's have a look at what happens when device one wants to communicate with device two using its IP address.

      First, device one is able to use its own IP address and subnet mask together with device two's IP address, and calculate that they're on the same local network.

      So in this case, router R1 is not required.

      So a packet gets created called P1 with D2's IP address as the destination.

      The address resolution protocol is used to get D2's MAC address, and then that packet is encapsulated in a frame with that MAC address as the destination.

      Then that frame is sent to the MAC address of D2.

      Once the frame arrives at D2, it checks the frame, sees that it's the destination, so it accepts it and then strips the frame away.

      It passes the packet to layer three.

      It sees that it's the destination IP address, so it strips the packet away and then passes the game data to the game.

      Now all of this should make sense.

      This is a simple local network communication.

      Now let's step through a remote example.

      Device two communicating with device three.

      These are on two different networks.

      Device two is on the orange network, and device three is on the pink network.

      So first, the D2 laptop, it compares its own IP address to the D3 laptop IP address, and it uses its subnet mask to determine that they're on different networks.

      Then it creates a packet P2, which has the D3 laptop as its destination IP address.

      It wraps this up in a frame called F2, but because D3 is remote, it knows it needs to use the default gateway as a router.

      So for the destination MAC address of F2, it uses the address resolution protocol to get the MAC address of the local router R1.

      So the packet P2 is addressed to the laptop D3 in the pink network, so the packet's destination IP address is D3.

      The frame F2 is now addressed to router R1's MAC address, so this frame is sent to router R1.

      R1 is going to see that the MAC address is addressed to itself, and so it will strip away the frame F2, leaving just the packet P2.

      Now a normal network device such as your laptop or phone, if it received a packet which wasn't destined for it, it would just drop that packet.

      A router though, it's different.

      The router's job is to route packets, so it's just fine to handle a packet which is addressed somewhere else.

      So it reviews the destination of the packet P2, it sees that it's destined for laptop D3, and it has a route for the pink network in its route table.

      It knows that for anything destined for the pink network, then router R2 should be the next hop.

      So it takes packet P2 and it encapsulates it in a new frame F3.

      Now the destination MAC address of this frame is the MAC address of router R2, and it gets this by using the address resolution protocol or ARP.

      So it knows that the next hop is the IP address of router R2, and it uses ARP to get the MAC address of router R2, and then it sends this frame off to router R2 as the next hop.

      So now we're in a position where router R2 has this frame F3 containing the packet P2 destined for the machine inside the pink network.

      So now the router R2 has this frame with the packet inside.

      It sees that it's the destination of that frame.

      The MAC address on the frame is its MAC address, so it accepts the frame and it removes it from around packet P2.

      So now we've just got packet P2 again.

      So now router R2 reviews the packet and it sees that it's not the destination, but that doesn't matter because R2 is a router.

      It can see that the packet is addressed to something on the same local network, so it doesn't need to worry anymore about routing.

      Instead, it uses ARP to get the MAC address of the device with the intended destination IP address, so laptop D3.

      It then encapsulates the packet P2 in a new frame, F4, whose destination MAC address is that of laptop D3, and then it sends this frame through to laptop D3. Laptop D3 receives the frame and sees that it is the intended destination, because the frame's MAC address matches its own MAC address.

      It strips off the frame, it also sees that it's the intended destination of the IP packet, it strips off the packet, and then the data inside the packet is available for the game that's running on this laptop.

      So it's a router's job to move packets between networks.

      Routers do this by reviewing packets, checking route tables for the next hop or target address, and then adding frames which allow the packets to pass through intermediate layer 2 networks.

      A packet during its life might move through any number of layer 2 networks and be re-encapsulated many times during its trip, but normally the packet itself remains unchanged all the way from source to destination.
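
      The journey just described can be condensed into a sketch showing that the packet object is created once, while the frame around it changes at every hop. The MAC addresses and dictionary fields below are invented for illustration.

      ```python
      # One packet, three frames: the frames change per layer 2 network,
      # the packet does not. All addresses below are made up.
      packet = {"src_ip": "10.1.0.5", "dst_ip": "10.2.0.7", "data": "game data"}

      def encapsulate(pkt, src_mac, dst_mac):
          # A frame wraps the packet for one local network only.
          return {"src_mac": src_mac, "dst_mac": dst_mac, "payload": pkt}

      f2 = encapsulate(packet, "aa:aa", "r1:r1")          # source -> R1
      f3 = encapsulate(f2["payload"], "r1:r1", "r2:r2")   # R1 -> R2
      f4 = encapsulate(f3["payload"], "r2:r2", "d3:d3")   # R2 -> laptop D3

      # The packet itself is the same object all the way through.
      assert f2["payload"] is f3["payload"] is f4["payload"]
      ```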

      A router is just a device which understands physical networking, it understands data link networking, and it understands IP networking.

      So that's layer 3, the network layer, and let's review what we've learned quickly before we move on to the next layer of the OSI model.

      Now this is just an opportunity to summarize what we've learned, so at the start of this video, at layer 2 we had media access control, and we had device to device or device to all device communications, but only within the same layer 2 network.

      So what does layer 3 add to this?

      Well it adds IP addresses, either version 4 or version 6, and this is cross network addressing.

      It also adds the Address Resolution Protocol, or ARP, which can find the MAC address for a given IP address.

      Layer 3 adds routes, which define where to forward a packet to, and it adds route tables, which contain multiple routes.

      It adds the concept of a device called a router, which moves packets from source to destination, encapsulating these packets in different layer 2 frames along the way.

      This altogether allows for device to device communication over the internet, so you can access this video, which is stored on a server which is several intermediate networks away from your location.

      So you can access this server, which has an IP address, and packets can move from the server through to your local device, crossing many different layer 2 networks.

      Now what doesn't IP provide?

      It provides no method for individual channels of communication.

      Layer 3 provides packets, and packets only have a source IP and a destination IP, so for any two devices you can only have one stream of communication; you can't have different applications on those devices communicating at the same time.

      And this is a critical limitation, which is resolved by layers 4 and above.

      Another element of layer 3 is that in theory packets could be delivered out of order.

      Individual packets move across the internet through intermediate networks, and depending on network conditions, there's no guarantee that those packets will take the same route from source to destination, and because of different network conditions, it's possible they could arrive in a different order.

      And so if you've got an application which relies on the same ordering at the point of receipt as at the point of transmission, then we need to add additional things on top of layer 3, and that's something that layer 4 protocols can assist with.

      Now at this point we've covered everything that we need to for layer 3.

      There are a number of related subjects which I'm going to cover in dedicated videos, such as network address translation, how the IP address space functions, and IP version 6. In this part of the lesson series, we've covered how the architecture of layer 3 of the OSI model works.

      So at this point, go ahead and complete this video, and then when you're ready, I'll look forward to you joining me in the next part of this lesson series where we're going to look at layer 4.

    1. Welcome back.

      Now that we've covered the physical and data link layers, next we need to step through layer 3 of the OSI model, which is the network layer.

      As I mentioned in previous videos, each layer of the OSI model builds on the layers below it, so layer 3 requires one or more operational layer 2 networks to function.

      The job of layer 3 is to get data from one location to another.

      When you're watching this video, data is being moved from the server hosting the video through to your local device.

      When you access AWS or stream from Netflix, data is being moved across the internet, and it's layer 3 which handles this process of moving data from a source to a destination.

      To appreciate layer 3 fully, you have to understand why it's needed.

      So far in the series, I've used the example of two to four friends playing a game on a local area network.

      Now what if we extend this, so that we have two local area networks located with some geographic separation?

      Let's say that one is on the east coast of the US and another is on the west coast, so there's a lot of distance between these 2 separate layer 2 networks.

      Now LAN1 and LAN2 are isolated layer 2 networks at this point.

      Devices on each local network can communicate with each other, but not outside of that local layer 2 network.

      Now you could pay for and provision a point-to-point link across the entire US to connect these 2 networks, but that would be expensive, and if every business who had multiple offices needed to use point-to-point links, it would be a huge mess and wouldn't be scalable.

      Additionally, each layer 2 network uses a shared layer 2 protocol.

      In the example so far, this has been Ethernet.

      For networks using only layer 2, if we want them to communicate with each other, they need to use the same layer 2 protocol.

      Now not everything uses the same layer 2 protocol, and this presents challenges: you can't simply join two layer 2 networks which use different layer 2 protocols and have them work out of the box.

      With the example which is on screen now, imagine if we had additional locations spread across the continental US.

      Now in between these locations, let's add some point-to-point links, so we've got links in pink which are cabled connections, and these go between these different locations.

      Now we also might have point-to-point links which use a different layer 2 protocol.

      In this example, let's say that we had a satellite connection between 2 of these locations.

      This is in blue, and this is a different layer 2 technology.

      Now Ethernet is one layer 2 technology which is generally used for local networks.

      It's the most popular wired connection technology for local area networks.

      But for point-to-point links and other long distance connections, you might also use things such as PPP, MPLS or ATM.

      Not all of these use frames with the same format, so we need something in common between them.

      Layer 2 is the layer of the OSI stack which moves frames, it moves frames from a local source to a local destination.

      So to move data between different local networks, which is known as internetworking (this is where the name internet comes from), we need a layer 3.

      Layer 3 is this common protocol which can span multiple different layer 2 networks.

      Now layer 3 or the network layer can be added onto one or more layer 2 networks, and it adds a few capabilities.

      It adds the internet protocol or IP.

      You get IP addresses, which are cross-network addresses that you can assign to devices, and these can be used to communicate across networks using routing.

      So the device that you're using right now, it has an IP address.

      The server which stores this video, it too has an IP address.

      And the internet protocol is being used to send requests from your local network across the internet to the server hosting this video, and then back again.

      IP packets are moved from source to destination across the internet through many intermediate networks.

      Devices called routers, which are layer 3 devices, move packets of data across different networks.

      They encapsulate a packet inside of an ethernet frame for that part of the journey over that local network.

      Now encapsulation just means that an IP packet is put inside an ethernet frame for that part of the journey.

      Then when it needs to be moved into a new network, that particular frame is removed, and a new one is added around the same packet, and it's moved onto the next local network.

      So as this video data is moving from my server to you, it's been wrapped up in frames.

      Those frames are stripped away, new frames are added, all while the packets of IP data move from my video server to you.

      So that's why, at a high level, IP is needed: to allow you to connect to remote networks, crossing intermediate networks on the way.

      Now over the coming lesson, I want to explain the various important parts of how layer 3 works.

      Specifically IP, which is the layer 3 protocol used on the internet.

      Now I'm going to start with the structure of packets, which are the data units used within the internet protocol.

      So let's take a look at that next.

      Now packets in many ways are similar to frames.

      It's the same basic concept.

      They contain some data to be moved, and they have a source and destination address.

      The difference is that with frames, both the source and destination are generally local.

      With an IP packet, the destination and source addresses could be on opposite sides of the planet.

      During their journey from source to destination, packets remain the same. As they move across layer 2 networks, they're placed inside frames, which is known as encapsulation.

      The frame is specific to the local network that the packet is moving through, and changes every time the packet moves between networks.

      The packet though doesn't change.

      Normally it's constant for its entire trip between source and destination.

      Although there are some exceptions that I'll be detailing in a different lesson, when I talk about things like network address translation.

      Now there are two versions of the internet protocol in use.

      Version 4, which has been used for decades, and version 6, which adds more scalability.

      And I'll be covering version 6 and its differences in a separate lesson.

      An IP packet contains various different fields, much like frames that we discussed in an earlier video.

      At this level there are a few important things within an IP packet which you need to understand, and some which are less important.

      Now let's just skip past the less relevant ones.

      I'm not saying any of these are unimportant, but you don't need to know exactly what they do at this introductory level.

      Things which are important though, every packet has a source and destination IP address field.

      The source IP address is generally the device IP which generates the packet, and the destination IP address is the intended destination IP for the packet.

      In the previous example we have two networks, one east coast and one west coast.

      The source might be a west coast PC, and the destination might be a laptop within the east coast network.

      But crucially these are both IP addresses.

      There's also the protocol field, and this is important because IP is layer 3.

      It generally contains data provided by another layer, a layer 4 protocol, and it's this field which stores which protocol is used.

      So examples of protocols which this might reference are things like ICMP, TCP or UDP.

      If you're storing TCP data inside a packet this value will be 6, for pings, which use ICMP, this value will be 1, and if you're using UDP as a layer 4 protocol then this value will be 17.

      This field means that the network stack at the destination, specifically the layer 3 component of that stack, will know which layer 4 protocol to pass the data into.
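
      As a sketch of that hand-off, the protocol numbers mentioned above (which are the real IANA values) can drive a simple lookup; the function name here is invented.

      ```python
      # Real IANA protocol numbers for the three protocols mentioned above.
      PROTOCOL_NUMBERS = {1: "ICMP", 6: "TCP", 17: "UDP"}

      def layer4_handler(protocol_field):
          """Return which layer 4 protocol should receive the packet's data."""
          return PROTOCOL_NUMBERS.get(protocol_field, "unknown")

      print(layer4_handler(6))   # prints TCP
      print(layer4_handler(17))  # prints UDP
      ```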

      Now the bulk of the space within a packet is taken up with the data itself, something that's generally provided from a layer 4 protocol.

      Now lastly there's a field called time to live or TTL.

      Remember the packets will move through many different intermediate networks between the source and the destination, and this is a value which defines how many hops the packet can move through.

      It's used to stop packets looping around forever.

      If for some reason they can't reach their destination then this defines a maximum number of hops that the packet can take before being discarded.
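
      A minimal sketch of the TTL mechanism, with an invented packet representation: every hop decrements the field, and the packet is dropped once it reaches zero.

      ```python
      def forward(pkt):
          """Decrement TTL; return False when the packet must be discarded."""
          pkt["ttl"] -= 1
          if pkt["ttl"] <= 0:
              return False  # dropped (real routers also send ICMP Time Exceeded)
          return True

      pkt = {"ttl": 3}
      hops = 0
      while forward(pkt):
          hops += 1
      print(hops)  # prints 2 - the packet survives two hops, then is dropped
      ```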

      So just in summary a packet contains some data which it carries generally for layer 4 protocols.

      It has a source and destination IP address; the IP protocol implementation on routers moves packets across all the networks from source to destination, and it's these fields which are used to perform that process.

      As a packet moves through each intermediate layer 2 network, it will be inserted, or encapsulated, in a layer 2 frame specific to that network.

      A single packet might exist inside tens of different frames throughout its route to its destination, one for every layer 2 network or layer 2 point to point link which it moves through.

      Now IP version 6 is very similar in terms of packet structure; we also have some fields which matter less at this stage.

      They are functional but to understand things at this level it's not essential to talk about these particular fields.

      And just as with IP version 4, IP version 6 packets also have both source and destination IP address fields.

      But these are bigger: IP version 6 addresses are larger, which means there are more possible IP version 6 addresses.

      And I'm going to be covering IP version 6 in detail in another lesson.

      It does mean, though, that the space taken in a packet to store IP version 6 source and destination addresses is larger.

      Now you still have data within an IP version 6 packet and this is also generally from a layer 4 protocol.

      Now strictly speaking, if this were drawn to scale, it would be off the bottom of the screen, but let's just keep things simple.

      We also have a field similar to the time to live value within IP version 4 packets, which in IP version 6 is called the hop limit.

      Functionally these are similar, it controls the maximum number of hops that the packet can go through before being discarded.

      So these are IP packets: generally they store data from layer 4, and they themselves are stored in one or more layer 2 frames as they move around the networks and links which form the internet.

      Okay so this is the end of part 1 of this lesson.

      It was getting a little bit on the long side and I wanted to give you the opportunity to take a small break, maybe stretch your legs or make a coffee.

      Now part 2 will continue immediately from this point, so go ahead complete this video and when you're ready I look forward to you joining me in part 2.

    1. Welcome back and in this part of the lesson series I'm going to be discussing layer one of the seven layer OSI model which is the physical layer.

      Imagine a situation where you have two devices in your home let's say two laptops and you want to play a local area network or LAN game between those two laptops.

      To do this you would either connect them both to the same Wi-Fi network, or you'd use a physical networking cable. To keep things simple in this lesson, I'm going to use the example of a physical connection between these two laptops, so both laptops have a network interface card and they're connected using a network cable.

      Now for this part of the lesson series we're just going to focus on layer one which is the physical layer.

      So what does connecting this network cable to both of these devices give us?

      Well, we're going to assume it's a copper network cable, so it gives us a point-to-point electrical shared medium between these two devices: a piece of cable that can be used to transmit electrical signals between these two network interface cards.

      Now the physical medium can be copper, in which case it uses electrical signals; fiber, in which case it uses light; or Wi-Fi, in which case it uses radio frequencies.

      Whatever type of medium is used, it needs a way of carrying unstructured information, and so we define layer one, or physical layer, standards, which are also known as specifications. These define how to transmit and receive a raw bit stream, so ones and zeros, between a device and a shared physical medium, in this case the piece of copper networking cable between our two laptops. The standard defines things like voltage levels, timings, data rates, usable distances, the method of modulation, and even the connector type on each end of the physical cable.

      The specification means that both laptops have a shared understanding of the physical medium so the cable.

      Both can use this physical medium to send and receive raw data.

      For copper cable, electrical signals are used: a certain voltage is defined as binary 1, say +1 volt, and a certain voltage as binary 0, say -1 volt.

      If both network cards in both laptops agree, because they use the same standard, then zeros and ones can be transmitted onto the medium by the left laptop and received from the medium by the right laptop, and this is how two networking devices, or more specifically two network interface cards, communicate at layer one.
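
      Using the example voltages from the text (+1 volt for binary 1, -1 volt for binary 0), the shared layer one standard can be sketched as a matching transmit/receive pair; because both sides use the same mapping, the bits survive the trip.

      ```python
      # Toy layer one "standard": both ends agree on the same voltage mapping.
      def transmit(bits):
          return [1.0 if b == 1 else -1.0 for b in bits]

      def receive(voltages):
          return [1 if v > 0 else 0 for v in voltages]

      signal = transmit([1, 0, 1, 1])   # onto the shared medium
      print(receive(signal))            # prints [1, 0, 1, 1]
      ```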

      If I refer to a device as layer X, for example layer one or layer three, then it means that the device contains functionality for that layer and below; so a layer one device just understands layer one, and a layer three device has layer one, two and three capability.

      Now try to remember that because it's going to make much of what's coming over the remaining videos of this series much easier to understand.

      So just to reiterate what we know to this point: we've taken two laptops with two layer one network interfaces and connected them using a copper cable, a copper shared medium. Because we're using a layer one standard, both of these cards understand the specific way that binary zeros and ones are transmitted onto the shared medium.

      Now on the previous screen I used the example of two devices, two laptops with network interface cards, communicating with each other.

      Two devices can use a point-to-point layer one link, which is a fancy way of talking about a network cable. But what if we need to add more devices? A two-player game isn't satisfactory; we need to add two more players for a total of four.

      Well, we can't really connect these four devices to a network cable with only two connectors, but what we can do is add a networking device called a hub, in this example a four-port hub. The laptops on the left and right, instead of being connected to each other directly, are now connected to two ports of that hub. Because it's a four-port hub, it also has two ports free, and so it can accommodate the top and bottom laptops.

      Now hubs have one job: anything which the hub receives on any of its ports is retransmitted to all of the other ports, including any errors or collisions.

      Conceptually, a hub creates a four-connector network cable: one single piece of physical medium which four devices can be connected to.
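
      The hub's behaviour can be sketched in a single function: whatever arrives on one port is retransmitted on every other port, with no addressing and no filtering. Port numbers are invented for the example.

      ```python
      def hub_retransmit(num_ports, in_port, signal):
          """A hub repeats the signal to every port except the one it came in on."""
          return {port: signal for port in range(num_ports) if port != in_port}

      out = hub_retransmit(num_ports=4, in_port=0, signal="raw bits")
      print(sorted(out))  # prints [1, 2, 3] - everyone but the sender receives it
      ```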

      Now there are a few things that you really need to understand at this stage about layer one networking.

      First, there are no individual device addresses at layer one; one laptop cannot address traffic directly at another. It's a broadcast medium: the network card on the device on the left transmits onto the physical medium, and everything else receives it. It's like shouting into a room with three other people and not using any names.

      Now this is a limitation, but it is fixed by layer two, which we'll cover soon in this lesson series.

      The other consideration is that two devices might try to transmit at once, and if that happens there will be a collision, which corrupts any transmissions on the shared medium. Only one thing can transmit at once on a shared medium and be legible to everything else; if multiple things transmit on the same layer one physical medium, then collisions occur and render all of the information useless.

      Now, related to this, layer one has no media access control, so no method of controlling which devices can transmit. If you use a layer one architecture, so a hub and all of the devices shown on screen now, then collisions are almost guaranteed, and the likelihood increases the more layer one devices are present on the same layer one network.

      Layer one is also not able to detect when collisions occur. Remember, these network cards are just transmitting via voltage changes on the shared medium; it's not digital, and they can in theory all transmit at the same time. Physically that's okay; it just means that nobody will be able to understand anything. So layer one is dumb: it doesn't have any intelligence beyond defining the standards that all of the devices use to transmit onto the shared medium and receive from it. Because of how layer one works, and because a hub simply retransmits everything, even collisions, the layer one network is said to have one broadcast domain and one collision domain, and this means that layer one networks tend not to scale very well: the more devices are added to a layer one network, the higher the chance of collisions and data corruption.

      Now layer one is fundamental to networking, because it's how devices actually communicate at a physical level. But for layer one to be useful, for it to be used practically for anything else, we need to add layer two. Layer two runs over the top of a working layer one connection, and that's what we'll be looking at in the next part of this lesson series.

      As a summary of the position that we're in right now, assuming that we have only layer one networking: layer one focuses on the physical shared medium and on the standards for transmitting onto and receiving from that medium. All devices which are part of the same layer one network need to use the same layer one medium and device standards; generally this means a certain type of network card and a certain type of cable, or for Wi-Fi it means using a certain type of antenna and frequency ranges. What layer one doesn't provide is any form of access control of the shared medium, and it doesn't give us uniquely identifiable devices, which means we have no method for device to device communication; everything is broadcast via transmission onto the shared physical medium.

      Now in the next video of this series I'm going to be stepping through layer two, which is the data link layer. This is the layer which adds a lot of intelligence on top of layer one and allows device to device communication, and it's layer two which is used by all of the upper layers of the OSI model to allow effective communication. But it's important that you understand how layer one works, because this is physically how data moves between all devices, and so you need a good fundamental understanding of layer one.

      Now this seems like a great place to take a break, so I'm going to end this video here. Go ahead and complete this video, and then when you're ready, I look forward to you joining me in the next part of this lesson series, where we'll be looking at layer two, the data link layer.

    1. wavelength? Are you wide-awake and your roommate almost asleep? Is the baseball game really important to you but totally boring to the person you are talking with? It is important that everyone involved understands the context of the conversation. Is it a party, which lends itself to frivolous banter? Is the conversation about something serious that occurred? What are some of the relevant steps to understanding context? First of all, pay attention to timing. Is there enough time to cover what you are trying to say? Is it the right time to talk to the boss about a raise? What about the location? Should your conversation take

      I think it could be helpful for others to understand the context of the conversation.

    1. We're here to support our customers affected by wildfires. Learn more about relief options available through the TD Helps program. [TD Canada Trust homepage: navigation menus, login widgets, product links, and promotional panels omitted.] The page's hero banner swaps its background image at 320px, 768px, 1024px and 1200px viewport widths via CSS @media rules.

      Robust - The website has a responsive design allowing people to adjust content size to the desired specification

    1. But, is there an inflection point of consequence that changes the name of the “game” of life on earth for everybody and everything?

      To me, this question cannot be easily answered; I would say both yes and no. I feel that we as humans create the harmful changes to the earth, and we also adapt to many things that do not benefit us. Even if there is an inflection point of consequence that slows us down and pushes us to change, I think it will more likely be subsided, or soon come to be treated as "normal".

    1. TELEVISION PROTOCOL
      • TV picture: 60fps, each frame drawn line by line
      • 1 frame = fixed structure below (line counts from Atari's research data for max TV compat)
      • 262/312 lines, 228 clocks per line
      • 3 x VSYNC + 37/45 x VBLANK + 192/228 display + 30/36 overscan
      • VSYNC = signal TV to start new frame
      • VBLANK = vertical blanking; blank lines before the visible picture while the beam returns to the top
      • overscan = blank lines after the visible picture, beyond the bottom edge most TVs actually show
      • 1 line = 68 cycle horizontal blank + 160 cycle display
      • horizontal timing handled by TIA
      • WSYNC stops CPU till start of next line
      • vertical timing by CPU
      • after completion of a frame => VSYNC + VBLANK + pic + overscan
      • 1 CPU cycle = 3 TIA cycle
      • TV pic
      • drawn line at a time
      • CPU put data for line into TIA, TIA convert data to video signals
      • TIA has data only for current line
      • 70/85 blank lines for game logic
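The figures above are internally consistent and can be checked with a few lines of arithmetic (a quick sketch in Python; the NTSC/PAL numbers are taken directly from the notes, nothing here is emulator code):

```python
# Sanity-check of the TV timing figures in the notes above.
NTSC = {"vsync": 3, "vblank": 37, "display": 192, "overscan": 30}
PAL  = {"vsync": 3, "vblank": 45, "display": 228, "overscan": 36}

TIA_CLOCKS_PER_LINE = 68 + 160                  # horizontal blank + visible pixels
CPU_CYCLES_PER_LINE = TIA_CLOCKS_PER_LINE // 3  # 1 CPU cycle = 3 TIA clocks

def total_lines(fmt):
    return sum(fmt.values())

def blank_lines(fmt):
    # lines with no picture, usable for game logic
    return fmt["vsync"] + fmt["vblank"] + fmt["overscan"]

assert total_lines(NTSC) == 262
assert total_lines(PAL) == 312
assert CPU_CYCLES_PER_LINE == 76
assert blank_lines(NTSC) == 70   # matches the "70" in the notes
assert blank_lines(PAL) == 84    # notes say 85, but 3 + 45 + 36 = 84
```

Each scanline leaves the CPU only 76 cycles, which is why WSYNC matters: a kernel races the beam through the ~192 visible lines and defers game logic to the ~70 blank lines.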
    1. My goal here is to engage in a productive conversation with proceduralism, bringing back players and advocating, finally, for a player-centric approach to the design of games, particularly the design of ethics and politics in(to) games

      Yes, players are very important as the center of game design, because the player's choices and behaviors actually participate in creating the game's narrative and experience. At the same time, players' actions can be fed back to the creator, helping the game improve.

    1. This is important because it reminds us that video games are software, and everything that happens in a game is based on its programming.

      In my opinion, interactivity is a "mental concept," since computers operate on "input, process, output" (Stang, 2022). Interactivity is just how we perceive the uniqueness of computer reactions to our inputs.

    2. video games, like all computer-based media, are reactive rather than interactive: "a chain of reactions" in which "the player does not act so much as he reacts to what the game presents to him, and similarly, the game then reacts to his input" (pp. 119–20).

      I feel like this is literally describing interactivity/an interaction.
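The "chain of reactions" described here (the game presents, the player reacts, the game reacts to the input) is exactly an input-process-output loop, and can be sketched as one (a minimal, hypothetical illustration; all names are mine, not from the cited text):

```python
def reaction_loop(initial_state, update, render, read_input, done):
    """Each turn: the game presents itself (render), the player reacts
    (read_input), and the game reacts to that input (update)."""
    state = initial_state
    while not done(state):
        render(state)                  # game presents itself to the player
        action = read_input()          # player reacts to what is presented
        state = update(state, action)  # game reacts to the player's input
    return state

# Toy usage: the loop ends after the player has "acted" three times.
final = reaction_loop(
    initial_state=0,
    update=lambda s, a: s + 1,
    render=lambda s: None,
    read_input=lambda: "press",
    done=lambda s: s >= 3,
)
# final == 3
```

Whether one calls this loop "interactive" or merely "reactive" is the interpretive question the passage raises; the mechanics are the same either way.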

    1. Author response:

      The following is the authors’ response to the previous reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      Du et al. report 16 new well-preserved specimens of artiopodan arthropods from the Chengjiang biota, which demonstrate both dorsal and ventral anatomies of a potential new taxon of artiopodans that are closely related to trilobites. The authors assigned their specimens to Acanthomeridion serratum, and proposed A. anacanthus as a junior subjective synonym of Acanthomeridion serratum. Critically, the presence of ventral plates (interpreted as cephalic librigenae), together with phylogenetic results, leads the authors to conclude that the cephalic sutures originated multiple times within the Artiopoda.

      Strengths:

      The new specimens are of high quality and informative. The morphology of the dorsal exoskeleton, except for the supposed free cheek, was well illustrated and described in detail, providing a wealth of information for taxonomic and phylogenetic analyses.

      Weaknesses:

      The weaknesses of this work are obvious in a number of aspects. Technically, the ventral morphology is less well revealed and is poorly illustrated. Additional diagrams are necessary to show the trunk appendages and suture lines. Taxonomically, I am not convinced by the authors' placement. The specimens are markedly different from either Acanthomeridion serratum Hou et al. 1989 or A. anacanthus Hou et al. 2017. The ontogenetic description is extremely weak and morphological continuity is not established. Geometric morphometric analyses might be helpful to resolve the taxonomic and ontogenetic uncertainties. I am confused by the authors' description of the free cheek (librigena) and ventral plate. Are they the same object? How do they connect with the other parts of the cephalic shield, e.g. the hypostome and fixigena? Critically, the homology of the cephalic slits (eye slits, eye notch, dorsal suture, facial suture) is not extensively discussed either morphologically or functionally. Finally, the authors claimed that the phylogenetic results support two separate origins rather than a deep origin. However, the results in Figure 4 can be explained as a deep homology of the cephalic suture at the molecular level and multiple co-options within the Artiopoda.

      Comments on the revised version:

      I have seen the extensive revision of the manuscript. The main point, "Multiple origins of dorsal ecdysial sutures in artiopodans", is now partially supported by the results presented by the authors. I am still unsatisfied with the descriptions and interpretations of critical features newly revealed by the authors. The following points might be useful for the authors to make further revisions.

      (1) The antennae were well illustrated in a couple of specimens, but described in only a short sentence.

      Some more details of the changing article shape and overall length of the antennae have been added to the description.

      (2) There are also imprecise descriptions of features.

      Measurements, dimensions and multiple figures are provided for many features in the text and the supplement includes more figures. In total, 11 figures are provided with details (photographs or measurements) of the material.

      (3) Ontogeny of the cephalon was not described.

      A sentence has been added to the description to note the changing width:length of the cephalon during ontogeny, with a reference to Figure 6.

      (4) The critical head element is the so-called "ventral plate". How this element connects with the cephalic shield is not adequately revealed. The authors claimed that the suture is along the cephalic margin. However, the lateral margin of the cephalon is not rounded but exhibits two notches (e.g. Fig 3C). This gives an indication that the supposed ventral plates have a dorsal extension to fit the notches. Alternatively, the "ventral plate" can be interpreted as a small free cheek with a large ventral extension, providing evidence for the librigenal hypothesis.

      As noted in the diagnosis for the genus, these notches are interpreted to accommodate the eye stalks. The homology of the ventral plates is discussed at length in the manuscript, and is the focus of the three sets of phylogenetic analyses performed.

      Reviewer #3 (Public Review):

      Summary:

      Well-illustrated new material is documented for Acanthomeridion, a formerly incompletely known Cambrian arthropod. The formerly known facial sutures are proposed to be associated with ventral plates that the authors homologise with the free cheeks of trilobites (although alternative homologies are also tested). An update of a published phylogenetic dataset permits reconsideration of whether dorsal ecdysial sutures have a single origin or multiple origins in trilobites and their relatives.

      Strengths:

      Documentation of an ontogenetic series makes a sound case that the proposed diagnostic characters of a second species of Acanthomeridion are variation within a single species. New microtomographic data shed light on appendage morphology that was not formerly known. The new data on ventral plates and their association with the ecdysial sutures are valuable in underpinning homologies with trilobites.

      I think the revision does a satisfactory job of reconciling the data and analyses with the conclusions drawn from them. Referee 1's valid concerns about whether a synonymy of Acanthomeridion anacanthus is justified have been addressed by the addition of a length/width scatterplot in Figure 6. Referee 2's doubts about homology between the librigenae of trilobites and ventral plates of Acanthomeridion have been taken on board by re-running the phylogenetic analyses with a coding for possible homology between the ventral plates and the doublure of olenelloid trilobites. The authors sensibly added more trilobite terminals to the matrix (including Olenellus) and did analyses with and without constraints for olenelloids being a grade at the base of Trilobita. My concerns about counting how many times dorsal sutures evolved on a consensus tree have been addressed (the authors now play it safe and say "multiple" rather than attempting to count them on a bushy topology). The treespace visualisation (Figure 9) is a really good addition to the revised paper.

      Weaknesses:

      The question of how many times dorsal ecdysial sutures evolved in Artiopoda was addressed by Hou et al (2017), who first documented the facial sutures of Acanthomeridion and optimised them onto a phylogeny to infer multiple origins, as well as in a paper led by the lead author in Cladistics in 2019. Du et al. (2019) presented a phylogeny based on an earlier version of the current dataset wherein they discussed how many times sutures evolved or were lost based on their presence in Zhiwenia/Protosutura, Acanthomeridion and Trilobita. The answer here is slightly different (because some topologies unite Acanthomeridion and trilobites). This paper is not a game-changer because these questions have been asked several times over the past seven years, but there are solid, worthy advances made here.

      I'd like to see some of the most significant figures from the Supplementary Information included in the main paper so they will be maximally accessed. The "stick-like" exopods are not best illustrated in the main paper; their best imagery is in Figure S1. Why not move that figure (or at least its non-redundant panels) as well as the reconstruction (Figure S7) to the main paper? The latter summarises the authors' interpretation that a large axe-shaped hypostome appears to be contiguous with ventral plates.

      We have moved these figures from the supplementary information to the main text, and renumbered figures accordingly. Fig S1 has now been split – panels a and b are in the main text (new Fig. 4), with the remainder staying as Fig S1. Fig S7 is now Fig. 8 in the main text.

      The specimens depict evidence for three pairs of post-antennal cephalic appendages, but it is a bit hard to picture how they functioned if there is no room between the hypostome and ventral plates. Also, a comment is required on the reconstruction showing all cephalic appendages originating against/under the hypostome, rather than the first pair being paroral near the posterior end of the hypostome and the rest being post-hypostomal, as in trilobites.

      A short comment has been added to the caption.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      I have seen the extensive revision of the manuscript. The main point, "Multiple origins of dorsal ecdysial sutures in artiopodans", is now partially supported by the results presented by the authors. I am still unsatisfied with the descriptions and interpretations of critical features newly revealed by the authors. The following points might be useful for the authors to make further revisions.

      (1) The antennae were well illustrated in a couple of specimens, but described in only a short sentence.

      (2) There are also imprecise descriptions of features (see my annotations in submitted ms).

      (3) Ontogeny of the cephalon was not described.

      (4) The critical head element is the so-called "ventral plate". How this element connects with the cephalic shield is not adequately revealed. The authors claimed that the suture is along the cephalic margin. However, the lateral margin of the cephalon is not rounded but exhibits two notches (e.g. Fig 3C). This gives an indication that the supposed ventral plates have a dorsal extension to fit the notches. Alternatively, the "ventral plate" can be interpreted as a small free cheek with a large ventral extension, providing evidence for the librigenal hypothesis.

      Reviewer #3 (Recommendations For The Authors):

      The references swap back and forth between journal titles being abbreviated or written out in full. Please standardise this to journal format rather than alternating between two different styles.

      Line 145: Perez-Peris et al. (2021) should be cited as the source for the Anacheirurus appendages.

      Added, thank you.

      Line 310: The El Albani et al (2024) paper on ellipsocephaloid appendages should be noted in connection with an A+4 (rather than A+3) head in trilobites.

      Added.

      Minor or trivial corrections:

      Line 51: move the three citations to follow "arthropods" rather than following "artiopodans", as none of these papers are specifically about Artiopoda.

      Changed, thank you.

      Caption to Figure 1 and line 100: Acanthomeridion appears in Figure 1 and in the text with no context. Please weave it into the text appropriately.

      Line 136: The data were...

      Corrected

      Line 164: upper case for Morphobank.

      Corrected

      Line 183: spelling of "Village" (not "Vallige").

      Corrected

      Line 197: I suggest using "articles" rather than "podomeres" for the antenna (as you did in line 232).

      Changed, thank you.

      Line 269: "gnathobasal spine" (rather than "spin").

      Changed, thank you.

      Line 272: "Exopods" is used here but elsewhere "exopodites" is used.

      Exopodites is now used throughout

      Line 359: "can been seen" is awkward and, as evolutionary patterns are inferred rather than "seen", could be reworded as "... loss of the eye slit has been inferred...".

      Reworded as suggested

      Line 422 and 423: As two referees asked in the first round of review, delete "iconic" and "symbolic".

      Deleted as suggested

      Line 467: "librigena-like".

      Corrected

    1. Type-in activities:
       - learning about typewriters, both using them and how they work
       - demonstrations of typewriter maintenance: cleaning, how to change a ribbon, and small repairs for common problems
       - typewriter tool showcases - what tools might you need to maintain, clean, and repair your typewriter?
       - lots of machines to try out: which might best suit your writing style and typing touch? If you don't have a typewriter, this is a great way to try some out before buying your first one
       - typewriter purchasing and collecting advice
       - encouraging typing as a distraction-free and screen-free writing tool
       - writing (fiction, non-fiction, poetry), along with potential writing prompts (this could dovetail with other library-related writing endeavors)
       - a group story: a single typewriter is reserved to one side, on which each participant types a single sentence toward a collaborative, "exquisite corpse"-style story
       - typewriter art and arttyping
       - speed typing contest (with small prizes)
       - we'll bring stationery (paper, envelopes, stamps) to encourage participants to type letters to friends or family (bring an address for someone you'd like to write to)
       - typewriter swap and sale (optional depending on the venue's perspective; sales are not the primary purpose here)
       - typewriter repairs using 3D printing, or designing replacement parts (if the venue can support this)
       - typewriter handicrafts (typewriter covers and sewing/repairing cases)
       - typewriter resources (repair shops, where to find ribbon, how old is my machine?, et al.)
       - a possible typewriter mystery game?
       - share stories
       - encourage community

      For a local library-specific type-in:
       - library card applications, which can be typewritten for potential patrons who don't have a library card
       - typewriter books (particularly if hosted at a library; place a hold on several typewriter-related books which attendees can browse through at the event and check out afterward)
       - 3D printing typewriter keys and spare parts; design of replacement parts for 3D printing

      Attendees are encouraged to bring one (or many more of their own favorite manual typewriters) to use, showcase, or demonstrate to others, but having your own typewriter is NOT a requirement for attendance.

    1. oral culture was the primary means of cultural transmission

      When looking at both versions, I believe this is a great way to think deeper about what Plato and Socrates were saying, because this sentence makes me think about the game of telephone and how easy it was back then to distort a story or conversation.

    1. The growing consensus now is that game-engine renderers can model cameras well enough not only to test a perception system in simulation, but even to train perception systems in simulation and expect them to work in the real world!

      This is super cool if it actually works. Are there any examples of this being demonstrated?

    1. So plowing/bulldozing is OK? Doing it doesn't violate the control rules, but I guess it has to do with the length of contact. Plowing on purpose is probably a violation.

    1. media consumers have the ability to add their input and criticism, and this is an important function for users.

      This is a facade created by companies to keep viewers engaged. We believe we have the freedom to speak, but in reality, whether one comments or not, they are playing into the game of letting what we see on our devices influence our thoughts. Even if one just forms an opinion and does not share it, the post does its planned job of making the consumer think.

    1. The centrality of touch is a unique characteristic of board games

      The tactile nature of board games is a crucial differentiator from other media. Unlike video games, where touch is limited to controllers, board games engage players with direct interaction with the game pieces, making the tactile experience a significant part of the gameplay. This physical involvement may lead to deeper emotional responses than in other media.

    2. Players engage with boardgames for a variety of reasons

      The text highlights that different players appreciate different aspects of board games, some focusing more on mechanics, while others on materiality or aesthetics. It suggests a holistic approach to game design, where both elements are interwoven for an enriched experience.

    3. This is because there is no preexisting world that thegame’s rules need to limit;

      Would the goal to win a game be considered a generative or restrictive rule? Is it a rule? Who decides what's a rule?

    4. learning by playing may even be called the oldest learning method

      I agree, it's always easier to learn how to play a game by actually playing rather than listen to someone read the rules off of the rule book

    5. simulations and artificial intelligence, and thehumanities computing field. It was particularly from the last of these where the contemporary wave of gamestudies started to emerge. Many of the people working within this paradigm approached computers as a po-tential new medium

      A lot of popular games are based in simulation, most notably The Sims, and others like Goat Simulator, Minecraft, etc., so it's no surprise that many popular games have a simulation aspect. The range of control in an artificial situation has a pleasing disposition for a game.

    6. The situation is changing, and in the future the issue is likely to be put the opposite way – why should therenot be game studies represented in a modern university?

      to my above question

    7. tudies at the time of writing (in 2006) has not yet reached this stage in many universities,

      I wonder, now in this day and age, with digital game culture having such large platforms and cultural impact in media (streamers on Twitch, Five Nights at Freddy's, Flappy Bird, Among Us, Candy Crush), whether the field has also helped expand this into something larger than just for developers.

    1. I am a better learner because I have found ways to use a more diverse range of studying tactics.”

      I can agree with this; there are many ways of learning. For example, some people take notes, and some chop them down to flashcards and make a game out of that.

    1. Rapture of the Rhizome

      The Tangled Rhizome relates to "Adventures with Anxiety" because a rhizome is a root system with several different routes to different things. The game acts as a rhizome since there is a list of options for each activity, ranging from positive to negative options.

    2. But in a narrative experience not structured as a win-lose contest the movement forward has the feeling of enacting a meaningful experience both consciously chosen and surprising.

      Reminds me of the different experiences that were shared last seminar about "Adventures with Anxiety." For some, the experience was cut short and left them feeling confused and unsatisfied, while others reached the true ending, completing the objective of the game. I would argue that the complexity and enjoyment of a maze-like game can also rely on the choices of the person who is playing, not just the creator of the game.

    3. The Journey Story and the Pleasure of Problem Solving

      In Adventures with Anxiety, the game introduces two distinct journeys: that of the girl and that of the wolf. In each battle the two have, they must problem-solve to get through the ordeal and learn a lesson. Ultimately, this culminates in the two characters solving their problems as they learn where each other is coming from.

    4. The boundlessness of the rhizome experience is crucial to its comforting side. In this it is as much of a game as the adventure maze.

      The boundlessness of the rhizome allows the reader to connect with inner emotions, as seen in "Adventures With Anxiety," when the story asked the reader to pick the anxiety emotions they most strongly connected with.

    5. Minos’s maze was therefore a frightening place, full of danger and bafflement, but successful navigation of it led to great rewards

      a good foundation to create a story and/or a game from because there's a plot, an objective, the opportunity to make choices, and a clear win or lose ending

    6. a story and a game pattern derives from the melding of a cognitive problem (finding the path) with an emotionally symbolic pattern (facing what is frightening and unknown).

      correlating between story and game by describing the appeal of a problem and symbolistic pattern

    1. But the unsolvable maze does hold promise as an expressive structure.

      This stands apart from Adventures with Anxiety, since the unsolvable maze does not appear as a feature in that game, even with its freedom of choice.

    2. The Journey Story and the Pleasure of Problem Solving

      Adventures with Anxiety is a mix of both a journey story/game and a maze story/game, because players get to go through a storyline, and it incorporates real-world problem solving based on the player's preferences.

    3. This kind of narrative structure need not be limited to such simplistic content or to an explicitly mazelike interface.

      Really, mazes can come from a variety of different settings; is this why they are probably one of the most popular forms of in-game storytelling?

    4. Rapture of the Rhizome

      The Rhizome style of game can be freeing from anxiety because there is freedom from consequence, and any point can lead to another. This is very different from Adventures with Anxiety, where your actions will have a set path towards a set ending.

    5. one enacts a story of wandering, of being enticed in conflicting directions, of remaining always open to surprise, of feeling helpless to orient oneself or to find an exit, but the story is also oddly reassuring

      While feeling helpless, there is also a sense of comfort in not having an end goal. You are able to explore more and take time to really consider things without worrying about reaching the end goal. People are able to spend as much or as little time as they like at specific points in the game.

    6. The postmodern hypertext tradition celebrates the indeterminate text as a liberation from the tyranny of the author and an affirmation of the reader’s freedom of interpretation.

      The postmodern hypertext can be understood as the freedom of choice that a reader holds, which directly reflects the purpose of Adventures with Anxiety: a game that interweaves liberation from anxiety with liberation from the tyranny of the author.

    7. navigational space of the computer

      The internet has made it possible to gain more agency within games/stories such as "Adventures With Anxiety," where each player has options to choose from about how the story/game should move forward.

    8. But in a narrative experience not structured as a win-lose contest the movement forward has the feeling of enacting a meaningful experience

      This comment made the game choices in Adventures with Anxiety a lot more meaningful. The game initially followed the "win-lose contest" format, but when it wanted to talk about something serious, such as dealing with anxiety, it got rid of the win-lose format.

    9. are related to mazes but offer additional opportunities for exercising agency.

      How are mazes and journey stories different? Could a film or game potentially be a combination of both?

    10. If you forget to get it, you must retrace your steps through many perils. The game is like a treasure hunt in which a chain of discoveries acts as a kind of Ariadne’s thread to lead you through the maze to the treasure at the center. (11)

      The game builds upon itself to make progress seem meaningful towards reaching a certain goal instead of simply as an optional experience within the game.

    11. After two hours of this surreal activity, my husband became restless and began asking every five minutes or so if the game was almost over.

      Ironic, considering kids often ask "how much longer?" or "when are we done?" This parallel that the author makes emphasizes that the rhizome experience can cause anyone, no matter their age, to feel emotional, whether exasperated or enthralled.

    12. Both the overdetermined form of the single-path maze adventure and the underdetermined form of rhizome fiction work against the interactor’s pleasure in navigation.

      The enjoyment of a game/story is diminished by both too little freedom (the single path maze) and too much freedom (rhizome).

    13. they offer no end point and no way out.

      Often how anxiety feels. For example, in the "Adventures with Anxiety" game, when she's on the roof, in certain scenarios she felt like there was no way out except jumping.

    1. To announce it publicly; and the penalty ––Stoning to death in the public square

      Reminds me of the "shame" scene from Game of Thrones with Cersei Lannister. It wasn't to the death, but it was a stoning regardless.

    1. “historical materialism”

      Historical materialism, like the puppet in a chess game, always wins effortlessly but relies on the hidden influence of theology, which isn't openly recognized. Even though theology is often dismissed, it plays a crucial role in helping historical materialism succeed, though it has to stay out of sight.

    1. but they aren’t particularly intelligent, because they aren’t efficient at gaining new skills

      Depends on what you mean by "efficient." A computer can play a game thousands of times in a second, whereas it could take a human hours to play the same game one time. It took AlphaZero only NINE hours to play those forty-four million games and master the game of chess better than any human (or other computer program) in history. Which is more efficient?
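The throughput implied by the comment's figures (44 million games in nine hours, from the AlphaZero chess result it cites) can be made concrete with a line of arithmetic:

```python
# Back-of-the-envelope rate implied by 44 million self-play games in 9 hours.
games = 44_000_000
seconds = 9 * 3600

per_second = games / seconds
print(f"{per_second:.0f} games per second")  # about 1358 games per second
```

At roughly 1,400 games every second, the "plays thousands of times while a human plays once" point is not an exaggeration.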

    1. “Gamification is a good motivator, because one key aspect is reward, which is very powerful,” said Schwartz. The downside? Rewards are specific to the activity at hand, which may not extend to learning more generally. “If I get rewarded for doing math in a space-age video game, it doesn’t mean I’m going to be motivated to do math anywhere else.”Gamification sometimes tries to make “chocolate-covered broccoli,” Schwartz said, by adding art and rewards to make speeded response tasks involving single-answer, factual questions more fun. He hopes to see more creative play patterns that give students points for rethinking an approach or adapting their strategy, rather than only rewarding them for quickly producing a correct response.

      Gamification is an issue: how to balance skill and intrinsic motivation against motivation for a reward.

    1. It simply means that like every other social situation, gamesinform how we interpret and act in the social contexts we find ourselves in

      This is true. I find that people become a different version of themselves when they're in their competitive nature, whether it's a board game or a sports game, they become different people and act differently based on the people playing with/against them and the people spectating.

    2. They persist through time and may evenaffect the general sense of self-worth the player has, if the emotional reac-tion to the player’s engagement with the game was strong enough.

      This is seen more in sports or highly competitive environments. As someone who has done club volleyball and basketball, some of my teammates were entirely affected by one bad game they had and the reaction of their fellow teammates or coaches. A bad experience can cause a player to stop liking the game, and sometimes cause them to not want to play the sport anymore.

    3. It simply means that like every other social situation, gamesinform how we interpret and act in the social contexts we find ourselves in.

      Especially with a close group of friends or family, the way we act while playing a game is changed. Games always come with rule-bending or disagreements and how 'much' we care is affected by who we are with as explained in the sentence before the highlighted.

    1. Were I dressed in armor on a high horse,there is no man here to match me—their might is so feeble.And so I crave in this court only a Christmas gamesince it’s the holidays, and you here are young and merry.If any in this house here holds that he is brave, 285if so bold be his blood or his brain be so wildthat he sternly dares to trade one strike for one strike,then I will give him as my gift this costly weapon,this axe—it’s heavy enough to handle as he pleases;and I’ll bear the first strike, here baring my neck as I kneel. 290If any fellow be so fierce as to try it,let him hasten to me and take hold of this weapon—I hand it over and he can have it as his own—and I will take a hit from him, stone-still on this floor,provided you agree to this pact: that I may give a hit to him, 295so says I!Yet a rest I will allow,till a year and a day go by.Come quick, and let’s see nowif any dare reply!

      The game

    1. I honestly feel Mad Men was the last show where everyone immediately got online to talk about it after the episode aired, en masse,

      I disagree with this statement, because I think there are a few shows that have come out recently where we run straight to the internet to engage in discourse about them: Game of Thrones (and related series), Euphoria, Power, etc. There are plenty.

    1. Reviewer #2 (Public review):

      Wnt signaling is the name given to a cell-communication mechanism that cells employ to inform on each other's position and identity during development. In cells that receive the Wnt signal from the extracellular environment, intracellular changes are triggered that cause the stabilization and nuclear translocation of β-catenin, a protein that can turn on groups of genes referred to as Wnt targets. Typically these are genes involved in cell proliferation. Genetic mutations that affect Wnt signaling components can therefore affect tissue expansion. Loss of function of APC is a drastic example: APC is part of the β-catenin destruction complex, and in its absence, β-catenin protein is not degraded and constitutively turns on proliferation genes, causing cancers in the colon and rectum. And here lies the importance of the finding: β-catenin has for long been considered to be regulated almost exclusively by tuning its protein turnover. In this article, a new aspect is revealed: Ctnnb1, the gene encoding for β-catenin, possesses tissue-specific regulation with transcriptional enhancers in its vicinity that drive its upregulation in intestinal stem cells. The observation that there is more active β-catenin in colorectal tumors not only because the broken APC cannot degrade it, but also because transcription of the Ctnnb1 gene occurs at higher rates, is novel and potentially game-changing. As genomic regulatory regions can be targeted, one could now envision that mutational approaches aimed at dampening Ctnnb1 transcription could be a viable additional strategy to treat Wnt-driven tumors.

  3. drive.google.com drive.google.com
    1. She alleged that the coaches told her she was the team’s tallest player and needed to play in that night’s game. Plaintiff claimed they observed her shaking and having difficulty participating during warm-ups but still played her in the game. The plaintiff’s mother took her to the hospital after the second game. Plaintiff alleged that the delay in receiving medical treatment caused her to suffer an exacerbation of her neurological condition.

      Not addressing the player's issues during warm-up hurt the athlete. She was having a neurological problem that went unaddressed even after it was witnessed. They played her anyway because she was tall and they wanted her for that game.

  4. Aug 2024
    1. the end goal in a game is partially constituted by the constraints on how you got it

      The effort placed by YOU to get to the goal -> the process. What were the obstacles you overcame in order to reach the goal of the game?

    2. connect us not just to each other in the moment and to each other across cultures, really, but just like you said, to the humans of the past and the cultures of the past

      Through games, we can learn about the beliefs and practices different cultures had during the time period when the game was created. Therefore, providing a reflection of what was expected of them at the time due to societal norms.

    3. people across time have played that same game and negotiated the same board with those same pieces and same options for moves.

      It makes me wonder how they negotiated the rules to the games we know today. For example, Uno, Loteria, Hangman. How were they able to collaborate without letting their beliefs about how the game should be played take over?

    1. “It’s a bit like a video game,” Prof. DeCarlo said. “And we’re able to measure everything all at once.”

      I would like to know how people go about measuring these chemicals and what goes into collecting all of the data.

    2. pollution levels in real time, and even follow plumes to try to determine their source. “It’s a bit like a video game,” Prof. DeCarlo said. “And we’re able to measure everything all at once.”

      Do the vans' own emissions affect the results? Or are the vans electric rather than gas-powered?

    1. Cohen’s fatal injury happened the same day a 16-year-old football player from Alabama was fatally injured during his school’s season opener. Caden Tellier, the quarterback for John T. Morgan Academy in Selma, suffered a brain injury Friday night, Alabama Independent School Association Executive Director Michael McLendon told CNN in a statement. His death was announced Saturday. Caden’s family decided to donate the teen’s organs, his mother wrote on Facebook. “Caden is still fighting hard in his earthly body as he prepares for this final act of generosity to bring new life to others,” Arsella Slagel Tellier posted Tuesday. “We continue to pray for those whose lives will be forever changed by his gifts.”

      Hi @CNN - this is so inappropriate. Thanks.

    1. the world is not replete with divisions of power and privilege that skew one’s opportunities within it, predetermining possibilities through a game of social and economic fate

      this statement highlights the "can-do" attitude of the entrepreneurial subject. Szeman isn't describing reality, but the way that entrepreneurs see it and process it ...

    1. Furthermore, although a program exists that has enabled a computer to win against the master of chess, no such program that reaches the level of chess exists for shogi or go. The reason is that, in general, as the depth of reading ahead in a game increases, the number of cases that must follow increases explosively. In this respect, computers cannot yet cope well with shogi and go, as they can with chess. We must confront

      Despite advances, computers still struggle with games like shogi and go due to the explosive growth in possible moves.
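The explosive growth described above can be made concrete with a little arithmetic: a full-width search to depth d visits roughly b^d positions, where b is the average branching factor (legal moves per position). The branching factors below are commonly cited rough estimates, not figures from the quoted text; a minimal sketch:

```python
# Approximate average branching factors (commonly cited estimates,
# not figures from the quoted text): chess ~35, shogi ~80, go ~250.
def tree_size(branching_factor: int, depth: int) -> int:
    """Rough number of positions a full-width search visits at a given depth."""
    return branching_factor ** depth

for game, b in [("chess", 35), ("shogi", 80), ("go", 250)]:
    print(f"{game:>5}: ~{tree_size(b, 6):.2e} positions at depth 6")
```

At depth 6, go's tree is already roughly five orders of magnitude larger than chess's, which is why brute-force search scaled to chess long before shogi or go.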

  5. inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
    1. Play is a second method for encouraging openness. While meditation focuses on interior, individual states, a playful environment is deeply participatory.

      Exactly. It is very necessary for teachers to add interesting games in class to create a relaxed feeling. Encouraging each child to join in the game will improve their concentration and help them absorb knowledge quickly.

    1. The purpose of this article is not to enter the definitional game but rather to ask whether focusing on fascism is politically useful for thinking about America’s political future. Thinking about fascism in our present moment requires, in my view, a focus on four issues: first, a hard look at salient features and outcomes of the Trump presidency; second, a view of fascism that focuses on historical methodology and the question of comparison across time and space; third, a revisitation of empirical evidence that asks what was happening in Europe in the 1930s—particularly Italy, where fascism began; and last, the question of political strategy—what is to be done?

      Author's Game Plan: Focus- is fascism a concept worth invoking in current political discussions of the US? Distinguishes this argument from "is Trump fascist?"

      How will they do this? 1. features and outcomes of Trump presidency 2. fascism comparisons and methods 3. Italy Fascism- where it began? 4. Next steps- so what?

    1. Reviewer #3 (Public Review):

      Summary:<br /> In this paper, the authors demonstrate the inevitability of the emergence of some degree of spatial information in sufficiently complex systems, even those that are only trained on object recognition (i.e. not "spatial" systems). As such, they present an important null hypothesis that should be taken into consideration for experimental design and data analysis of spatial tuning and its relevance for behavior.

      Strengths:<br /> The paper's strengths include the use of a large multi-layer network trained in a detailed visual environment. This illustrates an important message for the field: that spatial tuning can be a result of sensory processing. While this is a historically recognized and often-studied fact in experimental neuroscience, it is made more concrete with the use of a complex sensory network. Indeed, the manuscript is a cautionary tale for experimentalists and computational researchers alike against blindly applying and interpreting metrics without adequate controls.

      Weaknesses:<br /> However, the work has a number of significant weaknesses. Most notably: the degree and quality of spatial tuning is not analyzed to the standards of evidence historically used in studies of spatial tuning in the brain, and the authors do not critically engage with past work that studies the sensory influences of these cells; there are significant issues in the authors' interpretation of their results and its impact on neuroscientific research; the ability to linearly decode position from a large number of units is not a strong test of spatial information, nor is it a measure of spatial cognition; and the authors make strong but unjustified claims as to the implications of their results in opposition to, as opposed to contributing to, work being done in the field.

      The first weakness is that the degree and quality of spatial tuning that emerges in the network is not analyzed to the standards of evidence that have been used in studies of spatial tuning in the brain. Specifically, the authors identify place cells, head direction cells, and border cells in their network and their conjunctive combinations. However, these forms of tuning are the most easily confounded by visual responses, and it's unclear if their results will extend to forms of spatial tuning that are not. Further, in each case, previous experimental work to further elucidate the influence of sensory information on these cells has not been acknowledged or engaged with.

      For example, consider the head direction cells in Figure 3C. In addition to increased activity in some directions, these cells also have a high degree of spatial nonuniformity, suggesting they are responding to specific visual features of the environment. In contrast, the majority of HD cells in the brain are only very weakly spatially selective, if at all, once an animal's spatial occupancy is accounted for (Taube et al 1990, JNeurosci). While the preferred orientations of these cells are anchored to prominent visual cues, when they rotate with changing visual cues the entire head direction system rotates together (cells' relative orientation relationships are maintained, including those that encode directions facing AWAY from the moved cue), and thus these responses cannot simply be independent sensory-tuned cells responding to the sensory change (Taube et al 1990 JNeurosci, Zugaro et al 2003 JNeurosci, Ajbi et al 2023).

      As another example, the joint selectivity of detected border cells with head direction in Figure 3D suggests that they are "view of a wall from a specific angle" cells. In contrast, experimental work on border cells in the brain has demonstrated that these are robust to changes in the sensory input from the wall (e.g. van Wijngaarden et al 2020), or that many of them are not directionally selective (Solstad et al 2008).

      The most convincing evidence of "spurious" spatial tuning would be the emergence of HD-independent place cells in the network, however, these cells are a small minority (in contrast to hippocampal data, Thompson and Best 1984 JNeurosci, Rich et al 2014 Science), the examples provided in Figure 3 are significantly more weakly tuned than those observed in the brain, and the metrics used by the authors to quantify place cell tuning are not clearly defined in the methods, but do not seem to be as stringent as those commonly used in real data. (e.g. spatial information, Skaggs et al 1992 NeurIPS).

      Indeed, the vast majority of tuned cells in the network are conjunctively selective for HD (Figure 3A). While this conjunctive tuning has been reported, many units in the hippocampus/entorhinal system are *not* strongly HD selective (Muller et al 1994 JNeurosci, Sangoli et al 2006 Science, Carpenter et al 2023 bioRxiv). Further, many studies have been done to test and understand the nature of sensory influence (e.g. Acharya et al 2016 Cell), and these cells tend to have a complex relationship with a variety of sensory cues, which cannot readily be explained by straightforward sensory processing (rev: Poucet et al 2000 Rev Neurosci, Plitt and Giocomo 2021 Nat Neuro). E.g. while some place cells are sometimes reported to be directionally selective, this directional selectivity is dependent on behavioral context (Markus et al 1995, JNeurosci), and emerges over time with familiarity to the environment (Navratiloua et al 2012 Front. Neural Circuits). Thus, the question is not whether spatially tuned cells are influenced by sensory information, but whether feed-forward sensory processing alone is sufficient to account for their observed tuning properties and responses to sensory manipulations.

      These issues indicate a more significant underlying issue of scientific methodology relating to the interpretation of their result and its impact on neuroscientific research. Specifically, in order to make strong claims about experimental data, it is not enough to show that a control (i.e. a null hypothesis) exists, one needs to demonstrate that experimental observations are quantitatively no better than that control.

      Where the authors state that "In summary, complex networks that are not spatial systems, coupled with environmental input, appear sufficient to decode spatial information." what they have really shown is that it is possible to decode *some degree* of spatial information. This is a null hypothesis (that observations of spatial tuning do not reflect a "spatial system"), and the comparison must be made to experimental data to test if the so-called "spatial" networks in the brain have more cells with more reliable spatial info than a complex-visual control.

      Further, the authors state that "Consistent with our view, we found no clear relationship between cell type distribution and spatial information in each layer. This raises the possibility that "spatial cells" do not play a pivotal role in spatial tasks as is broadly assumed." Indeed, this would raise such a possibility, if 1) the observations of their network were indeed quantitatively similar to the brain, and 2) the presence of these cells in the brain were the only evidence for their role in spatial tasks. However, 1) the authors have not shown this result in neural data, they've only noticed it in a network and mentioned the POSSIBILITY of a similar thing in the brain, and 2) the "assumption" of the role of spatially tuned cells in spatial tasks is not just from the observation of a few spatially tuned cells. But from many other experiments including causal manipulations (e.g. Robinson et al 2020 Cell, DeLauilleon et al 2015 Nat Neuro), which the authors conveniently ignore. Thus, I do not find their argument, as strongly stated as it is, to be well-supported.

      An additional weakness is that linear decoding of position is not a strong test, nor is it a measure of spatial cognition. The ability to decode position from a large number of weakly tuned cells is not surprising. However, based on this ability to decode, the authors claim that "'spatial' cells do not play a privileged role in spatial cognition". To justify this claim, the authors would need to use the network to perform e.g. spatial navigation tasks, then investigate the network's ability to perform these tasks when tuned cells were lesioned.

      Finally, I find a major weakness of the paper to be the framing of the results in opposition to, as opposed to contributing to, the study of spatially tuned cells. For example, the authors state that "If a perception system devoid of a spatial component demonstrates classically spatially-tuned unit representations, such as place, head-direction, and border cells, can "spatial cells" truly be regarded as 'spatial'?" Setting aside the issue of whether the perception system in question does indeed demonstrate spatially-tuned unit representations comparable to those in the brain, I ask "Why not?" This seems to be a semantic game of reading more into a name than is necessarily there. The names (place cells, grid cells, border cells, etc) describe an observation (that cells are observed to fire in certain areas of an animal's environment). They need not be a mechanistic claim (that space "causes" these cells to fire) or even, necessarily, a normative one (these cells are "for" spatial computation). This is evidenced by the fact that even within e.g. the place cell community, there is debate about these cells' mechanisms and function (eg memory, navigation, etc), or if they can even be said to serve only a single function. However, they are still referred to as place cells, not as a statement of their function but as a history-dependent label that refers to their observed correlates with experimental variables. Thus, the observation that spatially tuned cells are "inevitable derivatives of any complex system" is itself an interesting finding which *contributes to*, rather than contradicts, the study of these cells. It seems that the authors have a specific definition in mind when they say that a cell is "truly" "spatial" or that a biological or artificial neural network is a "spatial system", but this definition is not stated, and it is not clear that the terminology used in the field presupposes their definition.

      In sum, the authors have demonstrated the existence of a control/null hypothesis for observations of spatially-tuned cells. However, 1) It is not enough to show that a control (null hypothesis) exists, one needs to test if experimental observations are no better than control, in order to make strong claims about experimental data, 2) the authors do not acknowledge the work that has been done in many cases specifically to control for this null hypothesis in experimental work or to test the sensory influences on these cells, and 3) the authors do not rigorously test the degree or source of spatial tuning of their units.
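The reviewer's point that linearly decoding position from many weakly tuned units is not a strong test can be illustrated with a toy simulation. Everything below is hypothetical (simulated 1-D positions and broad, noisy Gaussian tuning curves, not the paper's network or any neural data); a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_units = 2000, 200

# Simulated 1-D positions and weakly tuned units: each unit has a broad
# Gaussian tuning curve (random center, sigma = 0.3 on a [0, 1] track)
# buried under noise larger than the tuning signal itself.
pos = rng.uniform(0, 1, n_samples)
centers = rng.uniform(0, 1, n_units)
rates = np.exp(-((pos[:, None] - centers[None, :]) ** 2) / (2 * 0.3 ** 2))
rates += rng.normal(0, 1.0, rates.shape)  # heavy per-unit noise

# Ordinary least-squares linear decoder fit on the first half of the data.
train, test = slice(0, 1000), slice(1000, None)
X = np.c_[rates, np.ones(n_samples)]  # append a bias column
w, *_ = np.linalg.lstsq(X[train], pos[train], rcond=None)
err = np.abs(X[test] @ w - pos[test]).mean()
print(f"mean absolute decoding error: {err:.3f} "
      "(always guessing the mean would give ~0.25)")
```

Even though each simulated unit is dominated by noise, pooling a couple hundred of them yields decoding well above chance, so decodability alone says little about whether any single unit is meaningfully "spatial".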

    1. I SAW YUDHISHTHIRA’S PROSPERITY, I HAVE HAD NO PEACE

      The jealousy held by Duryodhana over his cousin Arjuna explores an interesting theme and has significant implications. As the eldest of the Kauravas, it makes sense that he holds a lot of resentment and envy for Arjuna, as he is one of the Pandavas, and it is mainly rooted in the fact that they are in rival families. This spite is the reason why he wants to play the game of dice, leading to the eventual exile of the Pandavas. In other readings like Giwe for example, he exemplifies what it means to sacrifice for others and to help a larger group of people by putting his pride aside. On the other hand, we can see how jealousy can cause conflict and inflict harm upon others, which good leaders avoid and do the opposite of. Not to mention, we can see the destructive potential of jealousy and how it can grow into even larger conflicts. CC BY Ajey Sasimugunthan (contact)

    1. Alas, their ruthless fate, unhappy friends!     But in what manner, tell me, did they perish?

      Diction such as "ruthless" to describe the fate of the Persian loss highlights the harshness of the sequence of events leading to the Persians' defeat. Not to mention, mentioning "unhappy friends" points to the people mourning the loss of loved ones, especially in a war they were expected to win. People are often at a loss for words after a disaster happens, and the first thing they think about is how it happened. This is what the Persian people are going through as they learn to accept the harsh reality of their defeat; they will need to cut their losses in order to bounce back. Similar to huge upsets in sports games, the teams are usually at a loss for words as they do not understand how they let themselves lose, especially having been in a very favorable situation. All of this serves as a lesson for readers: never count people out when their backs are against the wall, because that is when they might perform at their best and surprise you when it is least expected. CC BY Ajey Sasimugunthan (contact)

    1. Reviewer #3 (Public Review):

      Summary:

      Well-illustrated new material is documented for Acanthomeridion, a formerly incompletely known Cambrian arthropod. The formerly known facial sutures are proposed to be associated with ventral plates that the authors homologise with the free cheeks of trilobites (although alternative homologies are also tested). An update of a published phylogenetic dataset permits reconsideration of whether dorsal ecdysial sutures have a single origin or multiple origins in trilobites and their relatives.

      Strengths:

      Documentation of an ontogenetic series makes a sound case that the proposed diagnostic characters of a second species of Acanthomeridion are variation within a single species. New microtomographic data shed light on appendage morphology that was not formerly known. The new data on ventral plates and their association with the ecdysial sutures are valuable in underpinning homologies with trilobites.

      I think the revision does a satisfactory job of reconciling the data and analyses with the conclusions drawn from them. Referee 1's valid concerns about whether a synonymy of Acanthomeridion anacanthus is justified have been addressed by the addition of a length/width scatterplot in Figure 6. Referee 2's doubts about homology between the librigenae of trilobites and ventral plates of Acanthomeridion have been taken on board by re-running the phylogenetic analyses with a coding for possible homology between the ventral plates and the doublure of olenelloid trilobites. The authors sensibly added more trilobite terminals to the matrix (including Olenellus) and did analyses with and without constraints for olenelloids being a grade at the base of Trilobita. My concerns about counting how many times dorsal sutures evolved on a consensus tree have been addressed (the authors now play it safe and say "multiple" rather than attempting to count them on a bushy topology). The treespace visualisation (Figure 9) is a really good addition to the revised paper.

      Weaknesses:

      The question of how many times dorsal ecdysial sutures evolved in Artiopoda was addressed by Hou et al (2017), who first documented the facial sutures of Acanthomeridion and optimised them onto a phylogeny to infer multiple origins, as well as in a paper led by the lead author in Cladistics in 2019. Du et al. (2019) presented a phylogeny based on an earlier version of the current dataset wherein they discussed how many times sutures evolved or were lost based on their presence in Zhiwenia/Protosutura, Acanthomeridion and Trilobita. The answer here is slightly different (because some topologies unite Acanthomeridion and trilobites). This paper is not a game-changer because these questions have been asked several times over the past seven years, but there are solid, worthy advances made here.

      I'd like to see some of the most significant figures from the Supplementary Information included in the main paper so they will be maximally accessed. The "stick-like" exopods are not best illustrated in the main paper; their best imagery is in Figure S1. Why not move that figure (or at least its non-redundant panels) as well as the reconstruction (Figure S7) to the main paper? The latter summarises the authors' interpretation that a large axe-shaped hypostome appears to be contiguous with ventral plates. The specimens depict evidence for three pairs of post-antennal cephalic appendages but it's a bit hard to picture how they functioned if there's no room between the hypostome and ventral plates. Also, a comment is required on the reconstruction involving all cephalic appendages originating against/under the hypostome rather than the first pair being paroral near the posterior end of the hypostome and the rest being post-hypostomal as in trilobites.

    1. “You picked a hard row to hoe,” she said, and went back to her chess game. The rows were tubs of ice cream. The hoe was that scoop.

      I'm happy she broke that quote down, lol

    1. Many of today's electronics are basically specialized computers, though we don't always think of them that way.

      There are a lot of other electronics that can be classified as specialized computers. Some examples are phones, smartwatches, game consoles, and TVs.