10,000 Matching Annotations
  1. Oct 2024
    1. The term didn’t really take off and become weaponized, however, until the growing resentment of “outsiders” and indie games that would culminate in Gamergate, after which it was retroactively applied with vitriol to games released much earlier like Dear Esther (originally 2008) and To the Moon (2011) (Clark 2017)

Mainstream game companies slammed indie game companies for making walking an instrumental part of gameplay. Mainstream companies still use walking as a big part of their games to advance the plot, even if they offer other gameplay mechanics. Mainstream companies did not want to compete with indie games, so they made indie games seem undesirable by portraying them as not being games at all.

    2. In a game without spoken dialogue, these locations become a language for conveying meaning, and the player’s journey one of uncovering, step by literal step, the self of the character whose story is being told.

While I agree that walking simulators allow players to grasp an understanding of the characters and story from solely exploring a physical space, the inclusion of Sam's dialogue in Gone Home definitely helped create a more immersive experience centered on discovering Sam's motivations.

    3. An ugly corollary to this argument, advanced by some, was that “real games” shouldn’t be about the disenfranchised. Game stories shouldn’t be about women or queer people—like Dear Esther, like Gone Home, like many of the games the genre would eventually include—nor should such people be included among their creators.

Certain things shouldn't be included in games or else they are not considered real games. ----> What are considered real games? It's not like women and queer people do not exist, so why does including them make a game not "real"? Wouldn't it make the game less realistic if it didn't include these things?

    4. Where procedural games differ from walking simulators is in their lack of curation: they let you walk wherever you want, including perhaps into uninteresting places, rather than down a well-prepared path that tells a particular story.

      Even though I am not a huge gamer myself, I have noticed that when I do play games, I often do find myself getting frustrated with parts of games that I deem “unnecessary” to the progression of the story. For example, even during my gameplay of “Gone Home,” I began to get frustrated when I couldn’t find the code to open the safe in the basement, which was making the gameplay take longer than anticipated. Overall, “Gone Home” is an uncomfortable game to play as it goes against many norms and conventions present in more typical online games, such as the existence of a clear and explicit objective or goal that keeps the player interested in the outcome (ex. “winning” the game, defeating a monster, etc.).

    5. The term didn’t really take off and become weaponized, however, until the growing resentment of “outsiders” and indie games that would culminate in Gamergate,

      The walking simulator term was popularized as an offensive against what many "gamers" saw as a threat to the traditional style of games that they had become familiar with and even integrated as a part of their identity. They saw "walking simulators" not as a new alternative form of game but as an outsider threat to what they knew.

    6. making similar demands on the reader/player to construct identities and narratives out of competing (and even conflicting) perspectives.

I see Gone Home as working similarly to many mystery books, which could be the reason why many people craving the familiarity of structured games may not enjoy the walking game genre. The player is somewhat forced to piece things together for themselves instead of following the game's plot. Even in horror games, the goal is often to "survive" or to "not get scared," yet Gone Home didn't fall into any obvious category at first glance.

    7. These games were originally dubbed “walking simulators” as an insult to exclude them and their creators from being considered “real games” or real game makers.

The fact that "walking" games were called "walking simulators" in a negative way surprised me, as "walking" games are so common nowadays that I didn't even think they would have faced pushback in the 2010s. I also didn't understand why these games would be excluded in the first place, since a new form will attract more gamers and people, bringing games more revenue and attention.

    1. he showed up to the game (10 minutes late) with his secretary (future wife) and took his regular place.

      What would have happened if Campbell had not gone to the game? There was still so much anger towards him that I wonder if the riot would have just been delayed or if anything had happened at all.

    1. “Because I always try so hard to win and had my troubles in Boston, I was suspended. At playoff time, it hurts not to be in the game with the boys. However, I want to do what is good for the people of Montreal and the team. So that no further harm will be done, I would like to ask everyone to get behind the team and to help the boys win from the Rangers and Detroit. I will take my punishment and come back next year to help the club and younger players to win the Cup.” His words had a palliative effect. The next night nobody threw galoshes, nobody broke any more windows, nobody stopped streetcars.

      I'm glad the people listened to his words and that he was so calm about it even after probably being upset that he was suspended.

    2. “Bailey tried to gouge his [Richard’s] eyes out,” Red Storey, who refereed that game, later told a reporter, “Rocket just went berserk.”

      Wow hockey is a lot more violent than I thought

    1. During the first round of this exercise, students inevitably take so many fish that there are none left in the lake. Students then discuss what has happened and what they ought to do differently in the next round. Some students have strong intuitions that everybody should take an equal amount, while others insist that all that matters is that in the end there are enough fish left to repopulate the lake. Not only is this exercise pedagogically engaging, but it leads students to develop proposals and to evaluate them critically.

In class, another student and I were presented with a pile of money and given two options: we could either steal the money or leave it. If we both chose to steal, we would both end up with nothing. If we both left the money, neither of us would get anything either. However, if one of us stole and the other left the money, the person who stole would get the money, while the other would walk away with nothing. For me, this scenario is a reflection of both philosophical dilemmas and social constructs, particularly in the context of game theory and moral philosophy. It echoes the Prisoner’s Dilemma, a situation where individual self-interest leads to worse outcomes for both parties than cooperation would. This scenario also reflects the role of trust in social interactions. If neither party trusts the other to act cooperatively, both might choose to steal, resulting in mutual loss. Social structures like laws, norms, and ethical guidelines exist to cultivate trust and reduce the risks of selfish behavior, enabling cooperation.

  2. Sep 2024
    1. However, nothing may have happened if Campbell hadn't made a tactical error — he showed up to the game (10 minutes late) with his secretary (future wife) and took his regular place.

      I wonder why he showed up. He had to know that something was going to happen if he did show up.

    1. I divide students into groups and ask them to imagine that each group is a family subsisting by fishing from a lake

This could also be seen as game theory to an extent, which is more of an economic concept. Keeping the two fish in the lake is also an interesting idea. It creates an additional layer, where we could accomplish feeding everyone this year only to make everyone suffer the following year.

    2. The imagination allows Plato to crystalize his answer to the question of how we ought to live into a vision that we can subject to critical examination. This is the constructive step that we so often fail to take.

I personally believe that Plato was able to crystalize his answer on how to live, subject to critical examination, because he saw himself in the fantastic imagination in The Republic from the third-person perspective, as a reader. When I was traveling in Japan I came across a book whose name I forget; however, it gave me an inspirational insight: to be the person watching your life as a movie in a theater. We often feel sentimental or jealous when we see others living their everyday life from a third-person perspective, from washing dishes in a dimly lit kitchen to just having a casual family dinner. Seeing yourself and your life as a movie lets you see yourself objectively, and seeing the movie (your life) objectively allows the audience, who is only yourself, to critically analyze your actions, emotions, and thoughts. Another method is to see yourself from above, like a drone flying over your head, and to see yourself as one of the players in a game. This may be more effective than imagining your life as a movie in a theater; however, this method's risk becomes uncorrelated to the reward as you get older, because you have more responsibility for your actions.

    1. After a successful season with Sporting that brought the young player to the attention of Europe’s biggest football clubs, Ronaldo signed with English powerhouse Manchester United in 2003. He was an instant sensation and soon came to be regarded as one of the best forwards in the game. His finest season with United came in 2007–08, when he scored 42 League and Cup goals and earned the Golden Shoe award as Europe’s leading scorer, with 31 League goals. After helping United to a Champions League title in May 2008, Ronaldo captured Fédération Internationale de Football Association (FIFA) World Player of the Year honors for his stellar 2007–08 season. He also led United to an appearance in the 2009 Champions League final, which they lost to FC Barcelona.

Ronaldo's transfer to Man United and his achievements

    1. in

I see this happening in the game industry and it actually pains me a lot. There is an argument to be made, though, about values. I remember in one game (Genshin) there are regions essentially representative of real-life cultures, and there was backlash when the African/Latin American region had zero dark-skinned characters. I think in that situation it is justifiable, since these are real people being represented, even if it is fictional. In most cases, though, if the artwork was never good in the first place, it is difficult to justify inclusion.

    1. Welcome back and in this lesson I want to cover a few really important topics which will be super useful as you progress your general IT career, but especially so for anyone who is working with traditional or hybrid networking.

Now I want to start by covering what a VLAN is and why you need them, then talk a little bit about trunk connections and finally cover a more advanced version of VLANs called Q in Q.

      Now I've got a lot to cover so let's just jump in and get started straight away.

      Let's start with what I've talked about in my technical fundamentals lesson so far.

      This is a physical network segment.

      It has a total of eight devices, all connected to a single, layer 2 capable device, a switch.

      Each LAN, as I talked about before, is a shared broadcast domain.

      Any frames which are addressed to all Fs will be broadcast on all ports of the switch and reach all devices.

      Now this might be fine with eight devices but it doesn't scale very well.

      Every additional device creates yet more broadcast traffic.

      Because we're using a switch, each port is a different collision domain and so by using a switch rather than a layer 1 hub we do improve performance.

      Now this local network also has three distinct groups of users.

      We've got the game testers in orange, we've got sales in blue and finance in green.

      Now ideally we want to separate the different groups of devices from one another.

      In larger businesses you might have a requirement for different segments of the network from normal devices, for servers and for other infrastructure.

      Different segments for security systems and CCTV and maybe different ones for IoT devices and IP telephony.

      Now if we only had access to physical networks this would be a challenge.

      Let's have a look at why.

Let's say that we take each of the three groups and split them into either different floors or even different buildings.

      On the left finance, in the middle game testers and on the right sales.

      Each of these buildings would then have its own switch and the switches in those buildings would be connected to devices also in those buildings.

      Which for now is all the finance, all the game tester and all the sales teams and machines.

      Now these switches aren't connected and because of that each one is its own broadcast domain.

      This would be how things would look in the real world if we only had access to physical networking.

And this is fine if different groups don't need to communicate with each other, so we don't require cross-domain communication.

      The issue right now is that none of these switches are connected so the switches have no layer 2 communications between them.

      If we wanted to do cross building or cross domain communications then we could connect the switches.

      But this creates one larger broadcast domain which moves us back to the architecture on the previous screen.

      What's perhaps more of a problem in this entirely physical networking world is what happens if a staff member changes role but not building.

      In this case moving from sales to game tester.

      In this case you need to physically run a new cable from the middle switch to the building on the right.

      If this happens often it doesn't scale very well and that is why some form of virtual local area networking is required.

      And that's why VLANs are invaluable.

Let's have a look at how we support VLANs using layer 2 of the OSI 7-layer model.

      This is a normal Ethernet frame.

In the context of this lesson what's important is that it has source and destination MAC address fields together with a payload.

      Now the payload carries the data.

      The source MAC address is the MAC address of the device which is creating and sending the frame.

The destination MAC address can contain a specific MAC address, which means that it's a unicast frame, a frame that's destined for one other device.

      Or it can contain all F's which is known as a broadcast.

      And it means that all of the devices on the same layer 2 network will see that frame.

      What a standard frame doesn't offer us is any way to isolate devices into different parts, different networks.

And that's where a new standard comes in handy, which is known as 802.1Q, also known as .1Q. .1Q changes the format of the standard Ethernet frame by adding a new 32-bit field in the middle of the frame.

      The maximum size of the frame as a result can be larger to accommodate this new data. 12 bits of this 32-bit field can be used to store values from 0 through to 4095.

      This represents a total of 4096 values.

      This is used for the VLAN ID or VID.

      A 0 in this 12-bit value signifies no VLAN and 1 is generally used to signify the management VLAN.

      The others can be used as desired by the local network admin.

What this means is that any .1Q frame can be a member of one of over 4,000 VLANs.

      And this means that you can create separate virtual LANs or VLANs in the same layer 2 physical network.
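The tag layout just described, a 16-bit TPID followed by a 16-bit field of which the low 12 bits are the VLAN ID, can be sketched in a few lines of Python. This is a toy illustration, not anything from the lesson: the helper name and the hand-built frame are invented for the example.

```python
import struct

TPID_DOT1Q = 0x8100  # EtherType value that signals an 802.1Q tag follows

def parse_vlan_tag(frame: bytes):
    """Return (vid, inner_ethertype) if the frame carries a .1Q tag, else None."""
    # Bytes 0-5: destination MAC, 6-11: source MAC, 12-13: TPID/EtherType
    (tpid,) = struct.unpack_from("!H", frame, 12)
    if tpid != TPID_DOT1Q:
        return None
    # Next 16 bits: 3 bits priority, 1 bit DEI, then the 12-bit VLAN ID
    (tci,) = struct.unpack_from("!H", frame, 14)
    vid = tci & 0x0FFF                        # 12-bit VID: values 0 to 4095
    (inner,) = struct.unpack_from("!H", frame, 16)
    return vid, inner

# Build a minimal broadcast frame tagged with VLAN 10
dst = b"\xff" * 6                             # all Fs = broadcast
src = bytes.fromhex("020000000001")
tag = struct.pack("!HH", TPID_DOT1Q, 10)      # priority/DEI zero, VID 10
frame = dst + src + tag + struct.pack("!H", 0x0800) + b"payload"
print(parse_vlan_tag(frame))                  # (10, 2048) -> VLAN 10, IPv4 inside
```

An untagged frame has a normal EtherType (for example 0x0800 for IPv4) at offset 12, so the same check cleanly distinguishes tagged from untagged frames.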

A broadcast frame, so anything that's addressed to all Fs, would only reach the devices which are in the same VLAN.

      Essentially, it creates over 4,000 different broadcast domains in the same physical network.

      You might have a VLAN for CCTV, a VLAN for servers, a VLAN for game testing, a VLAN for guests and many more.

      Anything that you can think of and can architect can be supported from a networking perspective using VLANs.

      But I want you to imagine even bigger.

      Think about a scenario where you as a business have multiple sites and each site is in a different area of the country.

      Now each site has the same set of VLANs.

      You could connect them using a dedicated wide area network and carry all of those different company specific VLANs and that would be fine.

      But what if you wanted to use a comms provider, a service provider who could provide you with this wide area network capability?

      What if the comms provider also uses VLANs to distinguish between their different clients?

      Well, you might face a situation where you use VLAN 1337 and another client of the comms provider also uses VLAN 1337.

      Now to help with this scenario, another standard comes to the rescue, 802.1AD.

      And this is known as Q in Q, also known as provider bridging or stacked VLANs.

      This adds another space in the frame for another VLAN field.

      So now instead of just the one field for 802.1Q VLANs, now you have two.

      You keep the same customer VLAN field and this is known as the C tag or customer tag.

      But you add another VLAN field called the service tag or the S tag.

      This means that the service provider can use VLANs to isolate their customer traffic while allowing each customer to also use VLANs internally.

      As the customer, you can tag frames with your VLANs and then when those frames move onto the service provider network, they can tag with the VLAN ID which represents you as a customer.

      Once the frame reaches another of your sites over the service provider network, then the S tag is removed and the frame is passed back to you as a standard .1Q frame with your customer VLAN still tagged.

      Q in Q tends to be used for larger, more complex networks and .1Q is used in smaller networks as well as cloud platforms such as AWS.
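The S-tag round trip above can be modelled as a byte-level insert and remove just after the source MAC, outside the customer's own C-tag. A rough sketch, with invented function names, ignoring real provider-edge details:

```python
import struct

TPID_DOT1Q = 0x8100   # C-tag TPID (customer VLAN, 802.1Q)
TPID_QINQ  = 0x88A8   # S-tag TPID (service VLAN, 802.1AD / Q in Q)

def push_s_tag(frame: bytes, s_vid: int) -> bytes:
    """Provider ingress: insert a service tag after the source MAC (offset 12),
    leaving the customer's own .1Q tag untouched deeper inside the frame."""
    return frame[:12] + struct.pack("!HH", TPID_QINQ, s_vid) + frame[12:]

def pop_s_tag(frame: bytes):
    """Provider egress: strip the outer S-tag, returning (s_vid, original frame)."""
    tpid, tci = struct.unpack_from("!HH", frame, 12)
    if tpid != TPID_QINQ:
        raise ValueError("no S-tag present")
    return tci & 0x0FFF, frame[:12] + frame[16:]

# Customer frame tagged with their own VLAN 1337
customer = (b"\xff" * 6 + bytes.fromhex("020000000001")
            + struct.pack("!HH", TPID_DOT1Q, 1337) + b"payload")
carried = push_s_tag(customer, 42)       # provider marks this customer as S-VID 42
s_vid, restored = pop_s_tag(carried)     # at the far site the S-tag comes off
print(s_vid, restored == customer)       # 42 True
```

The point the sketch makes is that the customer's C-tag (1337 here) survives the trip untouched, so two customers who both use VLAN 1337 never collide: the provider only ever switches on the outer S-tag.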

      For the remainder of this lesson, I'm going to focus on .1Q though if you're taking an advanced networking course of mine, I will be returning to the Q in Q topic in much more detail.

      For now though, let's move on and look visually at how .1Q works.

      This is a cut down version of the previous physical network I talked about, only this time instead of the three groups of devices we have two.

      So on the left we have the finance building and on the right we have game testers.

      Inside these networks we have switches and connected to these switches are two groups of machines.

      These switches have been configured to use 802.1Q and ports have been configured in a very specific way which I'm going to talk about now.

      So what makes .1Q really cool is that I've shown these different device types as separate buildings but they don't have to be.

Different groupings of devices can operate on the same layer 2 switch and I'll show you how that works in a second.

With 802.1Q, ports on switches are defined as either access ports or trunk ports, and an access port generally has one specific VLAN ID or VID associated with it.

      A trunk conceptually has all VLAN IDs associated with it.

      So let's say that we allocate the finance team devices to VLAN 20 and the game tester devices to VLAN 10.

We could easily pick any other numbers, remember we have over 4,000 to choose from, but for this example let's keep it simple with 10 and 20.

      Now right now these buildings are separate broadcast domains because they have separate switches which are not connected and they have devices within them.

      Two laptops connected to switch number one for the finance team and two laptops connected to switch number two for the game tester team.

Now I mentioned earlier that we have two types of switch ports in a VLAN-capable network.

      The first are access ports and the ports which the orange laptops on the right are connected to are examples of access ports.

      Access ports communicate with devices using standard Ethernet which means no VLAN tags are applied to the frames.

      So in this case the laptop at the top right sends a frame to the switch and let's say that this frame is a broadcast frame.

      When the frame exits an access port it's tagged with a VLAN that the access port is assigned to.

      In this case VLAN 10 which is the orange VLAN.

      Now because this is a broadcast frame the switch now has to decide what to do with the frame and the default behaviour for switches is to forward the broadcast out of all ports except the one that it was received on.

      For switches using VLANs this is slightly different.

      First it forwards to any other access ports on the same VLAN but the tagging will be removed.

      This is important because devices connected to access ports won't always understand 802.1Q so they expect normal untagged frames.

In addition the switch will forward frames over any trunk ports.

      A trunk port in this context is a port between two switches for example this one between switch two and switch one.

      Now a trunk port is a connection between two dot 1Q capable devices.

      It forwards all frames and it includes the VLAN tagging.

So in this case the frame will also be forwarded over to switch one tagged as VLAN 10, which is the game tester VLAN.

So tagged .1Q frames only get forwarded to other access ports on the same VLAN, with the tag stripped, or they get forwarded across trunk ports with the VLAN tagging intact.

      And this is how broadcast frames work.

For unicast frames, which go to a specific single MAC address, these will either be forwarded to the access port in the same VLAN that the specific device is on, or, if the switch isn't aware of the MAC address of that device in the same VLAN, it will broadcast the frame.
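The access/trunk forwarding rules just described can be condensed into a toy model. The class and port names below are invented for illustration, and MAC learning is deliberately left out: this only shows where a broadcast frame goes and whether it leaves tagged.

```python
class VlanSwitch:
    """Toy model of .1Q flooding: access ports strip the tag, trunk ports
    keep it, and frames never cross a VLAN boundary."""

    def __init__(self, access_ports, trunk_ports):
        self.access_ports = access_ports   # port name -> access VLAN ID
        self.trunk_ports = trunk_ports     # ports carrying all VLANs, tagged

    def flood(self, in_port, vid=None):
        """Return [(out_port, tag)] for a broadcast frame; tag is None when
        the frame leaves untagged via an access port."""
        if in_port in self.access_ports:   # ingress on access: adopt port VID
            vid = self.access_ports[in_port]
        out = []
        for port, pvid in self.access_ports.items():
            if port != in_port and pvid == vid:
                out.append((port, None))   # same VLAN only, tag stripped
        for port in sorted(self.trunk_ports):
            if port != in_port:
                out.append((port, vid))    # every trunk, tag intact
        return out

sw = VlanSwitch({"p1": 10, "p2": 10, "p3": 20}, {"uplink"})
print(sw.flood("p1"))          # [('p2', None), ('uplink', 10)]
print(sw.flood("uplink", 20))  # [('p3', None)]
```

The second call models a tagged frame arriving over the trunk: it only reaches the VLAN 20 access port, and the tag comes off on the way out, matching the behaviour described for the two-switch example.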

      Now let's say that we have a device on the finance VLAN connected to switch two.

      And let's say that the bottom left laptop sends a broadcast frame on the finance VLAN.

      Can you see what happens to this frame now?

      Well first it will go to any other devices in the same VLAN using access ports meaning the top left laptop and in that case the VLAN tag will be removed.

      It will also be forwarded out of any trunk ports tagged with VLAN 20 so the green finance VLAN.

      It will arrive at switch two with the VLAN tag still there and then it will be forwarded to any access ports on the same VLAN so VLAN 20 on that switch but the VLAN tagging will be removed.

Tagging frames in this way allows you to create multiple virtual LANs or VLANs.

      With this visual you have two different networks.

The finance network in green, so the two laptops on the left and the one at the middle bottom, and then the game testing network, so VLAN 10, meaning the orange one on the right.

      Both of these are isolated.

Devices cannot communicate between VLANs, which are separate layer 2 networks, without a device operating between them such as a layer 3 router.

Both of these virtual networks operate over the top of the physical network, and it means that we can now configure this network virtually, using software configuration on the switches.

Now VLANs are how certain things within AWS, such as public and private VIFs on Direct Connect, work, so keep this lesson in mind when I'm talking about Direct Connect.

      A few summary points though that I do want to cover before I finish up with this lesson.

      First VLANs allow you to create separate layer 2 network segments and these provide isolation so traffic is isolated within these VLANs.

If you don't configure and deploy a router between different VLANs then frames cannot leave that VLAN boundary, so they're virtual networks. These are ideal if you want to configure different virtual networks for different customers, or if you want to access different networks, for example when you're using Direct Connect to access VPCs.

      VLANs offer separate broadcast domains and this is important.

      They create completely separate virtual network segments so any broadcast frames within a VLAN won't leave that VLAN boundary.

      If you see any mention of 802.1Q then you know that means VLANs.

If you see any mention of VLAN stacking or provider bridging or 802.1AD or Q in Q, this means nested VLANs.

So having a customer tag and a service tag allows you to have VLANs in VLANs. These are really useful if you want to use VLANs on your internal business network and then get wide area network connectivity from a service provider who also uses VLANs. And if you are doing any networking exams, you will need to understand Q in Q as well as 802.1Q.

      So with that being said that's everything I wanted to cover.

      Go ahead and complete this video and when you're ready I'll look forward to you joining me in the next.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      Summary:

      This manuscript by Meissner and colleagues described a novel take on a classic social cognition paradigm developed for marmosets. The classic pull task is a powerful paradigm that has been used for many years across numerous species, but its analog approach has several key limitations. As such, it has not been feasible to adopt the task for neuroscience experiments. Here the authors capture the spirit of the classic task but provide several fundamental innovations that modernize the paradigm - technically and conceptually. By developing the paradigm for marmosets, the authors leverage the many advantages of this primate model for studies of social brain functions and their particular amenability to freely-moving naturalistic approaches.

      Strengths:

The current manuscript describes one of the most exciting paradigms in primate social cognition to be developed in many years. By allowing freely-moving marmosets to engage in high numbers of trials, while precisely quantifying their visual behavior (e.g. gaze) and recording neural activity, this paradigm has the potential to usher in a new wave of research on the cognitive and neural mechanisms underlying primate social cognition and decision-making. This paradigm is an elegant illustration of how naturalistic questions can be adapted to more rigorous experimental paradigms. Overall, I thought the manuscript was well written and provided sufficient details for others to adopt this paradigm. I did have a handful of questions and requests about topics and information that could help to further accelerate its adoption across the field.

      Weaknesses:

      LN 107 - Otters have also been successful at the classic pull task (https://link.springer.com/article/10.1007/s10071-017-1126-2)

      We have added this reference to the manuscript.

      LN 151 - Can you provide a more precise quantification of timing accuracy than the 'sub-second level'. This helps determine synchronization with other devices.

      We have included more precise timing details, noting that data is stored at the millisecond level.

      Using this paradigm, the marmosets achieved more trials than in the conventional task (146 vs 10). While this is impressive, given that only ~50 are successful Mutual Cooperation trials it does present some challenges for potential neurophysiology experiments and particular cognitive questions. The marmosets are only performing the task for 20 minutes, presumably because they become sated and are no longer motivated. This seems a limitation of the task and is something worth discussing in the manuscript. Did the authors try other food rewards, reduce the amount of reward, food/water restrict the animals for more than the stated 1-3 hours? How might this paradigm be incorporated into in-cage approaches that have been successful in marmosets? Any details on this would help guide others seeking to extend the number of trials performed each day.

      We have added a discussion addressing the use of liquid rewards, minimal food and water restriction, and the potential for further optimization to increase task engagement and trial numbers. This is now reflected in the revised manuscript.

      Can you provide more details on the DLC/Anipose procedure? How were the cameras synchronized? What percentage of trials needed to be annotated before the model could be generalized? Did each monkey require its own model, or was a single one applied to all animals?

      We have added more detailed information on the DLC and Anipose tracking which can be found in the Multi-animal 3D tracking section under Materials & Methods.

      Will the schematics and more instructions on building this system be made publicly available? A number of the components listed in Table 1 are custom-designed. Although it is stated that CAD files will be made available upon request, sharing a link to these files in an accessible folder would significantly add to the potential impact of this paradigm by making it easier for others to adopt.

      We have made the SolidWorks CAD files publicly available. They can now be found in the Github repository alongside the apparatus and task code.

      In the Discussion, it would be helpful to have some discussion of how this paradigm might be used more broadly. The classic pulling paradigm typically allows one to ask a specific question about social cognition, but this task has the potential to be more widely applied to other social decision-making questions. For example, how might this task be adopted to ask some of the game-theory-type approaches common in this literature? Given the authors' expertise in this area, this discussion could serve to provide a roadmap for the broader field to adopt.

      Although this paradigm was developed specifically for marmosets, it seems to me that it could readily be adopted in other species with some modifications. Could the authors speak to this and their thoughts on what may need to be changed to be used in other species? This is particularly important because one of the advantages of the classic paradigm is that it has been used in so many species, providing the opportunity to compare how different species approach the same challenge. For example, though both chimps and bonobos are successful, their differences are notably illuminating about the nuances of their respective social cognitive faculties.

      We have expanded the discussion for the broader applications of this apparatus both for other decision-making research questions as well as its adaptability for use in other species.

      Reviewer #2 (Public Review):

      Summary:

      In this important work, Meisner et al. developed an automated apparatus (MarmoAAP) to collect a wide array of behavioral data (lever pulling, gaze direction, vocalizations) in marmoset monkeys, with the goal of modernizing the collection of behavioral data to coincide with the investigation of neurological mechanisms governing behavioral decision making in an important primate neuroscience model. The authors present a variety of "proof-of-principle" demonstrations that this apparatus can collect a wide range of behavioral data, with higher behavioral resolution than traditional methods. For example, the authors highlight that typical behavioral experiments on primate cooperation provide around 10 trials per session, while using their approach the authors were able to collect over 100 trials per 20-minute session with the MarmoAAP.

      Overall the authors argue that this approach has a few notable advantages:<br /> (1) it enhances behavioral output which is important for measuring small or nuanced effects/changes in behavior;<br /> (2) allows for more advanced analyses given the higher number of trials per session;<br /> (3) significantly reduces the human labor of manually coding behavioral outcomes and experimenter interventions such as reloading apparatuses for food or position;<br /> (4) allows for more flexibility and experimental rigor in measuring behavior and neural activity simultaneously.

      Strengths:

      The paper is well-written and the MarmoAAP appears to be highly successful at integrating behavioral data across many important contexts (cooperation, gaze, vocalizations), with the ability to measure significantly more behavioral contexts (many of which the authors make suggestions for).

      The authors provide substantive information about the design of the apparatus, explain how the apparatus can be obtained via a detailed list of apparatus parts and information, and provide data from a wide range of behavioral and neurological outcomes. The significance of the findings is important for the field of social neuroscience, and the strength of evidence is solid in terms of the ability of the apparatus to perform as described, at least in marmoset monkeys. The ability to collect neural and freely-behaving behavioral data concurrently is a significant advantage.

      Weaknesses:

      While this paper has many significant strengths, there are a few notable weaknesses in that many of the advantages are not explicitly demonstrated within the evidence presented in the paper. There are data reported (as shown in Figures 2 and 3), but in many cases, it is unclear whether the data are referenced in other published work, as the data analysis is neither described nor self-contained within the manuscript, which it should be for readers to understand the nature of the data shown in Figures 2 and 3.

      (1) There is no data in the paper or reference demonstrating training performance in the marmosets. For example, how many sessions are required to reach a pre-determined criterion of acceptable demonstration of task competence? The authors reference reliably performing the self-reward task, but the level of reliability used was not objectively stated. Moreover, in the Mutual Cooperation paradigm, while there is data reported on performance between self-reward vs mutual cooperation tasks, it is unclear how the authors measured individual understanding of mutual cooperation in this paradigm (cooperation performance in the mutual cooperation paradigm in the presence or absence of a partner; and how, if at all, this performance varied across social context). What positive or negative control is used to distinguish deliberate cooperation from two individuals succeeding at self-reward simultaneously?

      Thank you for your comment. This Tools & Resources paper is focused solely on the development of the apparatus and methods. Future publications will provide more details on training performance, learning behaviors, and include appropriate controls to distinguish deliberate cooperation from simultaneous success in self-reward tasks.

      (2) One of the notable strengths of this approach argued by the authors is the improved ability to utilize trials for data analysis, but this is not presented or supported in the manuscript. For example, the paper would be improved by explicitly showing a significant improvement in analytical outcome associated with a comparison of cooperation performance in the context of ~150 trials using MarmoAAP vs 10-12 trials using conventional behavioral approaches, beyond the general principle of sample size. The authors highlight the dissection of intricacies of behavioral dynamics, but more could be demonstrated to specifically show these intricacies compared to conventional approaches. Given the cost and expertise required to build and operate the MarmoAAP, it is critical to demonstrate a concrete advantage gained on this front. The addition of data analysis and explicit descriptions of other analytical advantages would likely strengthen this paper and the advantages of MarmoAAP over other behavioral techniques.

      Thank you for the suggestion. While this manuscript focuses on the apparatus and methods, the increase in trial numbers itself provides clear advantages, including greater statistical power and more robust analyses of behavioral dynamics. Future publications will offer more in-depth analyses comparing the performance and cooperation behavior observed with MarmoAAP, further demonstrating these analytical benefits.

      Reviewer #3 (Public Review):

      Summary:

      The authors set out to devise a system for the neural and behavioral study of socially cooperative behaviors in nonhuman primates (common marmosets). They describe instrumentation to allow for a "cooperative pulling" paradigm, the training process, and how both behavioral and neural data can be collected and analyzed. This is a valuable approach to an important topic, as the marmoset stands as a great platform to study primate social cognition. Given that the goals of such a methods paper are to (a) describe the approach and instrumentation, (b) show the feasibility of use, and (c) quantitatively compare to related approaches, the work is easily able to meet those criteria. My specific feedback on both strengths and weaknesses is therefore relatively limited in scope and depth.

      Strengths:

      The device is well-described, and the authors should be commended for their efforts in both designing this system but also in "writing it up" so that others can benefit from their R&D.

      The device appears to generate more repetitions of key behavior than other approaches used in prior work (with other species).

      The device allows for quantitative control and adjustment to control behavior.

      The approach also supports the integration of markerless behavioral analysis as well as neurophysiological data.

      Weaknesses:

      A few ambiguities in the descriptions are flagged below in the "Recommendations for authors".

      The system is well-suited to marmosets, but it is less clear whether it could be generalized for use in other species (in which similar behaviors have been studied with far less elegant approaches). If the system could impact work in other species, the scope of impact would be significantly increased, and would also allow for more direct cross-species comparisons. Regardless, the future work that this system will allow in the marmoset will itself be novel, unique, and likely to support major insights into primate social cognition.

      Thank you for this feedback. We have expanded the discussion to include how the apparatus could be adapted for use in other species, highlighting the potential modifications required, such as adjusting the size and strength of the servo motor and components. These changes would enable broader applications and facilitate cross-species comparisons.

    1. Reviewer #3 (Public review):

      Summary:

      The study investigates reinforcement learning across the lifespan with a large sample of participants recruited for an online game. It finds that children gradually develop their abilities to learn reward probability, possibly hindered by their immature spatial processing and probabilistic reasoning abilities. Motor noise, reinforcement learning rate, and exploration after a failure all contribute to children's subpar performance.

      Strengths:

      (1) The paradigm is novel because it requires continuous movement to indicate people's choices, as opposed to discrete actions in previous studies.

      (2) A large sample of participants were recruited.

      (3) The model-based analysis provides further insights into the development of reinforcement learning ability.

      Weaknesses:

      (1) The adequacy of model-based analysis is questionable, given the current presentation and some inconsistency in the results.

      (2) The task should not be labeled as reinforcement motor learning, as it is not about learning a motor skill or adapting to sensorimotor perturbations. It is a classical reinforcement learning paradigm.

    1. The faculty of re-solution is possibly much invigorated by mathematical study, and especially by that highest branch of it which, unjustly, and merely on account of its retrograde operations, has been called, as if par excellence, analysis. Yet to calculate is not in itself to analyse. A chess-player, for example, does the one without effort at the other. It follows that the game of chess, in its effects upon mental character, is greatly misunderstood. I am not now writing a treatise, but simply prefacing a somewhat peculiar narrative by observations very much at random; I will, therefore, take occasion to assert that the higher powers of the reflective intellect are more decidedly and more usefully tasked by the unostentatious game of draughts than by all the elaborate frivolity of chess. In this latter, where the pieces have different and bizarre motions, with various and variable values, what is only complex is mistaken (a not unusual error) for what is profound. The attention is here called powerfully into play. If it flag for an instant, an oversight is committed, resulting in injury or defeat. The possible moves being not only manifold but involute, the chances of such oversights are multiplied; and in nine cases out of ten it is the more concentrative rather than the more acute player who conquers. In draughts, on the contrary, where the moves are unique and have but little variation, the probabilities of inadvertence are diminished, and the mere attention being left comparatively unemployed, what advantages are obtained by either party are obtained by superior acumen. To be less abstract — Let us suppose a game of draughts where the pieces are reduced to four kings, and where, of course, no oversight is to be expected. It is obvious that here the victory can be decided (the players being at all equal) only by some recherché movement, the result of some strong exertion of the intellect. Deprived of ordinary resources, the analyst throws himself into the spirit of his opponent, identifies himself therewith, and not unfrequently sees thus, at a glance, the sole methods (sometimes indeed absurdly simple ones) by which he may seduce into error or hurry into miscalculation.

      Using chess to connect with the concept of analysis at the beginning of the story is innovative; however, I have to admit that this "chess metaphor" doesn't work for me: it neither provides the necessary background information nor triggers my interest and curiosity to read on.

    2. A chess-player, for example, does the one without effort at the other. It follows that the game of chess, in its effects upon mental character, is greatly misunderstood. I am not now writing a treatise, but simply prefacing a somewhat peculiar narrative by observations very much at random; I will, therefore, take occasion to assert that the higher powers of the reflective intellect are more decidedly and more usefully tasked by the unostentatious game of draughts than by all the elaborate frivolity of chess. In this latter, where the pieces have different and bizarre motions, with various and variable values, what is only complex is mistaken (a not unusual error) for what is profound. The attention is here called powerfully into play. If it flag for an instant, an oversight is committed, resulting in injury or defeat. The possible moves being not only manifold but involute, the chances of such oversights are multiplied; and in nine cases out of ten it is the more concentrative rather than the more acute player who conquers. In draughts, on the contrary, where the moves are unique and have but little variation, the probabilities of inadvertence are diminished, and the mere attention being left comparatively unemployed, what advantages are obtained by either party are obtained by superior acumen.

      This surprised me, as I initially thought both games should be played with unique moves to disrupt the opponent's plan. Instead, because of the lack of possible moves in chess, the moves will not be as unique as those in draughts.

    1. The chemist said it would be all right, but I’ve never been the same. You are a proper fool, I said. Well, if Albert won’t leave you alone, there it is, I said, What you get married for if you don’t want children?

      Eliot’s “A Game of Chess” comments on the patriarchy through the board game chess, especially in terms of gender, sex, and the female body. On lines 145-149 of “A Game of Chess”, the narrator speaks to a woman about her boyfriend/husband. The narrator states, “He said, I swear, I can’t bear to look at you. // And no more can’t I, I said, and think of poor Albert, // He’s been in the army four years, he wants a good time.” This dialogue already comments on the beauty of a woman in relation to the desires of her husband, who is soon to return from war. Eliot’s depiction of the husband as a soldier is interesting: not only does it refer to the Great War, but chess pieces can also be interpreted as types of soldiers, engaged in strategic combat. In “A Game of Chess” by Thomas Middleton, we begin to see some of the roots of such social commentary in terms of chess gameplay. In Middleton’s play, the Jesuit Black Bishop’s Pawn receives a letter written by the Black King, both chess pieces. The King’s letter demands that the Black Bishop’s Pawn capture the Virgin White Queen’s person, stating, “These are therefore to require you, by the burning affection I bear to the rape of devotion that speedily upon the surprisal of her by all watchful advantage you make some attempt upon the White Queen’s person, whose fall or prostitution our lust most violently rages for.” The King’s letter to the Bishop’s Pawn mentions the “rape of devotion”, which evokes a sense of abuse and violation in committed relationships, such as marriages. The mention of “our lust” describes a shared masculine desire for “fall or prostitution”. While it is clear what these figures desire in regards to prostitution, “fall” is left seemingly ambiguous. Interpreting the “fall” of the woman, we can assume that these figures desire the destruction or death of the female body. Another noticeable aspect of this letter is that it comes from a King, a figure of male authority, to a pawn, a weaker masculine figure. These misogynistic commands reflect how the patriarchy is perpetuated and constantly enforced in our society - a process also shown in the narrator’s scolding of the woman in “A Game of Chess”. The King’s letter also describes how the Virgin White Queen’s person “passed the general rule, the large extent of our prescriptions for obedience”. This idea of obedience fits into the discussion of the patriarchy, as women are expected to be obedient and subservient. This call for obedience evokes the earlier-mentioned notion of women giving their sexuality to men, as well as the disturbing control men have over a woman’s body during pregnancy. In lines 161-165 of “A Game of Chess”, we return to the woman speaking about her abortion, stating, “The chemist said it would be all right, but I’ve never been the same // You are a proper fool, I said // Well, if Albert won’t leave you alone, there it is, I said”. Again, the narrator berates the woman because of her choice to terminate her pregnancy - a process she has also struggled with, saying she has never been the same. The woman is called a fool, and the theme of harassment is evoked again, as Albert “won’t leave you [the woman] alone”. Both the Virgin White Queen’s person and the woman in “A Game of Chess” are objectified and subject to patriarchal expectations and harassment, conveyed through the metaphor of chess.

    2. I think we are in rats’ alley Where the dead men lost their bones.

      Hamlet’s most famous line questions whether “to be, or not to be.” That is indeed the nihilistic question that seems to play upon Eliot’s mind as he writes “I think we are in rats’ alley/Where the dead men lost their bones.” Immediately, this image invokes a sense of urban decay and despair as the alley–presumably a home for humans–is possessed, grammatically and literally, by the “rats.” Additionally, Eliot’s notion of men who “lost their bones” is quite peculiar. In typical instances of human decomposition, flesh is lost immediately while bones persist for an extended period. While death and decomposition are naturally morbid affairs, Eliot seems to stretch their impact on erasing an individual’s vestiges by suggesting that their bones–the enduring structure that had carried them through life and is meant to remain intact for a period after death–are “lost.” Furthermore, the notion that these bones are lost in a rat-possessed alley rids the scenario of any shred of dignity that it could possibly have. So, why is Eliot ridding the process of decomposition–one’s material departure from Earth–of its natural duration, instead suggesting a more instant and final end to life?

      Perhaps he is casting an argument towards the “not to be” side of Hamlet’s ballot by despairingly exaggerating the frailty of human life and the limited control one has over their existence. In the context of a “Game of Chess,” this argument makes much sense. During a chess game, each piece plays a critical role; even an advantage of one pawn can make all the difference in one’s endgame. Yet, even the potent queen is but a pawn in the overall quest to win the game and preserve the king. A lost piece is forever gone–barring the technicality of promotion–with no lingering bones to preserve its memory. Eliot seems to be suggesting that humans, too, are constantly manipulated, with their agency having limited meaning in a rats’ alley–representative of some cabalistic (generally hidden/evil) controlling force–where not even their core structure of bones remains with them.

      The irony of exercising agency despite ultimately being at the mercy of the rats who truly possess control emerges in Pound’s “The Game of Chess.” Pound writes that “these pieces are living in form/Their moves break and reform the pattern.” This is inherently paradoxical because it is impossible for a chess piece to be “living” and exercise agency to “break and reform” while also being pawns in the human player's larger strategic game. The ontology of a chess piece prevents it from having its own independent agency. Then, Pound’s closing phrase, “Renewing of contest,” furthers the notion of the pieces’ existences being transient because, after the King disappears “down in the vortex,” the board is reset and the distinct role of a piece in a specific game is banished to be an irrelevant relic of history. The only force that remains dominant and constant is the agency of the entity “renewing the contest,” the overlord that presides over many chess games and pieces. Thus, the suggestion at the convergence of Eliot and Pound seems to be that humans are but pawns in a larger game, inherently limited in agency and frail in nature.

    3. 'Speak to me. Why do you never speak. Speak. 'What are you thinking of? What thinking? What?

      In the first part of The Game of Chess, the female character is isolated and defined by the lifeless objects surrounding her—perfumes, glass, candle flames, lacquer, and more. Only through these objects do we get a chance to become acquainted with her. These inanimate items symbolize suffocation and entrapment in her loneliness. The readers can only speculate if it’s the notion of rape that Eliot continuously references that led to this isolation. No matter the cause of it, however, the character strives to get herself out of this situation. She strives for human connection. Her plea, “Speak to me,” reflects this need, but the absence of a question mark in “Why do you never speak” suggests that the character already knows the answer. The reader, however, is left to speculate: does she see herself as undesirable because of her trauma? Or is it simply the years of a relationship that deteriorate this connection, referencing Eliot’s own troubled marriage? In either case, this emotional disconnect is further demonstrated by the subsequent question, “What are you thinking of?” This time, the question is marked by a question mark, suggesting an actual attempt to break through this emotional barrier. These attempts, however, are ineffective, as the desperation rises and the questions shorten to “what thinking” and “what.” This fragmented monologue mirrors the fragmentation of her emotional state.

      In contrast, the second character suffers from the destructive excess of human connection. Shamed for her appearance, she faces the reality of her partner’s potential infidelity, reflected in the statement, “And if you don’t give it to him, there’s others.” The references to abortion intensify this degradation. She justifies her loss of beauty and confidence with the line, “It’s them pills I took, to bring it off.” Instead of finding fulfillment in connection, this character’s relationships strip her of her self-worth.

      In these two cases, Eliot presents women trapped at the opposite extremes of human connection: one suffers from its absence, the other from its destructive abundance. Yet, in both cases, external forces define and entrap them. The first woman is reduced to the objects around her, while the second is judged by an external voice—the pronoun “I” suggesting our own judgment, as readers, projected onto the character. We become not simply the judges, but also the victims of this broken connection. The poem’s fragmented language, which severely affects our understanding of it, mirrors the emotional chaos, invoking feelings rather than rationality, similarly to Ophelia’s “mad” song in Hamlet.

      This pattern mirrors the nature of chess, where a single wrong move can drastically alter the entire game. Just as in chess, life’s unpredictability is highlighted in these women’s lives, as one extreme of human connection can quickly shift to another, with equally devastating outcomes. The title of the section, The Game of Chess, thus reflects this instability: one wrong move leads to extremes of connection. In this case, these characters and we, as readers, are not simply entrapped in this game. Instead, we are playing with an unwinnable position from the very outset: each move only brings the inevitability of loss closer.

    4. The change of Philomel,

      On many levels of interpretation, the section “A Game of Chess” depicts the rape of one or several female characters. The meticulous steps of the game of chess bring to mind a careful, calculated series of sexual manipulations, and the rather disjunct imagery throughout these stanzas seems to be linked together by the metaphorical or literal subjugation of the female body. Of great interest to me here is the portrayal of Philomela’s metamorphosis, framed and “[displayed] above the antique mantel” (97).

      In Ovid’s Metamorphoses, Philomela transforms into a nightingale after taking revenge on King Tereus for the assault on and mutilation of her body. In Eliot’s poem, however, she is forever frozen in this frame of incomplete mutation. In a way, her fate-like tragedy endures through this artwork; even from the modern observer’s perspective, “still she cried, and still the world pursues” (98). Philomela’s animalistic freedom is unachieved, or at least incomplete. Recalling the Sibyl’s male-given “liberation” from the natural laws of mortality and Marie’s temporary freedom sledding with an unnamed man “down [...] in the mountains,” Eliot seems to characterize female autonomy with a sense of ephemerality throughout the poem (14-7).

      As Ira points out, the female nightingale is naturally mute in terms of singing – it is factually impossible for their voices to permeate the desert air. Rather, female nightingales can at best produce a soft collection of sounds resembling the “crying” that Eliot attributes to the pronoun “she” (98). Given that it traverses biological gender boundaries, one could argue that this line is yet another example of the disembodiment of perspectives.

      But I think there’s more to it. We cannot confirm what this intrusive non-female voice has filled the desert with, but we notice that the only sound emphasized in the rest of the stanza is the onomatopoeia “Jug Jug” (103). Having had her tongue cut off, the violated female exemplified by Philomela does not regain her voice through the transformation into a nightingale. Moreover, this transformation – this scene hanging over the ancient mantel, these “withered stumps of time” – can only generate impact through the interpretation of the (presumably male) observer (104). Sadly, all that the “dirty ears” of the viewer can perceive from the image is condensed into the obscene “Jug Jug.” The strength and resolution that Philomela supposedly represents does not resonate in the modern waste land. Only the vulgar, filthy perception seems to live on.

    5. (She’s had five already, and nearly died of young George.)

      The information contained within parentheses is perhaps the most honest and straightforward we receive in this section of devolving poetry. A Game of Chess begins in iambic pentameter, the chosen meter of Shakespeare and other prominent English playwrights, and mixes in references to classical poetry, which also depends on strict meter. As the section progresses, this meter devolves into jumbled lines and line breaks indicative of the psychosis all of the alluded women themselves devolve into (Dido, Cleopatra, Philomela, Ophelia, all defined by being "crazy"). Anchored amongst the chaos are these parenthetical statements, associated in my mind with being asides of truth, as the punctuation typically indicates. From these we gain three kernels of knowledge: Lil is 31, has 5 kids, and nearly died in childbirth. We are also told that her husband has been away at war for 4 years, so she must have had these 5 kids before the age of 27, meaning she most likely had the first one when she was about 20 or 21. Therefore, almost Lil's entire adult life has been defined by motherhood, which has obviously aged her. Throughout most of history, women have been defined by their ability to have children. In Ancient Greece, women were thought of as child-bearing vessels, holding little value beyond what their wombs could produce. So even as Lil time and time again proves her archaic value, at the cost of her own health, she ultimately fails in this job, succumbing to “madness” by taking abortion pills. As the meter devolves with the sanity of its female characters, Lil's descent into craziness is marked by her long face, lack of beauty, and choice to abort her child. Her biggest failure is trying to avoid death.

    6. Flung their smoke into the laquearia,

      This line recalls the language and setting of the room where Dido is infected by Cupid with love for Aeneas, which inevitably leads to her demise. A moment marked by Dido losing her autonomy and unwillingly being transformed into a pawn is described not by the woman herself, but by the candles that surround her. As she becomes a passive figure, the inorganic flames become the subject, described in the active voice. Even they have autonomy as to where they fling their smoke, but as Dido “burns” with love sickness, she has no control over where her smoke goes.

      This language depicts Dido as a chess piece in Venus’s cosmic plan, which Venus describes: “Wherefore I purpose to outwit the queen with guile and encircle her with love’s flame, that so no power may change her, but on my side she may be held fast in strong love for Aeneas” (Aeneid 1). The word “encircle” is indicative of predators circling around their prey before going in for the kill. Similarly, a powerful chess piece is typically killed by being surrounded, meaning no move will result in a safe square. Furthermore, “on my side” almost equates Dido to a tool in a belt, meant to be taken out when the user is ready.

      As in a game of chess, where the player is like a god, moving pieces around and sacrificing them for the safety of the King, so Venus plays with Dido. This queen, once a pawn in her brother's quest for power, traveled across the ocean, across the chess board, to become a queen in a new land, like a pawn becoming a new queen. Yet in the end, her power, her ability to become a queen and then move anywhere across the board, is reduced once more to that of an agent for the player to use, protecting the King, Aeneas. When the game finishes, Dido is still the pawn she began as.

      The women of A Game of Chess are all similar pawns, victims of the use of divine power to serve a more powerful figure's needs. This pattern appears in all of the referenced women: Philomela becomes an object of lust for the King Tereus, and when she breaks from her role in the game, she is turned into a bird, doomed to sing her song of sorrow with a mute voice. The head being chopped off the cadaver woman brings a queen back down to the size of a pawn, and Madame Sosostris uses his “divine” powers to force women into his plan.

      And it is in this repetition that these women lose even more of their individuality; they join a long list of examples, an array of chess boards with the same goal. Unless the game itself is rewritten, their fate remains final.

    7. with inviolable voice And still she cried, and still the world pursues,

      Breaking Eliot’s own pattern, the line between life and death is not blurred for the female characters; instead, they are murdered physically, spiritually, and emotionally, with no possibility of redemption.

      In Ovid’s story of Philomela, King Tereus rapes her and, to ensure her silence, cuts out her tongue. Philomela seems to reclaim her control through her transformation into a nightingale, but even this reclamation is deceptive. On the surface, it appears as if she has regained her voice as a bird; yet only male nightingales can sing. The perceived revenge is hollow, as she remains voiceless. In a similarly hollow attempt at revenge, Philomela’s sister, Procne, directs her rage not at Tereus but at their son. Tereus, the primary perpetrator of the violence, thus becomes only a secondary victim of his brutality, while the raped Philomela and the now childless Procne bear the consequences of his violence.

      Interestingly, Ovid describes Tereus’s violent intents as “the flame of love” (Ovid, 3) that has overtaken him. In this case, love gains a perverse connotation – what tends to be a source of warmth becomes an all-consuming fire, synonymous with lust and destruction.

      Eliot’s reference to Cupid, traditionally a symbol of romantic love, is similarly ironic. On the surface, Cupid represents love, yet his story reveals the same pattern of female disempowerment. Cupid falls in love with a mortal - Psyche. She is passive, with others dictating her fate. In the so-called happy ending, Cupid brings her to Olympus, where Zeus grants her immortality. However, this transformation, similarly to Philomela’s, is controlled entirely by male figures. None of these female characters seem to regain any control over their fates.

Eliot, in reference to Ovid’s myth, describes the nightingale’s voice as “inviolable,” or unbreakable. The use of this word is once again ironic – the character’s voice is taken from her in both her human and bird existences. Eliot follows this irony of false hope with the line “And still she cried, and still the world pursues.” The past tense of the verb “cried” signifies a cry that has already ended, further emphasizing her inability to voice her pain. The world, however, “still pursues.” The double emphasis on the word “still” shows the persistence of the persecution and implies absolutely no justice for the female characters. “Pursue” itself can carry violent connotations, meaning “to follow or chase someone or something, especially in order to catch them” (Oxford Dictionary), reinforcing the idea of female victimization and helplessness.

      While Eliot’s vision often blurs the boundaries between life and death, for the women of A Game of Chess and their mythic counterparts, these lines are brutally drawn. Their fate is not ambiguous—it is absolute. They are erased, their voices silenced, leaving them trapped in a world where love is violence, survival is silence, and any hope of redemption is only ironic.

    8. Under the firelight, under the brush, her hair Spread out in fiery points Glowed into words, then would be savagely still.

At first, I was drawn to (and rather overwhelmed by) the series of physical materials we encounter in the opening. “The Chair” is equated to a “burnished throne” - a more authoritative rendition signifying a polished metal substance - before seeming to glow “on the marble, where the glass / Held up by standards wrought with fruited vines”. The light (and the reader’s focus) is tossed between shiny marble surfaces and glass refractions and metallic chairs, not once landing on the subject of the opening lines, whom we infer to be a queen. Amidst the influx of visual descriptions, the queen - an allusion to the titular chess piece as well as the sitter of “The Chair” - is belittled. She, referring to Dido, Cleopatra, or Queen Mary (a WW1/contemporary reference), is overshadowed by the focus on materialistic goods such as the “glitter of her jewels” and the “rich profusion” that poured from “satin cases”. Eliot embellishes this “Game of Chess”’s first key female figure with a plethora of perfumes, jewels, and silks instead of addressing her as a subject. Very few verbs are even connected to her, all of which are passive, such as “sat” and “lurked”, with the most dramatic being “cried” in line 102. The woman in the scene feels very much secondary to the ornamentation (a plug for capitalism? A critique of mass consumption? Of vanity?).

True, the faceless-de-individualization argument is not unique to women in The Waste Land. However, the allusions to rape - of “Philomel, by the barbarous king / So rudely forced” as in Ovid’s Metamorphoses, of Dido canonically in the Aeneid, and even the romanticized possibility in Baudelaire’s cadaver “with eyes as provocative as the pose, / Reveals an unwholesome love, / Guilty joys and exotic revelries” - suggest there may be a sexual connotation to objectifying these once-powerful women. The reader intrudes upon a crime scene (a setting deeply romanticized but still the scene of rape, suicide, regicide…) and our eyes flicker to everything but the victim. We can only reconcile with glimmers of light, hovering “under the brush,” observing “her hair / Spread out in points / Glowed into words, then would be savagely still” in the inhuman MO of a guilty perpetrator. Eliot’s readers are not only deeply uncomfortable with the prolonged gawking at unburied women (not deserving of the same solace or privacy that soldiers or men do, but rather publicly displayed as symbols of fleeting beauty) but also guilty of partaking in the crime that ruined them.

    9. The Chair she sat in

      I was particularly interested in the multidimensional and various representations of women in this passage. Just as Angela said last year, in this part of the poem, separate stories converge. However, they share a similar theme—women whose lives have been defined and constrained by their relationships with men. Last year, after studying books one and two of Vergil’s The Aeneid in my Latin class, I focused my final paper on the book as a piece of sophisticated propaganda during the Augustan era to reaffirm traditional Roman values in the aftermath of political turmoil. Specifically, I focused on the female character Dido—how she stood out for embodying qualities typically associated with noteworthy men, yet met her tragic fate at the end of book four. Such was Vergil’s way of illustrating the consequence of women daring to transcend conventional roles. Dido’s transformation from a regal, autonomous ruler to a figure destroyed by unrequited love and abandonment is reflected in the way women are presented in Baudelaire’s and Eliot’s work.

      In a similar vein, Baudelaire’s portrayal of the decapitated woman adorned in luxurious fabrics and jewels exposes the paradox of female beauty and power. The woman, once adorned and admired, now becomes an object of decay: “A headless cadaver pours out, like a river, / On the saturated pillow / Red, living blood.” Here, Baudelaire critiques the way society both glorifies and consumes women, reducing them to passive symbols of desire, even in death.

      The first part of “A Game of Chess” also invokes this tension between female power and subjugation, particularly in its reference to Philomel, a figure from Ovid's Metamorphoses, who was raped by Tereus and transformed into a nightingale. In TWL, Philomel’s story appears as a reminder of female suffering: "Above the antique mantel was displayed / The change of Philomel, by the barbarous king / So rudely forced; yet there the nightingale / Filled all the desert with inviolable voice." Philomel, like Dido and Baudelaire’s martyred woman, is a figure whose pain is immortalized but also aestheticized. Her "inviolable voice" suggests that even in her forced silence, her trauma cannot be erased, much like Dido’s lasting curse on Aeneas and his descendants. Yet, her transformation into a nightingale also represents a form of agency reclaimed in tragedy, as her voice persists despite her violation.

    10. ‘Nam Sibyllam quidem Cumis ego ipse oculis meis vidi in ampulla pendere, et cum illi pueri dicerent: Σίβυλλα τί θέλεις; respondebat illa: ἀποθανεῖν θέλω.’

The original title of the poem, “He Do The Police In Different Voices,” originated from Dickens’s novel Our Mutual Friend, in which the character Sloppy reads newspaper reports in a variety of tones, mimicking different speakers. While an aspect of this choice echoes Eliot’s technique of blending different languages, perspectives, and cultural and mythic references throughout the poem, another intriguing facet can be unveiled through a deeper analysis of Sloppy as a “beautiful reader of a newspaper” (9). His ability to “do the police in different voices” reflects a talent for embodying the voices of authority figures and others in society, blurring the line between personal identity and social narrative. If Eliot had kept this title, it would have positioned the poet as a kind of ventriloquist, stepping into the roles of the voices he captures, making the poem less about detached observation and more about active engagement with the cacophony of modern life, just as indicated by the original first section of the poem: “The next thing we were out in the street…sopped up some gin, sat in to the cork game…then we thought we’d breeze along and take a walk” (3). However, in the final version of “The Waste Land,” the poet becomes a more detached observer, surveying the desolation of modernity from a distance. This shift is crucial to the tone of the poem, which no longer presents the poet as a mimic of society but rather as someone bearing witness to its fragmentation and decay. The tone of the title changes from playfulness to solemnity, a shift that is clearly reflected in the respective following sections.

      “Preludes” serves as a fitting example of this detached narrator. Throughout the first three sections, the narrator offers a series of fragmented, disconnected snippets of urban life, illustrating the grimy, monotonous existence of people living in industrialized cities. The only time the narrator appears in the first person is in the penultimate stanza, the only complete person amidst images of fragmented people. This approach highlights the alienation and isolation inherent in the modern world, and the poet’s role is merely to record them as part of a decaying, impersonal landscape.

    1. All interaction and connection with the Net is a game of identity-making and community-making, where we choose to show off our position within certain established, acceptable social networks

This is genuinely fascinating - the idea that communities and social networks form just by participating in certain online forums or chat rooms, or just by following the same news platforms as others. It seems so natural and effortless to find others with a shared interest (a community) on the Net that it is difficult to imagine how these kinds of social groups formed before the Internet.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      In this manuscript, the authors use a large dataset of neuroscience publications to elucidate the nature of self-citation within the neuroscience literature. The authors initially present descriptive measures of self-citation across time and author characteristics; they then produce an inclusive model to tease apart the potential role of various article and author features in shaping self-citation behavior. This is a valuable area of study, and the authors approach it with an appropriate and well-structured dataset.

      The study's descriptive analyses and figures are useful and will be of interest to the neuroscience community. However, with regard to the statistical comparisons and regression models, I believe that there are methodological flaws that may limit the validity of the presented results. These issues primarily affect the uncertainty of estimates and the statistical inference made on comparisons and model estimates - the fundamental direction and magnitude of the results are unlikely to change in most cases. I have included detailed statistical comments below for reference.

      Conceptually, I think this study will be very effective at providing context and empirical evidence for a broader conversation around self-citation. And while I believe that there is room for a deeper quantitative dive into some finer-grained questions, this paper will be a valuable catalyst for new areas of inquiry around citation behavior - e.g., do authors change self-citation behavior when they move to more or less prestigious institutions? do self-citations in neuroscience benefit downstream citation accumulation? do journals' reference list policies increase or decrease self-citation? - that I hope that the authors (or others) consider exploring in future work.

      Thank you for your suggestions and your generally positive view of our work. As described below, we have made the statistical improvements that you suggested.

      Statistical comments:

      (1) Throughout the paper, the nested nature of the data does not seem to be appropriately handled in the bootstrapping, permutation inference, and regression models. This is likely to lead to inappropriately narrow confidence bands and overly generous statistical inference.

We apologize for this error. We have now included nested bootstrapping and permutation tests. We defined an “exchangeability block” as a co-authorship group of authors. In this dataset, that meant any authors who published together (among the articles in this dataset) as a First Author / Last Author pairing were assigned to the same exchangeability block. It is not realistic to check for overlapping middle authors in all papers because of the collaborative nature of the field. In addition, we believe that self-citations are primarily controlled by first and last authors, so we can assume that middle authors do not control self-citation habits. We then performed bootstrapping and permutation tests within the constraints of the exchangeability blocks.

      We first describe this in the results (page 3, line 110):

      “Importantly, we accounted for the nested structure of the data in bootstrapping and permutation tests by forming co-authorship exchangeability blocks.”

      We also describe this in 4.8 Confidence Intervals (page 21, line 725):

      “Confidence intervals were computed with 1000 iterations of bootstrap resampling at the article level. For example, of the 100,347 articles in the dataset, we resampled articles with replacement and recomputed all results. The 95% confidence interval was reported as the 2.5 and 97.5 percentiles of the bootstrapped values.

      We grouped data into exchangeability blocks to avoid overly narrow confidence intervals or overly optimistic statistical inference. Each exchangeability block comprised any authors who published together as a First Author / Last Author pairing in our dataset. We only considered shared First/Last Author publications because we believe that these authors primarily control self-citations, and otherwise exchangeability blocks would grow too large due to the highly collaborative nature of the field. Furthermore, the exchangeability blocks do not account for co-authorship in other journals or prior to 2000. A distribution of the sizes of exchangeability blocks is presented in Figure S15.”
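The paper's analyses are in R, but the block-resampling idea generalizes. Below is a minimal illustrative Python sketch (not the authors' code; the record fields and the `block_of` mapping are hypothetical) of bootstrapping a statistic by resampling whole exchangeability blocks with replacement, so that correlated papers by the same author pairing move together:

```python
import random
from collections import defaultdict

def block_bootstrap(articles, block_of, stat, n_iter=1000, seed=0):
    """Bootstrap a statistic by resampling exchangeability blocks.

    articles: list of article records
    block_of: function mapping an article to its co-authorship block ID
    stat:     function mapping a list of articles to a number
    Returns the list of bootstrapped statistic values.
    """
    rng = random.Random(seed)
    blocks = defaultdict(list)
    for a in articles:
        blocks[block_of(a)].append(a)
    block_ids = list(blocks)
    out = []
    for _ in range(n_iter):
        # Resample whole blocks (not individual articles) with replacement,
        # so papers sharing a First/Last Author pairing stay together.
        sample = []
        for _ in range(len(block_ids)):
            sample.extend(blocks[rng.choice(block_ids)])
        out.append(stat(sample))
    return out
```

A 95% confidence interval would then be the 2.5 and 97.5 percentiles of the returned values, as in the quoted methods text.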

      In describing permutation tests, we also write (page 21, line 739):

      “4.9 P values

      P values were computed with permutation testing using 10,000 permutations, with the exception of regression P values and P values from model coefficients. For comparing different fields (e.g., Neuroscience and Psychiatry) and comparing self-citation rates of men and women, the labels were randomly permuted by exchangeability block to obtain null distributions. For comparing self-citation rates between First and Last Authors, the first and last authorship was swapped in 50% of exchangeability blocks.”
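The block-constrained permutation for the First/Last Author comparison can be sketched in the same spirit. This is an illustrative Python sketch, not the authors' implementation; field names are hypothetical. Roles are swapped in a random half of the blocks, keeping each block's papers together:

```python
import random

def permute_first_last(papers, n_perm=10000, seed=0):
    """Null distribution for (First Author SC - Last Author SC).

    papers: list of dicts with keys 'block', 'first_sc', 'last_sc'.
    In each permutation, first/last roles are swapped in a random 50%
    of exchangeability blocks, never within part of a block.
    """
    rng = random.Random(seed)
    blocks = sorted({p["block"] for p in papers})
    null = []
    for _ in range(n_perm):
        flipped = {b for b in blocks if rng.random() < 0.5}
        diffs = [
            (p["last_sc"] - p["first_sc"]) if p["block"] in flipped
            else (p["first_sc"] - p["last_sc"])
            for p in papers
        ]
        null.append(sum(diffs) / len(diffs))
    return null
```

The observed First-minus-Last difference would then be compared against this null distribution to obtain a p-value.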

      For modeling, we considered doing a mixed effects model but found difficulties due to computational power. For example, with our previous model, there were hundreds of thousands of levels for the paper random effect, and tens of thousands of levels for the author random effect. Even when subsampling or using packages designed for large datasets (e.g., mgcv’s bam function: https://www.rdocumentation.org/packages/mgcv/versions/1.9-1/topics/bam), we found computational difficulties.

      As a result, we switched to modeling results at the paper level (e.g., self-citation count or rate). We found that results could be unstable when including author-level random effects because in many cases there was only one author per group. Instead, to avoid inappropriately narrow confidence bands, we resampled the dataset such that each author was only represented once. For example, if Author A had five papers in this dataset, then one of their five papers was randomly selected. We updated our description of our models in the Methods section (page 21, line 754):

      “4.10 Exploring effects of covariates with generalized additive models

      For these analyses, we used the full dataset size separately for First and Last Authors (Table S2). This included 115,205 articles and 5,794,926 citations for First Authors, and 114,622 articles and 5,801,367 citations for Last Authors. We modeled self-citation counts, self-citation rates, and number of previous papers for First Authors and Last Authors separately, resulting in six total models.

      We found that models could be computationally intensive and unstable when including author-level random effects because in many cases there was only one author per group. Instead, to avoid inappropriately narrow confidence bands, we resampled the dataset such that each author was only represented once. For example, if Author A had five papers in this dataset, then one of their five papers was randomly selected. The random resampling was repeated 100 times as a sensitivity analysis (Figure S12).

“For our models, we used generalized additive models from mgcv’s “gam” function in R 49. The smooth terms included all the continuous variables: number of previous papers, academic age, year, time lag, number of authors, number of references, and journal impact factor. The linear terms included all the categorical variables: field, gender, LMIC status of the affiliation country, and document type. We empirically selected a Tweedie distribution 50 with a log link function and p=1.2. The p parameter indicates that the variance is proportional to the mean to the p power 49. The p parameter ranges from 1-2, with p=1 equivalent to the Poisson distribution and p=2 equivalent to the gamma distribution. For all fitted models, we simulated the residuals with the DHARMa package, as standard residual plots may not be appropriate for GAMs 51. DHARMa scales the residuals between 0 and 1 with a simulation-based approach 51. We also tested for deviation from uniformity, dispersion, outliers, and zero inflation with DHARMa. Non-uniformity, dispersion, outliers, and zero inflation were significant due to the large sample size, but small in effect size in most cases. The simulated quantile-quantile plots from DHARMa suggested that the observed and simulated distributions were generally aligned, with the exception of slight misalignment in the models for the number of previous papers. These analyses are presented in Figure S11 and Table S7.

      In addition, we tested for inadequate basis functions using mgcv’s “gam.check()” function 49. Across all smooth predictors and models, we ultimately selected between 10-20 basis functions depending on the variable and outcome measure (counts, rates, papers). We further checked the concurvity of the models and ensured that the worst-case concurvity for all smooth predictors was about 0.8 or less.”
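The one-paper-per-author resampling described in the quoted passage amounts to a simple grouped random draw. A hedged Python sketch (the `author_id` field name is hypothetical; the paper's analyses are in R):

```python
import random
from collections import defaultdict

def one_paper_per_author(papers, author_key="author_id", seed=0):
    """Return a subsample with one randomly chosen paper per author."""
    rng = random.Random(seed)
    by_author = defaultdict(list)
    for p in papers:
        by_author[p[author_key]].append(p)
    # One uniformly random paper from each author's list.
    return [rng.choice(ps) for ps in by_author.values()]
```

As in the methods text, this draw would be repeated with different seeds (100 times in the paper) as a sensitivity analysis.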

      The direction of our results primarily stayed the same, with the exception of gender results. Men tended to self-cite slightly less (or equal self-citation rates) after accounting for numerous covariates. As such, we also modeled the number of previous papers to explain the discrepancy between our raw data and the modeled gender results. Please find the updated results text below (page 11, line 316):

      “2.9 Exploring effects of covariates with generalized additive models

      Investigating the raw trends and group differences in self-citation rates is important, but several confounding factors may explain some of the differences reported in previous sections. For instance, gender differences in self-citation were previously attributed to men having a greater number of prior papers available to self-cite 7,20,21. As such, covarying for various author- and article-level characteristics can improve the interpretability of self-citation rate trends. To allow for inclusion of author-level characteristics, we only consider First Author and Last Author self-citation in these models.

      We used generalized additive models (GAMs) to model the number and rate of self-citations for First Authors and Last Authors separately. The data were randomly subsampled so that each author only appeared in one paper. The terms of the model included several article characteristics (article year, average time lag between article and all cited articles, document type, number of references, field, journal impact factor, and number of authors), as well as author characteristics (academic age, number of previous papers, gender, and whether their affiliated institution is in a low- and middle-income country). Model performance (adjusted R2) and coefficients for parametric predictors are shown in Table 2. Plots of smooth predictors are presented in Figure 6.

      First, we considered several career and temporal variables. Consistent with prior works 20,21, self-citation rates and counts were higher for authors with a greater number of previous papers. Self-citation counts and rates increased rapidly among the first 25 published papers but then more gradually increased. Early in the career, increasing academic age was related to greater self-citation. There was a small peak at about five years, followed by a small decrease and a plateau. We found an inverted U-shaped trend for average time lag and self-citations, with self-citations peaking approximately three years after initial publication. In addition, self-citations have generally been decreasing since 2000. The smooth predictors showed larger decreases in the First Author model relative to the Last Author model (Figure 6).

      Then, we considered whether authors were affiliated with an institution in a low- and middle-income country (LMIC). LMIC status was determined by the Organisation for Economic Co-operation and Development. We opted to use LMIC instead of affiliation country or continent to reduce the number of model terms. We found that papers from LMIC institutions had significantly lower self-citation counts (-0.138 for First Authors, -0.184 for Last Authors) and rates (-12.7% for First Authors, -23.7% for Last Authors) compared to non-LMIC institutions. Additional results with affiliation continent are presented in Table S5. Relative to the reference level of Asia, higher self-citations were associated with Africa (only three of four models), the Americas, Europe, and Oceania.

Among paper characteristics, a greater number of references was associated with higher self-citation counts and lower self-citation rates (Figure 6). Interestingly, self-citations were greater for papers with a small number of authors, though the effect diminished after about five authors. Review articles were associated with lower self-citation counts and rates. No clear trend emerged between self-citations and journal impact factor. In an analysis by field, despite the raw results suggesting that self-citation rates were lower in Neuroscience, GAM-derived self-citations were greater in Neuroscience than in Psychiatry or Neurology.

      Finally, our results aligned with previous findings of nearly equivalent self-citation rates for men and women after including covariates, even showing slightly higher self-citation rates in women. Since raw data showed evidence of a gender difference in self-citation that emerges early in the career but dissipates with seniority, we incorporated two interaction terms: one between gender and academic age and a second between gender and the number of previous papers. Results remained largely unchanged with the interaction terms (Table S6).

      2.10 Reconciling differences between raw data and models

      The raw and GAM-derived data exhibited some conflicting results, such as for gender and field of research. To further study covariates associated with this discrepancy, we modeled the publication history for each author (at the time of publication) in our dataset (Table 2). The model terms included academic age, article year, journal impact factor, field, LMIC status, gender, and document type. Notably, Neuroscience was associated with the fewest number of papers per author. This explains how authors in Neuroscience could have the lowest raw self-citation rates but the highest self-citation rates after including covariates in a model. In addition, being a man was associated with about 0.25 more papers. Thus, gender differences in self-citation likely emerged from differences in the number of papers, not in any self-citation practices.”

      (2) The discussion of the data structure used in the regression models is somewhat opaque, both in the main text and the supplement. From what I gather, these models likely have each citation included in the model at least once (perhaps twice, once for first-author status and one for last-author status), with citations nested within citing papers, cited papers, and authors. Without inclusion of random effects, the interpretation and inference of the estimates may be misleading.

      Please see our response to point (1) to address random effects. We have also switched to GAMs (see point #3 below) and provided more detail in the methods. Notably, we decided against using author-level effects due to poor model stability, as there can be as few as one author per group. Instead, we subsampled the dataset such that only one paper appeared from each author.

      (3) I am concerned that the use of the inverse hyperbolic sine transform is a bit too prescriptive, and may be producing poor fits to the true predictor-outcome relationships. For example, in a figure like Fig S8, it is hard to know to what extent the sharp drop and sign reversal are true reflections of the data, and to what extent they are artifacts of the transformed fit.

      Thank you for raising this point. We have now switched to using generalized additive models (GAMs). GAMs provide a flexible approach to modeling that does not require transformations. We described this in detail in point (1) above and in Methods 4.10 Exploring effects of covariates with generalized additive models (page 21, line 754).

      “4.10 Exploring effects of covariates with generalized additive models

      For these analyses, we used the full dataset size separately for First and Last Authors (Table S2). This included 115,205 articles and 5,794,926 citations for First Authors, and 114,622 articles and 5,801,367 citations for Last Authors. We modeled self-citation counts, self-citation rates, and number of previous papers for First Authors and Last Authors separately, resulting in six total models.

      We found that models could be computationally intensive and unstable when including author-level random effects because in many cases there was only one author per group. Instead, to avoid inappropriately narrow confidence bands, we resampled the dataset such that each author was only represented once. For example, if Author A had five papers in this dataset, then one of their five papers was randomly selected. The random resampling was repeated 100 times as a sensitivity analysis (Figure S12).

For our models, we used generalized additive models from mgcv’s “gam” function in R 48. The smooth terms included all the continuous variables: number of previous papers, academic age, year, time lag, number of authors, number of references, and journal impact factor. The linear terms included all the categorical variables: field, gender, LMIC status of the affiliation country, and document type. We empirically selected a Tweedie distribution 49 with a log link function and p=1.2. The p parameter indicates that the variance is proportional to the mean to the p power 48. The p parameter ranges from 1-2, with p=1 equivalent to the Poisson distribution and p=2 equivalent to the gamma distribution. For all fitted models, we simulated the residuals with the DHARMa package, as standard residual plots may not be appropriate for GAMs 50. DHARMa scales the residuals between 0 and 1 with a simulation-based approach 50. We also tested for deviation from uniformity, dispersion, outliers, and zero inflation with DHARMa. Non-uniformity, dispersion, outliers, and zero inflation were significant due to the large sample size, but small in effect size in most cases. The simulated quantile-quantile plots from DHARMa suggested that the observed and simulated distributions were generally aligned, with the exception of slight misalignment in the models for the number of previous papers. These analyses are presented in Figure S11 and Table S7.

      In addition, we tested for inadequate basis functions using mgcv’s “gam.check()” function 48. Across all smooth predictors and models, we ultimately selected between 10-20 basis functions depending on the variable and outcome measure (counts, rates, papers). We further checked the concurvity of the models and ensured that the worst-case concurvity for all smooth predictors was about 0.8 or less.”

      (4) It seems there are several points in the analysis where papers may have been dropped for missing data (e.g., missing author IDs and/or initials, missing affiliations, low-confidence gender assessment). It would be beneficial for the reader to know what % of the data was dropped for each analysis, and for comparisons across countries it would be important for the authors to make sure that there is not differential missing data that could affect the interpretation of the results (e.g., differences in self-citation being due to differences in Scopus ID coverage).

      Thank you for raising this important point. In the methods section, we describe how the data are missing (page 18, line 623):

      “4.3 Data exclusions and missingness

      Data were excluded across several criteria: missing covariates, missing citation data, out-of-range values at the citation pair level, and out-of-range values at the article level (Table 3). After downloading the data, our dataset included 157,287 articles and 8,438,733 citations. We excluded any articles with missing covariates (document type, field, year, number of authors, number of references, academic age, number of previous papers, affiliation country, gender, and journal). Of the remaining articles, we dropped any for missing citation data (e.g., cannot identify whether a self-citation is present due to lack of data). Then, we removed citations with unrealistic or extreme values. These included an academic age of less than zero or above 38/44 for First/Last Authors (99th percentile); greater than 266/522 papers for First/Last Authors (99th percentile); and a cited year before 1500 or after 2023. Subsequently, we dropped articles with extreme values that could contribute to poor model stability. These included greater than 30 authors; fewer than 10 references or greater than 250 references; and a time lag of greater than 17 years. These values were selected to ensure that GAMs were stable and not influenced by a small number of extreme values.

In addition, we evaluated whether the data were not missing at random (Table S8). Data were more likely to be missing for reviews relative to articles, for Neurology relative to Neuroscience or Psychiatry, in works from Africa relative to the other continents, and for men relative to women. Scopus ID coverage contributed in part to differential missingness. However, our exclusion criteria also contribute. For example, Last Authors with more than 522 papers were excluded to help stabilize our GAMs. More men fit this exclusion criterion than women.”
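Schematically, the article-level thresholds quoted above combine into a single filter. The sketch below is illustrative only (Last Author limits from the text; the record field names are hypothetical, not from the authors' pipeline):

```python
def keep_article(a):
    """Article-level exclusion filter using the Last Author thresholds."""
    return (
        0 <= a["academic_age"] <= 44        # 99th percentile cap
        and a["n_previous_papers"] <= 522   # 99th percentile cap
        and a["n_authors"] <= 30            # drop very large author lists
        and 10 <= a["n_references"] <= 250  # reference-count bounds
        and a["time_lag_years"] <= 17       # citation time-lag bound
    )
```

Applying such a filter before model fitting keeps the GAMs from being influenced by a small number of extreme records, as the quoted methods explain.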

      Due to differential missingness, we wrote in the limitations (page 16, line 529):

      “Ninth, data were differentially missing (Table S8) due to Scopus coverage and gender estimation. Differential missingness could bias certain results in the paper, but we hope that the dataset is large enough to reduce any potential biases.”

      Reviewer #2 (Public Review):

      The authors provide a comprehensive investigation of self-citation rates in the field of Neuroscience, filling a significant gap in existing research. They analyze a large dataset of over 150,000 articles and eight million citations from 63 journals published between 2000 and 2020. The study reveals several findings. First, they state that there is an increasing trend of self-citation rates among first authors compared to last authors, indicating potential strategic manipulation of citation metrics. Second, they find that the Americas show higher odds of self-citation rates compared to other continents, suggesting regional variations in citation practices. Third, they show that there are gender differences in early-career self-citation rates, with men exhibiting higher rates than women. Lastly, they find that self-citation rates vary across different subfields of Neuroscience, highlighting the influence of research specialization. They believe that these findings have implications for the perception of author influence, research focus, and career trajectories in Neuroscience.

      Overall, this paper is well written, and the breadth of analysis conducted by the authors, with various interactions between variables (e.g., gender vs. seniority), shows that the authors have spent a lot of time thinking about different angles. The discussion section is also quite thorough. The authors should also be commended for their efforts in the provision of code for the public to evaluate their own self-citations. That said, here are some concerns and comments that, if addressed, could potentially enhance the paper:

      Thank you for your review and your generally positive view of our work.

      (1) There are concerns regarding the data used in this study, specifically its bias towards top journals in Neuroscience, which limits the generalizability of the findings to the broader field. More specifically, the top 63 journals in neuroscience are based on impact factor (IF), which raises a potential issue of selection bias. While the paper acknowledges this as a limitation, it lacks a clear justification for why the authors made this choice. It is also unclear how the "top" journals were identified: was it based on the top 5% in terms of impact factor? Or 10%? Or some other metric? The authors also do not provide the (computed) impact factors of the journals in the supplementary.

      We apologize for the lack of clarity about our selection of journals. We agree that there are limitations to selecting higher impact journals. However, we needed to apply some form of selection in order to make the analysis manageable. For instance, even these 63 journals include over five million citations. We better describe our rationale behind the approach as follows (page 17, line 578):

      “We collected data from the 25 journals with the highest impact factors, based on Web of Science impact factors, in each of Neurology, Neuroscience, and Psychiatry. Some journals appeared in the top 25 list of multiple fields (e.g., both Neurology and Neuroscience), so 63 journals were ultimately included in our analysis. We recognize that limiting the journals to the top 25 in each field also limits the generalizability of the results. However, there are tradeoffs between breadth of journals and depth of information. For example, by limiting the journals to these 63, we were able to look at 21 years of data (2000-2020). In addition, the definition of fields is somewhat arbitrary. By restricting the journals to a set of 63 well-known journals, we ensured that the journals belonged to Neurology, Neuroscience, or Psychiatry research. It is also important to note that the impact factor of these journals has not necessarily always been high. For example, Acta Neuropathologica had an impact factor of 17.09 in 2020 but 2.45 in 2000. To further recognize the effects of impact factor, we decided to include an impact factor term in our models.”

      In addition, we have now provided the 2020 impact factors in Table S1.

      By exclusively focusing on high impact journals, your analysis may not be representative of the broader landscape of self-citation patterns across the neuroscience literature, which is what the title of the article claims to do.

      We agree that this article is not indicative of all neuroscience literature, but rather the top journals. Thus, we have changed the title to: “Trends in Self-citation Rates in High-impact Neurology, Neuroscience, and Psychiatry Journals”. We would also like to note that compared to previous bibliometrics works in neuroscience (Bertolero et al. 2020; Dworkin et al. 2020; Fulvio et al. 2021), this article includes a wider range of data.

      (2) One other concern pertains to the possibility that a significant number of authors involved in the paper may not be neuroscientists. It is plausible that the paper is a product of interdisciplinary collaboration involving scientists from diverse disciplines. Neuroscientists amongst the authors should be identified.

      In our opinion, neuroscience is a broad, interdisciplinary field. Individuals performing neuroscience research may have a neuroscience background. Yet, they may come from many backgrounds, such as physics, mathematics, biology, chemistry, or engineering. As such, we do not believe that it is feasible to characterize whether each author considers themselves a neuroscientist or not. We have added the following to the limitations section (page 16, line 528):

      “Eighth, authors included in this work may not be neurologists, neuroscientists, or psychiatrists. However, they still publish in journals from these fields.”

      (3) When calculating self-citation rate, it is important to consider the number of papers the authors have published to date. One plausible explanation for the lower self-citation rates among first authors could be attributed to their relatively junior status and short publication record. As such, it would also be beneficial to assess self-citation rate as a percentage relative to the author's publication history. This number would be more accurate if we look at it as a percentage of their publication history. My suspicion is that first authors (who are more junior) might be more likely to self-cite than their senior counterparts. My suspicion was further raised by looking at Figures 2a and 3. Considering the nature of the self-citation metric employed in the study, it is expected that authors with a higher level of seniority would have a greater number of publications. Consequently, these senior authors' papers are more likely to be included in the pool of references cited within the paper, hence the higher rate.

      While the authors acknowledge the importance of the number of past publications in their gender analysis, it is just as important to include the interplay of seniority in (1) their first and last author self-citation rates and (2) their geographic analysis.

      Thank you for this thoughtful comment. We agree that seniority and prior publication history play an important role in self-citation rates.

      For comparing First/Last Author self-citation rates, we have now included a plot similar to Figure 2a, where self-citation as a percentage of prior publication history is plotted.

      (page 4, line 161): “Analyzing self-citations as a fraction of publication history exhibited a similar trend (Figure S3). Notably, First Authors were more likely than Last Authors to self-cite when normalized by prior publication history.”
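The normalization described above can be made concrete with a toy sketch. The per-paper numbers below are invented; the point is only the contrast between the two denominators (reference-list length vs. prior publication count):

```python
# Hypothetical per-paper records: (n_self_citations, n_references, n_prior_papers)
junior_first_author = (3, 40, 10)
senior_last_author = (8, 50, 120)

def raw_rate(n_self, n_refs, n_prior):
    # Conventional self-citation rate: fraction of the reference list.
    return n_self / n_refs

def history_normalized(n_self, n_refs, n_prior):
    # Self-citations per paper in the author's prior publication history.
    return n_self / n_prior

# With these invented numbers, the senior author has the higher raw rate,
# but the junior author self-cites far more per prior paper -- mirroring
# how the two metrics can rank First and Last Authors differently.
```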

      For the geographic analysis, we made two new maps: 1) that of the number of previous papers, and 2) that of the journal impact factor (see response to point #4 below).

      (page 5, line 185): “We also investigated the distribution of the number of previous papers and journal impact factor across countries (Figure S4). Self-citation maps by country were highly correlated with maps of the number of previous papers (Spearman’s r=0.576, P=4.1e-4; r=0.654, P=1.8e-5 for First and Last Authors, respectively). They were significantly correlated with maps of average impact factor for Last Authors (r=0.428, P=0.014) but not First Authors (r=0.157, P=0.424). Thus, further investigation is necessary with these covariates in a comprehensive model.”
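Country-level rank correlations of this kind can be sketched without any external libraries. The country values below are hypothetical placeholders, not the paper's data; the function implements the classic Spearman d-squared formula (assuming no ties):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n(n^2-1)); assumes no ties."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical per-country values: mean self-citation rate and mean number
# of previous papers per author (illustrative only).
self_citation_rate = [0.05, 0.08, 0.10, 0.12, 0.15]
prior_papers = [5, 12, 10, 20, 30]

rho = spearman_rho(self_citation_rate, prior_papers)
```

In practice one would use a library routine that also handles ties and returns a P value, but the ranking logic is the same.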

      Finally, we included a model term for the number of previous papers (Table 2). We analyzed this both for self-citation counts and self-citation rates and found a strong relationship between publication history and self-citations. We also included the following section where we modeled the number of previous papers for each author (page 13, line 384):

      “2.10 Reconciling differences between raw data and models

      The raw and GAM-derived data exhibited some conflicting results, such as for gender and field of research. To further study covariates associated with this discrepancy, we modeled the publication history for each author (at the time of publication) in our dataset (Table 2). The model terms included academic age, article year, journal impact factor, field, LMIC status, gender, and document type. Notably, Neuroscience was associated with the fewest number of papers per author. This explains how authors in Neuroscience could have the lowest raw self-citation rates but the highest self-citation rates after including covariates in a model. In addition, being a man was associated with about 0.25 more papers. Thus, gender differences in self-citation likely emerged from differences in the number of papers, not in any self-citation practices.”
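The reversal described in this new section, where the field with the lowest raw self-citation rate shows the highest rate once publication history is accounted for, is a Simpson's-paradox-style effect. A toy sketch with invented counts (not the paper's data) shows how this can happen:

```python
# Invented counts illustrating how conditioning on a covariate (prior paper
# count) can reverse a raw between-field comparison. NOT the paper's data.
# rows: (field, prior_paper_stratum, n_citations, n_self_citations)
data = [
    ("Neuroscience", "few prior papers", 1000, 60),    # 6.0%
    ("Neuroscience", "many prior papers", 200, 40),    # 20.0%
    ("Psychiatry", "few prior papers", 200, 10),       # 5.0%
    ("Psychiatry", "many prior papers", 1000, 180),    # 18.0%
]

def rate(rows):
    """Pooled self-citation rate over a set of rows."""
    return sum(r[3] for r in rows) / sum(r[2] for r in rows)

raw = {f: rate([r for r in data if r[0] == f])
       for f in ("Neuroscience", "Psychiatry")}
by_stratum = {(r[0], r[1]): r[3] / r[2] for r in data}

# Raw: Neuroscience looks lower overall, because most of its citations come
# from the low-history stratum -- yet within EVERY stratum it is higher.
```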

      (4) Because your analysis is limited to high impact journals, it would be beneficial to see the distribution of the impact factors across the different countries. Otherwise, your analysis on geographic differences in self-citation rates is hard to interpret. Are these differences really differences in self-citation rates, or differences in journal impact factor? It would be useful to look at the representation of authors from different countries for different impact factors.

      We made a map of this in Figure S4 (see our response to point #3 above).

      (page 5, line 185): “We also investigated the distribution of the number of previous papers and journal impact factor across countries (Figure S4). Self-citation maps by country were highly correlated with maps of the number of previous papers (Spearman’s r=0.576, P=4.1e-4; r=0.654, P=1.8e-5 for First and Last Authors, respectively). They were significantly correlated with maps of average impact factor for Last Authors (r=0.428, P=0.014) but not First Authors (r=0.157, P=0.424). Thus, further investigation is necessary with these covariates in a comprehensive model.”

      We also included impact factor as a term in our model. The results suggest that there are still geographic differences (Table 2, Table S5).

      (5) The presence of self-citations is not inherently problematic, and I appreciate the fact that authors omit any explicit judgment on this matter. That said, without appropriate context, self-citations are also not the best scholarly practice. In the analysis on gender differences in self-citations, it appears that authors imply an expectation of women's self-citation rates to align with those of men. While this is not explicitly stated, use of the word "disparity", and also presentation of self-citation as an example of self-promotion in discussion suggest such a perspective. Without knowing the context in which the self-citation was made, it is hard to ascertain whether women are less inclined to self-promote or that men are more inclined to engage in strategic self-citation practices.

      We agree that on the level of an individual self-citation, our study is not useful for determining how related the papers are. Yet, understanding overall trends in self-citation may help to identify differences. Context is important, but large datasets allow us to investigate broad trends. We added the following text to the limitations section (page 16, line 524):

      “In addition, these models do not account for whether a specific citation is appropriate, as some situations may necessitate higher self-citation rates.”

      Reviewer #3 (Public Review):

      This paper analyses self-citation rates in the field of Neuroscience, comprising in this case, Neurology, Neuroscience and Psychiatry. Based on data from Scopus, the authors identify self-citations, that is, whether references from a paper by some authors cite work that is written by one of the same authors. They separately analyse this in terms of first-author self-citations and last-author self-citations. The analysis is well-executed and the analysis and results are written down clearly. There are some minor methodological clarifications needed, but more importantly, the interpretation of some of the results might prove more challenging. That is, it is not always clear what is being estimated, and more importantly, the extent to which self-citations are "problematic" remains unclear.

      Thank you for your review. We attempted to improve the interpretation of results, as described in the following responses.

      When are self-citations problematic? As the authors themselves also clarify, "self-citations may often be appropriate". Researchers cite their own previous work for perfectly good reasons, similar to reasons why they would cite work by others. The "problem", in a sense, is that researchers cite their own work just to increase the citation count, or to promote their own work and make it more visible. This self-promotional behaviour might be incentivised by certain research evaluation procedures (e.g. hiring, promoting) that overly emphasise citation performance. However, the true problem then might not be (self-)citation practices, but instead, the flawed research evaluation procedures that emphasise citation performance too much. So instead of problematising self-citation behaviour, and trying to address it, we might do better to address flawed research evaluation procedures. Of course, we should expect references to be relevant, and we should avoid self-promotional references, but addressing self-citations may just have minimal effects, and would not solve the more fundamental issue.

      We agree that this dataset is not designed to investigate the downstream effects of self-citations. However, self-citation practices are more likely to be problematic when they differ across specific groups. This work can potentially spark more interest in future longitudinal designs to investigate, for example, whether differences in self-citation practices lead to differences in career outcomes. We added the following text to clarify (page 17, line 565):

      “Yet, self-citation practices become problematic when they are different across groups or are used to “game the system.” Future work should investigate the downstream effects of self-citation differences to see whether they impact the career trajectories of certain groups. We hope that this work will help to raise awareness about factors influencing self-citation practices to better inform authors, editors, funding agencies, and institutions in Neurology, Neuroscience, and Psychiatry.”

      Some other challenges arise when taking a statistical perspective. For any given paper, we could browse through the references, and determine whether a particular reference would be warranted or not. For instance, we could note that there might be a reference included that is not at all relevant to the paper. Taking a broader perspective, the irrelevant reference might point to work by others, included just for reasons of prestige, so-called perfunctory citations. But it could of course also include self-citations. When we simply start counting all self-citations, we do not see what fraction of those self-citations would be warranted as references. The question then emerges, what level of self-citations should be counted as "high"? How should we determine that? If we observe differences in self-citation rates, what does it tell us?

      Our focus is on cases where self-citation practices differ across groups. We agree that, on a case-by-case basis, there is no exact number above which a self-citation rate counts as “high.” With a dataset of the current size, evaluating whether each individual self-citation is appropriate is not feasible. If we observe differences in self-citation rates, this may tell us about broad (not individual-level) trends and differences in self-citing practice. If one group self-cites at a much higher rate than another, even after adjusting for relevant covariates such as prior publication history, then the difference can likely be attributed to differences in self-citation practices and behaviors.

      For example, the authors find that the (any author) self-citation rate in Neuroscience is 10.7% versus 15.9% in Psychiatry. What does this difference mean? Are psychiatrists citing themselves more often than neuroscientists? First author men showed a self-citation rate of 5.12% versus a self-citation rate of 3.34% of women first authors. Do men engage in more problematic citation behaviour? Junior researchers (10-year career) show a self-citation rate of about 5% compared to a self-citation rate of about 10% for senior researchers (30-year career). Are senior researchers therefore engaging in more problematic citation behaviour? The answer is (most likely) "no", because senior authors have simply published more, and will therefore have more opportunities to refer to their own work. To be clear: the authors are aware of this, and also take this into account. In fact, these "raw" various self-citation rates may, as the authors themselves say, "give the illusion" of self-citation rates, but these are somehow "hidden" by, for instance, career seniority.

      We included numerous covariates in our model. In addition, to address the difference between “raw” and “modeled” self-citation rates, we added the following section (page 13, line 384):

      “2.10 Reconciling differences between raw data and models

      The raw and GAM-derived data exhibited some conflicting results, such as for gender and field of research. To further study covariates associated with this discrepancy, we modeled the publication history for each author (at the time of publication) in our dataset (Table 2). The model terms included academic age, article year, journal impact factor, field, LMIC status, gender, and document type. Notably, Neuroscience was associated with the fewest number of papers per author. This explains how authors in Neuroscience could have the lowest raw self-citation rates but the highest self-citation rates after including covariates in a model. In addition, being a man was associated with about 0.25 more papers. Thus, gender differences in self-citation likely emerged from differences in the number of papers, not in any self-citation practices.”

      Again, the authors do consider this, and "control" for career length and number of publications, et cetera, in their regression model. Some of the previous observations then change in the regression model. Neuroscience doesn't seem to be self-citing more; there just seem to be more junior researchers in that field compared to Psychiatry. Similarly, men and women don't seem to show overall different self-citation behaviour (although the authors find an early-career difference); the men included in the study simply have longer careers and more publications.

      But here's the key issue: what does it then mean to "control" for some variables? This doesn't make any sense, except in the light of causality. That is, we should control for some variable, such as seniority, because we are interested in some causal effect. The field may not "cause" the observed differences in self-citation behaviour, this is mediated by seniority. Or is it confounded by seniority? Are the overall gender differences also mediated by seniority? How would the selection of high-impact journals "bias" estimates of causal effects on self-citation? Can we interpret the coefficients as causal effects of that variable on self-citations? If so, would we try to interpret this as total causal effects, or direct causal effects? If they do not represent causal effects, how should they be interpreted then? In particular, how should it "inform author, editors, funding agencies and institutions", as the authors say? What should they be informed about?

      We apologize for our misuse of language. We will be more clear, as in most previous self-citation papers, that our analysis is NOT causal. Causal datasets do have some benefits in citation research, but a limitation is that they may not cover as wide of a range of authors. Furthermore, non-causal correlational studies can still be useful in informing authors, editors, funding agencies, and institutions. Association studies are widely used across various fields to draw non-causal conclusions. We made numerous changes to reduce our causal language.

      Before: “We then developed a probability model of self-citation that controls for numerous covariates, which allowed us to obtain significance estimates for each variable of interest.”

      After (page 3, line 113): “We then developed a probability model of self-citation that includes numerous covariates, which allowed us to obtain significance estimates for each variable of interest.”

      Before: “As such, controlling for various author- and article-level characteristics can improve the interpretability of self-citation rate trends.”

      After (page 11, line 321): “As such, covarying various author- and article-level characteristics can improve the interpretability of self-citation rate trends.”

      Before: “Initially, it appeared that self-citation rates in Neuroscience are lower than Neurology and Psychiatry, but after controlling for various confounds, the self-citation rates are higher in Neuroscience.”

      After (page 15, line 468): “Initially, it appeared that self-citation rates in Neuroscience are lower than Neurology and Psychiatry, but after considering several covariates, the self-citation rates are higher in Neuroscience.”

      We also added the following text to the limitations section (page 16, line 526):

      “Seventh, the analysis presented in this work is not causal. Association studies are advantageous for increasing sample size, but future work could investigate causality in curated datasets.”

      The authors also "encourage authors to explore their trends in self-citation rates". It is laudable to be self-critical and review ones own practices. But how should authors interpret their self-citation rate? How useful is it to know whether it is 5%, 10% or 15%? What would be the "reasonable" self-citation rate? How should we go about constructing such a benchmark rate? Again, this would necessitate some causal answer. Instead of looking at the self-citation rate, it would presumably be much more informative to simply ask authors to check whether references are appropriate and relevant to the topic at hand.

      We believe that our tool is valuable for authors to contextualize their own self-citation rates. For instance, if an author has published hundreds of articles, it is not practical to count the number of self-citations in each. We have added two portions of text to the limitations section:

      (page 16, line 524): “In addition, these models do not account for whether a specific citation is appropriate, though some situations may necessitate higher self-citation rates.”

      (page 16, line 535): “Despite these limitations, we found significant differences in self-citation rates for various groups, and thus we encourage authors to explore their trends in self-citation rates. Self-citation rates that are higher than average are not necessarily wrong, but suggest that authors should further reflect on their current self-citation practices.”

      In conclusion, the study shows some interesting and relevant differences in self-citation rates. As such, it is a welcome contribution to ongoing discussions of (self) citations. However, without a clear causal framework, it is challenging to interpret the observed differences.

      We agree that causal studies provide many benefits. Yet, association studies also provide many benefits. For example, an association study allowed us to analyze a wider range of articles than a causal study would have.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      Statistical suggestions:

      (1) To improve statistical inference, nesting should be accounted for in all of the analyses. For example, the logistic regression model using citing/cited pairs should include random effects for article, author, and perhaps subfield, in order for independence of observations to be plausible. Similarly, bootstrapping and permutation would ideally occur at the author level rather than (or in addition to) the paper level.

      Detailed updates addressing these points are in the public review. In short, we found computational challenges with many levels of the random effects (>100,000) and millions of observations at the citation-pair level. As such, we decided to model citation rates and counts by paper. In this case, we found that results could be unstable when including author-level random effects because in many cases there was only one author per group. Instead, to avoid inappropriately narrow confidence bands, we resampled the dataset such that each author was only represented once. For example, if Author A had five papers in this dataset, then one of their five papers was randomly selected. We repeated the random resampling 100 times (Figure S12). We updated the description of our models in the Methods section (page 21, line 754).
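The one-paper-per-author resampling described above can be sketched in a few lines. The author and paper IDs below are hypothetical; the real pipeline would refit the model on each of the 100 subsamples:

```python
import random

random.seed(0)  # reproducible draws for this sketch

# Hypothetical mapping from author ID to that author's papers in the dataset.
papers_by_author = {
    "author_A": ["p1", "p2", "p3", "p4", "p5"],
    "author_B": ["p6"],
    "author_C": ["p7", "p8"],
}

def one_paper_per_author(papers_by_author):
    """One resampling draw: keep exactly one randomly chosen paper per author."""
    return {author: random.choice(papers)
            for author, papers in papers_by_author.items()}

# The manuscript repeats the resampling 100 times and refits the model on
# each subsample; here we only generate the draws.
draws = [one_paper_per_author(papers_by_author) for _ in range(100)]
```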

      For permutation tests and bootstrapping, we now define an “exchangeability block” as a co-authorship group of authors. In this dataset, that meant any authors who published together (among the articles in this dataset) as a First Author / Last Author pairing were assigned to the same exchangeability block. It is not realistic to check for overlapping middle authors in all papers because of the collaborative nature of the field. In addition, we believe that self-citations are primarily controlled by first and last authors, so we can assume that middle authors do not control self-citation habits. We then performed bootstrapping and permutation tests within the constraints of the exchangeability blocks.
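Building the exchangeability blocks amounts to computing connected components over first/last author pairings, which a small union-find handles. The author labels below are hypothetical:

```python
# Hypothetical papers as (first_author, last_author) pairs. Authors who ever
# appear together in a first/last pairing are merged into one block.
papers = [("A", "X"), ("A", "X"), ("B", "Y"), ("C", "Y"), ("D", "Z")]

parent = {}

def find(a):
    """Union-find root lookup with path compression."""
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[rb] = ra

for first, last in papers:
    union(first, last)

# Group paper indices by the block of their first author. Permutation and
# bootstrap resampling then operate on whole blocks, not individual papers,
# so dependent papers stay together.
blocks = {}
for i, (first, last) in enumerate(papers):
    blocks.setdefault(find(first), []).append(i)
```

Here "B" and "C" land in one block because both published with last author "Y", while "A"/"X" and "D"/"Z" form their own blocks.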

      (2) In general, I am having trouble understanding the structure of the regression models. My current belief is that rows are composed of individual citations from papers' reference lists, with the outcome representing their status as a self-citation or not, and with various citing article and citing author characteristics as predictors. However, the fact that author type is included in the model as a predictor (rather than having a model for FA self-citations and another for LA self-citations) suggests to me that each citation is entered as two separate rows - once noting whether it was a FA self-citation and once noting whether it was an LA self-citation - and then it is run as a single model.

      (2a) If I am correct, the model is unlikely to be producing valid inference. I would recommend breaking this analysis up into two separate models, and including article-, author-, and subfield-level random effects. You could theoretically include a citation-level random effect and keep it as one model, but each 'group' would only have two observations and the model would be fairly unstable as a result.

      (2b) If I am misunderstanding (and even if not), I would encourage you to provide a more detailed description of the dataset structure and the model - perhaps with a table or diagram

      We split the data into two models and decided to model on the level of a paper (self-citation rate and self-citation count). In addition, we subsampled the dataset such that each author only appears once to avoid misestimation of confidence intervals (see point (1) above). As described in the public review, we included much more detail in our methods section now to improve the clarity of our models.

      (3) I would suggest removing the inverse hyperbolic sine transform and replacing it with a more flexible approach to estimating the relationships' shape, like generalized additive models or other spline-based methods to ensure that the chosen method is appropriate - or at the very least checking that it is producing a realistic fit that reflects the underlying shape of the relationships.

      More details are available in the public review, but we now use GAMs throughout the manuscript.

      (4) For the "highly self-citing" analysis, it is unclear why papers in the 15-25% range were dropped rather than including them as their own category in an ordinal model. I might suggest doing the latter, or explaining the decision more fully

      We previously included this analysis as a paper-level model because our main model was at the level of citation pairs. Now, we removed this analysis because we model self-citation rates and counts by paper.

      (5) It would be beneficial for the reader to know what % of the data was dropped for each analysis, and for your team to make sure that there is not differential missing data that could affect the interpretation of the results (e.g., differences in self-citation being due to differences in Scopus ID coverage).

      Thank you for this suggestion. We added more detailed missingness data to 4.3 Data exclusions and missingness. We did find differential missingness and added it to the limitations section. However, certain aspects of this cannot be corrected because the data are just not available (e.g., Scopus coverage issues). Further details are available in the public review.

      Conceptual thoughts:

      (1) I agree with your decision to focus on the second definition of self-citation (self-cites relative to my citations to others' work) rather than the first (self-cites relative to others' citations to my work). But it does seem that the first definition is relevant in the context of gaming citation metrics. For example, someone who writes one paper per year with a reference list of 30% self-citations will have much less of an impact on their H-index than someone who writes 10 papers per year with 10% self-citations. It could be interesting to see how these definitions interact, and whether people who are high on one measure tend to be high on the other.

      We agree this would be interesting to investigate in the future. Unfortunately, our dataset is organized at the level of the paper and thus does not contain information regarding how many times the authors cite a particular work. We hope that we can explore this interaction in the future.

      (2) This is entirely speculative, but I wonder whether the increasing rate of LA self-citation relative to FA self-citation is partly due to PIs over-citing their own lab to build up their trainees' citation records and help them succeed in an increasingly competitive job market. This sounds more innocuous than doing it to benefit their own reputation, but it would provide another mechanism through which students from large and well-funded labs get a leg-up in the job market. Might be interesting to explore, though I'm not exactly sure how :)

      This is a very interesting point. We do not have any means to investigate this with the current dataset, but we added it to the discussion (page 14, line 421):

      “A third, more optimistic explanation is that principal investigators (typically Last Authors) are increasingly self-citing their lab’s papers to build up their trainees’ citation records for an increasingly competitive job market.”

      Reviewer #2 (Recommendations For The Authors):

      (1) In regards to point 1 in the public review: In the spirit of transparency, the authors would benefit from providing a rationale for their choice of top journals, and the methodology used to identify them. It would also be valuable to include the impact factor of each journal in the S1 table alongside their names.

      Given the availability and executability of code, it would be useful to see how and if the self-citation trends vary amongst the "low impact" journals (as measured by the IF). This could go in any of the three directions:

      a. If it is found that self-citations are not as prevalent in low impact journals, this could be a great starting point for a conversation around the evaluation of journals based on impact factor, and the role of self-citations in it.

      b. If it is found that self-citations are as prevalent in low impact journals as high impact journals, that just strengthens your results further.

      c. If it is found that self-citations are more prevalent in low impact journals, this would mean your current statistics are a lower bound to the actual problem. This is also intuitive in the sense that high impact journals get more external citations (and more exposure) than low impact journals, as such authors (and journals) may be less likely to self-cite.

      Expanding the dataset to include many more journals was not feasible. Instead, we included an impact factor term in our models, as detailed in the public review. We found no strong trends in the association between impact factor and self-citation rate/count. Another important note is that these journals were considered “high impact” in 2020, but many had lower impact factors in earlier years. Thus, our modeling allows us to estimate how impact factor is related to self-citations across a wide range of impact factors.

      It is crucial to consider utilizing such a comprehensive database as Scopus, which provides a more thorough list of all journals in Neuroscience, to obtain a more representative sample. Alternatively, other datasets like Microsoft Academic Graph, and OpenAlex offer information on the field of science associated with each paper, enabling a more comprehensive analysis.

      We agree that certain datasets may offer a wider view of the entire field. However, we included a large number of papers and journals relative to previous studies. In addition, Scopus provides a lot of detailed and valuable author-level information. We had to limit our calls to the Scopus API, so we restricted journals by 2020 impact factor.

      (2) In regards to point 2 in the public review: To enhance the accuracy and specificity of the analysis, it would be beneficial to distinguish neuroscientists among the co-authors. This could be accomplished by examining their publication history leading up to the time of publication of the paper, and identify each author's level of engagement and specialization within the field of neuroscience.

      Since the field of neuroscience is largely based on collaborations, we find that it might be impossible to determine who is a neuroscientist. For example, a researcher with a publication history in physics may now be focusing on computational neuroscience research. As such, we feel that our current work, which ensures that the papers belong to neuroscience, is representative of what one may expect in terms of neuroscience research and collaboration.

      (3) In regards to point 3 in the public review: I highly recommend plotting self-citation rate as the number of papers in the reference list over the number of total publications to date of paper publication.

      As described in the public review, we have now done this (Figure S3).

      (4) In regards to point 5 in the public review: It would be useful to consider the "quality" of citations to further the discussion on self-citations. For instance, differentiating between self-citations that are perfunctory and superficial from those that are essential for showing developmental work, would be a valuable contribution.

      Other databases may have access to this information, but ours unfortunately does not. We agree that this is an interesting area of work.

      (5) The authors are to be commended for their logistic regression models, as they control for many confounders that were lacking in their earlier descriptive statistics. However, it would be beneficial to rerun the same analysis but on a linear model whereby the outcome variable would be the number of self-citations per author. This would possibly resolve many of the comments mentioned above.

      Thank you for your suggestion. As detailed in the public review, we now model the number of self-citations. This is modeled on the paper level, not the author level, because our dataset was downloaded by paper, not by author.
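      A paper-level count model of this kind can be sketched as follows. This is a toy illustration with made-up numbers, not the authors' actual model (which includes more covariates): a Poisson regression of self-citation counts per paper, with the reference-list length as an exposure term, fit by iteratively reweighted least squares in plain NumPy.

```python
import numpy as np

# Hypothetical paper-level data (invented numbers, for illustration only):
# one row per paper.
self_cites = np.array([2, 0, 5, 1, 3, 0, 4, 2], dtype=float)          # self-citations
n_refs     = np.array([40, 35, 60, 30, 55, 25, 70, 45], dtype=float)  # reference-list length
age        = np.array([5, 2, 20, 8, 15, 3, 25, 10], dtype=float)      # Last Author academic age

# Design matrix: intercept + academic age.
X = np.column_stack([np.ones_like(age), age])
offset = np.log(n_refs)  # exposure: longer reference lists can hold more self-citations

# Poisson regression (log link) fit by iteratively reweighted least squares.
beta = np.zeros(X.shape[1])
for _ in range(50):
    mu = np.exp(X @ beta + offset)            # expected self-citation counts
    z = (X @ beta) + (self_cites - mu) / mu   # working response, offset removed
    W = mu                                    # Poisson working weights
    beta = np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (W * z))

print(beta)  # [intercept, age coefficient] on the log self-citation-rate scale
```

      Modeling the count with an exposure term, rather than the raw rate, keeps papers with longer reference lists from being mechanically favored.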

      Minor suggestions:

      (1) Abstract says one of your findings is: "increasing self-citation rates of First Authors relative to Last Authors". Your results actually show the opposite (see Figure 1b).

      Thank you for catching this error. We corrected it to match the results and discussion in the paper:

      “…increasing self-citation rates of Last Authors relative to First Authors.”

      (2) It might be interesting to compute an average academic age for each paper, and look at self-citation vs average academic age plot.

      We agree that this would be an interesting analysis. However, to limit calls to the API, we collected academic age data only on First and Last Authors.

      (3) It may be interesting to look at the distribution of women in different subfields within neuroscience, and the interaction of those in the context of self-citations.

      Thank you for this interesting suggestion. We added the following analysis (page 9, line 305):

      “Furthermore, we explored topic-by-gender interactions (Figure S10). In short, men and women were relatively equally represented as First Authors, but more men were Last Authors across all topics. Self-citation rates were higher for men across all topics.”

      Reviewer #3 (Recommendations For The Authors):

      - In the abstract, "flaws in citation practices" seems worded rather strongly.

      We respectfully disagree, as previous works have shown significant bias in citation practices. For example, Dworkin et al. (Dworkin et al. 2020) found that neuroscience reference lists tended to under-cite women, even after including various covariates.

      - Links of the references point to (non-accessible) paperpile references; you would probably want to update this.

      We apologize for the inconvenience and have now removed these links.

      - p 2, l 24: The explanation of ref. (5) seems to be a bit strangely formulated. The point of that article is that citations to work that reinforces a particular belief are more likely to be cited, which *creates* unfounded authority. The unfounded authority itself is hence not part of the citation practices.

      Thank you for catching our misinterpretation. We have now removed this part of the sentence.

      - p 3, l 16: "h indices" or "citations" instead of "h-index".

      We now say “h-indices”.

      - p 5, l 5: how was the manual scoring done?

      We added the following to the caption of Figure S1.

      “Figure S1. Comparison between manual scoring of self-citation rates and self-citation rates estimated from Python scripts in 5 Psychiatry journals: American Journal of Psychiatry, Biological Psychiatry, JAMA Psychiatry, Lancet Psychiatry, and Molecular Psychiatry. 906 articles in total were manually evaluated (10 articles per journal per year from 2000-2020, four articles excluded for very large author list lengths and thus high difficulty of manual scoring). For manual scoring, we downloaded information about all references for a given article and searched for matching author names.”

      - p 5, l 23: Why this specific p-value upper bound of 4e-3? From later in the article, I understand that this stems from the 10,000 bootstrap samples, with then taking a Bonferroni correction? Perhaps good to clarify this briefly somewhere.

      Thank you for this suggestion. We now perform Benjamini/Hochberg false discovery rate (FDR) correction, but we added a description of the minimum P value from permutations (page 21, line 748):

      “All P values described in the main text were corrected with the Benjamini/Hochberg false discovery rate (FDR) correction. With 10,000 permutations, the lowest P value after applying FDR correction is P=2.9e-4, which indicates that the true point would be the most extreme in the simulated null distribution.”
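      As a generic illustration of the two pieces described here, the permutation P value floor and the Benjamini/Hochberg adjustment (this is not the authors' code, and the exact FDR-corrected floor of 2.9e-4 depends on how many tests are corrected together):

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(observed, null_draws):
    """Two-sided permutation P value with the standard +1 correction, so the
    smallest attainable value with n permutations is 1 / (n + 1)."""
    exceed = np.sum(np.abs(null_draws) >= abs(observed))
    return (exceed + 1) / (len(null_draws) + 1)

def benjamini_hochberg(pvals):
    """Benjamini/Hochberg FDR-adjusted P values."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    adj = np.empty(n)
    running_min = 1.0
    for i, idx in enumerate(order[::-1]):   # from largest P to smallest
        rank = n - i                        # 1-based rank of p[idx]
        running_min = min(running_min, p[idx] * n / rank)
        adj[idx] = running_min
    return adj

# With 10,000 permutations, an observed statistic more extreme than every
# null draw bottoms out at P = 1/10001, before any multiplicity correction.
null = rng.normal(size=10_000)
p = permutation_pvalue(10.0, null)
print(p)  # 1/10001, about 1e-4
```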

      - Fig. 1, caption: The (a) and (b) labelling here is a bit confusing, because the first sentence suggests both figures portray the same, but do so for different time periods. Perhaps rewrite, so that (a) and (b) are both described in a single sentence, instead of having two different references to (a) and (b).

      Thank you for pointing this out. We fixed the labeling of this caption:

      “Figure 1. Visualizing recent self-citation rates and temporal trends. a) Kernel density estimate of the distribution of First Author, Last Author, and Any Author self-citation rates in the last five years. b) Average self-citation rates over every year since 2000, with 95% confidence intervals calculated by bootstrap resampling.”
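      The bootstrap confidence intervals mentioned in the caption can be computed as in this generic sketch (the data here are simulated, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_mean_ci(values, n_boot=10_000, level=0.95):
    """Percentile bootstrap confidence interval for the mean."""
    values = np.asarray(values, dtype=float)
    # Resample with replacement, n_boot times.
    idx = rng.integers(0, len(values), size=(n_boot, len(values)))
    boot_means = values[idx].mean(axis=1)
    alpha = (1.0 - level) / 2.0
    lo, hi = np.quantile(boot_means, [alpha, 1.0 - alpha])
    return lo, hi

# Simulated per-paper self-citation rates for one publication year
# (a right-skewed distribution, purely illustrative).
rates = rng.beta(2, 20, size=500)
lo, hi = bootstrap_mean_ci(rates)
print(f"mean={rates.mean():.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```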

      - p7, l 9: Regarding "academic age", note that there might be a difference between "age" effects and "cohort" effects. That is, there might be difference between people with a certain career age who started in 1990 and people with the same career age, but who started in 2000, which would be a "cohort" effect.

      We agree that this is a possible effect and have added it to the limitations (page 16, line 532):

      “Tenth, while we considered academic age, we did not consider cohort effects. Cohort effects would depend on the year in which the individual started their career.”

      - p 7, l 15: "jumps" suggests some sort of sudden or discontinuous transition, I would just say "increases".

      We now say “increases.”

      - Fig. 2: Perhaps it should be made more explicit that this includes only academics with at least 50 papers. Could the authors please clarify whether the same limitation of at least 50 papers also features in other parts of the analysis where academic age is used? This selection could affect the outcomes of the analysis, so its consequences should be carefully considered. One possibility for instance is that it selects people with a short career length who have been exceptionally productive, namely those that have had 50 papers, but only started publishing in 2015 or so. Such exceptionally productive people will feature more highly in the early career part, because they need to be so productive in order to make the cut. For people with a longer career, the 50 papers would be less of a hurdle, and so would select more and less productive people more equally.

      We apologize for the lack of clarity. We did not use this requirement where academic age was used. We mainly applied this requirement when aggregating by country, as we did not want to calculate self-citation rate in a country based on only several papers. We have clarified various data exclusions in our new section 4.3 Data exclusions and missingness.

      - p 8, l 11: The affiliated institution of an author is not static, but rather changes throughout time. Did the authors consider this? If not, please clarify that this refers to only the most recent affiliation (presumably). Authors also often have multiple affiliations. How did the authors deal with this?

      The institution information is at the time of publication for each paper. We added more detail to our description of this on page 19, line 656:

      “For both First and Last Authors, we found the country of their institutional affiliation listed on the publication. In the case of multiple affiliations, the first one listed in Scopus was used.”

      - p 10, l 6: How were these self-citation rates calculated? This is averaged per author (i.e. only considering papers assigned to a particular topic) and then averaged across authors? (Note that in this way, the average of an author with many papers will weigh equally with the average of an author with few papers, which might skew some of the results).

      We calculate it across the entire topic (i.e., do NOT calculate by author first). We updated the description as follows (page 7, line 211):

      “We then computed self-citation rates for each of these topics (Figure 4) as the total number of self-citations in each topic divided by the total number of references in each topic…”
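      A toy sketch of this pooled computation (hypothetical topics and counts), showing that rates are aggregated over the whole topic rather than averaged per author or per paper:

```python
import pandas as pd

# Hypothetical paper-level records: topic label, self-citation count,
# and reference-list length for each paper.
papers = pd.DataFrame({
    "topic":      ["memory", "memory", "fMRI", "fMRI", "fMRI"],
    "self_cites": [3, 1, 0, 4, 2],
    "n_refs":     [50, 30, 40, 60, 50],
})

# Pooled rate per topic: total self-citations over total references,
# not an average of per-paper (or per-author) rates.
totals = papers.groupby("topic")[["self_cites", "n_refs"]].sum()
pooled_rate = totals["self_cites"] / totals["n_refs"]
print(pooled_rate)  # fMRI: 6/150 = 0.04, memory: 4/80 = 0.05
```

      Pooling weights every reference equally, so a topic's rate is dominated by its most reference-heavy papers rather than by its most prolific authors.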

      - p 13, l 18: Is the academic age analysis here again limited to authors having at least 50 papers?

      This is not limited to at least 50 papers. To clarify, the previous analysis was not limited to authors with 50 papers; it was instead limited to ages in our dataset that had at least 50 data points. For example, if an academic age of 70 only had 20 data points in our dataset, it would have been excluded.

      - Fig. 5: Here, comparing Fig. 5(d) and 5(f) suggests that partly, the self-citation rate differences between men and women, might be the result of the differences in number of papers. That is, the somewhat higher self-citation rate at a given academic age, might be the result of the higher number of papers at that academic age. It seems that this is not directly described in this part of the analysis (although this seems to be the case from the later regression analysis).

      We agree with this idea and have added a new section as follows (page 13, line 384):

      “2.10 Reconciling differences between raw data and models

      The raw and GAM-derived data exhibited some conflicting results, such as for gender and field of research. To further study covariates associated with this discrepancy, we modeled the publication history for each author (at the time of publication) in our dataset (Table 2). The model terms included academic age, article year, journal impact factor, field, LMIC status, gender, and document type. Notably, Neuroscience was associated with the fewest papers per author. This explains how authors in Neuroscience could have the lowest raw self-citation rates but the highest self-citation rates after including covariates in a model. In addition, being a man was associated with about 0.25 more papers. Thus, gender differences in self-citation likely emerged from differences in the number of papers, not in any self-citation practices.”

      - Section 2.10. Perhaps the authors could clarify that this analysis takes individual articles as the unit of analysis, not citations.

      We updated all our models to take individual articles and have clarified this with more detailed tables.

      - p 18, l 10: "Articles with between 15-25% self-citation rates were 10 discarded" Why?

      We agree that these should not be discarded. However, we previously included this analysis as a paper-level model because our main model was at the level of citation pairs. Now, we removed this analysis because we model self-citation rates and counts by paper.

      - p 20, l 5: "Thus, early-career researchers may be less incentivized to 5 self-promote (e.g., self-cite) for academic gains compared to 20 years ago." How about the possibility that there was less collaboration, so that first authors would be more likely to cite their own paper, whereas with more collaboration, they will more often not feature as first author?

      This is an interesting point. We feel that more collaboration would generally lead to even more self-citations, if anything. If an author collaborates more, they are more likely to be on some of the references as a middle author (which by our definition counts toward self-citation rates).

      - p 20, l 15: Here the authors call authors to avoid excessive self-citations. Of course, there's nothing wrong with calling for that, but earlier the authors were more careful to not label something directly as excessive self-citations. Here, by stating it like this, the authors suggest that they have looked at excessive self-citations.

      We rephrased this as follows:

      Before: “For example, an author with 30 years of experience cites themselves approximately twice as much as one with 10 years of experience on average. Both authors have plenty of works that they can cite, and likely only a few are necessary. As such, we encourage authors to be cognizant of their citations and to avoid excessive self-citations.”

      After: “For example, an author with 30 years of experience cites themselves approximately twice as much as one with 10 years of experience on average. Both authors have plenty of works that they can cite, and likely only a few are necessary. As such, we encourage authors to be cognizant of their citations and to avoid unnecessary self-citations.”

      - p 22, l 11: Here again, the same critique as p 20, l15 applies.

      We switched “excessively” to “unnecessarily.”

      - p 23, l 12: The authors here critique ref. (21) of ascertainment bias, namely that they are "including only highly-achieving researchers in the life 12 sciences". But do the authors not do exactly the same thing? That is, they also only focus on the top high-impact journals.

      We included 63 high-impact journals with tens of thousands of authors. In addition, some of these journals were not high-impact at the time of publication. For example, Acta Neuropathologica had an impact factor of 17.09 in 2020 but 2.45 in 2000. This still is a limitation of our work, but we do cover a much broader range of works than the listed reference (though their analysis also has many benefits since it included more detailed information).

      - p 26, l 22-26: It seems that the matching is done quite broadly (matching last names + initials at worst) for self-citations, while later (in section 4.9, p 31, l 9), the authors switch to only matching exact Scopus Author IDs. Why not use the same approach throughout? Or compare the two definitions (narrow / broad).

      Thank you for catching this mistake. We now use the approach of matching Scopus Author IDs throughout.
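      As a minimal sketch of the ID-based definition (the IDs below are invented for illustration), a citation counts as a self-citation when the citing and cited papers share at least one Scopus Author ID, which avoids the false positives of matching last names plus initials:

```python
# Under an ID-based definition, a citation is a self-citation when the
# citing and cited papers share at least one Scopus Author ID.
# The IDs below are made up for illustration.
def is_self_citation(citing_author_ids, cited_author_ids):
    return bool(set(citing_author_ids) & set(cited_author_ids))

citing  = ["57201234567", "7004567890"]   # author IDs on the citing paper
cited_a = ["7004567890", "35601234500"]   # shares one author ID
cited_b = ["56789012345"]                 # disjoint

print(is_self_citation(citing, cited_a))  # True
print(is_self_citation(citing, cited_b))  # False
```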

      - S8: it might be nice to explore open alternatives, such as OpenAlex or OpenAIRE, instead of the closed Scopus database, which requires paid access (which not all institutions have, perhaps that could also be corrected in the description in GitHub).

      Thank you for this suggestion. Unfortunately, switching databases would require starting our analysis from the beginning. On our GitHub page, we state: “Please email matthew.rosenblatt@yale.edu if you have trouble running this or do not have institutional access. We can help you run the code and/or run it for you and share your self-citation trends.” We feel that this will allow us to help researchers who may not have institutional access. In addition, we released our aggregated, de-identified (title and paper information removed) data on GitHub for other researchers to use.

    1. 34. "Fun." Last, but not least - games must be fun. No amount of emotional depth will save a game that is boring.

      We talked a lot about this last class, but I'd love to know what he meant by this.

    2. 26. "Connective Tissue" Techniques. These create a feeling of connectivity across the various locales in a game, even if they are distant in terms of space and time.

      I talked about this in relation to how Undertale uses music as part of its storytelling in a class I took last spring!

    1. The track with the next closest number of comments was a freestyle in which Jay-Z allegedly impugned Game. It was released a week later and garnered 589 comments and over 79,000 listens over the same period. In fact, four out of five of the longest message board threads (as of March 20, 2005) were posted on beef tracks. Much like the radio call-in forum discussed above, the postings on these message boards manifest one form of popular participation in beef; additionally, they reveal how meaningful public participation in beef discourse is enmeshed in the network of media broadcast and consumption.

      This shows online platforms' role in shaping the discourse around hip-hop beef. By looking at the engagement data from sites like hiphopgame.com, it becomes clear that public interest in beef is a substantial and structural part of hip-hop culture. The overwhelming response to tracks like 50 Cent’s “Piggybank” highlights how fans actively participate in these dialogues, transforming music into a communal experience. This interplay between media consumption and public commentary underscores the dynamic nature of hip-hop as both an art form and a site of cultural discussion.

    2. Norman Kelly analyzes the rap music industry as an extension of colonial economic structures that exploit African Americans. According to Kelly, the white-owned music industry has agency over the content of hip hop because they control the apparatus of distribution and the means of production. Since blacks failed to develop a viable alternative to corporate music production, when hip hop became commercialized, black artists lost their creative control over hip hop to the marketplace

      I never really thought about the music being some sort of economic strategy for economic dominance and power. After reading this part, I realize that it has now become a chain of reactions. Black artists create their art but don't have the resources needed to produce their music, thus forcing them to rely on the individuals that can produce their music and provide connections. The taking advantage of colored artists truly has become just another white man's game, and it's interesting to see how art has now become a business deal.

    1. Video summary [00:00:12][^1^][1] - [00:26:56][^2^][2]:

      This webinar presents the "Enseigner avec le jeu" ("Teaching with Games") course offered by the UNICAMP collective. Jérôme Le Gris Pages of the Université de Caen Normandie explains the objectives, content, and format of the course.

      Highlights: + [00:02:00][^3^][3] Presentation of the course * Course by UNICAMP * Recognized micro-certification * 12 hours over 3 weeks + [00:03:18][^4^][4] Talk by Jérôme Le Gris Pages * Historian of ideas * Vice-president of the Université de Caen Normandie * Expert in game-based pedagogy + [00:07:02][^5^][5] Target audience * Teachers and trainers * Educational advisors * Game designers and developers + [00:09:00][^6^][6] Course objectives * Instructional scenario design * Facilitating game-based learning sessions * Evaluating the effectiveness of games + [00:17:01][^7^][7] Methods and tools * Gamification of the course * Forum-based role-playing * Peer-learning workshops and motivational tutoring

      Video summary [00:27:00][^1^][1] - [00:44:11][^2^][2]:

      This video presents a webinar on the Université de Caen's "Enseigner avec le jeu" ("Teaching with Games") course. It covers the different game formats used, the synchronous and asynchronous course modalities, and participants' questions about registration, certification, and adapting the course to different audiences.

      Highlights: + [00:27:12][^3^][3] Game formats covered * Video games and analog games * Role-playing and board games * Competitive and non-competitive games + [00:28:01][^4^][4] Synchronous and asynchronous modalities * Asynchronous hours for independent work * Synchronous hours for live interaction * Importance of interactivity in synchronous sessions + [00:30:29][^5^][5] Registration and funding * Registration via the dedicated page * Professional training contract * Possible reimbursement by the employer + [00:33:00][^6^][6] Certification and assessment * University micro-certification * Participation and attendance requirements * Assessment via a final quiz + [00:35:01][^7^][7] Adapting to different audiences * Preschool teachers * Supporting seniors * Using games for facilitation and consultation

    1. Growing up, she remembers her father watching wrestling on TV. “Because he didn’t have a son, he used to make us, his five daughters, bout with each other at home. I am the oldest of them and the best at the game.”

      Including a personal anecdote here is an excellent choice, as it evokes the reader's emotions. By learning more about her background and circumstances, it becomes clear why she is passionate about pursuing this career.

    1. Learning through Games. The final and most prescriptive type of play-based learning is learning through games. This type of play was implemented in all nine classrooms to promote the development of discrete math and language skills. In this manner, teachers sought to make the learning of these mandated academic standards more engaging for the students: “There are always some academic things that we have to deal with too. We do try to make it engaging” (Participant 3). One of the ways in which these teachers promoted this engagement was through the playing of games. “Because then it is a game. We are not learning. We are playing” (Participant 7). In these play episodes the teacher directed the outcomes and prescribed the process while the children followed the rules of the games. For instance, the students in Class 5 played Words With Friends, a game that involved using letter tiles to spell words and names on a game board. Students in Classes 6 and 9 played word and letter bingo, whereas in Classes 12 and 14 the students went fishing for letters with their magnetic fishing poles. Math games were also prevalent in several of the classrooms. Class 2 played Go Fish with number cards, whereas in Class 12 students used Play-Doh to make the assigned number of worms and place them in the scene on themed placemats. In these nine classrooms, multiple types of play were integrated. These variations of play provided the opportunity for both play and play-based learning. The play episodes were child directed and free from the confines of academic standards. The play-based learning activities involved varying levels of teacher involvement and integrated varying degrees of academic learning.

      Found to be the most structured type, used to develop discrete math and language skills in a more hands-on, engaging way.

    1. Alternating game play with novelistic components, interactive fictions expand the repertoire of the literary through a variety of techniques, including visual displays, graphics, animations, and clever modifications of traditional literary devices.

      "Visual Novels" are a type of EL and are popular in Japan. They are usually like games, where you can see pictures and the text of the story underneath them. The player can interact with the text by clicking to continue or making choices about the characters' actions in the plot, which changes it.

    1. Instead of speculating on their history, the ubiquity of mancala games is shown to be particularly useful to dispel a long-standing belief that so-called complex societies are more likely to play strategic games.

      This ubiquity shows that even simpler societies engage with games involving strategy, breaking the stereotype of game complexity being tied to societal development

    2. games and gaming have often been used as agents and arenas of restriction of the non-privileged.

      Luxury game sets and exclusive access reinforced social hierarchies, restricting lower classes from full participation in gaming, thereby maintaining societal divisions

    1. In other words, the move from tax equality to tax breaks for the married cannot be Pareto-optimal: the benefit for the married can be achieved only at the expense of the unmarried.116

      another game theory here

    2. The symbolic pressure on women to marry, and the idea that they are worthless if unmarried, means that if marriage exists women are better off married than unmarried.

      this is like game theory, and the foot binding example from PPE gateway

    1. That’s how you can consistently exist in the current finite game, and leave yourself open to the surprises (and the possibility of being surprising) in games that don’t yet exist that you don’t know you’re already playing. And that’s how you continue playing.

      This reminds me of Leto II Atreides allowing for surprises within his eugenics program. Paul Atreides dies because he tried to use his power of the spice to articulate an exact future rather than getting hints and adapting to it as it came along.

      In reality it is better to have guide rails on a bowling alley than a machine that can get strikes for you. Because when you go play croquet or lawn bowling or bocce ball, that machine ain't going to help you, but you still got some skill from bowling and still got a high score from playing bowling.

    2. One way to remember this is to treat the infinite game of evolutionary success as a sort of Zeno’s paradox turned around. You never reach the finish line because when you’re mediocre, you only take a step that’s halfway to the finish, so there’s always more room left to continue the game.
    3. In Douglas Hofstadter’s Metamagical Themas, there is a description of a game (I forget the details) where the goal is not to get the top score, but the average score. The subtlety is that after playing multiple rounds, the overall winner is not the one with the highest total score, but the most average total score. So to illustrate, if Alice, Bob, and Charlie are playing such a game, their scores in a series of 6 games might be:

      Alice: 7 5 3 5 6 2
      Bob: 5 8 2 1 9 7
      Charlie: 3 1 5 4 5 5

      We have the following outcome. Bob wins game 1, Alice wins game 2, Bob wins game 3, Charlie wins games 4, 5, and 6. So Alice gets 1 point, Bob gets 2 points, and Charlie gets 3 points. The overall winner is Bob, not Charlie. Charlie is the most mediocre, but Bob is mediocre mediocre. His prize is (perhaps) the highest probability of continuing the game.
    4. There are instances of programs respecting the rules of the game while blatantly violating its spirit.

      Usually most of the work of a task is defining and understanding it rather than actually doing it

    5. For instance, there are instances of programs figuring out how to use tiny rounding errors in simulated game environments to violate the simulated law of conservation of energy, and milking the simulation itself for a winning strategy. Like the characters in The Matrix bend the laws of physics when inside.

      I wonder when AI will start speedrunning video games.

    6. Every principal-agent game is of this sort. Every sort of moral hazard is marked by the ability of one side to pursue mediocrity rather than excellence. In each case, there is an information asymmetry powering the mediocrity.

      This is the cure to being a perfectionist

    7. This kind of indifference-driven mediocrity is the hallmark of games where one side is playing a finite game and the other side is playing an infinite game that isn’t necessarily evil in the Carse sense of wanting to end the game for the other, but isn’t striving for excellence either.

      Can you compare this to WoW (World of Warcraft) proper and the PvP and PvE parts of the game? WoW as a whole is an infinite game, but there are finite games within the infinite game that people play.

    8. This is just a different way of playing a finite game. Instead of optimizing (playing to win), you minimize effort to stay in the specific finite game. If you can perform consistently without disqualifying errors, you are satisficing. Most automation and quality control is devoted to raising the floor of this kind of performance.

      This phrasing reminds me of "WarGames": the only way to win is not to play.

    9. Mediocrity is the functionally embodied and situated form of what Sarah Perry called deep laziness. To be mediocre at something is to be less than excellent at it in order to conserve energy for the indefinitely long haul. Mediocrity is the ethos of perpetual beta at work in a domain you’re not sure what the “product” is even for. Functionally unfixed self-perpetuation.

      Wanna talk about being mediocre? Check out Sam Larson, who understood that to win the game show Alone you just need to get really, really fat and then sit around not wasting calories trying to get more calories in an environment where that can't really be done.

      • To draw a card -> to pick up another paper from the pile
      • To move a game piece -> to advance a token
      • To take turns -> to rotate opportunities to play
      • To read instructions -> to learn from the written directions
      • To get points -> to obtain a higher score number
    1. 1- Do you like to play games? Why or why not? I love playing all kinds of games, whether they are board games or video games. Sometimes I take games a bit seriously because I'm a bit competitive, but they usually give me a sense of calm, focus, and fun at the same time.

      2- What kind of games do you like to play? I am a fan of StarCraft 2 and RTS games in general, but now I have become a fan of Dota 2 and I think I'm going to give it a try. Also, when I get together with my friends, we play card games like Uno.

      3- I always have a mate to play duos with, and it's the same mate I play card games with; his name is Bruno. But once a week we play with more than five friends.

    1. players can literally write the rules and behavior of decentralized applications, and therefore, any Smart Assembly created in the game

      It seems that the protocol of a smart object is given through the Solidity code.

      Protocol code as contract code.

    1. The final season of Game of Thrones resulted in a petition of more than a million signatures for HBO to remake it. Ridiculous? Yes. But maybe that was the point

      I heard they did this because the last couple of seasons were lacking. I watch a lot of anime, and this is very common inside that community.

    2. The last decade or so has witnessed huge changes in the awareness, perception and tools of fandom. In terms of television and film, the enormous successes of Game of Thrones and the Marvel Cinematic Universe have introduced geek culture – and its brand of participatory fandom – to the mainstream. At the same time, the internet – and more specifically social media – has amplified fans’ voices, while also breaking down the boundaries between them and the artists they love/hate.

      Due to the increasing ways that fans can interact with artists, I feel that shows and movies have grown, because now the artist knows exactly what the audience wants and can give it to them.

    3. And to the 1920s, where fan groups would write thousands of letters to movie studios demanding their favourite actor be given better roles. “It was the same thing,” he says, “as Sonic the Hedgehog having weird teeth and people going, ‘No, that’s not the game I played as a kid, you need to fix it or I am not giving you any money.’”

      I feel that this is a great way to look at it: if the fans have to pay money to watch it, then their having some input into how the characters are portrayed isn't necessarily a bad thing.

    1. "People were marching around the building with placards denouncing Campbell. They were shouting and their mood was very ugly. When I got to my seat near the ice, the game had already started and the Canadiens couldn't seem to get untracked, Then Campbell made his grand entrance. I looked up and I could see some fans beginning to menace him. On one hand I felt pleased because I hated him for what he had done to me and on the other hand I didn't want to see harm come to him. Then a tear-gas bomb went off and the arena was getting filled with smoke."

      Retrieved from: https://www.nhl.com/news/voices-from-the-past-maurice-rocket-richard

    1. On March 13, 1955, in a game in Boston, Richard got into a fight with Hal Laycoe after he was high-sticked in the head. Richard needed five stitches to close the cut on his forehead. When the whistle was blown to end the play, Richard skated up to Laycoe and hit him in the face with his stick. A linesman attempted to restrain Richard who repeatedly tried to attack Laycoe. Richard eventually broke the stick over the body of Laycoe. Linesman Cliff Thompson attempted to contain Richard and Richard punched him twice in the face, knocking him out. Richard was given a match penalty and an automatic $100 fine.In the dressing room after the game, Boston police attempted to arrest Richard but were blocked from getting into the dressing room by Canadiens players. Eventually, the Bruins convinced the officers to let the Canadiens leave on condition that the NHL would take care of the issue.

      Retrieved from :https://hyp.is/go?url=https%3A%2F%2Fcanadaehx.com%2F2022%2F04%2F23%2Fthe-richard-riot%2F&group=world

    1. But how is the author supposed to accommodate them? What if the audience runs away with the story? And how do we handle the blur — not just between fiction and fact, but between author and audience, entertainment and advertising, story and game? A lot of smart people — in film, in television, in videogames, in advertising, in technology, even in neuroscience — are trying to sort these questions out. The Art of Immersion is their story.

      This passage discusses the challenges of modern storytelling: audience interaction is difficult to manage as boundaries blur and narratives change.

    1. Welcome back, this is part two of this lesson. We're going to continue immediately from the end of part one. So let's get started.

      Now that you know the structure of a segment, let's take a look at how it's used within TCP.

      Let's take a few minutes to look at the architecture of TCP.

      TCP, like IP, is used to allow communications between two devices.

      Let's assume a laptop and a game server.

      TCP is connection-based, so it provides a connection architecture between two devices.

      And let's refer to these as the client and the server.

      Once established, the connection provides what's seen as a reliable communication channel between the client and the server, which is used to exchange data.

      Now let's step through how this actually works, now that you understand TCP segments.

      The actual communication between client and server, this will still use packets at layer three.

      We know now that these are isolated.

      They don't really provide error checking or ordering, and they're isolated, so there's no association between them.

      There's no connection as such.

      Because they can be received out of order, and because there are no ports, you can't use them in a situation where there will be multiple applications or multiple clients, because the server has no way of separating what relates to what.

      But now we have layer four, so we can create segments.

      Layer four takes data provided to it and chops that data up into segments, and these segments are encapsulated into IP packets.

      These segments contain a sequence number, which means that the order of segments can be maintained.

      If packets arrive out of order, that's okay, because the segments can be reordered.

      If a packet is damaged or lost in transit, that's okay, because even though that segment will be lost, it can be retransmitted, and segments will just carry on.

      TCP gives you this guaranteed reliable ordered set of segments, and this means that layer four can build on this platform of reliable ordered segments between two devices.
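      The reordering just described can be sketched in a few lines of Python (a toy model with made-up segment contents, not real TCP):

```python
def reassemble(segments):
    """Rebuild the byte stream from segments that may arrive out of order."""
    # Sequence numbers restore the original ordering before the data is joined.
    ordered = sorted(segments, key=lambda s: s["seq"])
    return b"".join(s["data"] for s in ordered)

# The second segment arrives first, but the sequence numbers fix the order.
segs = [{"seq": 2, "data": b"lo"}, {"seq": 1, "data": b"hel"}]
reassemble(segs)  # b"hello"
```

      Retransmission fits the same model: a missing sequence number tells the receiver exactly which segment to ask for again.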

      It means that you can create a connection between a client and the server.

      In this example, let's assume segments are being exchanged between the client and the game server.

      The game communicates to a TCP port 443 on the server.

      Now, this might look like this architecturally, so we have a connection from a random port on the client to a well-known port, so 443 on the game server.

      So between these two ports, segments are exchanged.

      When the client communicates to the server, the source port is 23060, and the destination port is 443.

      This architecturally is now a communication channel.

      TCP connections are bi-directional, and this means that the server will send data back to the client, and to do this, it just flips the ports which are in use.

      So then the source port becomes TCP443 on the server, and the destination port on the client is 23060.

      And again, conceptually, you can view this as a channel.

      Now, these two channels you can think of as a single connection between the client and the server.

      Now, these channels technically aren't real; they're created using segments. They build upon the reliable ordered delivery that segments provide, and give you the concept of a stream or a channel between these two devices over which data can be exchanged, but understand that this is really just a collection of segments.

      Now, when you communicate with the game server in this example, you use a destination port of 443, and this is known as a well-known port.

      It's the port that the server is running on.

      Now, as part of creating the connection, you also create a port on your local machine, which is temporary; this is known as the ephemeral port.

      This tends to use a higher port range, and it's temporary.

      It's used as a source port for any segments that you send from the client to the server.

      When the server responds, it uses the well-known port number as the source, and the ephemeral port as the destination.

      It reverses the source and destination for any responses.

      Now, this is important to understand, because from a layer 4 perspective, you'll have two sets of segments, one with a source port of 23060 and a destination of 443, and ones which are the reverse, so a source port of 443, and a destination of 23060.

      From a layer 4 perspective, these are different, and it's why you need two sets of rules on a network ACL within AWS.

      One set for the initiating part, so the laptop to the server, and another set for the response part, the server to the laptop.

      When you hear the term ephemeral ports or high ports, this means the port range that the client picks as the source port.

      Often, you'll need to add firewall rules, allowing all of this range back to the client.
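      As a toy illustration of this port flipping (hypothetical IP addresses; the ports match the example above, but this is not a real socket API):

```python
def response_tuple(conn):
    """Given an outbound (src_ip, src_port, dst_ip, dst_port) 4-tuple,
    return the 4-tuple of the response traffic: source and destination swap."""
    src_ip, src_port, dst_ip, dst_port = conn
    return (dst_ip, dst_port, src_ip, src_port)

# Client uses ephemeral port 23060 to reach the well-known port 443.
outbound = ("10.0.0.5", 23060, "203.0.113.10", 443)
inbound = response_tuple(outbound)
# inbound == ("203.0.113.10", 443, "10.0.0.5", 23060)
```

      From layer 4's perspective these are two different flows, which is why a stateless firewall needs a rule for each.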

      Now, earlier, when I was stepping through TCP segment structure, I mentioned the flags field.

      Now, this field contains, as the name suggests, some actual flags, and these are things which can be set to influence the connection.

      So, FIN will finish a connection, ACK is an acknowledgement, and SYN is used at the start of connections to synchronize sequence numbers.

      With TCP, everything is based on connections.

      You can't send data without first creating a connection.

      Both sides need to agree on some starting parameters, and this is best illustrated visually.

      So, that's what we're going to do.

      So, the start of this process is that we have a client and a server.

      And as I mentioned a moment ago, before any data can be transferred using TCP, a connection needs to be established, and this uses a three-way handshake.

      So, step one is that a client needs to send a segment to the server.

      So, this segment contains a random sequence number from the client to the server.

      So, this is unique in this direction of travel for segments.

      And this sequence number is initially set to a random value known as the ISN or initial sequence number.

      So, you can think of this as the client saying to the server, "Hey, let's talk," and setting this initial sequence number.

      So, the server receives the segment, and it needs to respond.

      So, what it does is it also picks its own random sequence number.

      We're going to refer to this as SS, and, as with the client side, it picks this randomly.

      Now, what it wants to do is acknowledge that it's received all of the communications from the client.

      So, it takes the client sequence number, received in the previous segment, and it adds one.

      And it sets the acknowledgement part of the segment that it's going to send to the CS plus one value.

      What this is essentially doing is informing the client that it's received all of the previous transmission, so CS, and it wants it to send the next part of the data, so CS plus one.

      So, it's sending this segment back to the client.

      It's picking its own server sequence, so SS, and it's incrementing the client sequence by one, and it sends this back to the client.

      So, in essence, this is responding with, "Sure, let's talk."

      So, this type of segment is known as a SYN-ACK.

      It's used to synchronize sequence numbers, but also to acknowledge the receipt of the client sequence number.

      So, where the first segment was called a SYN, to synchronize sequence numbers, the next segment is called a SYN-ACK.

      It serves two purposes.

      It's also used to synchronize sequence numbers, but also to acknowledge the segment from the client.

      The client receives the segment from the server.

      It knows the server sequence, and so, to acknowledge to the server that it's received all of that information, it takes the server sequence, so SS, and it adds one to it, and it puts this value as the acknowledgement.

      Then it also increments its own client sequence value by one, and puts that as the sequence, and then sends an acknowledgement segment, containing all this information through to the server.

      Essentially, it's saying, "Awesome, let's go."

      At this point, both the client and server agree on the sequence values.

      The client has acknowledged the initial sequence value decided by the server, and the server has acknowledged the initial value decided by the client.

      So, both of them are synchronized, and at this point, data can flow over this connection between the client and the server.

      Now, from this point on, any time either side sends data, they increment the sequence, and the other side acknowledges the sequence value plus one, and this allows for retransmission when data is lost.

      So, this is a process that you need to be comfortable with, so just make sure that you understand every step of this process.

      Okay, so let's move on, and another concept which I want to cover is sessions, and the state of sessions.

      Now, you've seen this architecture before, a client communicating with the game server.

      The game server is running on a well-known port, so TCP 443, and the client is using an ephemeral port 23060 to connect with port 443 on the game server.

      So, response traffic will come up from the game server, its source port will be 443, and it will be connecting to the client on destination port 23060.

      Now, imagine that you want to add security to the laptop, let's say using a firewall.

      The question is, what rules would you add?

      What types of traffic would you allow from where and to where in order that this connection will function without any issues?

      Now, I'm going to be covering firewalls in more detail in a separate video.

      For now though, let's keep this high level.

      Now, there are two types of capability levels that you'll encounter from a security perspective.

      One of them is called a stateless firewall.

      With a stateless firewall, it doesn't understand the state of a connection.

      So, when you're looking at a layer 4 connection, you've got the initiating traffic, and you've got the response traffic.

      So, the initiating traffic is shown at the bottom, and the response traffic in red at the top.

      With a stateless firewall, you need two rules.

      A rule allowing the outbound segments, and another rule which allows the response segments coming in the reverse direction.

      So, this means that the outbound connection from the laptop's IP, using port 23060, connecting to the server IP, using port 443.

      So, that's the outgoing part.

      And then the inbound response coming from the server IP on port 443, going to the laptop's IP on ephemeral port 23060.

      So, with a stateless firewall, this is two rules: one outbound rule and one inbound rule.

      So, this is a situation where we're securing an outbound connection.

      So, where the laptop is connecting to the server.

      If we were looking to secure, say, a web server, where connections would be made into our server, then the initial traffic would be inbound, and the response would be outbound.

      There's always initiating traffic, and then the response traffic.

      And you have to understand the directionality to understand what rules you need with a stateless firewall.

      So, that's a stateless firewall.

      And if you have any AWS experience, that's what a network access control list is.

      It's a stateless firewall which needs two rules for each TCP connection, one in both directions.

      Now, a stateful firewall is different.

      This understands the state of the TCP segment.

      So, with this, it sees the initial traffic and the response traffic as one thing.

      So, if you allow the initiating connection, then you automatically allow the response.

      So, in this case, if we allowed the initial outbound connection from the client laptop to the server, then the response traffic, the inbound traffic, would be automatically allowed.

      In AWS, this is how a security group works.

      The difference is that a stateful firewall understands layer 5 and the state of the traffic.

      It's an extension of what a stateless firewall can achieve.
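      A toy model of the difference (hypothetical port numbers matching the example; real firewalls match on addresses and protocols too):

```python
class StatelessFirewall:
    """Judges every packet on its own; needs explicit rules in both directions."""
    def __init__(self, rules):
        self.rules = set(rules)                 # (src_port, dst_port) pairs
    def allows(self, src_port, dst_port):
        return (src_port, dst_port) in self.rules

class StatefulFirewall:
    """Tracks connections; allowing the initiating traffic allows the response."""
    def __init__(self):
        self.connections = set()
    def allow_outbound(self, src_port, dst_port):
        self.connections.add((src_port, dst_port))
    def allows(self, src_port, dst_port):
        # A packet matches if it belongs to a tracked connection, either direction.
        return ((src_port, dst_port) in self.connections
                or (dst_port, src_port) in self.connections)

# Stateless (like a network ACL): two rules, one per direction.
nacl = StatelessFirewall([(23060, 443), (443, 23060)])
# Stateful (like a security group): one rule; the response is implied.
sg = StatefulFirewall()
sg.allow_outbound(23060, 443)
sg.allows(443, 23060)  # True: response traffic allowed automatically
```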

      Now, this is one of those topics where there is some debate about whether this is layer four or layer five.

      Layer four uses TCP segments and concerns itself with IP addresses and port numbers.

      Strictly speaking, the concept of a session or an ongoing communication between two devices, that is layer five.

      It doesn't matter too much either way; I'm covering this alongside layer four and layer five anyway, because it's just easier to explain.

      But you need to remember the term stateless and the term stateful and how they change how you create security rules.

      For this point, that's everything I wanted to cover.

      So, go ahead and complete this video. And when you're ready, I'll look forward to you joining me in the next video of this series.

    1. Welcome back, this is part three of this lesson. We're going to continue immediately from the end of part two. So let's get started.

      The address resolution protocol is used generally when you have a layer three packet and you want to encapsulate it inside a frame and then send that frame to a MAC address.

      You don't initially know the MAC address and you need a protocol which can find the MAC address for a given IP address.

      For example, if you communicate with AWS, AWS will be the destination of the IP packets.

      But you're going to be forwarding via your home router which is the default gateway.

      And so you're going to need the MAC address of that default gateway to send the frame to containing the packet.

      And this is where ARP comes in.

      ARP will give you the MAC address for a given IP address.

      So let's step through how it works.

      For this example, we're going to keep things simple.

      We've got a local network with two laptops, one on the left and one on the right.

      And this is a layer three network which means it has a functional layer two and layer one.

      What we want is the left laptop which is running a game and it wants to send the packets containing game data to the laptop on the right.

      This laptop has an IP address of 133.33.3.10.

      So the laptop on the left takes the game data and passes it to its layer three which creates a packet.

      This packet has its IP address as the source and the right laptop as the destination.

      So 133.33.3.10.

      But now we need a way of being able to generate a frame to put that packet in for transmission.

      We need the MAC address of the right laptop.

      This is what ARP or the address resolution protocol does for us.

      It's a process which runs between layer two and layer three.

      It's important to point out at this point that now you know how devices can determine if two IP addresses are on the same local network.

      In this case, the laptop on the left can do this because it has its own subnet mask and IP address, as well as the IP address of the laptop on the right.

      It knows that they're both on the same network.

      And so this is a direct local connection.

      Routers aren't required.

      We don't need to use any routers for this type of communication.
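      This same-network check can be reproduced with Python's standard ipaddress module (assuming, for illustration, a /24 subnet mask):

```python
import ipaddress

def same_network(ip_a, ip_b, prefix=24):
    """True if both addresses fall in the same subnet for the given prefix."""
    net_a = ipaddress.ip_interface(f"{ip_a}/{prefix}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{prefix}").network
    return net_a == net_b

same_network("133.33.3.7", "133.33.3.10")   # True: local, no router needed
same_network("133.33.3.7", "133.33.4.10")   # False: a router is required
```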

      Now ARP broadcasts on layer two.

      It sends an ARP frame to the all-Fs broadcast MAC address.

      And it's asking who has the IP address 133.33.3.10 which is the IP address of the laptop on the right.

      Now the right laptop, because it has a full layer one, two and three network stack, is also running the address resolution protocol.

      The ARP software sees this broadcast and it responds by saying I'm that IP address.

      I'm 133.33.3.10.

      Here's my MAC address, ending 5B:78.

      So now the left laptop has the MAC address of the right one.

      Now it can use this destination MAC address to build a frame, encapsulate the packet in this frame.

      And then once the frame is ready, it can be given to layer one and sent across the physical network to layer one of the right laptop.

      Layer one of the right laptop receives this physical bit stream and hands it off to the layer two software, also on the right laptop.

      Now its layer two software reviews the destination MAC address and sees that it's destined for itself.

      So it strips off the frame and it sends the packet to its layer three software.

      Layer three reviews the packet, sees that it is the intended destination and it de-encapsulates the data.

      So strips away the packet and hands the data back to the game.

      Now it's critical to understand as you move through this lesson series, even if two devices are communicating using layer three, they're going to be using layer two for local communications.

      If the machines are on the same local network, then it will be one layer two frame per packet.

      But as you'll see in a moment, if the two devices are remote, then you can have many different layer two frames which are used along the way.

      And ARP, or the address resolution protocol, is going to be essential to ensure that you can obtain the MAC address for a given IP address.

      This is what facilitates the interaction between layer three and layer two.
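      A minimal simulation of that broadcast-and-reply exchange (made-up IP and MAC values; real ARP operates on Ethernet frames):

```python
def arp_resolve(target_ip, segment):
    """Broadcast 'who has target_ip?' to every device on the local segment;
    the owner of the address replies with its MAC."""
    for device in segment:                 # the broadcast reaches everyone
        if device["ip"] == target_ip:
            return device["mac"]           # the owner answers with its MAC
    return None                            # nobody claimed the address

segment = [
    {"ip": "133.33.3.7",  "mac": "3e:22:fb:b9:5b:75"},
    {"ip": "133.33.3.10", "mac": "3e:22:fb:b9:5b:78"},
]
arp_resolve("133.33.3.10", segment)  # "3e:22:fb:b9:5b:78"
```

      Real stacks also cache these answers so they don't have to broadcast for every packet.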

      So now that you know about packets, now that you know about subnet masks, you know about routes and route tables, and you know about the address resolution protocol or ARP, let's bring this all together now and look at a routing example.

      So we're going to go into a little bit more detail now.

      In this example, we have three different networks.

      We've got the orange network on the left, we've got the green network in the middle, and then finally the pink network on the right.

      Now between these networks are some routers.

      Between the orange and green networks is router one, known as R1, and between the green and pink networks is router two, known as R2.

      Each of these routers has a network interface in both of the networks that it touches.

      Routers are layer three devices, which means that they understand layer one, layer two, and layer three.

      So the network interfaces in each of these networks work at layer one, two, and three.

      In addition to this, we have three laptops.

      We've got two in the orange network, so device one at the bottom and device two at the top, and then device three in the pink network on the right.

      Okay, so what I'm going to do now is to step through two different routing scenarios, and all of this is bringing together all of the individual concepts which I've covered at various different parts of this part of the lesson series.

      First, let's have a look at what happens when device one wants to communicate with device two using its IP address.

      First, device one is able to use its own IP address and subnet mask together with device two's IP address, and calculate that they're on the same local network.

      So in this case, router R1 is not required.

      So a packet gets created, called P1, with D2's IP address as the destination.

      The address resolution protocol is used to get D2's MAC address, and then that packet is encapsulated in a frame with that MAC address as the destination.

      Then that frame is sent to the MAC address of D2.

      Once the frame arrives at D2, it checks the frame, sees that it's the destination, so it accepts it and then strips the frame away.

      It passes the packet to layer three.

      It sees that it's the destination IP address, so it strips the packet away and then passes the game data to the game.

      Now all of this should make sense.

      This is a simple local network communication.

      Now let's step through a remote example.

      Device two communicating with device three.

      These are on two different networks.

      Device two is on the orange network, and device three is on the pink network.

      So first, the D2 laptop, it compares its own IP address to the D3 laptop IP address, and it uses its subnet mask to determine that they're on different networks.

      Then it creates a packet P2, which has the D3 laptop as its destination IP address.

      It wraps this up in a frame called F2, but because D3 is remote, it knows it needs to use the default gateway as a router.

      So for the destination MAC address of F2, it uses the address resolution protocol to get the MAC address of the local router R1.

      So the packet P2 is addressed to the laptop D3 in the pink network, so the packet's destination IP address is D3.

      The frame F2 is now addressed to the router R1 at MAC address, so this frame is sent to router R1.

      R1 is going to see that the MAC address is addressed to itself, and so it will strip away the frame F2, leaving just the packet P2.

      Now a normal network device such as your laptop or phone, if it received a packet which wasn't destined for it, it would just drop that packet.

      A router though, it's different.

      The router's job is to route packets, so it's just fine to handle a packet which is addressed somewhere else.

      So it reviews the destination of the packet P2, it sees that it's destined for laptop D3, and it has a route for the pink network in its route table.

      It knows that for anything destined for the pink network, then router R2 should be the next hop.

      So it takes packet P2 and it encapsulates it in a new frame F3.

      Now the destination MAC address of this frame is the MAC address of router R2, and it gets this by using the address resolution protocol or ARP.

      So it knows that the next hop is the IP address of router R2, and it uses ARP to get the MAC address of router R2, and then it sends this frame off to router R2 as the next hop.

      So now we're in a position where router R2 has this frame F3 containing the packet P2 destined for the machine inside the pink network.

      So now the router R2 has this frame with the packet inside.

      It sees that it's the destination of that frame.

      The MAC address on the frame is its MAC address, so it accepts the frame and it removes it from around packet P2.

      So now we've just got packet P2 again.

      So now router R2 reviews the packet and it sees that it's not the destination, but that doesn't matter because R2 is a router.

      It can see that the packet is addressed to something on the same local network, so it doesn't need to worry anymore about routing.

      Instead, it uses ARP to get the MAC address of the device with the intended destination IP address, so laptop D3.

      It then encapsulates the packet P2 in a new frame, F4, whose destination MAC address is that of laptop D3, and then it sends this frame through to laptop D3. Laptop D3 receives the frame and sees that it is the intended destination, because the MAC address on the frame matches its own.

      It strips off the frame, it also sees that it's the intended destination of the IP packet, it strips off the packet, and then the data inside the packet is available for the game that's running on this laptop.

      So it's a router's job to move packets between networks.

      Routers do this by reviewing packets, checking route tables for the next hop or target addresses, and then adding frames to allow the packets to pass through intermediate layer 2 networks.

      A packet during its life might move through any number of layer 2 networks and be re-encapsulated many times during its trip, but normally the packet itself remains unchanged all the way from source to destination.
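      Each router's next-hop decision can be sketched as a longest-prefix lookup over a route table (hypothetical routes mirroring this example):

```python
import ipaddress

def next_hop(route_table, dst_ip):
    """Return the target of the most specific route matching dst_ip."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [(net, hop) for net, hop in route_table if dst in net]
    if not matches:
        return None
    # Longest-prefix match: the most specific (longest) route wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

r1_routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "default gateway"),
    (ipaddress.ip_network("133.33.3.0/24"), "R2"),   # route to the pink network
]
next_hop(r1_routes, "133.33.3.10")  # "R2"
next_hop(r1_routes, "8.8.8.8")      # "default gateway"
```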

      A router is just a device which understands physical networking, it understands data link networking, and it understands IP networking.

      So that's layer 3, the network layer, and let's review what we've learned quickly before we move on to the next layer of the OSI model.

      Now this is just an opportunity to summarize what we've learned, so at the start of this video, at layer 2 we had media access control, and we had device to device or device to all device communications, but only within the same layer 2 network.

      So what does layer 3 add to this?

      Well it adds IP addresses, either version 4 or version 6, and this is cross network addressing.

      It also adds the Address Resolution Protocol, or ARP, which can find the MAC address for a given IP address.

      Layer 3 adds routes, which define where to forward a packet to, and it adds route tables, which contain multiple routes.

      It adds the concept of a device called a router, which moves packets from source to destination, encapsulating these packets in different layer 2 frames along the way.

      This altogether allows for device-to-device communication over the internet, so you can access this video, which is stored on a server several intermediate networks away from your location.

      So you can access this server, which has an IP address, and packets can move from the server through to your local device, crossing many different layer 2 networks.

      Now, what doesn't IP provide?

      It provides no method for creating individual channels of communication.

      Layer 3 provides packets, and packets only have source IP and destination IP, so for a given two devices, you can only have one stream of communication, so you can't have different applications on those devices communicating at the same time.

      And this is a critical limitation, which is resolved by layers 4 and above.

      Another element of layer 3 is that in theory packets could be delivered out of order.

      Individual packets move across the internet through intermediate networks, and depending on network conditions, there's no guarantee that those packets will take the same route from source to destination, and because of different network conditions, it's possible they could arrive in a different order.

      And so if you've got an application which relies on the same ordering at the point of receipt as at the point of transmission, then we need to add additional things on top of layer 3, and that's something that layer 4 protocols can assist with.

      Now at this point we've covered everything that we need to for layer 3.

      There are a number of related subjects which I'm going to cover in dedicated videos, such as network address translation, how the IP address space functions, and IP version 6. In this component of the lesson series, we've covered how the architecture of layer 3 of the OSI model works.

      So at this point, go ahead and complete this video, and then when you're ready, I'll look forward to you joining me in the next part of this lesson series where we're going to look at layer 4.

    1. Welcome back.

      Now that we've covered the physical and data link layers, next we need to step through layer 3 of the OSI model, which is the network layer.

      As I mentioned in previous videos, each layer of the OSI model builds on the layers below it, so layer 3 requires one or more operational layer 2 networks to function.

      The job of layer 3 is to get data from one location to another.

      When you're watching this video, data is being moved from the server hosting the video through to your local device.

      When you access AWS or stream from Netflix, data is being moved across the internet, and it's layer 3 which handles this process of moving data from a source to a destination.

      To appreciate layer 3 fully, you have to understand why it's needed.

      So far in the series, I've used the example of 2-4 friends playing a game on a local area network.

      Now what if we extended this, so now we have 2 local area networks and they're located with some geographic separation.

      Let's say that one is on the east coast of the US and another is on the west coast, so there's a lot of distance between these 2 separate layer 2 networks.

      Now LAN1 and LAN2 are isolated layer 2 networks at this point.

      Devices on each local network can communicate with each other, but not outside of that local layer 2 network.

      Now you could pay for and provision a point-to-point link across the entire US to connect these 2 networks, but that would be expensive, and if every business who had multiple offices needed to use point-to-point links, it would be a huge mess and wouldn't be scalable.

      Additionally, each layer 2 network uses a shared layer 2 protocol.

      In the example so far, this has been Ethernet.

      For networks using only layer 2, if we want them to communicate with each other, they need to use the same layer 2 protocol.

      Now not everything uses the same layer 2 protocol, and this presents challenges, because you can't simply join two layer 2 networks together which use different layer 2 protocols and have them work out of the box.

      With the example which is on screen now, imagine if we had additional locations spread across the continental US.

      Now in between these locations, let's add some point-to-point links, so we've got links in pink which are cabled connections, and these go between these different locations.

      Now we also might have point-to-point links which use a different layer 2 protocol.

      In this example, let's say that we had a satellite connection between 2 of these locations.

      This is in blue, and this is a different layer 2 technology.

      Now Ethernet is one layer 2 technology which is generally used for local networks.

      It's the most popular wired connection technology for local area networks.

      But for point-to-point links and other long distance connections, you might also use things such as PPP, MPLS or ATM.

      Not all of these use frames with the same format, so we need something in common between them.

      Layer 2 is the layer of the OSI stack which moves frames, it moves frames from a local source to a local destination.

      So to move data between different local networks, which is known as inter-networking (this is where the name internet comes from), we need layer 3.

      Layer 3 is this common protocol which can span multiple different layer 2 networks.

      Now layer 3 or the network layer can be added onto one or more layer 2 networks, and it adds a few capabilities.

      It adds the internet protocol or IP.

      You get IP addresses, which are cross-network addresses that you can assign to devices, and these can be used to communicate across networks using routing.

      So the device that you're using right now, it has an IP address.

      The server which stores this video, it too has an IP address.

      And the internet protocol is being used to send requests from your local network across the internet to the server hosting this video, and then back again.

      IP packets are moved from source to destination across the internet through many intermediate networks.

      Devices called routers, which are layer 3 devices, move packets of data across different networks.

      They encapsulate a packet inside of an ethernet frame for that part of the journey over that local network.

      Now encapsulation just means that an IP packet is put inside an ethernet frame for that part of the journey.

      Then when it needs to be moved into a new network, that particular frame is removed, and a new one is added around the same packet, and it's moved onto the next local network.

      So as this video data is moving from my server to you, it's been wrapped up in frames.

      Those frames are stripped away, new frames are added, all while the packets of IP data move from my video server to you.

      So that's why IP is needed at a high level: to allow you to connect to remote networks, crossing intermediate networks on the way.
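      The journey just described can be sketched in a few lines of Python; the names are my own toy model, not a real networking API, but they show the packet staying constant while each hop gets its own frame:

```python
def traverse(packet, hops):
    """Carry the same packet across several layer 2 networks."""
    for layer2_type, local_dst in hops:
        # Encapsulate: wrap the packet in a frame for this local network.
        frame = {"type": layer2_type, "dst": local_dst, "payload": packet}
        # ...the frame crosses one local network, then is stripped away...
        packet = frame["payload"]
    return packet

packet = {"src_ip": "52.95.0.1", "dst_ip": "203.0.113.9", "data": "video bytes"}
hops = [("ethernet", "aa:bb:cc:01"), ("mpls", "label-42"), ("ethernet", "aa:bb:cc:99")]

# The packet that arrives is the same packet that was sent.
print(traverse(packet, hops) == packet)  # True
```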

      Now over the coming lesson, I want to explain the various important parts of how layer 3 works.

      Specifically IP, which is the layer 3 protocol used on the internet.

      Now I'm going to start with the structure of packets, which are the data units used within the internet protocol, which is a layer 3 protocol.

      So let's take a look at that next.

      Now packets in many ways are similar to frames.

      It's the same basic concept.

      They contain some data to be moved, and they have a source and destination address.

      The difference is that with frames, both the source and destination are generally local.

      With an IP packet, the destination and source addresses could be on opposite sides of the planet.

      During their journey from source to destination, packets remain the same as they move across layer 2 networks.

      They're placed inside frames, which is known as encapsulation.

      The frame is specific to the local network that the packet is moving through, and changes every time the packet moves between networks.

      The packet though doesn't change.

      Normally it's constant for its entire trip between source and destination.

      Although there are some exceptions that I'll be detailing in a different lesson, when I talk about things like network address translation.

      Now there are two versions of the internet protocol in use.

      Version 4, which has been used for decades, and version 6, which adds more scalability.

      And I'll be covering version 6 and its differences in a separate lesson.

      An IP packet contains various different fields, much like frames that we discussed in an earlier video.

      At this level there are a few important things within an IP packet which you need to understand, and some which are less important.

      Now let's just skip past the less relevant ones.

      I'm not saying any of these are unimportant, but you don't need to know exactly what they do at this introductory level.

      Things which are important though, every packet has a source and destination IP address field.

      The source IP address is generally the device IP which generates the packet, and the destination IP address is the intended destination IP for the packet.

      In the previous example we have two networks, one east coast and one west coast.

      The source might be a west coast PC, and the destination might be a laptop within the east coast network.

      But crucially these are both IP addresses.

      There's also the protocol field, and this is important because IP is layer 3.

      It generally contains data provided by another layer, a layer 4 protocol, and it's this field which stores which protocol is used.

      So examples of protocols which this might reference are things like ICMP, TCP or UDP.

      If you're storing TCP data inside a packet this value will be 6; for pings, which use ICMP, this value will be 1; and if you're using UDP as a layer 4 protocol then this value will be 17.

      This field means that the network stack at the destination, specifically the layer 3 component of that stack, will know which layer 4 protocol to pass the data into.
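      To make these fields concrete, here's a small Python example that decodes a raw 20-byte IPv4 header (the standard header layout, without options) and pulls out the TTL, protocol and addresses discussed above; the sample header bytes are hand-built for illustration, with the checksum left as zero:

```python
import struct
import socket

def parse_ipv4_header(raw):
    """Decode the fixed 20-byte IPv4 header into the fields discussed above."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "ttl": ttl,
        "protocol": {1: "ICMP", 6: "TCP", 17: "UDP"}.get(proto, proto),
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Hand-built example: version 4, TTL 64, protocol 6 (TCP), 10.0.0.1 -> 192.0.2.7.
raw = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 0, 0, 64, 6, 0,
                  socket.inet_aton("10.0.0.1"), socket.inet_aton("192.0.2.7"))
print(parse_ipv4_header(raw))
```

      Running it shows the version, a TTL of 64, the protocol name TCP, and the source and destination addresses.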

      Now the bulk of the space within a packet is taken up with the data itself, something that's generally provided from a layer 4 protocol.

      Now lastly there's a field called time to live or TTL.

      Remember the packets will move through many different intermediate networks between the source and the destination, and this is a value which defines how many hops the packet can move through.

      It's used to stop packets looping around forever.

      If for some reason they can't reach their destination then this defines a maximum number of hops that the packet can take before being discarded.
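      The TTL behaviour can be sketched as a toy loop in Python (my own simplified model of a router, not real routing code):

```python
def forward_one_hop(packet):
    """Each router decrements TTL; a packet that hits zero is discarded."""
    packet = dict(packet, ttl=packet["ttl"] - 1)
    if packet["ttl"] <= 0:
        return None  # discarded by the router instead of looping forever
    return packet

packet = {"dst": "203.0.113.9", "ttl": 3}
hops = 0
while packet is not None:
    packet = forward_one_hop(packet)
    hops += 1
print(hops)  # 3: the packet survives two hops and is dropped on the third
```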

      So just in summary a packet contains some data which it carries generally for layer 4 protocols.

      It has source and destination IP addresses; the IP protocol implementation on routers moves packets across the networks between source and destination, and it's these fields which are used to perform that process.

      As a packet moves through each intermediate layer 2 network, it will be inserted or encapsulated in a layer 2 frame specific to that network.

      A single packet might exist inside tens of different frames throughout its route to its destination, one for every layer 2 network or layer 2 point to point link which it moves through.

      Now IP version 6, from a packet structure perspective, is very similar; we also have some fields which matter less at this stage.

      They are functional but to understand things at this level it's not essential to talk about these particular fields.

      And just as with IP version 4, IP version 6 packets also have both source and destination IP address fields.

      But these are bigger: IP version 6 addresses are larger, which means there are many more possible IP version 6 addresses.

      And I'm going to be covering IP version 6 in detail in another lesson.

      It means though that the space taken in a packet to store IP version 6 source and destination addresses is larger.

      Now you still have data within an IP version 6 packet and this is also generally from a layer 4 protocol.

      Now strictly speaking, if this were drawn to scale then the data field would extend off the bottom of the screen, but let's just keep things simple.

      We also have a field similar to the time to live value within IP version 4 packets; in IP version 6 this is called the hop limit.

      Functionally these are similar: it controls the maximum number of hops that the packet can go through before being discarded.

      So these are IP packets: generally they store data from layer 4, and they themselves are stored in one or more layer 2 frames as they move across the networks and links which form the internet.

      Okay so this is the end of part 1 of this lesson.

      It was getting a little bit on the long side and I wanted to give you the opportunity to take a small break, maybe stretch your legs or make a coffee.

      Now part 2 will continue immediately from this point, so go ahead complete this video and when you're ready I look forward to you joining me in part 2.

    1. Welcome back and in this part of the lesson series I'm going to be discussing layer one of the seven layer OSI model which is the physical layer.

      Imagine a situation where you have two devices in your home let's say two laptops and you want to play a local area network or LAN game between those two laptops.

      To do this you would either connect them both to the same Wi-Fi network, or you'd use a physical networking cable.

      To keep things simple in this lesson, I'm going to use the example of a physical connection between these two laptops, so both laptops have a network interface card and they're connected using a network cable.

      Now for this part of the lesson series we're just going to focus on layer one which is the physical layer.

      So what does connecting this network cable to both of these devices give us?

      Well, we're going to assume it's a copper network cable, so it gives us a point-to-point electrical shared medium between these two devices: a piece of cable that can be used to transmit electrical signals between the two network interface cards.

      Now the physical medium can be copper, in which case it uses electrical signals; fiber, in which case it uses light; or Wi-Fi, in which case it uses radio frequencies.

      Whatever type of medium is used, it needs a way of carrying unstructured information, and so we define layer one (physical layer) standards, which are also known as specifications.

      These define how to transmit and receive a raw bit stream, so ones and zeros, between a device and a shared physical medium, in this case the piece of copper networking cable between our two laptops.

      So the standard defines things like voltage levels, timings, data rates, distances which can be used, the method of modulation, and even the connector type on each end of the physical cable.

      The specification means that both laptops have a shared understanding of the physical medium so the cable.

      Both can use this physical medium to send and receive raw data.

      For copper cable, electrical signals are used, so a certain voltage is defined as binary 1, say +1 volt, and a certain voltage as binary 0, say -1 volt.

      If both network cards in both laptops agree, because they use the same standard, then zeros and ones can be transmitted onto the medium by the left laptop and received from the medium by the right laptop.

      This is how two networking devices, or more specifically two network interface cards, can communicate at layer one.
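      The shared standard described above can be mimicked in a few lines of Python; the +1/-1 volt values follow the example in the text and are illustrative, not taken from any real layer one specification:

```python
def transmit(bits):
    """Encode bits as voltage levels on the shared medium (+1 V = 1, -1 V = 0)."""
    return [+1.0 if bit == 1 else -1.0 for bit in bits]

def receive(voltages):
    """Decode voltage levels back to bits using the same agreed standard."""
    return [1 if v > 0 else 0 for v in voltages]

signal = transmit([1, 0, 1, 1])
print(signal)           # [1.0, -1.0, 1.0, 1.0]
print(receive(signal))  # [1, 0, 1, 1]
```

      Because both ends apply the same encoding rules, the right laptop recovers exactly the bits the left laptop sent.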

      If I refer to a device as layer X, for example layer one or layer three, it means that the device contains functionality for that layer and below: a layer one device just understands layer one, and a layer three device has layer one, two and three capability.

      Now try to remember that because it's going to make much of what's coming over the remaining videos of this series much easier to understand.

      So just to reiterate what we know to this point: we've taken two laptops with two layer one network interfaces and connected them using a copper cable, a copper shared medium.

      Because we're using a layer one standard, both of these cards understand the specific way that binary zeros and ones are transmitted onto the shared medium.

      Now on the previous screen I use the example of two devices so two laptops with network interface cards communicating with each other.

      Two devices can use a point-to-point layer one link, which is a fancy way of talking about a network cable, but what if we need to add more devices?

      A two-player game isn't satisfactory; we need to add two more players for a total of four.

      Well, we can't really connect these four devices to a network cable with only two connectors, but what we can do is add a networking device called a hub, in this example a four-port hub.

      The laptops on the left and right, instead of being connected to each other directly, are now connected to two ports of that hub.

      Because it's a four-port hub, it has two ports free, and so it can accommodate the top and bottom laptops.

      Now hubs have one job: anything which the hub receives on any of its ports is retransmitted to all of the other ports, including any errors or collisions.

      Conceptually a hub creates a four connector network cable one single piece of physical medium which four devices can be connected to.

      Now there are a few things that you really need to understand at this stage about layer one networking.

      First, there are no individual device addresses at layer one; one laptop cannot address traffic directly at another.

      It's a broadcast medium: the network card on the device on the left transmits onto the physical medium, and everything else receives it.

      It's like shouting into a room with three other people without using any names.

      Now this is a limitation, but it is fixed by layer two, which we'll cover soon in this lesson series.

      The other consideration is that it's possible two devices might try to transmit at once, and if that happens there will be a collision.

      This corrupts any transmissions on the shared medium; only one thing can transmit at once on a shared medium and be legible to everything else.

      If multiple things transmit on the same layer one physical medium, collisions occur and render all of the information useless.

      Now related to this, layer one has no media access control, so no method of controlling which devices can transmit.

      So if you use a layer one architecture, a hub and all of the devices shown on screen now, then collisions are almost guaranteed, and the likelihood increases the more layer one devices are present on the same layer one network.

      Layer one is also not able to detect when collisions occur.

      Remember, these network cards are just transmitting via voltage changes on the shared medium; it's not digital, and they can in theory all transmit at the same time.

      Physically that's possible, it just means that nobody will be able to understand anything, but at layer one it can happen.

      So layer one is dumb: it doesn't have any intelligence beyond defining the standards that all of the devices use to transmit onto the shared medium and receive from the shared medium.

      Because of how layer one works, and because a hub simply retransmits everything, even collisions, the layer one network is said to have one broadcast and one collision domain.

      This means that layer one networks tend not to scale very well: the more devices are added to a layer one network, the higher the chance of collisions and data corruption.
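      A toy Python model (entirely illustrative, not a real device) captures both behaviours described above: the hub blindly repeating what it receives, and simultaneous transmissions corrupting everything:

```python
def hub_retransmit(transmissions, num_ports=4):
    """transmissions: dict of port -> data sent in the same time slot."""
    if len(transmissions) > 1:
        # More than one device transmitted at once: a collision corrupts
        # the signal on the shared medium for everyone.
        return {port: "COLLISION" for port in range(num_ports)}
    sender, data = next(iter(transmissions.items()))
    # The hub repeats the signal to every port except the sender's.
    return {port: data for port in range(num_ports) if port != sender}

print(hub_retransmit({0: "hello"}))        # ports 1-3 all receive "hello"
print(hub_retransmit({0: "hi", 2: "yo"}))  # every port sees a collision
```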

      Now layer one is fundamental to networking, because it's how devices actually communicate at a physical level.

      But for layer one to be useful, for it to be usable practically for anything else, we need to add layer two, which runs over the top of a working layer one connection, and that's what we'll be looking at in the next part of this lesson series.

      As a summary of the position that we're in right now, assuming that we have only layer one networking: we know that layer one focuses on the physical shared medium, and on the standards for transmitting onto that medium and receiving from it.

      So all devices which are part of the same layer one network need to be using the same layer one medium and device standards; generally this means a certain type of network card and a certain type of cable, or for Wi-Fi, a certain type of antenna and frequency range.

      What layer one doesn't provide is any form of access control of the shared medium, and it doesn't give us uniquely identifiable devices, which means we have no method for device-to-device communication; everything is broadcast via transmission onto the shared physical medium.

      Now in the next video of this series I'm going to be stepping through layer two, which is the data link layer.

      This is the layer which adds a lot of intelligence on top of layer one and allows device-to-device communication, and it's layer two which is used by all of the upper layers of the OSI model to allow effective communication.

      But it's important that you understand how layer one works, because this is physically how data moves between all devices, and so you need a good fundamental understanding of it.

      Now this seems like a great place to take a break so I'm going to end this video here so go ahead and complete this video and then when you're ready I look forward to you joining me in the next part of this lesson series where we'll be looking at layer two or the data link layer.

    1. wavelength? Are you wide-awake and your roommate almost asleep? Is the baseball game really important to you but totally boring to the person you are talking with? It is important that everyone involved understands the context of the conversation. Is it a party, which lends itself to frivolous banter? Is the conversation about something serious that occurred? What are some of the relevant steps to understanding context? First of all, pay attention to timing. Is there enough time to cover what you are trying to say? Is it the right time to talk to the boss about a raise? What about the location? Should your conversation take

      I think it could be helpful for others to understand the context of the conversation.

    1. We're here to support our customers affected by wildfires. Learn more about relief options available through the TD Helps program.

      @media (min-width: 320px) { #id-homepage-top1.cmp-banner-container { background-image: url(/content/dam/tdct/images/personal-banking/advice_hub.jpeg/jcr:content/renditions/cq5dam.web.320.320.jpeg); } } @media (min-width: 768px) { #id-homepage-top1.cmp-banner-container { background-image: url(/content/dam/tdct/images/personal-banking/advice_hub.jpeg/jcr:content/renditions/cq5dam.web.768.768.jpeg); } } @media (min-width: 1024px) { #id-homepage-top1.cmp-banner-container { background-image: url(/content/dam/tdct/images/personal-banking/advice_hub.jpeg/jcr:content/renditions/cq5dam.web.1024.1024.jpeg); } } @media (min-width: 1200px) { #id-homepage-top1.cmp-banner-container { background-image: url(/content/dam/tdct/images/personal-banking/advice_hub.jpeg/jcr:content/renditions/cq5dam.web.1200.1200.jpeg); } }

      Robust - The website has a responsive design allowing people to adjust content size to the desired specification

    1. But, is there an inflection point of consequence that changes the name of the “game” oflife on earth for everybody and everything?

      To me, this question could not be easily answered; I would say both yes and no. We as humans create the not-so-good changes to the earth, and we also adapt to a lot of things that may not always benefit us. Even if there is an inflection point of consequence that slows us down and persuades us to create a change, I think it will more than likely be subsided, or soon be deemed "normal".

    1. TELEVISION PROTOCOL
      • TV picture: 60fps, each frame drawn line by line
      • 1 frame (Atari's research data for max TV compat)
      • 262/312 lines, 228 clocks per line
      • 3 x VSYNC + 37/45 x VBLANK + 192/228 display + 30/36 overscan
      • VSYNC = signal TV to start new frame
      • VBLANK = blank lines after VSYNC, before the visible picture (beam off, no drawing)
      • overscan = blank lines below the visible picture, at the bottom of the frame
      • 1 line = 68 cycle horizontal blank + 160 cycle display
      • horizontal timing handled by TIA
      • WSYNC stops CPU till start of next line
      • vertical timing by CPU
      • after completion of a frame => VSYNC + VBLANK + pic + overscan
      • 1 CPU cycle = 3 TIA cycle
      • TV pic
      • drawn line at a time
      • CPU put data for line into TIA, TIA convert data to video signals
      • TIA has data only for current line
      • 70/85 blank lines for game logic
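      Working the numbers in these notes (228 TIA colour clocks per line, 262 NTSC lines, 1 CPU cycle = 3 TIA clocks, 60 frames per second) gives the per-line and per-frame CPU budget:

```python
# Timing arithmetic from the notes above (NTSC values; PAL uses 312 lines).
tia_clocks_per_line = 228
lines_per_frame = 262
cpu_cycles_per_line = tia_clocks_per_line // 3   # 1 CPU cycle = 3 TIA clocks
cpu_cycles_per_frame = cpu_cycles_per_line * lines_per_frame

print(cpu_cycles_per_line)        # 76 CPU cycles per scanline
print(cpu_cycles_per_frame)       # 19912 CPU cycles per frame
print(cpu_cycles_per_frame * 60)  # 1194720: recovers the ~1.19 MHz CPU clock
```

      76 cycles per scanline is why the CPU hands line data to the TIA under such tight timing, and the 70/85 blank lines are where the game logic budget comes from.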
    1. My goal here is to engagein a productive conversation with proceduralism, bringing back playersand advocating, finally, for a player-centric approach to the design ofgames, particularly the design of ethics and politics in(to) games

      Yes, players are very important as the center of game design, because players' choices and behaviors actually participate in the creation of the game's narrative and experience. At the same time, players' actions can be fed back to the creator, thereby helping to make the game better.

    1. This is important because it reminds us that video games are software, and everything that happens in a game is based on its programming.

      In my opinion, interactivity is a "mental concept", since computers operate on an "input, process, output" cycle (Stang, 2022). Interactivity is just how we perceive the uniqueness of computer reactions to our inputs.

    2. video games, like all computer-based media, are reactive rather than interactive: "a chain of reactions" in which "the player does not act so much as he reacts to what the game presents to him, and similarly, the game then reacts to his input" (pp. 119–20).

      I feel like this is literally describing interactivity/an interaction.

    1. Author response:

      The following is the authors’ response to the previous reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      Du et al. report 16 new well-preserved specimens of artiopodan arthropods from the Chengjiang biota, which demonstrate both dorsal and ventral anatomies of a potential new taxon of artiopodans that are closely related to trilobites. The authors assigned their specimens to Acanthomeridion serratum, and proposed A. anacanthus as a junior subjective synonym of Acanthomeridion serratum. Critically, the presence of ventral plates (interpreted as cephalic librigenae), together with phylogenetic results, leads the authors to conclude that the cephalic sutures originated multiple times within the Artiopoda.

      Strengths:

      The new specimens are of high quality and informative. The morphology of the dorsal exoskeleton, except for the supposed free cheek, was well illustrated and described in detail, providing a wealth of information for taxonomic and phylogenetic analyses.

      Weaknesses:

      The weaknesses of this work are apparent in a number of aspects. Technically, the ventral morphology is less well revealed and is poorly illustrated. Additional diagrams are necessary to show the trunk appendages and suture lines. Taxonomically, I am not convinced by the authors' placement. The specimens are markedly different from either Acanthomeridion serratum Hou et al. 1989 or A. anacanthus Hou et al. 2017. The ontogenetic description is extremely weak and the morphological continuity is not established. Geometric morphometric analyses might be helpful to resolve the taxonomic and ontogenetic uncertainties. I am confused by the authors' description of the free cheek (librigena) and the ventral plate. Are they the same object? How do they connect with other parts of the cephalic shield, e.g. the hypostome and fixigena? Critically, the homology of the cephalic slits (eye slits, eye notch, dorsal suture, facial suture) is not extensively discussed either morphologically or functionally. Finally, the authors claimed that the phylogenetic results support two separate origins rather than a deep origin. However, the results in Figure 4 can be explained as a deep homology of the cephalic suture at the molecular level and multiple co-options within the Artiopoda.

      Comments on the revised version:

      I have seen the extensive revision of the manuscript. The main point, "Multiple origins of dorsal ecdysial sutures in artiopodans," is now partially supported by the results presented by the authors. I am still unsatisfied with the descriptions and interpretations of critical features newly revealed by the authors. The following points might be useful for the authors in making further revisions.

      (1) The antennae were well illustrated in a couple of specimens but described in only a short sentence.

      More details of the changing article shape and the overall length of the antennae have been added to the description.

      (2) There are also imprecise descriptions of features.

      Measurements, dimensions and multiple figures are provided for many features in the text and the supplement includes more figures. In total, 11 figures are provided with details (photographs or measurements) of the material.

      (3) Ontogeny of the cephalon was not described.

      A sentence has been added to the description to note the changing width:length of the cephalon during ontogeny, with a reference to Figure 6.

      (4) The critical head element is the so-called "ventral plate". How this element connects with the cephalic shield is not adequately revealed. The authors claimed that the suture runs along the cephalic margin. However, the lateral margin of the cephalon is not rounded but exhibits two notches (e.g. Fig 3C). This suggests that the supposed ventral plates have a dorsal extension that fits the notches. Alternatively, the "ventral plate" can be interpreted as a small free cheek with a large ventral extension, providing evidence for the librigenal hypothesis.

      As noted in the diagnosis for the genus, these notches are interpreted to accommodate the eye stalks. The homology of the ventral plates is discussed at length in the manuscript, and is the focus of the three sets of phylogenetic analyses performed.

      Reviewer #3 (Public Review):

      Summary:

      Well-illustrated new material is documented for Acanthomeridion, a formerly incompletely known Cambrian arthropod. The formerly known facial sutures are proposed to be associated with ventral plates that the authors homologise with the free cheeks of trilobites (while also testing alternative homologies). An update of a published phylogenetic dataset permits reconsideration of whether dorsal ecdysial sutures have a single origin or multiple origins in trilobites and their relatives.

      Strengths:

      Documentation of an ontogenetic series makes a sound case that the proposed diagnostic characters of a second species of Acanthomeridion are variation within a single species. New microtomographic data shed light on appendage morphology that was not formerly known. The new data on ventral plates and their association with the ecdysial sutures are valuable in underpinning homologies with trilobites.

      I think the revision does a satisfactory job of reconciling the data and analyses with the conclusions drawn from them. Referee 1's valid concerns about whether a synonymy of Acanthomeridion anacanthus is justified have been addressed by the addition of a length/width scatterplot in Figure 6. Referee 2's doubts about homology between the librigenae of trilobites and ventral plates of Acanthomeridion have been taken on board by re-running the phylogenetic analyses with a coding for possible homology between the ventral plates and the doublure of olenelloid trilobites. The authors sensibly added more trilobite terminals to the matrix (including Olenellus) and did analyses with and without constraints for olenelloids being a grade at the base of Trilobita. My concerns about counting how many times dorsal sutures evolved on a consensus tree have been addressed (the authors now play it safe and say "multiple" rather than attempting to count them on a bushy topology). The treespace visualisation (Figure 9) is a really good addition to the revised paper.

      Weaknesses:

      The question of how many times dorsal ecdysial sutures evolved in Artiopoda was addressed by Hou et al (2017), who first documented the facial sutures of Acanthomeridion and optimised them onto a phylogeny to infer multiple origins, as well as in a paper led by the lead author in Cladistics in 2019. Du et al. (2019) presented a phylogeny based on an earlier version of the current dataset wherein they discussed how many times sutures evolved or were lost based on their presence in Zhiwenia/Protosutura, Acanthomeridion and Trilobita. The answer here is slightly different (because some topologies unite Acanthomeridion and trilobites). This paper is not a game-changer because these questions have been asked several times over the past seven years, but there are solid, worthy advances made here.

      I'd like to see some of the most significant figures from the Supplementary Information included in the main paper so they will be maximally accessed. The "stick-like" exopods are not best illustrated in the main paper; their best imagery is in Figure S1. Why not move that figure (or at least its non-redundant panels) as well as the reconstruction (Figure S7) to the main paper? The latter summarises the authors' interpretation that a large axe-shaped hypostome appears to be contiguous with ventral plates.

      We have moved these figures from the supplementary information to the main text, and renumbered figures accordingly. Fig S1 has now been split – panels a and b are in the main text (new Fig. 4), with the remainder staying as Fig S1. Fig S7 is now Fig. 8 in the main text.

      The specimens depict evidence for three pairs of post-antennal cephalic appendages, but it's a bit hard to picture how they functioned if there's no room between the hypostome and ventral plates. Also, a comment is required on the reconstruction involving all cephalic appendages originating against/under the hypostome, rather than the first pair being paroral near the posterior end of the hypostome and the rest being post-hypostomal as in trilobites.

      A short comment has been added to the caption.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      I have seen the extensive revision of the manuscript. The main point, "Multiple origins of dorsal ecdysial sutures in artiopodans," is now partially supported by the results presented by the authors. I am still unsatisfied with the descriptions and interpretations of critical features newly revealed by the authors. The following points might be useful for the authors in making further revisions.

      (1) The antennae were well illustrated in a couple of specimens but described in only a short sentence.

      (2) There are also imprecise descriptions of features (see my annotations in the submitted ms).

      (3) Ontogeny of the cephalon was not described.

      (4) The critical head element is the so-called "ventral plate". How this element connects with the cephalic shield is not adequately revealed. The authors claimed that the suture runs along the cephalic margin. However, the lateral margin of the cephalon is not rounded but exhibits two notches (e.g. Fig 3C). This suggests that the supposed ventral plates have a dorsal extension that fits the notches. Alternatively, the "ventral plate" can be interpreted as a small free cheek with a large ventral extension, providing evidence for the librigenal hypothesis.

      Reviewer #3 (Recommendations For The Authors):

      The references swap back and forth between abbreviated journal titles and titles written out in full. Please standardise this to the journal's format rather than alternating between two different styles.

      Line 145: Perez-Peris et al. (2021) should be cited as the source for the Anacheirurus appendages.

      Added, thank you.

      Line 310: The El Albani et al (2024) paper on ellipsocephaloid appendages should be noted in connection with an A+4 (rather than A+3) head in trilobites.

      Added.

      Minor or trivial corrections:

      Line 51: move the three citations to follow "arthropods" rather than following "artiopodans", as none of these papers are specifically about Artiopoda.

      Changed, thank you

      Caption to Figure 1 and line 100: Acanthomeridion appears in Figure 1 and in the text with no context. Please weave it into the text appropriately.

      Line 136: The data were...

      Corrected

      Line 164: upper case for Morphobank.

      Corrected

      Line 183: spelling of "Village" (not "Vallige").

      Corrected

      Line 197: I suggest using "articles" rather than "podomeres" for the antenna (as you did in line 232).

      Changed, thank you

      Line 269: "gnathobasal spine" (rather than "spin").

      Changed, thank you

      Line 272: "Exopods" is used here but elsewhere "exopodites" is used.

      Exopodites is now used throughout

      Line 359: "can been seen" is awkward and, as evolutionary patterns are inferred rather than "seen", could be reworded as "... loss of the eye slit has been inferred...".

      Reworded as suggested

      Line 422 and 423: As two referees asked in the first round of review, delete "iconic" and "symbolic".

      Deleted as suggested

      Line 467: "librigena-like".

      Corrected

    1. Type-in activities:
       - learning about typewriters, both using them and how they work
       - demonstrations of typewriter maintenance: cleaning, how to change a ribbon, and small repairs for common problems
       - typewriter tool showcases: what tools might you need to maintain, clean, and repair your typewriter?
       - lots of machines to try out: which might best suit your writing style and typing touch? If you don't have a typewriter, this is a great way to try some out before buying your first one
       - typewriter purchasing and collecting advice
       - encouraging typing as a distraction-free, screen-free writing tool
       - writing (fiction, non-fiction, poetry), along with potential writing prompts (this could dovetail with other library-related writing endeavors)
       - a group story: a single typewriter is reserved to one side, on which each participant contributes a single sentence to create a collaborative group story, "exquisite corpse"-style
       - typewriter art and artyping
       - a speed-typing contest (with small prizes)
       - we'll bring stationery (paper, envelopes, stamps) to encourage participants to type letters to friends or family (bring an address for someone you'd like to write to)
       - a typewriter swap and sale (optional depending on the venue's perspective; sales are not the primary purpose here)
       - typewriter repairs using 3D printing, or designing replacement parts (if the venue can support this)
       - typewriter handicrafts (typewriter covers and sewing/repairing cases)
       - typewriter resources (repair shops, where to find ribbon, how old is my machine?, et al.)
       - a possible typewriter mystery game?
       - sharing stories
       - encouraging community

      For a local library-specific type-in:
       - library card applications, which can be typewritten for potential patrons who don't have a library card
       - typewriter books (particularly if hosted at a library; place a hold on several typewriter-related books which attendees can browse at the event and check out afterward)
       - 3D printing typewriter keys and spare parts; designing replacement parts for 3D printing

      Attendees are encouraged to bring one (or more) of their own favorite manual typewriters to use, showcase, or demonstrate to others, but having your own typewriter is NOT a requirement for attendance.

    1. oral culture was the primary means of cultural transmission

      Looking at both versions, I believe this is a great way to think more deeply about what Plato and Socrates were saying, because this sentence makes me think of the game of telephone and how easy it was back then to distort a story or conversation.

    1. The growing consensus now is that game-engine renderers can model cameras well enough not only to test a perception system in simulation, but even to train perception systems in simulation and expect them to work in the real world!

      This is super cool if it actually works. Are there any examples of this being demonstrated?

    1. So plowing/bulldozing is OK? Doing it doesn't violate control rules but I guess it has to do with length of contact. Plowing on purpose is probably a violation.

    1. media consumers have the ability to add their input and criticism, and this is an important function for users.

      This is a facade created by companies to keep viewers engaged. We believe we have the freedom to speak, but in reality, whether one comments or not, we are playing into the game of letting what we see on our devices influence our thoughts. Even if one just forms an opinion and does not share it, the post does its planned job of making the consumer think.

    1. The centrality of touch is a unique characteristic of board games

      The tactile nature of board games is a crucial differentiator from other media. Unlike video games, where touch is limited to controllers, board games engage players with direct interaction with the game pieces, making the tactile experience a significant part of the gameplay. This physical involvement may lead to deeper emotional responses than in other media.

    2. Players engage with boardgames for a variety of reasons

      The text highlights that different players appreciate different aspects of board games, some focusing more on mechanics, while others on materiality or aesthetics. It suggests a holistic approach to game design, where both elements are interwoven for an enriched experience.

    3. This is because there is no preexisting world that the game's rules need to limit;

      Would the goal to win a game be considered a generative or restrictive rule? Is it a rule? Who decides what's a rule?

    4. learning by playing may even be called the oldest learning method

      I agree; it's always easier to learn how to play a game by actually playing rather than listening to someone read the rules from the rule book.

    5. simulations and artificial intelligence, and the humanities computing field. It was particularly from the last of these where the contemporary wave of game studies started to emerge. Many of the people working within this paradigm approached computers as a potential new medium

      A lot of popular games are based in simulation, most notably The Sims, and others like Goat Simulator, Minecraft, etc., so it's no surprise that lots of popular games have the simulation aspect. The range of control in an artificial situation has a pleasing disposition to be a game.

    6. The situation is changing, and in the future the issue is likely to be put the opposite way – why should there not be game studies represented in a modern university?

      to my above question

    7. tudies at the time of writing (in 2006) has not yet reached this stage in many universities,

      I wonder, now in this day and age, with digital game culture having such large platforms and cultural impact in media (streamers on Twitch, Five Nights at Freddy's, Flappy Bird, Among Us, Candy Crush), whether the field has also helped expand this into something larger than just a field for developers.

    1. I am a better learner because I have found ways to use a more diverse range of studying tactics.”

      I can agree with this; there are many ways of learning. For example, some people take notes, and some chop them down into flashcards and make a game out of that.

    1. Rapture of the Rhizome

      The Tangled Rhizome relates to "Adventures with Anxiety" because a rhizome is a root system with several different routes to different things. The game acts as a rhizome since there is a list of options for each activity, ranging from positive to negative options.

    2. But in a narrative experience not structured as a win-lose contest the movement forward has the feeling of enacting a meaningful experience both consciously chosen and surprising.

      Reminds me of the different experiences that were shared last seminar about "Adventures with Anxiety." For some, the experience was cut short and left them feeling confused and unsatisfied, while others reached the true ending, completing the objective of the game. I would argue that the complexity and enjoyment of a maze-like game can also rely on the choices of the person who's playing, not just the creator of the game.

    3. The Journey Story and the Pleasure of Problem Solving

      In Adventures with Anxiety, the game introduces two distinct journeys. The journey of the girl and the wolf. At each battle that the two have, they have to problem-solve in order to get through the ordeal and learn a lesson. Ultimately, this culminates in the two characters solving their problems as they learn where each character is coming from.

    4. The boundlessness of the rhizome experience is crucial to its comforting side. In this it is as much of a game as the adventure maze.

      The boundlessness of the rhizome allows the reader to connect with inner emotions, as seen in "Adventures With Anxiety," when the story asked the reader to pick the anxiety emotions they most strongly connected with.

    5. Minos’s maze was therefore a frightening place, full of danger and bafflement, but successful navigation of it led to great rewards

      a good foundation to create a story and/or a game from because there's a plot, an objective, the opportunity to make choices, and a clear win or lose ending

    6. a story and a game pattern derives from the melding of a cognitive problem (finding the path) with an emotionally symbolic pattern (facing what is frightening and unknown).

      Correlates story and game by describing the appeal of melding a cognitive problem with a symbolic pattern.

    1. But the unsolvable maze does hold promise as an expressive structure.

      This stands apart from Adventures with Anxiety, since an unsolvable maze is not a feature of that game, even with its freedom of choice.

    2. The Journey Story and the Pleasure of Problem Solving

      Adventures with Anxiety is a mix of both a journey story/game and a maze story/game, because players get to go through a storyline and it incorporates real-world problem solving based on the player's preferences.

    3. This kind of narrative structure need not be limited to such simplistic content or to an explicitly mazelike interface.

      Really, mazes can come from a variety of different settings; is this why they are probably one of the most popular forms of in-game storytelling?

    4. Rapture of the Rhizome

      The rhizome style of game can be freeing from anxiety because there is freedom from consequence and any point can lead to another. This is very different from Adventures with Anxiety, where your actions have a set path toward a set ending.

    5. one enacts a story of wandering, of being enticed in conflicting directions, of remaining always open to surprise, of feeling helpless to orient oneself or to find an exit, but the story is also oddly reassuring

      While feeling helpless, there is also a sense of comfort in not having an end goal. You are able to explore more and take time to really consider things without worrying about reaching the end goal. People are able to spend as much or as little time as they like at specific points in the game.

    6. The postmodern hypertext tradition celebrates the indeterminate text as a liberation from the tyranny of the author and an affirmation of the reader’s freedom of interpretation.

      The postmodern hypertext can be read as the freedom of choice that a reader holds, which directly reflects the purpose of Adventures with Anxiety as a game that interweaves liberation from anxiety with liberation from the tyranny of the author.

    7. navigational space of the computer

      The internet has made it possible to gain more agency within games/stories such as "Adventures With Anxiety," where each player has options to choose from about how the story/game should move forward.

    8. But in a narrative experience not structured as a win-lose contest the movement forward has the feeling of enacting a meaningful experience

      This comment made the choices in Adventures with Anxiety feel a lot more meaningful. The game initially followed the "win-lose contest" format, but when it wanted to talk about something serious, such as dealing with anxiety, it got rid of the win-lose format.

    9. are related to mazes but offer additional opportunities for exercising agency.

      How are mazes and journey stories different? Could a film or game potentially be a combination of both?

    10. If you forget to get it, you must retrace your steps through many perils. The game is like a treasure hunt in which a chain of discoveries acts as a kind of Ariadne’s thread to lead you through the maze to the treasure at the center. (11)

      The game builds upon itself to make progress seem meaningful towards reaching a certain goal instead of simply as an optional experience within the game.

    11. After two hours of this surreal activity, my husband became restless and began asking every five minutes or so if the game was almost over.

      Ironic, considering kids often ask "how much longer?" or "when are we done?" This parallel that the author draws emphasizes that the rhizome experience can cause anyone, no matter their age, to feel emotional, whether exasperated or enthralled.

    12. Both the overdetermined form of the single-path maze adventure and the underdetermined form of rhizome fiction work against the interactor’s pleasure in navigation.

      The enjoyment of a game/story is diminished by both too little freedom (the single path maze) and too much freedom (rhizome).

    13. they offer no end point and no way out.

      Often how anxiety feels. For example, during the "Adventures with Anxiety" game, when she's on the roof, she felt like there was no way out except jumping in certain scenarios.

    1. To announce it publicly; and the penalty––Stoning to death in the public square

      Reminds me of the "shame" scene from Game of Thrones with Cersei Lannister. It wasn't to the death, but it was a stoning regardless.

    1. “historical materialism”

      Historical materialism, like the puppet in a chess game, always wins effortlessly but relies on the hidden influence of theology, which isn't openly recognized. Even though theology is often dismissed, it plays a crucial role in helping historical materialism succeed, though it has to stay out of sight.

    1. but they aren’t particularly intelligent, because they aren’t efficient at gaining new skills

      Depends on what you mean by "efficient." A computer can play a game thousands of times in a second, whereas it could take a human hours to play the same game one time. It took AlphaZero only NINE hours to play those forty-four million games and master the game of chess better than any human (or other computer program) in history. Which is more efficient?

    1. “Gamification is a good motivator, because one key aspect is reward, which is very powerful,” said Schwartz. The downside? Rewards are specific to the activity at hand, which may not extend to learning more generally. “If I get rewarded for doing math in a space-age video game, it doesn’t mean I’m going to be motivated to do math anywhere else.”Gamification sometimes tries to make “chocolate-covered broccoli,” Schwartz said, by adding art and rewards to make speeded response tasks involving single-answer, factual questions more fun. He hopes to see more creative play patterns that give students points for rethinking an approach or adapting their strategy, rather than only rewarding them for quickly producing a correct response.

      Gamification is an issue: how to balance skill, intrinsic motivation, and motivation for a reward.

    1. It simply means that like every other social situation, games inform how we interpret and act in the social contexts we find ourselves in

      This is true. I find that people become a different version of themselves when their competitive nature takes over; whether it's a board game or a sports game, they become different people and act differently based on the people playing with/against them and the people spectating.

    2. They persist through time and may even affect the general sense of self-worth the player has, if the emotional reaction to the player's engagement with the game was strong enough.

      This is seen more in sports or other highly competitive environments. As someone who has played club volleyball and basketball, some of my teammates were entirely affected by one bad game they had and the reaction of their fellow teammates or coaches. A bad experience can cause a player to stop liking the game and sometimes cause them to not want to play the sport anymore.

    3. It simply means that like every other social situation, games inform how we interpret and act in the social contexts we find ourselves in.

      Especially with a close group of friends or family, the way we act while playing a game changes. Games always come with rule-bending or disagreements, and how 'much' we care is affected by who we are with, as explained in the sentence before the highlighted one.

    1. Were I dressed in armor on a high horse,
       there is no man here to match me—their might is so feeble.
       And so I crave in this court only a Christmas game
       since it's the holidays, and you here are young and merry.
       If any in this house here holds that he is brave, (285)
       if so bold be his blood or his brain be so wild
       that he sternly dares to trade one strike for one strike,
       then I will give him as my gift this costly weapon,
       this axe—it's heavy enough to handle as he pleases;
       and I'll bear the first strike, here baring my neck as I kneel. (290)
       If any fellow be so fierce as to try it,
       let him hasten to me and take hold of this weapon—
       I hand it over and he can have it as his own—
       and I will take a hit from him, stone-still on this floor,
       provided you agree to this pact: that I may give a hit to him, (295)
       so says I!
       Yet a rest I will allow,
       till a year and a day go by.
       Come quick, and let's see now
       if any dare reply!

      The game

    1. I honestly feel Mad Men was the last show where everyone immediately got online to talk about it after the episode aired, en masse,

      I disagree with this statement, because I think there are a few shows that have come out recently where we run straight to the internet to engage in discourse about them: Game of Thrones (and other related series), Euphoria, Power, etc. There are plenty.

    1. Reviewer #2 (Public review):

      Wnt signaling is the name given to a cell-communication mechanism that cells employ to inform on each other's position and identity during development. In cells that receive the Wnt signal from the extracellular environment, intracellular changes are triggered that cause the stabilization and nuclear translocation of β-catenin, a protein that can turn on groups of genes referred to as Wnt targets. Typically these are genes involved in cell proliferation. Genetic mutations that affect Wnt signaling components can therefore affect tissue expansion. Loss of function of APC is a drastic example: APC is part of the β-catenin destruction complex, and in its absence, β-catenin protein is not degraded and constitutively turns on proliferation genes, causing cancers in the colon and rectum. And here lies the importance of the finding: β-catenin has for long been considered to be regulated almost exclusively by tuning its protein turnover. In this article, a new aspect is revealed: Ctnnb1, the gene encoding for β-catenin, possesses tissue-specific regulation with transcriptional enhancers in its vicinity that drive its upregulation in intestinal stem cells. The observation that there is more active β-catenin in colorectal tumors not only because the broken APC cannot degrade it, but also because transcription of the Ctnnb1 gene occurs at higher rates, is novel and potentially game-changing. As genomic regulatory regions can be targeted, one could now envision that mutational approaches aimed at dampening Ctnnb1 transcription could be a viable additional strategy to treat Wnt-driven tumors.

  3. drive.google.com
    1. She alleged that the coaches told her she was the team's tallest player and needed to play in that night's game. Plaintiff claimed they observed her shaking and having difficulty participating during warm-ups but still played her in the game. The plaintiff's mother took her to the hospital after the second game. Plaintiff alleged that the delay in receiving medical treatment caused her to suffer an exacerbation of her neurological condition.

      Not addressing the player's issues during warm-up hurt the athlete. She was having a neurological problem that was not addressed even when witnessed. They played her because she was tall and they wanted her for that game.

  4. Aug 2024
    1. the end goal in a game is partially constituted by the constraints on how you got it

      The effort placed by YOU to get to the goal -> the process. What were the obstacles you overcame in order to reach the goal of the game?

    2. connect us not just to each other in the moment and to each other across cultures, really, but just like you said, to the humans of the past and the cultures of the past

      Through games, we can learn about the beliefs and practices different cultures had during the time period when the game was created. Therefore, providing a reflection of what was expected of them at the time due to societal norms.

    3. people across time have played that same game and negotiated the same board with those same pieces and same options for moves.

      It makes me wonder how they negotiated the rules of the games we know today, for example Uno, Lotería, Hangman. How were they able to collaborate without letting their beliefs about how the game should be played take over?

    1. “It’s a bit like a video game,” Prof. DeCarlo said. “And we’re able to measure everything all at once.”

      I would like to know how people go about measuring these chemicals and what goes into collecting all of the data.

    2. pollution levels in real time, and even follow plumes to try to determine their source. “It’s a bit like a video game,” Prof. DeCarlo said. “And we’re able to measure everything all at once.”

      Do the vans' own emissions affect the results? Or are the vans electric rather than gas-powered?

    1. Cohen’s fatal injury happened the same day a 16-year-old football player from Alabama was fatally injured during his school’s season opener. Caden Tellier, the quarterback for John T. Morgan Academy in Selma, suffered a brain injury Friday night, Alabama Independent School Association Executive Director Michael McLendon told CNN in a statement. His death was announced Saturday. John T. Morgan Academy football player, Caden Tellier posing for a picture. Morgan Academy Related article Alabama teen dies after head injury during high school football game Caden’s family decided to donate the teen’s organs, his mother wrote on Facebook. “Caden is still fighting hard in his earthly body as he prepares for this final act of generosity to bring new life to others,” Arsella Slagel Tellier posted Tuesday. “We continue to pray for those whose lives will be forever changed by his gifts.”

      Hi @CNN - this is so inappropriate. Thanks.

    1. the world is not replete with divisions of power and privilege that skew one’s opportunities within it, predetermining possibilities through a game of social and economic fate

      this statement highlights the "can-do" attitude of the entrepreneurial subject. Szeman isn't describing reality, but the way that entrepreneurs see it and process it ...

    1. Furthermore, although a program exists that has enabled a computer to win against the master of chess, no such program that reaches the level of chess exists for shogi or go. The reason is that, in general, as the depth of reading ahead in a game increases, the number of cases that must follow increases explosively. In this respect, computers cannot yet cope well with shogi and go, as they can with chess. We must confront

      Despite advances, computers still struggle with games like shogi and go due to the explosive growth in possible moves.
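      The explosive growth the quoted passage describes can be made concrete with a back-of-the-envelope sketch. The branching factors below are rough, commonly cited figures, not numbers from the source:

```python
# Examining every line d plies deep visits roughly b**d nodes, where b is
# the average branching factor of the game. The figures used here are
# rough, commonly cited estimates (an assumption for illustration):
# chess ~35, shogi ~80, go ~250 legal moves per position.
def tree_size(branching: int, depth: int) -> int:
    """Approximate node count of a full game tree of the given depth."""
    return branching ** depth

for game, b in [("chess", 35), ("shogi", 80), ("go", 250)]:
    # Even at a modest 6 plies, go's tree dwarfs chess's by orders of magnitude
    print(f"{game:>5}: b={b:>3}, b^6 = {tree_size(b, 6):.2e}")
```

      This is why adding one more ply of lookahead multiplies the work by the full branching factor, and why the same search techniques that mastered chess did not transfer directly to shogi and go.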

  5. inst-fs-iad-prod.inscloudgate.net
    1. Play is a second method for encouraging openness. While meditation focuses on interior, individual states, a playful environment is deeply participatory.

      Exactly. It is very necessary for teachers to add interesting games in class to create a relaxed feeling. Encouraging each child to join in the game will improve their concentration and help them absorb knowledge quickly.

    1. The purpose of this article is not to enter the definitional game but rather to ask whether focusing on fascism is politically useful for thinking about America's political future. Thinking about fascism in our present moment requires, in my view, a focus on four issues: first, a hard look at salient features and outcomes of the Trump presidency; second, a view of fascism that focuses on historical methodology and the question of comparison across time and space; third, a revisitation of empirical evidence that asks what was happening in Europe in the 1930s, particularly Italy, where fascism began; and last, the question of political strategy: what is to be done?

      Author's game plan. Focus: is fascism a concept worth invoking in current political discussions of the US? Distinguishes this argument from "is Trump fascist?"

      How will they do this? 1. features and outcomes of Trump presidency 2. fascism comparisons and methods 3. Italy Fascism- where it began? 4. Next steps- so what?

    1. Reviewer #3 (Public Review):

      Summary:<br /> In this paper, the authors demonstrate the inevitability of the emergence of some degree of spatial information in sufficiently complex systems, even those that are only trained on object recognition (i.e. not "spatial" systems). As such, they present an important null hypothesis that should be taken into consideration for experimental design and data analysis of spatial tuning and its relevance for behavior.

      Strengths:<br /> The paper's strengths include the use of a large multi-layer network trained in a detailed visual environment. This illustrates an important message for the field: that spatial tuning can be a result of sensory processing. While this is a historically recognized and often-studied fact in experimental neuroscience, it is made more concrete with the use of a complex sensory network. Indeed, the manuscript is a cautionary tale for experimentalists and computational researchers alike against blindly applying and interpreting metrics without adequate controls.

      Weaknesses:<br /> However, the work has a number of significant weaknesses. Most notably: the degree and quality of spatial tuning is not analyzed to the standards of evidence historically used in studies of spatial tuning in the brain, and the authors do not critically engage with past work that studies the sensory influences of these cells; there are significant issues in the authors' interpretation of their results and its impact on neuroscientific research; the ability to linearly decode position from a large number of units is not a strong test of spatial information, nor is it a measure of spatial cognition; and the authors make strong but unjustified claims as to the implications of their results in opposition to, as opposed to contributing to, work being done in the field.

      The first weakness is that the degree and quality of spatial tuning that emerges in the network is not analyzed to the standards of evidence that have been used in studies of spatial tuning in the brain. Specifically, the authors identify place cells, head direction cells, and border cells in their network and their conjunctive combinations. However, these forms of tuning are the most easily confounded by visual responses, and it's unclear if their results will extend to forms of spatial tuning that are not. Further, in each case, previous experimental work to further elucidate the influence of sensory information on these cells has not been acknowledged or engaged with.

      For example, consider the head direction cells in Figure 3C. In addition to increased activity in some directions, these cells also have a high degree of spatial nonuniformity, suggesting they are responding to specific visual features of the environment. In contrast, the majority of HD cells in the brain are only very weakly spatially selective, if at all, once an animal's spatial occupancy is accounted for (Taube et al 1990, JNeurosci). While the preferred orientation of these cells are anchored to prominent visual cues, when they rotate with changing visual cues the entire head direction system rotates together (cells' relative orientation relationships are maintained, including those that encode directions facing AWAY from the moved cue), and thus these responses cannot be simply independent sensory-tuned cells responding to the sensory change) (Taube et al 1990 JNeurosci, Zugaro et al 2003 JNeurosci, Ajbi et al 2023).

      As another example, the joint selectivity of detected border cells with head direction in Figure 3D suggests that they are "view of a wall from a specific angle" cells. In contrast, experimental work on border cells in the brain has demonstrated that these are robust to changes in the sensory input from the wall (e.g. van Wijngaarden et al 2020), or that many of them are not directionally selective (Solstad et al 2008).

      The most convincing evidence of "spurious" spatial tuning would be the emergence of HD-independent place cells in the network, however, these cells are a small minority (in contrast to hippocampal data, Thompson and Best 1984 JNeurosci, Rich et al 2014 Science), the examples provided in Figure 3 are significantly more weakly tuned than those observed in the brain, and the metrics used by the authors to quantify place cell tuning are not clearly defined in the methods, but do not seem to be as stringent as those commonly used in real data. (e.g. spatial information, Skaggs et al 1992 NeurIPS).

      Indeed, the vast majority of tuned cells in the network are conjunctively selective for HD (Figure 3A). While this conjunctive tuning has been reported, many units in the hippocampus/entorhinal system are *not* strongly HD selective (Muller et al 1994 JNeurosci, Sangoli et al 2006 Science, Carpenter et al 2023 bioRxiv). Further, many studies have been done to test and understand the nature of sensory influence (e.g. Acharya et al 2016 Cell), and they tend to have a complex relationship with a variety of sensory cues, which cannot readily be explained by straightforward sensory processing (rev: Poucet et al 2000 Rev Neurosci, Plitt and Giocomo 2021 Nat Neuro). E.g. while some place cells are sometimes reported to be directionally selective, this directional selectivity is dependent on behavioral context (Markus et al 1995, JNeurosci), and emerges over time with familiarity to the environment (Navratiloua et al 2012 Front. Neural Circuits). Thus, the question is not whether spatially tuned cells are influenced by sensory information, but whether feed-forward sensory processing alone is sufficient to account for their observed tuning properties and responses to sensory manipulations.

      These issues indicate a more significant underlying issue of scientific methodology relating to the interpretation of their result and its impact on neuroscientific research. Specifically, in order to make strong claims about experimental data, it is not enough to show that a control (i.e. a null hypothesis) exists, one needs to demonstrate that experimental observations are quantitatively no better than that control.

      Where the authors state that "In summary, complex networks that are not spatial systems, coupled with environmental input, appear sufficient to decode spatial information." what they have really shown is that it is possible to decode *some degree* of spatial information. This is a null hypothesis (that observations of spatial tuning do not reflect a "spatial system"), and the comparison must be made to experimental data to test if the so-called "spatial" networks in the brain have more cells with more reliable spatial info than a complex-visual control.

      Further, the authors state that "Consistent with our view, we found no clear relationship between cell type distribution and spatial information in each layer. This raises the possibility that "spatial cells" do not play a pivotal role in spatial tasks as is broadly assumed." Indeed, this would raise such a possibility, if 1) the observations of their network were indeed quantitatively similar to the brain, and 2) the presence of these cells in the brain were the only evidence for their role in spatial tasks. However, 1) the authors have not shown this result in neural data, they've only noticed it in a network and mentioned the POSSIBILITY of a similar thing in the brain, and 2) the "assumption" of the role of spatially tuned cells in spatial tasks is not just from the observation of a few spatially tuned cells. But from many other experiments including causal manipulations (e.g. Robinson et al 2020 Cell, DeLauilleon et al 2015 Nat Neuro), which the authors conveniently ignore. Thus, I do not find their argument, as strongly stated as it is, to be well-supported.

      An additional weakness is that linear decoding of position is not a strong test, nor is it a measure of spatial cognition. The ability to decode position from a large number of weakly tuned cells is not surprising. However, based on this ability to decode, the authors claim that "'spatial' cells do not play a privileged role in spatial cognition". To justify this claim, the authors would need to use the network to perform e.g. spatial navigation tasks, then investigate the network's ability to perform these tasks when tuned cells were lesioned.

      Finally, I find a major weakness of the paper to be the framing of the results in opposition to, as opposed to contributing to, the study of spatially tuned cells. For example, the authors state that "If a perception system devoid of a spatial component demonstrates classically spatially-tuned unit representations, such as place, head-direction, and border cells, can "spatial cells" truly be regarded as 'spatial'?" Setting aside the issue of whether the perception system in question does indeed demonstrate spatially-tuned unit representations comparable to those in the brain, I ask "Why not?" This seems to be a semantic game of reading more into a name than is necessarily there. The names (place cells, grid cells, border cells, etc) describe an observation (that cells are observed to fire in certain areas of an animal's environment). They need not be a mechanistic claim (that space "causes" these cells to fire) or even, necessarily, a normative one (these cells are "for" spatial computation). This is evidenced by the fact that even within e.g. the place cell community, there is debate about these cells' mechanisms and function (eg memory, navigation, etc), or if they can even be said to serve only a single function. However, they are still referred to as place cells, not as a statement of their function but as a history-dependent label that refers to their observed correlates with experimental variables. Thus, the observation that spatially tuned cells are "inevitable derivatives of any complex system" is itself an interesting finding which *contributes to*, rather than contradicts, the study of these cells. It seems that the authors have a specific definition in mind when they say that a cell is "truly" "spatial" or that a biological or artificial neural network is a "spatial system", but this definition is not stated, and it is not clear that the terminology used in the field presupposes their definition.

      In sum, the authors have demonstrated the existence of a control/null hypothesis for observations of spatially-tuned cells. However, 1) It is not enough to show that a control (null hypothesis) exists, one needs to test if experimental observations are no better than control, in order to make strong claims about experimental data, 2) the authors do not acknowledge the work that has been done in many cases specifically to control for this null hypothesis in experimental work or to test the sensory influences on these cells, and 3) the authors do not rigorously test the degree or source of spatial tuning of their units.

    1. I SAW YUDHISHTHIRA'S PROSPERITY, I HAVE HAD NO PEACE

      The jealousy held by Duryodhana over his cousin Arjuna explores an interesting theme and has significant implications. As the eldest of the Kauravas, it makes sense that he holds a lot of resentment and envy for Arjuna, one of the Pandavas, rooted mainly in the fact that they belong to rival families. This spite is the reason he wants to play the game of dice, leading to the eventual exile of the Pandavas. In other readings, like Giwe for example, the protagonist exemplifies what it means to sacrifice for others and to help a larger group of people by putting pride aside. Here, by contrast, we can see how jealousy can cause conflict and inflict harm upon others, which good leaders avoid. Not to mention, we can see the destructive potential of jealousy and how it can grow into even larger conflicts. CC BY Ajey Sasimugunthan (contact)

    1. Alas, their ruthless fate, unhappy friends!     But in what manner, tell me, did they perish?

      Diction such as "ruthless" to describe the fate of the Persian loss highlights the harshness of the sequence of events leading to the Persians' defeat. The mention of "unhappy friends" points to the people mourning the loss of loved ones, especially in a war they were expected to win. People are often at a loss for words after a disaster, and the first thing they think about is how it happened. This is what the Persian people are going through as they learn to accept the harsh reality of their defeat; they will need to cut their losses in order to bounce back. Similar to huge upsets in sports games, the losing teams are usually at a loss for words, not understanding how they let themselves lose, especially after being in such a favorable situation. It serves as a lesson for readers that you should never count people out when their backs are against the wall, because that is when they might perform at their best and surprise you when it is least expected. CC BY Ajey Sasimugunthan (contact)

    1. Reviewer #3 (Public Review):

      Summary:

      Well-illustrated new material is documented for Acanthomeridion, a formerly incompletely known Cambrian arthropod. The formerly known facial sutures are proposed to be associated with ventral plates that the authors homologise with the free cheeks of trilobites (although also testing alternative homologies). An update of a published phylogenetic dataset permits reconsideration of whether dorsal ecdysial sutures have a single or multiple origins in trilobites and their relatives.

      Strengths:

      Documentation of an ontogenetic series makes a sound case that the proposed diagnostic characters of a second species of Acanthomeridion are variation within a single species. New microtomographic data shed light on appendage morphology that was not formerly known. The new data on ventral plates and their association with the ecdysial sutures are valuable in underpinning homologies with trilobites.

      I think the revision does a satisfactory job of reconciling the data and analyses with the conclusions drawn from them. Referee 1's valid concerns about whether a synonymy of Acanthomeridion anacanthus is justified have been addressed by the addition of a length/width scatterplot in Figure 6. Referee 2's doubts about homology between the librigenae of trilobites and ventral plates of Acanthomeridion have been taken on board by re-running the phylogenetic analyses with a coding for possible homology between the ventral plates and the doublure of olenelloid trilobites. The authors sensibly added more trilobite terminals to the matrix (including Olenellus) and did analyses with and without constraints for olenelloids being a grade at the base of Trilobita. My concerns about counting how many times dorsal sutures evolved on a consensus tree have been addressed (the authors now play it safe and say "multiple" rather than attempting to count them on a bushy topology). The treespace visualisation (Figure 9) is a really good addition to the revised paper.

      Weaknesses:

      The question of how many times dorsal ecdysial sutures evolved in Artiopoda was addressed by Hou et al (2017), who first documented the facial sutures of Acanthomeridion and optimised them onto a phylogeny to infer multiple origins, as well as in a paper led by the lead author in Cladistics in 2019. Du et al. (2019) presented a phylogeny based on an earlier version of the current dataset wherein they discussed how many times sutures evolved or were lost based on their presence in Zhiwenia/Protosutura, Acanthomeridion and Trilobita. The answer here is slightly different (because some topologies unite Acanthomeridion and trilobites). This paper is not a game-changer because these questions have been asked several times over the past seven years, but there are solid, worthy advances made here.

      I'd like to see some of the most significant figures from the Supplementary Information included in the main paper so they will be maximally accessed. The "stick-like" exopods are not best illustrated in the main paper; their best imagery is in Figure S1. Why not move that figure (or at least its non-redundant panels) as well as the reconstruction (Figure S7) to the main paper? The latter summarises the authors' interpretation that a large axe-shaped hypostome appears to be contiguous with ventral plates. The specimens depict evidence for three pairs of post-antennal cephalic appendages but it's a bit hard to picture how they functioned if there's no room between the hypostome and ventral plates. Also, a comment is required on the reconstruction involving all cephalic appendages originating against/under the hypostome rather than the first pair being paroral near the posterior end of the hypostome and the rest being post-hypostomal as in trilobites.

    1. “You picked a hard row to hoe,” she said, and went back to her chess game. The rows were tubs of ice cream. The hoe was that scoop.

      I'm happy she broke that quote down, lol

    1. Many of today's electronics are basically specialized computers, though we don't always think of them that way.

      There are a lot of other electronics that can be classified as specialized computers. Some examples are phones, smartwatches, game consoles, and TVs.

    1. Reviewer #2 (Public Review):

      The work by Yun et al. explores an important question related to post-copulatory sexual selection and sperm competition: Can females actively influence the outcome of insemination by a particular male by modulating storage and ejection of transferred sperm in response to contextual sensory stimuli? The present work is exemplary for how the Drosophila model can give detailed insight into basic mechanisms of sexual plasticity, addressing the underlying neuronal circuits on a genetic, molecular and cellular level.

      Using the Drosophila model, the authors show that the presence of other males or mated females after mating shortens the ejaculate-holding period (EHP) of a female, i.e. the time she takes until she ejects the mating plug and unstored sperm. Through a series of thorough and systematic experiments involving the manipulation of olfactory and chemogustatory neurons and genes in combination with exposure to defined pheromones, they uncover two pheromones and their sensory cells for this behavior. Exposure to the male-specific pheromone 2MC shortens EHP via female Or47b olfactory neurons, and the contact pheromone 7-T, present on males and on mated females, does so via ppk23-expressing gustatory foreleg neurons. Both compounds increase cAMP levels in a specific subset of central brain receptivity circuit neurons, the pC1b,c neurons. By employing an optogenetically controlled adenyl cyclase, the authors show that increased cAMP levels in pC1b,c neurons increase their excitability upon male pheromone exposure, decrease female EHP and increase the remating rate. This provides convincing evidence for the role of pC1b,c neurons in integrating information about the social environment and mediating not only virgin, but also mated female post-copulatory mate choice.

      Understanding context and state-dependent sexual behavior is of fundamental interest. Mate behavior is highly context-dependent. In animals subjected to sperm competition, the complexities of optimal mate choice have attracted a long history of sophisticated modelling in the framework of game theory. These models are in stark contrast to how little we understand so far about the biological and neurophysiological mechanisms of how females implement post-copulatory or so-called "cryptic" mate choice and bias sperm usage when mating multiple times.

      The strength of the paper is decrypting "cryptic" mate choice, i.e. the clear identification of physiological mechanisms and proximal causes for female post-copulatory mate choice. The discovery of peripheral chemosensory nodes and of neurophysiological mechanisms in central circuit nodes will provide a fruitful starting point to fully map the circuits for female receptivity and mate choice during the whole gamut of female life history.

    1. I DO NOT COVET BOONS. MY HUSBANDS WILL ACHIEVE THE LATTER. WHEN YUDHISHTHIRA APPROACHED DHRITARASHTRA TO TAKE LEAVE OF HIM, DURYODHANA LEARNT OF THIS AND WENT TO DHRITARASHTRA. O FATHER, THE PANDAVAS WILL NEVER FORGIVE THE INSULT TO DRAUPADI. WITH

      In the Mahabharata, Draupadi is portrayed as a powerful and assertive woman who challenges traditional gender norms. As the wife of the five Pandava brothers, Draupadi is expected to conform to the ideal of the submissive and obedient wife. However, she consistently subverts these expectations, demonstrating a strong sense of agency and autonomy. For instance, when her husbands lose her in a game of dice to their cousins, the Kauravas, Draupadi refuses to accept her fate, instead demanding justice and protection from her husbands and the gods. This bold and unyielding attitude is remarkable, given the patriarchal society in which she lives, where women are often relegated to secondary roles and expected to prioritize the needs of their husbands and families above their own.

      Draupadi's character also blurs the lines between traditional gender definitions. While she embodies many feminine traits, such as beauty, compassion, and nurturing, she also displays masculine qualities, like courage, strength, and strategic thinking. For example, during the great war, Draupadi takes on a leadership role, advising her husbands and helping to devise military strategies. This fluidity of gender expression is significant, as it challenges the binary oppositions that often govern gender roles in Indian culture. Draupadi's multifaceted personality thus expands our understanding of what it means to be a woman, and highlights the limitations of rigid gender categories.

    2. DUHSHASANA TRIED TO DRAG DRAUPADI AWAY, BUT SHE RESISTED.

      From birth I could tell Draupadi was going to be a force to be reckoned with. Born of fire with an attitude to match, I found her to be her own savior at times. She is very strong-willed and outspoken. She wanted a husband but would be no man's slave. She would not allow herself to be given away in that dice game by her chosen husband Yudhishthira. Draupadi is a powerful woman in a time when women had no voice.

    1. formulate abstract ideals

      ideas are not abstractions

      they are constructed, fashioned as pieces in a game, affordances and associative complexes

      metaphors; and even in mathematics they retain a hint of the taste, the feeling, the aesthesia of the root of the metaphor or the word they were made up from

      now for ideals, they are definitely not abstracted but

      named rich associative complexes, stories, myths; they are intentional, meaningful hashes

      certainly shaped by their rich history

      ideals, precisely because they are in the realm of processes, of hows, have quintessentially nothing to do with essences

      pieces in an emergent, evolving game

      consider also a current in the software world where it is a daily experience that things and processes are one, and they always need to be pictured, to be presented and to be interacted with as affordances

      affordances metaphors

      On the grounds that knowing abstracts from our experience, Plato takes abstraction to be the true reality. Whitehead calls this the “Fallacy of Misplaced Concreteness” A.N.Whitehead Science and the Modern World (1997) p.58.

      don't prepare the words prepare the feelings

      if you find words that resonate with you

      pay attention to the feeling the words aroused in you, how it resonated with the tacit awareness

      In this spirit, misplaced concreteness should read misplaced reification

      and ask yourself: would I use the same words, or is it really different?

      ```
      When I was a kid about half past three My ma said "Daughter, come here to me" Said things may come, and things may go But this is one thing you ought to know... Oh 't ain't what you do it's the way that you do it 'T ain't what you do it's the way that you do it 'T ain't what you do it's the way that you do it That's what gets results 'T ain't what you do it's the time that you do it 'T ain't what you do it's the time that you do it 'T ain't what you do it's the time that you do it That's what gets results You can try hard Don't mean a thing Take it easy, greasy Then your jive will swing Oh 't ain't what you do it's the place that you do it 'T ain't what you do it's the time that you do it 'T ain't what you do it's the way that you do it That's what gets results
      ```

    2. abstract concepts

      .clarify - truth is a regulative concept

      truth is not an abstract concept; it is not about the world but about how symbolic expressions of idea and intent can operate

      it is part of logic and is a value that can be assigned to a statement

      a key piece in the game of propositions, a constituent, generative concept for logic

      the concept of concept is a beautiful example of itself: a concrete imaginative construct of the mind

      articulated in language, in its etymology

      it is whatever the mind can conceive, with some specific characteristics that cannot be exhaustively described

      like most things that are most of the time not things at all, like pornography:

      you know one when you see it

    1. “According to a report by the National Bureau of Economic Research, properties within a mile of a National Football League (NFL) stadium can see rents increase by as much as 9%.”

      “The Intersection of Sports Stadiums and Real Estate – a Complex Game.” 2024. Leadingre Real Estate Companies of the World, U.S. And International Luxury Homes | LeadingRE. PropertyWeek. January 1, 2024. https://www.leadingre.com/mediaroom/2024/01/05/the-intersection-of-sports-stadiums-and-real-estate-a-complex-game#:~:text=Rental%20properties%20near%20stadiums%20are.

    1. Monopoly is not played on a cartesian plane. It's played on a directed circular graph. Therefore, it is inappropriate to use the Euclidean distance metric to compare the distances between places on the board. We must instead use minimum path lengths. Example: If we used Euclidean distance, then you would have to agree that the distance between, say, Go and Jail is equal to the distance between the Short Line and the Pennsylvania Railroad. Clearly, this is not the intention. In your example, the "nearest railroad" would be the railroad square having the shortest path from wherever you stand. With the game board representing a directed graph, there are no "backwards" paths. Thus, the distance from the pink Chance square to the Reading railroad is not 2. It's 38.
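      The forward-only distance argument above can be sketched in a few lines. The square indices follow the standard US board layout and are assumptions for illustration:

```python
BOARD_SIZE = 40  # Monopoly's board is a directed cycle of 40 squares

def path_length(src: int, dst: int) -> int:
    """Minimum path length from src to dst on the board.

    Movement is strictly forward, so the distance is the forward gap
    (dst - src) mod 40. There are no backward edges, hence no
    'two squares back' shortcuts.
    """
    return (dst - src) % BOARD_SIZE

# Assumed square indices from the standard US board layout.
GO, READING_RR, JAIL, PINK_CHANCE = 0, 5, 10, 7
RAILROADS = (5, 15, 25, 35)  # Reading, Pennsylvania, B&O, Short Line

def nearest_railroad(src: int) -> int:
    """The railroad with the shortest forward path from src."""
    return min(RAILROADS, key=lambda rr: path_length(src, rr))

print(path_length(PINK_CHANCE, READING_RR))  # 38, not 2
print(nearest_railroad(PINK_CHANCE))         # 15: Pennsylvania Railroad
```

      With these indices, the "nearest railroad" from the pink-side Chance square is Pennsylvania Railroad (8 squares ahead), while Reading Railroad, though two squares behind, is 38 squares away along the directed cycle.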
    1. Figures 6 and 7 highlight the Google Trends search interest and placement on the Nielsen top 10 streaming rankings for two streaming TV shows, both post-apocalyptic dramas based on video game franchises. Fallout was released in full on Prime Video on April 10, 2024. The Last of Us premiered on HBO (simulcast on HBO Max) on January 15, 2023 and ran until March 12. Fallout was on the Nielsen rankings for 7 weeks, spending 4 of those at #1 before falling to #7. The Last of Us was on the Nielsen rankings for 9 weeks, spending just one week at #1, but fell no lower than #4. Interest in both shows peaked when they premiered, but within a week interest in Fallout had fallen by half; it continued to decline, showing only small peaks. Interest in The Last of Us jumped weekly as new episodes aired, not dipping below the halfway mark until almost a week after the finale.

      Move the explanation before the figures.

    1. On October 30, 1948, the Donora High School Football team played through a dense smog to complete the game with hundreds of fans in the audience, despite very poor visibility.

      Citation?

    1. Welcome back.

      This is part two of this lesson. We're gonna continue immediately from the end of part one, so let's get started.

      Right at the outset, it's critical to understand that layer 2 of the networking stack uses layer 1. So in order to have an active layer 2 network, you need the layer 1 or physical layer in place and working, meaning two network interfaces and a shared medium between those as a minimum. But now on top of this, each networking stack, so remember this is all of the networking software layers, each stack, so the left and the right, now have both layer 1 and layer 2. So conceptually, layer 2 sits on top of layer 1 like this, and our games, these are now communicating using layer 2.

      Each device, because it's layer 2, now has a MAC address, a hardware address, which is owned by the network interface card on each device. So let's step through what happens now when the game on the left wants to send something to the game running on the right. Well, the left game knows the MAC address of the game on the right. Let's assume that as a starting point. So it communicates with the layer 2 software. Let's assume this is ethernet on the left. It indicates that it wants to send data to the MAC address on the right and the layer 2 software based on this, creates an ethernet frame containing the data that the game wants to send in the payload part of the frame.

      So the frame F1 has a destination of the MAC address ending 5b76, which is the MAC address on the laptop on the right and it contains within the payload part of that frame the data that the game wants to send. At this point, this is where the benefits of layer 2 begin. Layer 2 can communicate with the layer 1 part of the networking stack and it can look for any signs of a carrier signal. If any other device on the network were transmitting at this point, you would see the signal on the layer 1 network. So it's looking to sense a carrier, and this is the job of CSMA, which stands for carrier sense multiple access. In this case, it doesn't detect a carrier and so it passes the frame to layer 1. Layer 1 doesn't care what the frame is, it doesn't understand the frame as anything beyond a block of data and so it transmits this block of data onto the shared medium.

      On the right side, the layer 1 software receives the raw bit stream and it passes it up to its layer 2. Layer 2 reviews the destination MAC address of the frame. It sees that the frame is destined for itself and so it can pass that payload, that data, back to the game and so that's how these games can communicate using layer 2. So layer 2 is using layer one to transmit and receive the raw data, but it's adding on top of this MAC addresses which allow for machine-to-machine communication and in addition, it's adding this media access control.
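      The send-and-receive walkthrough above can be sketched in a few lines of Python. This is a conceptual model only: the Frame class, the function names, and the full MAC address strings are invented for illustration, not a real Ethernet implementation.

      ```python
      # Conceptual sketch of layer 2 encapsulation and de-encapsulation.
      # The Frame class and the MAC addresses are illustrative inventions.

      class Frame:
          def __init__(self, dst_mac, src_mac, payload):
              self.dst_mac = dst_mac    # destination hardware address
              self.src_mac = src_mac    # source hardware address
              self.payload = payload    # the data the game wants to send

      def layer1_transmit(frame):
          # Layer 1 treats the frame as opaque data; here we just hand it over.
          return frame

      def layer2_receive(my_mac, frame):
          # Layer 2 checks the destination MAC before passing the payload up.
          if frame.dst_mac == my_mac:
              return frame.payload      # de-encapsulate: give the game its data
          return None                   # not addressed to us: discard

      sent = Frame("3e:22:fb:b9:5b:76", "3e:22:fb:b9:5b:75", b"player moved left")
      received = layer2_receive("3e:22:fb:b9:5b:76", layer1_transmit(sent))
      print(received)  # b'player moved left'
      ```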

      So let's look at an example of how this works. If we assume that at the same time that the left machine was transmitting, the machine on the right attempted to do the same. So layer 2 works with layer 1 and it checks for a carrier on the shared physical medium. If the left machine is transmitting, which we know it is, the carrier is detected and layer 2 on the right simply waits until the carrier is not detected. So it's layer 2 which is adding this control. Layer 1 on its own would simply transmit and cause a collision, but layer 2 checks for any carrier before it instructs layer 1 to transmit. When the carrier isn't detected anymore, then layer 2 sends the frame down to layer 1 for transmission. Layer 1 just sees it as data to send and it transmits it across the physical medium. It's received at the left side device. Layer 1 sends this raw data to layer 2. It can see that it's the intended destination and so it sends the data contained in the frame payload back to the game.

      Now, I want to reiterate a term which you need to remember, and that term is encapsulation. This is the process of taking some data, in this case game data, and wrapping it in something else. In this case, the game data is encapsulated inside a frame. At each side, before giving the game back its intended data, the data is de-encapsulated: the payload is extracted from the frame. So this is a concept that you need to remember, because as data passes down the OSI model, it's encapsulated in more and more different components. So the transport layer does some encapsulation, the network layer does some encapsulation and the data link layer does some encapsulation. This is a process which you need to be comfortable with.

      Now, there are two more things which I want to cover on this screen, and the first is that, conceptually, anything using layer 2 to communicate sees layer 2 on the left as directly communicating with layer 2 on the right. Even though layer 2 is using layer 1 to perform the physical communication, anything which is using these layer 2 services has no visibility of that. That's something that's common in the OSI model. Anything below the point that you are communicating with is abstracted away. If you're using a web browser which functions at layer 7, you don't have to worry about how your data gets to the web server. It just works. So your web browser, which is running at layer 7, is communicating with a web server which is also running at layer 7. You don't have to worry about how this communication happens.

      The other thing which I want to cover is this scenario: what if both machines check for a carrier, detect none, and then both layer 2s instruct their layer 1 to transmit at the same time? This causes a collision. Now, layer 2 contains collision detection and that's what the CD part of CSMA/CD is for. If a collision is detected, then a jam signal is sent by all of the devices which detect it and then a random back-off occurs. The back-off is a period of time during which no device will attempt a transmission. So after this back-off period occurs, the transmission is retried. Now, because this back-off is random, hopefully it means that only one device will attempt to transmit at first and other devices will see the carrier on the network and wait before transmitting, but if we still have a collision, then this back-off is attempted again, only now with a greater period. So over time, there's less and less likelihood that you're going to have multiple devices transmitting at the same time. So this collision detection and avoidance is essential for layer 2. It's the thing that allows multiple devices to coexist on the same layer 2 network.
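      The random back-off just described can be sketched as a small function. This is a simplification of the real 802.3 scheme; the slot-count cap used here is an assumption for the example.

      ```python
      import random

      def backoff_slots(attempt, max_exponent=10):
          # After the n-th consecutive collision, wait a random number of
          # slot times in [0, 2^n - 1]. Each retry doubles the range, so
          # repeat collisions become less and less likely. The cap on the
          # exponent is illustrative, not taken from the standard.
          exponent = min(attempt, max_exponent)
          return random.randint(0, 2 ** exponent - 1)

      for attempt in range(1, 4):
          print(f"after collision {attempt}: wait {backoff_slots(attempt)} slots")
      ```

      Because each device draws its own random wait, the chance that two devices pick the same slot again shrinks with every retry.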

      Okay, so let's move on.

      So now you have an idea about layer 2 networking, let's revisit how our laptops are connected. In the previous example where I showed hubs, we had four devices connected to the same four port hub. Now, hubs are layer 1 devices. This means they don't understand frames in any way. They just see physical data. Essentially, they're a multi-port repeater. They just repeat any physical activity on one port to all of the other ports. So the top laptop sends a frame destined for the bottom laptop. The hub just sees this as raw data. It repeats this on all of the other ports and this means all of the other laptops will receive this data, which their layer 2 software will interpret as a frame. They'll all receive this frame, see that they're not the intended destination, and discard it. The bottom laptop will see that it is the intended destination and its layer 2 software will pass on the data to the game.

      Now, hubs aren't smart and this means that if the laptop on the right were to start transmitting at exactly the same time, then it would cause a collision, and this collision would be repeated on all of the other ports that the hub has connected. Using a layer 1 device like a hub doesn't prevent you from running a layer 2 network over the top of it, but it means that the hub doesn't understand layer 2 and so it behaves in a layer 1 way. You still have each device doing carrier sense multiple access, and so collisions should be reduced, but if multiple devices try to transmit at exactly the same time, then this will still cause collisions.

      What will improve this is using a switch, and a switch is a layer 2 device. It works in the same way physically as a hub, but it understands layer 2 and so it provides significant advantages. Let's review how this changes things. To keep things simple, let's keep the same design, only now we have a switch in the middle. It still has four ports and it's still connected to the same four laptops. Because they're layer 2, they now have their own hardware addresses, their MAC addresses. Now, because a switch is a layer 2 device, it has layer 2 software running inside it and so it understands layer 2. And because of that it maintains what's called a MAC address table. Switches over time learn what's connected to each port, so the device MAC addresses which are connected to each port. When a switch sees frames, it can interpret them and see the source and destination MAC addresses. So over time with this network, the MAC address table will get populated with each of our devices. So the switch will store the MAC addresses it sees on a port and the port itself. This is generally going to happen the first time each of the laptops sends a frame which the switch receives. It will see the source MAC address on the frame and it will update the MAC address table which it maintains.

      Now let's say the MAC address table is populated and the top laptop sends a frame which is intended for the left laptop. Well, the switch will see the frame arrive at the port that the top laptop is connected to, at which point one of two things will happen. If the switch doesn't know which port the destination MAC address is on, it forwards the frame to all of the other ports. If it does know which port the destination MAC address is attached to, it forwards the frame to that one port only. Switches are intelligent. They aren't just repeating the physical level. They interpret the frames and make decisions based on the source and destination MAC addresses of those frames. So switches store and forward frames. They receive the frame, store it, and then forward it based on the MAC address table. Now this has another benefit, because it's not just repeating like a dumb layer 1 device, it means that it won't forward collisions. In fact, each port on the switch is a separate collision domain. Because each port is a separate collision domain, the only two things which can transmit at the same time are the device and the port it's connected to. So if there is a collision, it will be limited to that one port only. The switch will not forward that corrupted data through to any of its other ports, because it only forwards valid frames. The switch isn't forwarding the physical layer, it's dealing with frames only. It receives a frame. If it's valid, it stores it, it reviews it and then it forwards it.
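      The learn-and-forward behaviour described above can be modelled with a dictionary. The class below is a toy model to show the idea, not how real switch hardware works; the port numbers and shortened MAC addresses are illustrative.

      ```python
      # Toy model of a switch's MAC address table: learn the source MAC on
      # the arrival port, then forward to the known port or flood everywhere
      # else. Ports and MAC addresses here are invented for illustration.

      class Switch:
          def __init__(self, ports):
              self.ports = ports
              self.mac_table = {}                  # MAC address -> port

          def receive(self, in_port, src_mac, dst_mac):
              self.mac_table[src_mac] = in_port    # learn the sender's port
              if dst_mac in self.mac_table:
                  return [self.mac_table[dst_mac]]               # one port only
              return [p for p in range(self.ports) if p != in_port]  # flood

      sw = Switch(ports=4)
      print(sw.receive(0, "aa:aa", "bb:bb"))  # destination unknown: flood -> [1, 2, 3]
      print(sw.receive(1, "bb:bb", "aa:aa"))  # "aa:aa" was learned on port 0 -> [0]
      print(sw.receive(0, "aa:aa", "bb:bb"))  # "bb:bb" now known on port 1 -> [1]
      ```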

      Now, layer 2 is the foundation for all networks which you use day-to-day. It's how your wired networks work. It's how wifi networks work. It's how the internet works, which is basically a huge collection of interconnected layer 2 networks. The name itself, the internet, comes from inter-network: a network of networks. So these layer 2 networks are all connected together to form the internet.

      So in summary, what position are we in by adding layer 2?

      Well, we have identifiable devices. For the first time, we can uniquely address frames to a particular device using MAC addresses. It allows for device to device communication rather than the shared media which layer 1 offers. We also have media access control so devices can share access to physical media in a nice way avoiding crosstalk and collisions. But we also have the ability to detect collisions and so if they do occur, we have a way to correct or work around them. So with all of that, we can do unicast communications which are one-to-one communications and we can do broadcast communications, which are one-to-all communications. And as long as we replace layer 1 hubs with layer two switches, which are like hubs with superpowers, then we gain the ability to scale to a much better degree and we avoid a lot of the collisions because switches store and forward rather than just repeating everything.

      Now, layer 2 really does provide crucial functionality. Everything from this point onwards builds on layer 2, so it's critical that you understand it. So if necessary, then re-watch this video, because from now on, you need to understand layer 2 at a real fundamental level. Now, this seems like a great point to take a break, so go ahead and complete this video and when you're ready, join me in the next part of this series where we'll be looking at layer 3, which is the network layer.

    1. Welcome back.

      And in this part of the lesson series, we're going to look at layer two of the OSI model, which is the data link layer.

      Now the data link layer is one of the most critical layers in the entire OSI seven-layer model. Everything above this point relies on the device-to-device communication which the data link layer provides. So when you are sending or receiving data, to or from the internet, just be aware that the data link layer is supporting the transfer of that data. So it's essential that you understand it.

      Now, this is going to be one of the longer parts of this lesson series, because layer two actually provides a significant amount of functionality.

      Now, before I step through the architecture of layer two, we have to start with the fundamentals. Layer two, which is the data link layer, runs over layer one. So a layer two network requires a functional layer one network to operate, and that's something which is common throughout the OSI model. Higher layers build on lower layers adding features and capabilities. A layer two network can run on different types of layer one networks, so copper, fiber, wifi, and provide the same capabilities.

      Now there are different layer two protocols and standards for different situations, but for now, we're going to focus on ethernet which is what most local networks use. So things in your office or things in your home.

      Now, layer two, rather than being focused on physical wavelengths or voltages, introduces the concept of frames. And frames are a format for sending information over a layer two network. Layer two also introduces a unique hardware address, called a MAC address, for every device on a network. This hardware address is a hexadecimal address. It's 48 bits long and it looks like this: 3e:22 and so on. The important thing to understand is that a MAC address, generally, for physical networking, is not software assigned. The address is uniquely attached to a specific piece of hardware.

      A MAC address is formed of two parts, the OUI, which is the organizationally unique identifier, and this is assigned to companies who manufacture network devices. So each of these companies will have a separate OUI. The second part of the MAC address is then network interface controller or NIC specific. And this means together the MAC address on a network card should be globally unique.
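      Splitting a MAC address into these two parts is straightforward to show in code. The address below is made up for illustration (it extends the `3e:22` fragment from earlier into a full, fictional address).

      ```python
      def split_mac(mac):
          # A 48-bit MAC address is six hexadecimal octets: the first three
          # form the OUI (the manufacturer), the last three are NIC-specific.
          octets = mac.split(":")
          assert len(octets) == 6, "a MAC address has six octets (48 bits)"
          return ":".join(octets[:3]), ":".join(octets[3:])

      oui, nic = split_mac("3e:22:fb:b9:5b:76")   # illustrative address
      print(oui)  # 3e:22:fb  (organizationally unique identifier)
      print(nic)  # b9:5b:76  (NIC-specific part)
      ```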

      Now, layer two, as I've mentioned, uses layer one. This means that a layer two, or ethernet frame, can be transmitted onto the shared physical medium by layer one. This means that it's converted into voltages, RF, or light. It's sent across the medium and then received by other devices, also connected to that shared medium.

      It's important to understand this distinction. Layer two provides frames, as well as other things which I'll cover soon. And layer one handles the physical transmission and reception onto and from the physical shared medium. So when layer one is transmitting a frame onto the physical medium, layer one doesn't understand the frame. Layer one is simply transmitting raw data onto that physical medium.

      Now a frame, which is the thing that layer two uses for communication, is a container of sorts. It has a few different components. The first part is the preamble and start frame delimiter. And the function of this is to allow devices to know that it's the start of the frame, so they can identify the various parts of that frame. You need to know where the start of a frame is to know where the various parts of that frame start.

      Next comes the destination and the source MAC addresses. So all devices on a layer two network have a unique MAC address, and a frame can be sent to a specific device on a network by putting its MAC address in the destination field, or you can put all Fs if you want to send the frame to every device on the local network. And this is known as a broadcast. Now the source MAC address field is set to the device address of whatever is transmitting the frame and this allows it to receive replies.

      Next is EtherType, and this is commonly used to specify which layer three protocol is putting its data inside a frame. Just as layer two uses layer one to move raw bitstream data across the shared physical medium, layer three uses layer two frames for device-to-device communication on a local network. And so when you are receiving a frame at the other side of a communication, you need to know which layer three protocol originally put data into that frame. A common example might be IP or the internet protocol. So this is what the EtherType, or ET, field is used for.

      Now, these three fields are commonly known as the MAC header. They control the frame destination, they indicate the source and specify its function. After the header is the payload, and this is anywhere from 46 to 1500 bytes in size for standard frames. It contains the data that the frame is sending. The data is generally provided by the layer three protocol and the protocol which is being used, as I just mentioned, is indicated within the EtherType or ET field.

      Now this process is called encapsulation. You have something which layer three generates, often this is an IP Packet, and this is put inside an ethernet frame. It's encapsulated in that frame. The frame delivers that data to a different layer two destination. On the other side, the frame is analyzed, and the layer three packet is extracted and given back to layer three at the destination side. The EtherType field is used to determine which layer three protocol receives this at the destination. And then finally at the end of the frame, is the frame check sequence, which is used to identify any errors in the frame. It's a simple CRC check. It allows the destination to check if corruption has occurred or not.
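      The frame layout just described can be sketched with Python's standard struct and zlib modules. This is illustrative rather than byte-perfect 802.3: the preamble is omitted, the MAC addresses are invented, and CRC-32 stands in for the real frame check sequence algorithm.

      ```python
      import struct
      import zlib

      def mac_bytes(mac):
          # Convert a colon-separated MAC string into its six raw bytes.
          return bytes(int(octet, 16) for octet in mac.split(":"))

      def build_frame(dst, src, ethertype, payload):
          # MAC header: destination MAC, source MAC, EtherType (0x0800 = IPv4).
          header = mac_bytes(dst) + mac_bytes(src) + struct.pack("!H", ethertype)
          body = header + payload
          fcs = struct.pack("!I", zlib.crc32(body))   # frame check sequence
          return body + fcs

      def check_frame(frame):
          # The destination recomputes the checksum to detect corruption.
          body, fcs = frame[:-4], frame[-4:]
          return struct.pack("!I", zlib.crc32(body)) == fcs

      frame = build_frame("3e:22:fb:b9:5b:76", "3e:22:fb:b9:5b:75",
                          0x0800, b"an IP packet would go here")
      print(check_frame(frame))                       # True: frame intact
      corrupted = frame[:20] + b"X" + frame[21:]      # alter one payload byte
      print(check_frame(corrupted))                   # False: corruption detected
      ```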

      So that's the frame and it's an important thing to understand if you are to understand how all of the bits of layer one, two, and three fit together. So layer two frames are generated by the layer two software at the source side. They're passed to layer one. That raw data is transmitted onto the physical medium. It's taken off the physical medium at the destination side. It's passed to its layer two software and that can interpret the frame and pass that onto layer three, which can then interpret that data.

      Now as a reminder, this is the problem that we have with a purely layer one network implementation. We have two devices running a game, a laptop on the left and a laptop on the right. And these are connected using a single network cable, a shared physical medium. Now, as I mentioned earlier in this lesson series, layer one provides no media access control. The layer one software running on the network card will simply transmit any data it receives onto the physical medium. So if the game on the left sends some data, it will be transmitted onto the medium and it will be seen by the device on the right. The problem is that the laptop on the right could also be sending data at the same time. This means the electrical signals will overlap and interfere with each other and this is known as a collision and it impacts both pieces of data. It corrupts both.

      This is one of the problems of layer one, which is solved by layer two, and layer two provides controlled access to the physical medium. Now let's explore how.

      Okay, so this is the end of part one of this lesson. It was getting a little bit on the long side and so I wanted to add a break. It's an opportunity just to take a rest or grab a coffee. Part two will be continuing immediately from the end of part one. So go ahead, complete the video, and when you're ready, join me in part two.

    1. word of certain other walking dead.” He looked at Bran when he said that, and at Summer stretched out beside him. “As to that Wall,” the man went on, “it’s not a place that I’d be going. The Old Bear took the Watch into the haunted woods, and all that come back was his ravens, with hardly a message between them. Dark wings, dark words, me mother used to say, but when the birds fly silent, seems to me that’s even darker.” He poked at the fire with his stick. “It was different when there was a Stark in Winterfell. But the old wolf’s dead and young one’s gone south to play the game of thrones, and all that’s left us is the ghosts.”

      he knows too much

    2. Megga couldn’t sing, but she was mad to be kissed. She and Alla played a kissing game sometimes, she confessed, but it wasn’t the same as kissing a man, much less a king.

      oh! um

    Annotators

    1. Reviewer #2 (Public Review):

      Wnt signaling is the name given to a cell-communication mechanism that cells employ to inform on each other's position and identity during development. In cells that receive the Wnt signal from the extracellular environment, intracellular changes are triggered that cause the stabilization and nuclear translocation of β-catenin, a protein that can turn on groups of genes referred to as Wnt targets. Typically these are genes involved in cell proliferation. Genetic mutations that affect Wnt signaling components can therefore affect tissue expansion. Loss of function of APC is a drastic example: APC is part of the β-catenin destruction complex, and in its absence, β-catenin protein is not degraded and constitutively turns on proliferation genes, causing cancers in the colon and rectum. And here lies the importance of the finding: β-catenin has for long been considered to be regulated almost exclusively by tuning its protein turnover. In this article, a new aspect is revealed: Ctnnb1, the gene encoding for β-catenin, possesses tissue-specific regulation with transcriptional enhancers in its vicinity that drive its upregulation in intestinal stem cells. The observation that there is more active β-catenin in colorectal tumors not only because the broken APC cannot degrade it, but also because transcription of the Ctnnb1 gene occurs at higher rates, is novel and potentially game-changing. As genomic regulatory regions can be targeted, one could now envision that mutational approaches aimed at dampening Ctnnb1 transcription could be a viable additional strategy to treat Wnt-driven tumors.

    2. Author response:

      eLife assessment 

      This important study identifies a novel gastrointestinal enhancer of Ctnnb1. The authors present convincing evidence to support their claim that the dosage of Wnt/β-catenin signaling controlled by this enhancer is critical to intestinal epithelia homeostasis and the progression of colorectal cancers. The study will be of interest to biomedical researchers interested in Wnt signaling, tissue-specific enhancers, intestinal homeostasis, and colon cancer. 

      We greatly appreciate editors’ and reviewers’ extensive and constructive comments and suggestions. We will do our utmost to revise the manuscript accordingly.

      Public Reviews: 

      Reviewer #1 (Public Review)

      Summary: 

      Ctnnb1 encodes β-catenin, an essential component of the canonical Wnt signaling pathway. In this study, the authors identify an upstream enhancer of Ctnnb1 responsible for the specific expression level of β-catenin in the gastrointestinal tract. Deletion of this promoter in mice and analyses of its association with human colorectal tumors support that it controls the dosage of Wnt signaling critical to the homeostasis in intestinal epithelia and colorectal cancers. 

      Strengths: 

      This study has provided convincing evidence to demonstrate the functions of a gastrointestinal enhancer of Ctnnb1 using combined approaches of bioinformatics, genomics, in vitro cell culture models, mouse genetics, and human genetics. The results support the idea that the dosage of Wnt/β-catenin signaling plays an important role in the pathophysiological functions of intestinal epithelia. The experimental designs are solid and the data presented are of high quality. This study significantly contributes to the research fields of Wnt signaling, tissue-specific enhancers, and intestinal homeostasis. 

      Weaknesses: 

      One weakness of this manuscript is an insufficient discussion on the Ctnnb1 enhancers for different tissues. For example, do specific DNA motifs and transcriptional factors contribute to the tissue-specificity of the neocortical and gastrointestinal enhancers? It is also worth discussing the potential molecular mechanisms controlling the gastrointestinal expression of Ctnnb1 in different species since the identified human and mouse enhancers don't seem to share significant similarities in primary sequences. 

      We agree with the reviewer that the manuscript lacks sufficient discussions on how enhancers control cell-type-specific expressions of target genes, which is one of the most important questions in the field of transcription regulation. Equally important are the common and species-specific features of this regulation. In general, motif composition, location, order, and affinity with trans-factors within enhancers are four key elements. We will elaborate the point in follow-up revision.

      Reviewer #2 (Public Review): 

      Wnt signaling is the name given to a cell-communication mechanism that cells employ to inform on each other's position and identity during development. In cells that receive the Wnt signal from the extracellular environment, intracellular changes are triggered that cause the stabilization and nuclear translocation of β-catenin, a protein that can turn on groups of genes referred to as Wnt targets. Typically these are genes involved in cell proliferation. Genetic mutations that affect Wnt signaling components can therefore affect tissue expansion. Loss of function of APC is a drastic example: APC is part of the β-catenin destruction complex, and in its absence, β-catenin protein is not degraded and constitutively turns on proliferation genes, causing cancers in the colon and rectum. And here lies the importance of the finding: β-catenin has for long been considered to be regulated almost exclusively by tuning its protein turnover. In this article, a new aspect is revealed: Ctnnb1, the gene encoding for β-catenin, possesses tissue-specific regulation with transcriptional enhancers in its vicinity that drive its upregulation in intestinal stem cells. The observation that there is more active β-catenin in colorectal tumors not only because the broken APC cannot degrade it, but also because transcription of the Ctnnb1 gene occurs at higher rates, is novel and potentially game-changing. As genomic regulatory regions can be targeted, one could now envision that mutational approaches aimed at dampening Ctnnb1 transcription could be a viable additional strategy to treat Wnt-driven tumors. 

      We appreciate the reviewer for acknowledging the potential significance represented by the manuscript. We also recognize that targeting genomic regulatory regions to dampen Ctnnb1 transcription could be a promising strategy for treating Wnt-driven tumors, including many colorectal carcinomas. However, we would like to point out that there are significant technical challenges associated with AAV delivery to the GI epithelium, including the hostile environment, immune response, and low delivery efficiency.

      Reviewer #3 (Public Review): 

      The authors of this paper identify an enhancer upstream of the Ctnnb1 gene that selectively enhances expression in intestinal cells. This enhancer sequence drives expression of a reporter gene in the intestine and knockout of this enhancer attenuates Ctnnb1 expression in the intestine while protecting mice from intestinal cancers. The human counterpart of this enhancer sequence is functional and involved in tumorigenesis. Overall, this is an excellent example of how to fully characterize a cell-specific enhancer. The strength of the study is the thorough nature of the analysis and the relevance of the data to the development of intestinal tumors in both mice and humans. A minor weakness is that the loss of this enhancer does not completely compromise the expression of the Ctnnb1 gene in the intestine, suggesting that other elements are likely involved. Adding some discussion on that point would be helpful.

      We are quite encouraged by the reviewer’s positive comments. We agree with the reviewer that other cis-regulatory elements may be involved in the transcription of Ctnnb1 within the GI epithelium. It is also possible that the basal transcription of Ctnnb1 within the GI epithelium is relatively high, and that enhancers can only boost transcription within a certain range. We will discuss these possibilities in the revision.

    1. Players are organized into small groups with other players of similar skill levels, engaging in training sessions that are more tailored and concentrated on their needs. This program will allow players to fill any gaps in their game that team training might take a longer period of time to cover. Complement your current training routine with additional sessions that run weekly. We work with players of all teams and skill levels.

      This section introduces one of the program's concepts, bio-banding, and is quite descriptive. However, it could be elaborated further to give readers more detail about what exactly this program does.

    1. Welcome back. In this lesson, I want to introduce another core AWS service, the simple storage service known as S3. If you use AWS in production, you need to understand S3. This lesson will give you the very basics because I'll be deep diving into a specific S3 section later in the course, and the product will feature constantly as we go. Pretty much every other AWS service has some kind of interaction with S3. So let's jump in and get started.

      S3 is a global storage platform. It's global because it runs from all of the AWS regions and can be accessed from anywhere with an internet connection. It's a public service. It's also region-based, because your data is stored in a specific AWS region at rest. So when it's not being used, it's stored in a specific region. And it never leaves that region unless you explicitly configure it to. S3 is regionally resilient, meaning the data is replicated across availability zones in that region. S3 can tolerate the failure of an AZ, and it also has some ability to replicate data between regions, but more on that in the S3 section of the course.

      Now S3 might initially appear confusing. If you utilize it from the UI, you appear not to have to select a region. Instead, you select the region when you create things inside S3, which I'll talk about soon. S3 is a public service, so it can be accessed from anywhere as long as you have an internet connection. The service itself runs from the AWS public zone. It can cope with unlimited data amounts and it's designed for multi-user usage of that data. So millions of users could be accessing cute cat pictures added by the Animals for Life Rescue Officers. S3 is perfect for hosting large amounts of data. So think movies or audio distribution, large scale photo storage like stock images, large textual data or big data sets. It could be just as easily used for millions or billions of IOT devices or to store images for a blog. It scales from nothing to near unlimited levels.

      Now S3 is economical, it's a great value service for storing and allowing access to data. And it can be accessed using a variety of methods. There's the GUI, you can use the command line, the AWS APIs or even standard methods such as HTTP. I want you to think of S3 as the default storage service in AWS. It should be your default starting point unless your requirement isn't delivered by S3. And I'll talk more about the limitations and use cases later in this lesson.

      S3 has two main things that it delivers: objects and buckets. Objects are the data that S3 stores: your cat picture, the latest episode of Game of Thrones, which you have stored legally, of course, or large scale datasets showing the migration of the koala population in Australia after a major bushfire. Buckets are containers for objects. It's buckets and objects that I want to cover in this lesson as an introduction to the service.

      So first, let's talk about objects. An object in S3 is made up of two main components and some associated metadata. First, there is the object key. For now, you can think of the object key as similar to a file name. The key identifies the object in a bucket. So if you know the object key and the bucket, then you can uniquely access the object, assuming that you have permissions. Remember, by default, even for public services, there is no access in AWS initially, except for the account root user.

      Now, the other main component of an object is its value. And the value is the data or the contents of the object. In this case, a sequence of binary data, which represents a koala in his house. In this course, I want to avoid suggesting that you remember pointless values. Sometimes though, there are things that you do need to commit to memory. And this is one of those times. The value of an object, in essence, how large the object is, can range from zero bytes up to five terabytes in size. So you can have an empty object or you can have one that is a huge five TB. This is one of the reasons why S3 is so scalable and so useful in a wide range of situations because of this range of sizes for objects.
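
The key/value structure just described can be sketched as a minimal model. This is illustrative only, not part of any AWS SDK, and the exact byte count used for the 5 TB limit is an assumption:

```python
# Illustrative model of an S3 object: a key (its name) plus a value (its data).
# MAX_OBJECT_SIZE models the 5 TB upper bound from the lesson; the exact
# byte count (5 TiB here) is an assumption for this sketch.
MAX_OBJECT_SIZE = 5 * 1024 ** 4

def validate_object(key: str, value) -> bool:
    """Return True if the value fits S3's allowed size range (0 bytes to 5 TB)."""
    return 0 <= len(value) <= MAX_OBJECT_SIZE

print(validate_object("koala1.JPEG", b""))             # empty objects are allowed
print(validate_object("koala1.JPEG", b"\x00" * 1024))  # so are small ones
```

An empty object and a full five-terabyte object are both valid; anything larger would be rejected.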

      Now, the other components of an object aren't that important to know at this stage, but just so you hear the terms that I'll use later: objects also have a version ID, metadata, some access control, as well as sub-resources. Don't worry about what they do for now, I'll be covering them all later. For this lesson, just try to commit to memory what an object is, what components it has and the size range for an object.

      So now that we've talked about objects, let's move on and look at buckets. Buckets are created in a specific AWS region. And let's use Sydney or ap-southeast-2 as an example. This has two main impacts. Firstly, your data that's inside a bucket has a primary home region. And it never leaves that region, unless you as an architect or one of your system admins configures that data to leave that region. That means that S3 has stable and controlled data sovereignty. By creating a bucket in a region, you can control what laws and rules apply to that data. What it also means is that the blast radius of a failure is that region.

      Now this might be a new term. What I mean by blast radius is that if a major failure occurs, say a natural disaster or a large scale data corruption, the effect of that will be contained within the region. Now a bucket is identified by its name, the bucket name in this case, koala data. A bucket name needs to be globally unique. So that's across all regions and all accounts of AWS. If I pick a bucket name, in this case, koala data, nobody else can use it in any AWS account. Now making a point of stressing this as it often comes up in the exam. Most AWS things are often unique in a region or unique in your account. For example, I can have an IAM user called Fred and you can also have an IAM user called Fred. Buckets though are different, with buckets, the name has to be totally unique, and that's across all regions and all AWS accounts. I've seen it come up in the exam a few times. So this is definitely a point to remember.

      Now buckets can hold an unlimited number of objects. And because objects can range from zero to five TB in size, that essentially means that a bucket can hold from zero to unlimited bytes of data. It's an infinitely scalable storage system. Now one of the most important things that I want to say in this lesson is that as an object storage system, an S3 bucket has no complex structure. It's flat. All objects are stored within the bucket at the same level. So this isn't like a file system where you can truly have files within folders, within folders. Everything is stored in the bucket at the root level.

      But if you do a listing on an S3 bucket, you will see what you think are folders. Even the UI presents this as folders. But it is important for you to know at this stage that that's not how it actually is. Imagine a bucket where you see three image files, koala one, two and three dot JPEG. The first thing is that inside S3, there's no concept of file type based on the name. These are just three objects where the object keys are koala1.JPEG, koala2.JPEG and koala3.JPEG. Now, folders in S3 are represented when we have object names that are structured like this: the objects have keys of old/koala1.JPEG, old/koala2.JPEG and old/koala3.JPEG. When we create object names like this, S3 presents them in the UI as a folder called old, and inside that folder, we've got koala1.JPEG, koala2.JPEG and koala3.JPEG.
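
The flat key space can be sketched in Python. This is a simplified imitation of an S3-style listing with a prefix and delimiter, not the real API; function names are illustrative:

```python
# S3 keys are flat strings; "folders" exist only as shared key prefixes.
keys = [
    "koala1.JPEG",
    "old/koala1.JPEG",
    "old/koala2.JPEG",
    "old/koala3.JPEG",
]

def list_with_prefix(keys, prefix="", delimiter="/"):
    """Mimic an S3-style listing: return (common_prefixes, objects)."""
    prefixes, objects = set(), []
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything up to the next delimiter becomes a "folder".
            prefixes.add(prefix + rest.split(delimiter)[0] + delimiter)
        else:
            objects.append(key)
    return sorted(prefixes), objects

print(list_with_prefix(keys))          # (['old/'], ['koala1.JPEG'])
print(list_with_prefix(keys, "old/"))  # ([], ['old/koala1.JPEG', 'old/koala2.JPEG', 'old/koala3.JPEG'])
```

The first listing shows one "folder" (old/) plus one object at the root, even though all four keys sit at the same level in the bucket.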

      Now nine out of 10 times, this detail doesn't matter, but I want to make sure that you understand how it actually works. Folders are often referred to as prefixes in S3 because they're part of the object names. They prefix the object names. As you move through the various stages of your AWS learnings, this is gonna make more and more sense. And I'm gonna demonstrate this in the next lesson, which is a demo lesson.

      Now to summarize: buckets are just containers, they're stored in a region, and for S3, they're generally where a lot of permissions and options are set. So remember that buckets are generally the default place where you should go to configure the way S3 works.

      Now, I want to cover a few summary items and then step through some patterns and anti-patterns for S3, before we move to the demo. But first, an exam powerup. These are things that you should try to remember and they will really help in the exam. First, bucket names are globally unique. Remember that one because it will really help in the exam. I've seen a lot of times where AWS have included trick questions which test your knowledge of this one. If you get an error and you can't create a bucket, a lot of the time it's because somebody else already has that bucket name.

      Now bucket names do have some restrictions. They need to be between 3 and 63 characters, all lower case and with no underscores. They have to start with a lowercase letter or a number, and they can't be formatted like IP addresses. So you can't have 1.1.1.1 as your bucket name. Now there are some limits in terms of buckets. Limits are often things that you don't need to remember for the exam, but this is one of the things that you do. There is a limit of a hundred buckets that you can have in an AWS account. This is not per region, it's for the entire account. It's a soft limit of 100, which you can increase all the way up to a hard limit of 1,000 using support requests.
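
The naming rules from this paragraph can be sketched as a validator. This is a simplification: the full AWS rule set has more cases (and also permits dots, allowed here), so treat it as illustrative:

```python
import re

# Sketch of the bucket-name rules described in the lesson: 3-63 characters,
# lowercase letters/digits (plus hyphens and dots), no underscores, must
# start with a lowercase letter or number, and not be formatted like an IP.
NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")
IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def valid_bucket_name(name: str) -> bool:
    return bool(NAME_RE.match(name)) and not IP_RE.match(name)

print(valid_bucket_name("koaladata"))   # True
print(valid_bucket_name("Koala_Data"))  # False: uppercase and underscore
print(valid_bucket_name("1.1.1.1"))     # False: formatted like an IP address
```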

      Now this matters for architectural reasons. It's not just an arbitrary number. If you're designing a system which uses S3 and users of that system store data inside S3, you can't implement a solution that has one bucket per user if you have anywhere near this number of users. So if you have anywhere from a hundred to a thousand users or more of a system, then you're not gonna be able to have one bucket per user because you'll hit this hard limit. You tend to find this in the exam quite often: it'll ask you how to structure a system which has potentially thousands of users. What you can do is take a single bucket and divide it up using prefixes, so those folders that aren't really folders, and in that way, you can have multiple users using one bucket. Remember the 100/1000: it's a 100 soft limit and a 1000 hard limit.
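
The single-bucket, prefix-per-user pattern can be sketched like this. The `users/<user-id>/` key layout is a hypothetical convention for illustration, not something AWS mandates:

```python
# One bucket, many users: each user's data lives under a per-user prefix,
# so the 100/1000 bucket limit never constrains the number of users.
def user_key(user_id: str, filename: str) -> str:
    return f"users/{user_id}/{filename}"

def keys_for_user(keys, user_id: str):
    prefix = f"users/{user_id}/"
    return [k for k in keys if k.startswith(prefix)]

store = [user_key("alice", "a.txt"), user_key("bob", "b.txt")]
print(keys_for_user(store, "alice"))  # ['users/alice/a.txt']
```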

      You aren't limited in terms of objects in a bucket, you can have zero to an infinite number of objects in a bucket. And each object can range in size from zero bytes to 5 TB in size. And then finally, in terms of the object structure, an object consists of a key, which is its name and then the value, which is the data. And there are other elements to an object which I'll discuss later in the course, but for now, just remember the two main components, the key and the value. Now, all of these points are worth noting down, maybe make them into a set of flashcards and you can use them later on during your studies.

      S3 is pretty straightforward in that there tend to be a number of things that it's really good at and a fairly small set of things that it's not suitable for. So let's take a look. S3 is an object storage system. It's not a file storage system, and it's not a block storage system, which are the other main types. What this means is that if you have a requirement where you're accessing the whole of these entities, so the whole of an object, an image, an audio file, and you're doing all of that at once, then it's a candidate for object storage. If you have a Windows server which needs to access a network file system, then S3 isn't suitable; that needs to be file-based storage. S3 has no file system, it's flat. So you can't browse to an S3 bucket like you would a file share in Windows. Likewise, it's not block storage, which means you can't mount it as a mount point or a volume on Linux or Windows. When you're dealing with virtual machines or instances, you mount block storage to them. Block storage is basically virtual hard disks. In EC2, you have EBS, which is block storage. Block storage is generally limited to one thing accessing it at a time, one instance in the case of EBS. S3 doesn't have that single-user limitation, but since it's not block storage, you can't mount it as a drive.

      S3 is great for large scale data storage or distribution. Many examples I'll show you throughout the course will fit into that category. And it's also good for offloading things. If you have a blog with lots of posts and lots of images or audio or movies, instead of storing that data on an expensive compute instance, you can move it to an S3 bucket and configure your blog software to point your users at S3 directly. You can often shrink your instance by offloading data onto S3. And don't worry, I'll be demoing this later in the course. Finally, S3 should be your default thought for any input to AWS services or output from AWS services. Most services which consume data and or output data can have S3 as an option to take data from or put data to when it's finished. So if you're faced with any exam questions and there's a number of options on where to store data, S3 should be your default. There are plenty of AWS services which can output large quantities of data or ingest large quantities of data. And most of the time, it's S3, which is an ideal storage platform for that service.

      Okay time for a quick demo. And in this demo, we're just gonna run through the process of creating a simple S3 bucket, uploading some objects to that bucket, and demonstrating exactly how the folder functionality works inside S3. And I'm also gonna demonstrate a number of elements of how access and permissions work with S3. So go ahead and complete this video, and when you're ready join me in the next, which is gonna be a demo of S3.

    1. Welcome back. In this lesson, I want to cover the architecture of public AWS services and private AWS services. This is foundational to how AWS works, from a networking and security perspective. The differences might seem tiny, but understanding them fully will help you grasp more complex network and security products or architectures throughout your studies.

      AWS services can be categorized into two main types: public services and private services. If you don’t have much AWS experience, you might assume that a public service is accessible to everyone, and a private service isn't. However, when you hear the terms AWS private service and AWS public service, it’s referring to networking only. A public service is accessed using public endpoints, such as S3, which can be accessed from anywhere with an internet connection. A private AWS service runs within a VPC, so only things within that VPC, or connected to that VPC, can access the service. For both, there are permissions as well as networking. Even though S3 is a public service, by default, an identity other than the account root user has no authorization to access that resource. So, permissions and networking are two different considerations when talking about access to a service. For this lesson, it's the networking which matters.

      When thinking about any sort of public cloud environment, most people instinctively think of two parts: the internet and private network. The internet is where internet-based services operate, like online stores, Gmail, and online games. If you're at home playing an online game or watching training videos, you’re connecting to the internet via an internet service provider. So this is the internet zone. Then we have the private network. If you’re watching this video from home, your home network is an example of a private network. Only things directly connected to a network port within your house or people with your WiFi password can operate in your personal, private network zone.

      AWS also has private zones called Virtual Private Clouds (VPCs). These are isolated, so VPCs can't communicate with each other unless you allow it, and nothing from the internet can reach these private networks unless you configure it. Services like EC2 instances can be placed into these private zones and, just like with your home network, they can only access the internet, and the internet can only access them, if you allow it and configure it.

      Many people think AWS is architected with just two network zones: the internet and private zones. But there's actually a third zone: the AWS public zone, which runs between the public internet and the AWS private zone networks. This is not on the public internet but connected to it. The distinction might seem irrelevant, but it matters as you learn more about advanced AWS networking. The AWS public zone is where AWS public services operate, like S3.

      To summarize, there are three different network zones: the public internet, the AWS private zone (where VPCs run), and the AWS public zone (where AWS public services operate). If you access AWS public services from anywhere with a public internet connection, your communication uses the public internet for transit to and from this AWS public zone. This is why you can access AWS public services from anywhere with an internet connection because the internet is used to carry packets from you to the AWS public zone and back again.

      Later in the course, I will cover how you can configure virtual or physical connections between on-premises networks and AWS VPCs, allowing private networks to connect together if you allow it. You can also create and attach an internet gateway to a VPC, allowing private-zone resources to access the public internet if they have a public IP address. This also allows access to public AWS services like S3 without touching the public internet, communicating through the AWS public zone.
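
As a rough sketch of the internet gateway part of this in CloudFormation terms (resource names and the CIDR block are illustrative assumptions, not from the lesson):

```yaml
Resources:
  AppVpc:                     # a network in the AWS private zone
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  AppIgw:                     # gateway between the VPC and the public internet
    Type: AWS::EC2::InternetGateway
  AttachIgw:                  # attaching it is the explicit configuration step
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref AppVpc
      InternetGatewayId: !Ref AppIgw
```

Until the attachment exists, nothing in the VPC can reach the public internet or be reached from it, which is the default isolation described above.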

      Private resources, such as EC2 instances, can be given a public IP address, allowing them to be accessed from the public internet. Architecturally, this projects the EC2 instance into the public zone, enabling communication with the public internet. Understanding the three different network zones—the public internet, the AWS public zone, and the AWS private zone—is crucial for doing well in the real world and in specific exams. These three network zones become critical as you learn more advanced networking features of AWS.

      That’s everything for this lesson. Complete the video, and when you’re ready, I’ll look forward to you joining me in the next lesson.

  6. Jul 2024
    1. Finally, TGT is mainly focused on the study of end-states and possible equilibria, paying hardly any attention to how such equilibria might be reached. By contrast, EGT is concerned with the evolution of the strategy composition in a population, which in some cases may never settle down on an equilibrium.

      Evolutionary game theory is interested in the process by which equilibria are reached.

    2. approach to EGT can formally encompass the biological interpretation

      Encompassing the biological interpretation: the social interpretation (individuals changing their strategies) can encompass the biological interpretation (death and birth). That is, an individual's "change of strategy" can be regarded as "the death of that individual and the birth of an individual carrying the new strategy."

    1. There is a disturbing new trend happening in Latin America, specifically in Colombia, and that is:

      0:07 Men being attracted by extremely beautiful, sexy Colombian and Latin American women and then being drugged, robbed, killed, overdosed, kidnapped, all the crimes that you can possibly think of.

      0:21 And a lot of men come to Latin America and thinking that, well, women are just gonna throw themselves at me. And to a certain extent, it is easier than dating in the United States and Germany and all these western countries, and it is better. Women are more beautiful, women are more feminine. But you have to be extremely careful. You have to know where you are.

      0:40 Recently, I've seen story after story after story, and this is an alarming trend because these criminals are getting harder. They're getting harsher on their crimes, they're getting more sadistic, and they're also planning a lot better.

      0:53 One recent case that I heard of was a German guy who went to Colombia, and on his second day, one girl that he met invited him to cook some food at home, cook some delicious Colombian food, and he said, why not? I'm probably not gonna bang this girl, but I'm gonna go home with her so she can cook me some food. She slipped scopolamine, a drug, into a drink, gave it to him, and then a few hours later, he woke up with his money gone, everything stolen, and even his crypto was stolen.

      1:28 They figured out how to get access to his crypto. They stole $15,000, which if you're watching this, you're a wealthy individual, you might think, well, 15K, whatever. But if you have 15 million in your crypto account or your Binance account or in your bank account, they will figure out how to steal it from you.

      1:42 Scopolamine is a drug that basically makes you into a little slave, into a little servant, and you'll do whatever the attacker wants. They tell you, go tell the security and tell them that I'm your friend. That literally happens in Colombia.

      1:57 People get drugged, and then they go to their Airbnb, to their hotel, and they tell the security, that's my friend, let them come with me. And they come with you, and they steal absolutely everything. The drug is called the devil's breath.

      2:08 It's a drug that essentially turns you into a zombie. A lot of police forces in Latin America are trying to ban this drug: they're trying to control the amount of it that is produced and imported, and they have very strict penalties. If you traffic it, if you sell it, you go to prison for a long time.

      2:26 But in these countries, these women, they figured out how to get as much money as possible from these expats, from these tourists. You might meet them on Tinder, you might meet an absolutely beautiful girl, and she invites you to some coffee shop. And then after the coffee shop, you think, yeah, this is going so well. She says, let's go cook some food. Let's go meet some of my friends.

      2:46 Or in one case that I saw, she invited him with her private driver. She said, oh, I have a private driver. He can take us really nice places. It's not safe here. And the private driver was the guy's kidnapper, ended up kidnapping the guy.

      3:00 And there was also another story of a very famous Minnesota man, Asian man, but American, Asian-American. He went to Colombia. He met an absolutely beautiful girl. He showed her off to his family. This Asian American was then kidnapped by this girl, but not immediately, not on the first date. Multiple dates later, after he came back for a second time to Colombia. He met her first, went back home, told everybody about his beautiful girlfriend, came back to Colombia, and then he was kidnapped. The kidnappers asked for $2,000, which is ridiculous, and then they killed him anyway.

      3:42 And this can happen to you in Mexico, in Colombia, Brazil. You have to know where you're going. If you're in Mexico, don't go out past eight, 9:00 PM in bad areas. If you're coming to a place like Tulum, stay in the absolute safest areas. Don't think, ah, I know what I'm doing. Ah, they're exaggerating the crime.

      4:00 And especially if you're come to Latin America for dating or if you're using dating apps, if it's too good to be true, it literally is.

      4:09 If you meet a girl on Tinder and she has bikini pictures all over her profile, if you see that she's pushing hard to meet you, if you see that she's pushing hard to either go to your place or to go to her place. If she wants you to go to her place, it's a red flag anywhere in the world. Even in Russia.

      4:24 I heard of a story from a friend of mine. The guy was invited by the girl to her house, and he thought, oh, I'm getting laid with an extremely hot rushing girl. He went to her place. He got his kidney removed. He woke up the next morning, whoa, disoriented without his kidney.

      4:38 Well, of course, a girl isn't going to invite you to her place in a random country, especially if she doesn't speak your language. Most girls are not that slutty. That's probably not gonna happen to you, unless you're some footballer or a famous person.

      4:51 So you have to keep an eye out for red flags, and you always have to keep in mind that people will try to take advantage of you, especially if you don't speak the language.

      5:00 I speak native Spanish. I don't wear my Rolex. I don't wear expensive clothes when I'm going out in Mexico, in Colombia and Argentina. You just don't show off. You speak the language or you try to. If you're going to Brazil, learn Portuguese, because you are a target, especially if you're tall, white.

      5:20 I've seen many tall Americans, white as hell, white as paper, and they walk through Mexico like it's their front yard. You are a target. Don't wear expensive jewelry. Here in Tulum, Rolexes get stolen all the time. I was reading through many articles about gun robberies and overall armed violence, and they were all because a person had an extremely expensive something, like a camera. And I'm filming this in an area where they're actually building new buildings. There's security all over the place. It's called Selvazama, absolutely beautiful. They're gonna chop up all these trees and build new buildings. It's quite safe here. It's actually one of the safest places in Mexico, I would say.

      5:56 But if you're going through a rough area, especially if it's at night, you don't wanna have a Rolex. You don't want to have Gucci shoes. I'm wearing my New balance, my little shorts from Zara.

      6:08 You have to know where you are. It's not Dubai. It's not Miami. You have to always be aware, and if it's too good to be true, it probably is when dating these Latino women.

      6:18 Now, my full game plan. I was born and raised in Puerto Rico, one of the least safe places in the world. I've spent a lot of time in Colombia, in Mexico, in the Dominican Republic, many places where people get robbed, they get stabbed, they get lost, they get kidnapped.

      6:31 My game plan, one, as I said, do not show off in any way. You think, oh, a Rolex is gonna get me laid. No, it's gonna get you kidnapped. Do not show off. Do not wear expensive clothes. Wear Zara, H&M, even if you're a billionaire, just wear the cheapest clothes possible while looking nice. You don't wanna look like a homeless person, but if you're going through a rough area, it's better to look like a homeless person, so that the attacker thinks: This person is more poor than I am. Why am I gonna attack them?

      6:53 Second of all, if you have an expensive phone, like an iPhone, try to not use it that much outside, to be honest, or get a copy. For example, you could buy a second phone, like an iPhone 10 or an iPhone 11, and then you have your iPhone 15 at home. You use the other iPhone for just going outside, Google Maps, WhatsApp, get a local WhatsApp number.

      7:10 Again, speak the language. If you speak the language with a Colombian girl, she thinks twice about kidnapping you, about taking you somewhere. The taxi drivers will think twice about robbing you or putting a gun to your face because you speak the language. You know yourself around.

      7:25 And also make friends in these places. If you go to Mexico, know local people so that you can always keep them updated if everything is okay.

      7:32 If you're going to a place like I went to, Tijuana, a very dangerous city in Mexico at the border with the United States, you wanna have people that you let know every few hours, or every day, how you are.

      7:46 Hey, I'm doing great. I'm here in Tijuana. I stayed the night. Everything is fine, everything is fine. You just let them know that everything is fine. This is how Latino people keep themselves updated to see if everything is fine. My family's like that, hey, are you alive? Everything fine?

      7:59 We're not in a war zone, but this is how Latino culture is, because we know that shit happens. We know that people get kidnapped. We know that people get stabbed, so you wanna get adjusted to that culture.

      8:07 And overall, know the different areas of the city. Stay in the absolute best, safest possible area every time you go somewhere. Don't try to save a buck. Don't try to stay in the area with the most people with the highest chance of getting laid. And don't do anything stupid.

      8:21 Don't go to some cabaret. I see it on Reddit all the time, on the Mexico groups and on the different Latin American groups, that people go to cabarets, erotic massage centers. They go to different nightclubs that they shouldn't be going to, and they get stabbed. They get robbed, they get kidnapped. You want to be vigilant.

      8:35 It's a great place. Latin America is beautiful, it's free. There's investment opportunities. There's land to buy, there's low taxes, there's beautiful women. It's one of the best areas of the world, but you have to know where you are.

      • Overview of Graphs in Computation:

        • Graphs have been successful in domains like shader programming and signal processing.
        • Computation in these systems is usually expressed on nodes with edges representing information flow.
        • Traditional models often have a closed-world environment where node and edge types are pre-defined.
      • Introduction to Scoped Propagators (SPs):

        • SPs are a programming model embedded within existing environments and interfaces.
        • They represent computation as mappings between nodes along edges.
        • SPs reduce the need for a closed environment and add behavior and interactivity to otherwise static systems.
      • Definition and Mechanics:

        • A scoped propagator consists of a function taking a source and target node, returning a partial update to the target.
        • Propagation is triggered by specific events within a defined scope.
        • Four event scopes implemented: change (default), click, tick, and geo.
        • Syntax: scope { property1: value1, property2: value2 }.
      • Event Scopes and Syntax:

        • Example: click {x: from.x + 10, rotation: to.rotation + 1} updates target properties when the source is clicked.
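
This can be shown as a minimal sketch of the model, in Python rather than the JavaScript-based environment the paper describes; class and property names here are illustrative:

```python
# Minimal sketch of a scoped propagator: an arrow from a source node to a
# target node carrying a scope (an event name) and a function that returns
# a partial update for the target. Not the paper's actual implementation.
class Propagator:
    def __init__(self, source, target, scope, fn):
        self.source, self.target, self.scope, self.fn = source, target, scope, fn

    def fire(self, event):
        if event == self.scope:  # only in-scope events trigger propagation
            self.target.update(self.fn(self.source, self.target))

# Nodes are plain property dictionaries.
source = {"x": 0}
target = {"x": 0, "rotation": 0}

# Equivalent of: click { x: from.x + 10, rotation: to.rotation + 1 }
arrow = Propagator(source, target, "click",
                   lambda frm, to: {"x": frm["x"] + 10,
                                    "rotation": to["rotation"] + 1})

arrow.fire("click")
print(target)        # {'x': 10, 'rotation': 1}
arrow.fire("tick")   # out of scope: no change
```

Firing an out-of-scope event leaves the target untouched, which is what makes the scope an integral part of the propagator rather than a separate event system.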
      • Demonstration and Practical Uses:

        • SPs enable the creation of toggles and counters by mapping nodes to themselves.
        • Layout management is simplified as arrows move with nodes.
        • Useful for constraint-based layouts and debugging by transforming node properties.
        • Dynamic behaviors can be created using scopes like tick, which utilize time-based transformations.
      • Behavior Encoding and Side Effects:

        • All behavior is encoded in arrow text, allowing for easy reconstruction from static diagrams.
        • Supports arbitrary JavaScript for side effects, enabling creation of utilities or tools within the environment.
      • Cross-System Integration:

        • SPs can cross boundaries of siloed systems without editing source code.
        • Example: mapping a Petri Net to a chart, demonstrating flexibility in creating mappings between unrelated systems.
      • Complex Example:

        • A small game created with SPs includes joystick control, fish movement, shark behavior, toggle switch, death state, and score counter.
        • The game uses nine arrows to propagate behavior between different node types.
      • Comparison to Prior Work:

        • Differences from Propagator Networks: propagation along edges, scope conditions, arbitrary stateful nodes.
        • Previous work like Holograph influenced the use of the term "propagator."
      • Open Questions and Future Work:

        • Unanswered questions include function reuse, modeling side effects, multi-input-multi-output propagation, and applications to other domains.
        • Formalization of the model and examination of real-world usage are pending tasks.


    1. Shortly after publication of the EPR paper, Erwin Schrödinger wrote to Einstein to congratulate him and he added: "The separation process is not at all encompassed by the orthodox scheme" of quantum mechanics to which Einstein replied (quoted in The Shaky Game):Dear Schrödinger, I was very happy with your long letter, which dealt with my little paper. For reasons of language, this one was written by Podolsky after many discussions. But still it has not come out as well as I really wanted; on the contrary, the main point was, so to speak, buried by the erudition… A Talmudic philosopher [probably Niels Bohr] is anti-realist.
      • SEE
    1. Gali Weinstein, PhD, Foundations (history, philosophy) of physics. · 5y · Was Albert Einstein troubled by Quantum Physics only because it conflicted with his General Theory of Relativity?

      Einstein the realist did not object to quantum mechanics. His primary difficulty was with the probabilistic interpretation of quantum mechanics and with inseparability (entanglement). He therefore strove to derive the main results of quantum mechanics from a deterministic unified field theory (a unification of a refined form of general relativity and electrodynamics) in which there would be no inseparability problem. Of course, particles are not entangled in general relativity, and it is a deterministic theory. But Einstein objected to entanglement and to a probabilistic interpretation of quantum mechanics because of his realist position, not because it conflicted with his general theory of relativity. It is our modern point of view (since Stephen Hawking's studies in the 1970s) that quantum mechanics conflicts with classical general relativity, and we thus have to find ways to reconcile the two theories, for instance in the form of quantum gravity. However, this is a modern point of view. It was not Einstein's view!

      These are the two primary things Einstein objected to (presented in his own colourful words) in quantum mechanics: "God does not play dice" (Max Born's probabilistic interpretation of quantum mechanics) and "spooky action at a distance" (inseparability and entanglement).

      First, "God does not play dice": Max Born's probabilistic interpretation of quantum mechanics. Ironically, it was Einstein's own contributions to quantum physics that had prompted Born to propound his probabilistic interpretation, as Born himself attested in his 1926 paper: "I hereby start from a comment of Einstein's regarding the relation between the wave field and light quanta. He says approximately that the waves are only seen as showing the way for the corpuscular light quanta, and he spoke in the same sense of a 'ghost field'. This determines the probability that one light quantum, which is the carrier of energy and momentum, chooses a definite path. The field itself, however, does not have energy or momentum. … it is obvious to regard the de Broglie–Schrödinger waves as a 'ghost field', or even better as a guiding field."

      Einstein's "ghost field" was defined by the statement that "the waves are only seen as showing the way for the corpuscular light quanta"; that is, a quantum particle has a ghost field (a wave) associated with it, or the particle is guided by a field. Born wrote to Einstein a month before the above paper was published: "About me, it can be told that physics-wise I am entirely satisfied since my idea to look upon Schrödinger's wave field as a 'ghost field' in your sense proves better all the time."

      What an irony of fate that Einstein gave Born the idea to look upon Schrödinger's wave function as God playing dice. Schrödinger introduced the wave function into quantum mechanics and tried to interpret it in terms of a wavelike model. Born wanted to find a way to reconcile particles and waves, and he related probability to nothing other than Einstein's ghost field! Alas, according to Born the wave function had no physical reality, and Einstein's ghost field became a probability amplitude. Einstein, however, objected to Born's probabilistic interpretation of Schrödinger's wave mechanics. He also objected to the transformation of his "ghost field" into a probability amplitude! Einstein went back to the blackboard, maintained that light quanta are guided by a wave, a "ghost field", and wrote to Born on December 4, 1926, expressing his strict objection to Born's interpretation: "Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the 'old one'. I, at any rate, am convinced that He is not playing dice."

      Einstein tells Born that "He is not playing dice". Subsequently, Einstein also objected to the Heisenberg uncertainty principle: the more you know about something's momentum, the less you know about its position, and the more you know about its position, the less you know about its momentum. This is the Heisenberg uncertainty principle, which Einstein could not accept. Gali Weinstein's answer to What does the quote "God doesn't play dice with the universe" by Albert Einstein mean?

      Second, "spooky action at a distance": inseparability and entanglement. The Heisenberg uncertainty principle prompted Einstein, his assistant Nathan Rosen, and Einstein's Russian colleague Boris Podolsky to propose the EPR argument. Here I give a pedestrian explanation of the EPR argument: Gali Weinstein's answer to What is the explanation for EPR paradox?

      Shortly after the publication of the EPR paper, Schrödinger wrote to Einstein to congratulate him, and he added: "The separation process is not at all encompassed by the orthodox scheme" of quantum mechanics, to which Einstein replied (quoted in The Shaky Game): "Dear Schrödinger, I was very happy with your long letter, which dealt with my little paper. For reasons of language, this one was written by Podolsky after many discussions. But still it has not come out as well as I really wanted; on the contrary, the main point was, so to speak, buried by the erudition… A Talmudic philosopher [probably Niels Bohr] is anti-realist."

      To get past the Talmudic philosopher, Einstein invoked a supplementary principle, the separation principle, and then explained to Schrödinger what he meant by it. Consider two particles A and B that interact and then separate. After the interaction, the real state of AB consists precisely of the real state of A and the real state of B, which two states have nothing whatsoever to do with one another. The real state of B thus cannot depend upon the kind of measurement I carry out on A (the 'Separation Hypothesis' from above). But then for the same state of B there are two (and in general arbitrarily many) equally justified ψB, which contradicts the hypothesis of a one-to-one or complete description of the real states.

      In 1947 Einstein wrote again to Born: "I cannot make a case for my attitude in physics which you would consider at all reasonable. I admit, of course, that there is a considerable amount of validity in the statistical approach which you were the first to recognize clearly as necessary given the framework of the existing formalism. I cannot seriously believe in it because the theory cannot be reconciled with the idea that physics should represent a reality in time and space, free from spooky actions at a distance. I am, however, not yet firmly convinced that it can really be achieved with a continuous [unified] field theory, although I have discovered a possible way of doing this which so far seems quite reasonable. The calculation difficulties are so great that I will be biting the dust long before I myself can be fully convinced of it. But I am quite convinced that someone will eventually come up with a theory whose objects, connected by laws, are not probabilities but considered facts, as used to be taken for granted until quite recently. I cannot, however, base this conviction on logical reasons, but can only produce my little finger as witness, that is, I offer no authority which would be able to command any kind of respect outside of my own hand."

      Einstein the realist tells Born that "physics should represent a reality in time and space, free from spooky actions at a distance".
      • masterpiece
    1. We use a reward function r(s) that is zero for all non-terminal time steps t < T. The outcome zt = ±r(sT) is the terminal reward at the end of the game from the perspective of the current player at time step t: +1 for winning and −1 for losing.

      The reward function is as simple and sparse as possible, using the only thing you know for certain: whether you won or lost the game.
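      This sparse, terminal-only reward can be sketched in a few lines of Python (a hedged illustration, not the AlphaGo Zero code; the function and player names are made up):

```python
# Sketch of a sparse terminal reward: r(s) = 0 for all non-terminal steps,
# and z_t = +/- r(s_T) is the terminal outcome seen from the perspective
# of the player to move at step t.
def terminal_reward(winner, player_at_t):
    """+1 if the player to move at step t ultimately won, else -1."""
    return 1.0 if winner == player_at_t else -1.0

def rewards_for_game(num_steps, winner, players):
    """Assign the terminal outcome to every step, sign-flipped per player.
    All intermediate rewards are implicitly zero; only the outcome counts."""
    return [terminal_reward(winner, players[t]) for t in range(num_steps)]

# Two players alternate; black wins a 4-move game.
z = rewards_for_game(4, "black", ["black", "white", "black", "white"])
print(z)  # black's steps are +1.0, white's are -1.0
```

The point of the sparsity is that no hand-crafted intermediate evaluation leaks into training; the only supervision signal is the game's outcome.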

    1. An illustration of alpha–beta pruning. The grayed-out subtrees don't need to be explored (when moves are evaluated from left to right), since it is known that the group of subtrees as a whole yields the value of an equivalent subtree or worse, and as such cannot influence the final result. The max and min levels represent the turn of the player and the adversary, respectively.

      Alpha-beta pruning comes down to being smart about searching the tree of possible future game states: skip subtrees that provably cannot change the result, so you get the same answer with far fewer evaluations.
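      A minimal sketch of the pruning idea in Python (the tree shape and leaf values here are invented for illustration; note that AlphaGo itself uses Monte Carlo tree search rather than alpha-beta):

```python
# Alpha-beta pruning over an explicit game tree.
# A node is either a number (leaf evaluation) or a list of child nodes.
def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping (pruning) subtrees
    that cannot affect the final result."""
    if not isinstance(node, list):  # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: min player won't allow this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cutoff: max player won't allow this line
                break
        return value

# A classic three-ply example: the value is 6, with or without pruning.
tree = [[[5, 6], [7, 4, 5]], [[3]], [[6], [6, 9]]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 6
```

Pruning never changes the value returned; it only avoids exploring the grayed-out subtrees from the illustration above.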

    1. “Or so he’d have you believe. You think you’re the only one he whispers secrets to? He gives each of us just enough to convince us that we’d be helpless without him. He played the same game with me, when I first wed Robert. For years, I was convinced I had no truer friend at court, but now ...” She studied his face for a moment. “He says you mean to take the Hound from Joffrey.”

      yeahh never trust anyone at court come on now

    1. Author response:

      eLife assessment

      This fundamental study provides a near-comprehensive anatomical description and annotation of neurons in a male Drosophila ventral nerve cord, based on large-scale circuit reconstruction from electron microscopy. This connectome resource will be of substantial interest to neuroscientists interested in sensorimotor control, neural development, and analysis of brain connectivity. However, although the evidence is extensive and compelling, the presentation of results in this very large manuscript lacks clarity and concision.

      We thank the reviewers for their detailed and thoughtful feedback and the time that they invested to provide it. Organising this manuscript (which is clearly not a standard research article) was quite challenging as it had to fulfil a number of functions: presenting a guide to the system of annotations and the associated online resources; providing an atlas for the annotated cell types; and showcasing various analyses to illustrate the value of the dataset as well as just a few of the many questions it can be used to address. We gave careful consideration to its structure and attempted to signpost the sections that would be most useful to particular types of readers. Nevertheless we can see that this was not completely successful and we thank the reviewers for their suggestions for improvement.

      We acknowledge that the resulting manuscript was very large and will endeavour to streamline our text in the revision without compromising the accessibility of the data. We do note that there is some precedent for comprehensive and lengthy connectome papers going all the way back to White et al. 1986 which took 340 pages to describe the 302 neurons of the C. elegans connectome. More recently, we can compare the “hemibrain papers” published in eLife: Scheffer et al., 2020, Li et al., 2020, Schlegel et al., 2021, Hulse et al., 2021. These papers would also be difficult to digest at a single sitting but were game-changing for the Drosophila neuroscience field and have already been cited hundreds of times, a testament to their utility. In the same way that these papers provided the first comprehensively proofread and annotated EM connectome for (a large part of) the adult fly brain, our work now provides the first fully proofread and annotated EM connectome for the nerve cord. Given the pioneering nature of this dataset we feel that the lengthy but highly structured atlas sections of the paper are justified and will prove impactful in the long term.

      Whilst no EM dataset is perfect, we have endeavoured to make this one as comprehensive as possible. We found 74.4 million postsynapses and 15,765 neurons of VNC origin, all of which have been carefully proofread, reviewed, annotated and typed. For comparison, the female adult nerve cord dataset (FANC, Azevedo et al., Nature, 2024) contains roughly 45 million synapses and 14,600 neuronal cell bodies of which at the time of writing 5576 have received preliminary proofreading and 222 high quality proofreading. We emphasise that these are highly complementary datasets, given the difference in sex and the fact that each dataset has different artefacts (MANC has poorer preservation of neurons in the leg nerves; FANC is missing part of the abdominal ganglion and has lower synapse recovery). We reconstructed 5484 sensory neurons from the thoracic nerves, 84% of the ~6500 estimated from FANC. The overall recovery rate was ~86.5% if we include the ~1100 sensory neurons from abdominal nerves, which were in excellent condition.

      Reviewer #1 (Public Review):

      Summary:

      The authors present a close to complete annotation of the male Drosophila ventral nerve cord, a critical part of the fly's central nervous system.

      Strengths:

      The manuscript describes an enormous amount of work that takes the first steps towards presenting and comprehending the complexity and organization of the ventral nerve cord. The analysis is thorough and complete. It also makes the effort to connect this EM-centric view of the nervous system to more classical analyses, such as the previously defined hemilineages, that also describe the organization of the fly nervous system. There are many, many insights that come from this work that will be valuable to the field for the foreseeable future.

      We thank the reviewer for acknowledging the enormous collaborative effort represented by this manuscript. We tried to synthesise decades of light-level work by neuroscientists and developmental biologists working in Drosophila and other insects in order to create a standard, systematic nomenclature for >22,000 neurons, most of which had not been typed at light level. We hope that the MANC dataset and this guide to its contents will prove to be useful resources to Drosophila neurobiologists and the wider neuroscience field.

      Weaknesses:

      With more than 60 primary figures, the paper is overwhelming and cannot be read and digested in a single sitting. The result is more like a detailed resource rather than a typical research paper.

      In writing this paper, we had two aims: first, to describe and validate our extensive biological annotation of the connectome and second, to provide interesting illustrative examples of the many analyses that could be carried out on this dataset using the atlas we generated. The resulting paper is intended primarily as a detailed reference rather than a typical research paper. At the end of the Introduction, we outline the structure of the paper and explicitly direct non-specialist readers to focus on the initial and concluding sections for orientation to the dataset so that they would not get bogged down in the details. We will review our section organisation and headings to try to make the paper more straightforward to navigate, and we will add specific figure numbers to the outline.

      Reviewer #2 (Public Review):

      Summary and strengths:

      This massive paper describes the identity and connectivity of neurons reconstructed from a volumetric EM image volume of the ventral nerve cord (VNC) of a male fruit fly. The segmentation of the EM data was described in one companion paper; the classification of the neurons entering the VNC from the brain (descending neurons or DNs) and the motor neurons leaving the VNC was described in a second companion paper. Here, the authors describe a system for annotating the remaining neurons in the VNC, which include intrinsic neurons, ascending neurons, and sensory neurons, representing the vast majority of neurons in the dataset. Another fundamental contribution of this paper is the identification of the developmental origins (hemilineage) of each intrinsic neuron in the VNC. These comprehensive hemilineage annotations can be used to understand the relationship between development and circuit structure, provide insight into neurotransmitter identity, and facilitate comparisons across insect species. Many sensory neurons are also annotated by comparison to past literature. Overall, defining and applying this annotation system provides the field with a standard nomenclature and resource for future studies of VNC anatomy, connectivity, and development. This is a monumental effort that will fundamentally transform the field of Drosophila neuroscience and provide a roadmap for similar connectomic studies in other organisms.

      We thank the reviewer for acknowledging the enormous collaborative effort represented by this manuscript. We tried to synthesise decades of light-level work by neuroscientists and developmental biologists working in Drosophila and other insects in order to create a standard, systematic nomenclature for >22,000 neurons, most of which had not been typed at light level. We hope that the MANC dataset and this guide to its contents will prove to be useful resources to Drosophila neurobiologists and the wider neuroscience field.

      Weaknesses:

      Despite the significant merit of these contributions, the manuscript is challenging to read and comprehend. In some places, it seems to be attempting to comprehensively document everything the authors found in this immense dataset. In other places, there are gaps in scholarship and analysis. As it is currently constructed, I worry that the manuscript will intimidate general readers looking for an entry point to the system, and ostracize specialized readers who are unable to use the paper as a comprehensive reference due to its confusing organization.

      In writing this paper, we had two aims: first, to describe and validate our extensive biological annotation of the connectome and second, to provide interesting illustrative examples of the many analyses that could be carried out on this dataset using the atlas we generated. The resulting paper is intended primarily as a detailed reference rather than a typical research paper. At the end of the Introduction, we outline the structure of the paper and explicitly direct non-specialist readers to focus on the initial and concluding sections for orientation to the dataset so that they would not get bogged down in the details. We will review our section organisation and headings to try to make the paper more straightforward to navigate, and we will add specific figure numbers to the outline.

      The bulk of the 559 pages of the submitted paper is taken up by a set of dashboard figures for each of ~40 hemilineages. Formatting the paper as an eLife publication will certainly help condense these supplemental figures into a more manageable format, but 68 primary figures will remain, and many of these also lack quality and clarity. Without articulating a clear function for each plot, it is hard to know what the authors missed or chose not to show. As an example, many of the axis labels indicate the hemilineage of a group of neurons, but are ordered haphazardly and so small as to be illegible; if the hemilineage name is too small, and in a bespoke order for that data, then is the reader meant to ignore the specific hemilineage labels?

      We will contact eLife professional editing staff to determine whether the paper can be streamlined by moving more material to supplemental without making it difficult to locate the detailed catalogues of neurons that will be of interest to specialist readers. Based on the typical eLife format, we suspect that retaining the dashboard main figures for each hemilineage will be necessary to maintain its utility as a reference. We will, however, shorten the associated main text by, for example, moving background material used to assign the hemilineages to the Methods section and moving specific results to the figure legends where possible.

      We articulated the function for each plot as follows: "Below we describe in more depth every hemilineage that produces more than one or two secondary neurons. For each of these 35 hemilineages, we show (A) the overall morphology of the secondary population, (B) representative individual neurons (as estimated by highest average NBLAST score to other members of the hemilineage), and (C) specific notable examples (which in some cases are primary). We then report (D) the locations of their connectors (postsynapses and presynapses), (E) their upstream and downstream partners by class, and (F) their upstream and downstream partners by finer subdivisions corresponding to their systematic types (secondary hemilineage, target, or sensory modality). We also provide supplementary figures showing the morphology and normalised up- and downstream connectivity of all systematic types for each hemilineage."

      We have plotted every secondary neuron in each hemilineage, every predicted synapse for those neurons with confidence >0.5, every connection to partner neurons by class (no threshold applied), and then the same information organised by hemilineage in a heatmap (and including partners from all birthtimes and partners of unknown hemilineage). Then the supplementary figures show all connectivity, organised in the same way, for every individual cell type assigned to the hemilineage, including both primary and early secondary neurons. We will add more detail to the figure legends to clarify these points.

      We apologise that you were unable to read some of the axis labels in the review copy of the manuscript; we did submit high resolution versions of the figures as a supplemental file, but perhaps this did not reach you; they can also be found at https://www.biorxiv.org/content/10.1101/2023.06.05.543407v2.supplementary-material. The hemilineages are in a conserved (alphanumerical) order for all hemilineage-specific plots and many others. The exceptions arise when neurons are clustered based on their connectivity to hemilineages, in which case the order of the labels necessarily follows the structure of the resulting clusters.

      The text has similar problems of emphasis. It is often meandering and repetitive. Overlapping information is found in multiple places, which causes the paper to be much longer than it needs to be. For example, the concept of hemilineages is introduced three times before the subtitle "Introduction to hemilineage-based organisation". When cell typing is introduced, it is unclear how this relates to serial motif, hemilineage, etc; "Secondary hemilineages" follow the Cell typing title. Like the overwhelming number of graphical elements, this gives the impression that little attention has been paid to curating and editing the text. It is unclear whether the authors intend for the paper to be read linearly or used as a reference. In addition, descriptions of the naming system are often followed by extensive caveats and exceptions, giving the impression that the system is not airtight and possibly fluid. At many points, the text vacillates between careful consideration of the dataset's limitations and overly grandiose claims. These presentation flaws overshadow the paper's fundamental contribution of describing a reasonable and useful cell-typing system and placing intrinsic neurons within this framework.

      Because we intended this paper to be read primarily as a reference, we tried to make each section stand on its own, which we agree resulted in some redundancy (with more details appearing where relevant). However, we will do our best to tighten the text for the version of record.

      Our description immediately under the Cell typing title includes the use of hemilineage, serial (not serial motif, which was not used), and laterality (left-right homologues) in the procedure to assign cell types. We will change this to “Cell typing of intrinsic, ascending, and efferent neurons” for clarity. The “Secondary hemilineages” title marks the start of a new section that serves as a reference for each of the secondary hemilineages; we will change this to “Secondary hemilineage catalogue” or similar for clarity.

      References to past Drosophila literature are inconsistent and references to work from other insects are generally not included; for example, the extensive past work on leg sensory neurons in locusts, cockroaches, and stick insects. Such omissions are understandable in a situation where brevity is paramount. However, this paper adopts a comprehensive and authoritative tone that gives the reader an impression of completeness that does not hold up under careful scrutiny.

      We did not attempt to review the sensory neuron literature in this manuscript but rather cited those specific papers which included the axon morphology data that informed our modality, peripheral origin, and cell type assignments. Most of these came from the Drosophila literature due to the availability of genetic tools used for sparse labelling of specific populations as well as the greatly increased likelihood of conserved morphology. However we certainly agree that decades of sensory neuron work in larger insects were foundational for this subfield and will add a sentence to this effect in the introduction to our sensory neuron typing.

      The paper accompanies the release of the MANC dataset (EM images, segmentation, annotations) through a web browser-based tool: clio.janelia.org. The paper would be improved by distilling it down to its core elements, and then encouraging readers to explore the dataset through this interactive interface. Streamlining the paper by removing extraneous and incomplete analyses would provide the reader with a conceptual or practical framework on which to base their own queries of the connectome.

      We certainly hope that this paper will encourage readers to explore the MANC dataset. Indeed, as we state in the Discussion, "Moreover, its ultimate utility depends on how widely it is leveraged in the future experimental and computational work of the entire neuroscience community. We have only revealed the tip of the iceberg in this report, with a wealth of opportunities now available in this publicly available dataset for forthcoming connectomic analyses that will feed into testable functional hypotheses." In the first few sections of the Results, we include a visual introduction to annotated features, a glossary of annotation terms, a visual guide to our cell typing nomenclature, and two video tutorials on the use of Clio Neuroglancer to query the dataset. To further encourage exploration, we have also included illustrative examples of just a few of the many analyses that can now be performed with this comprehensive and publicly available dataset.

    1. Reviewer #2 (Public Review):

      In this research, Zhang et al. have pioneered the creation of an advanced organoid culture designed to emulate the intricate characteristics of endometrial tissue during the crucial Window of Implantation (WOI) phase. Their method involves the incorporation of three distinct hormones into the organoid culture, coupled with additives that replicate the dynamics of the menstrual cycle. Through a series of assays, they underscore the striking parallels between the endometrial tissue present during the WOI and their crafted organoids. Through a comparative analysis involving historical endometrial tissue data and control organoids, they establish a system that exhibits a capacity to simulate the intricate nuances of the WOI.

      The authors made a commendable effort to address the majority of the statements. Developing an endometrial organoid culture methodology that mimics the window of implantation is a game-changer for studying the implantation process. However, the authors should strive to strengthen the results to demonstrate how different the WOI organoids are from SEC organoids, either to establish whether they are worth using in implantation studies or through a proper demonstration using implantation experiments.

    1. Hybrid digital work contexts: the example of a scientific conference.

      Proposed titles:
      - Hybrid conference: a multiplayer game?
      - Reflections on hybridity and a parallel with game design
      - Hybridising interactions: designing a “hybrid potential”
      - Hybrid conference: participants’ lived experience and interactional investment
      - Hybridity and game design: reflections from the example of a hybrid conference

    2. This article sets out to explore the notion of hybridity in the context of a work and interaction situation: the holding of a scientific conference. When organising the “Interactions Multimodales Par ECrans” conference in 2022, we integrated a setup intended to facilitate and encourage different regimes of presence and engagement, whether on site, at a distance, or both at once. We also issued a call for participation ahead of the conference to collect data giving us access to the lived experience of participants who followed the conference in different contexts. In doing so, we also wanted to explore “participatory inquiry” methods and put the workings of a research collective to the test. During data analysis, it became apparent to us that hybridity has a dynamic similar to the concerns of game design, which will be taken up in the discussion.

      I drafted this section; to be improved later.

    1. Have you ever wondered whether the violence you see on television affects your behavior? Are you more likely to behave aggressively in real life after watching people behave violently in dramatic situations on the screen? Or, could seeing fictional violence actually get aggression out of your system, causing you to be more peaceful?

      I am so familiar with this idea because I am a gamer who loves playing action, survival, and shooting games on my computer. I could say that I am addicted to games, but it has not ruined my life like the cases we hear about on the news. I think it is similar to becoming aggressive after seeing violence on TV or social media: the more time we spend observing something, the deeper it is planted in our minds and behaviors. Here is a real story. A few days before the course started, I spent a whole day, about 14 hours, playing a survival shooting game with a friend. When we were about to log off, my friend told me that he saw someone standing out on a balcony, and his mind tricked him into wanting to shoot that person, just like in the game we were playing. (His mind was simply still in game mode; he did not shoot anyone or own a gun, just to clarify.) At that moment, we knew we were exhausted and needed to rest. There is a study about this effect. Here is the link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3003785/

      To me, it is not only babies but people of every age who are affected by what they observe or interact with repeatedly. However, there are cases where people come to strongly reject what they saw. For example, some kids who grow up in an unhappy family or with abusive parents tend to become better moms or dads in the future because they understand the pain.

    1. naturalistic observation

      The example that went with this about the microphones reminds me of videos I see on social media where sports players get mic'd up, either without their team knowing or because someone secretly mics them up without them knowing. The video I linked is of Zach Pickens from the Chicago Bears mic'd up at a game without knowing. I think this could be an example of naturalistic observation if researchers were trying to study something in particular. For example, they could be measuring how many times someone says a certain phrase in a day. https://www.tiktok.com/@espn/video/7278120784881585450?is_from_webapp=1&sender_device=pc&web_id=7392717372652783135

    1. Non-Indian fishing regulations, such as those developed and enforced through California Department of Fish and Game, have often failed to take into account the Karuk as original inhabitants, their inalienable right to subsistence harvesting, and the sustainable nature of Karuk harvests. As a result they have attempted to balance the subsistence needs of Karuk people with recreational desires of non-Indians from outside the area.

      This is a very complicated problem to solve, because with a higher population of people fishing, it's harder to maintain a healthy fish population while still allowing native tribes to catch as many fish as they need, which could eventually ruin the ecosystem over time.

    1. Summary of "Flecs v4.0 is out!" by Sander Mertens

      What is Flecs?
      - Flecs is an Entity Component System (ECS) for C and C++ designed for building games, simulations, and other applications.
      - “Store data for millions of entities in data structures optimized for CPU cache efficiency and composition-first design.”
      - “Find entities for game systems with a high performance query engine that can run in time critical game loops.”
      - “Run code using a multithreaded scheduler that seamlessly combines game systems from reusable modules.”
      - “Builtin support for hierarchies, prefabs and more with entity relationships which speed up game code and reduce boilerplate.”
      - “An ecosystem of tools and addons to profile, visualize, document and debug projects.”
      - Open-source under the MIT license.
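      To make the composition-first idea concrete, here is a toy entity-component store in Python; it is purely illustrative and not the Flecs API (Flecs itself is C/C++ with archetype storage and a far more sophisticated query engine):

```python
# Toy ECS: entities are plain IDs, component data lives in per-component
# tables, and systems iterate a query over entities that have all of the
# requested components. All names here are illustrative.
class World:
    def __init__(self):
        self.next_id = 0
        self.stores = {}  # component name -> {entity id: value}

    def entity(self):
        self.next_id += 1
        return self.next_id

    def set(self, e, component, value):
        self.stores.setdefault(component, {})[e] = value

    def query(self, *components):
        """Yield (entity, values...) for entities having all components."""
        if not components:
            return
        ids = set(self.stores.get(components[0], {}))
        for c in components[1:]:
            ids &= set(self.stores.get(c, {}))
        for e in sorted(ids):
            yield (e, *(self.stores[c][e] for c in components))

world = World()
e = world.entity()
world.set(e, "Position", [0.0, 0.0])
world.set(e, "Velocity", [1.0, 2.0])

# A "movement system": iterate the query and integrate velocity.
for _, pos, vel in world.query("Position", "Velocity"):
    pos[0] += vel[0]
    pos[1] += vel[1]

print(world.stores["Position"][e])  # [1.0, 2.0]
```

The design point is that behavior attaches to component combinations rather than class hierarchies, which is what lets an ECS compose gameplay from reusable systems.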

      Release Highlights for Flecs v4.0:
      - Over 1700 new commits. “More than 1700 new commits got added since v3 with the repository now having upwards of 4700 commits in total.”
      - Closed and merged 400+ issues and PRs. “More than 400 issues and PRs submitted by dozens of community members got closed and merged.”
      - Discord community grew to over 2300 members; GitHub stars doubled from 2900 to 5800.
      - Test cases increased from 4400 to 8500, with test code growing from 130K to 240K lines.

      Adoption of Flecs:

      - Used by both small and large projects, including the highly anticipated game Hytale.
        - “Flecs provides the backbone of the Hytale Game Engine. Its flexibility has allowed us to build highly varied gameplay while supporting our vision for empowering Creators.”
      - Tempest Rising uses Flecs to manage high counts of units and spatial queries.
        - “We are using it [Flecs] mostly to leverage high count of units. Movement (forces / avoidance), collisions, systems that rely on spatial queries, some gameplay related stuff.”
      - Smaller games like Tome Tumble Tournament use Flecs for movement rules.

      Language Support and Community Contributions:

      - Flecs Rust binding released, actively developed and in alpha.
        - “An enormous amount of effort went into porting over all of the APIs, including relationships, and writing the documentation, examples, and tests.”
      - Flecs.NET (C#) has become the de facto C# binding.
        - “The binding closely mirrors the C++ API, and comes bundled with documentation, examples and tests.”

      New Features in v4.0:

      - Unified query API simplifies usage and enhances functionality.
        - “The filter, query, and rule implementations now have been unified into a single query API.”
      - Explorer v4 offers a revamped interface and new tools.
        - “The v4 explorer has a few new tricks up its sleeve, such as a utility to capture commands, editing multiple Flecs scripts at the same time, the ability to add & remove components, and new tools to inspect queries, systems and observers.”
      - Flecs Script for easy entity and component creation, with improved syntax and a faster template engine.
        - “Flecs Script got completely overhauled, with an improved syntax, more powerful APIs and a much faster template engine.”
      - Sparse components for stable component pointers and performance gains.
        - “Sparse components don’t move when entities are moved between archetypes. Besides being good for performance, this also means that Flecs now supports components that aren’t movable!”
      - Overhauled demos showcasing new features and enhanced graphics.
        - “The Tower Defense demo has been overhauled for v4 to better showcase Flecs features, while also quadrupling the scale of the scene!”
      - Improved inheritance model, now opt-in for better performance.
        - “When a prefab is instantiated in v4, components are by default copied to the instance.”
      - Member queries reduce overhead and simplify relationships.
        - “In v4 queries can directly query entity members as if they were relationship targets, which is like having relationships without the fragmentation!”
      - Flecs Remote API for connecting to Flecs applications remotely.
        - “The new Flecs Remote API includes a simpler JSON format, a new REST API with a cleaner design, and a new JavaScript library for the development of web clients that use Flecs data.”

      Documentation and Future Directions:

      - Improved and expanded documentation covering new features in depth.
        - “Several weeks of the v4 release cycle were spent on improving the documentation and making sure it’s up to date.”
      - Future updates to include reactivity frameworks, dense tree storage, dense/sparse tables, pluggable storages, and a node API.
        - “Reactivity... dense tree storage... dense/sparse tables... pluggable storages... node API...”

      Community Acknowledgment:

      - Special thanks to community members and sponsors who contributed to the development and support of Flecs v4.
        - “A special thanks to everyone that contributed PRs and helped with the development of Flecs v4 features.”

      This summary captures the key updates, features, and community efforts surrounding the release of Flecs v4.0, highlighting its impact and future direction.