- Last 7 days
-
academic.oup.com
-
but also as an unprecedented campaign of enslavement and plunder.
war intended for enslavement
-
Weltanschauungskrieg. The Osthee
ask danial
-
it became possible to enforce such brutal combat discipline on them without stirring any visible spirit of rebellion, let alone actual mutiny. On one level, it was easier to bear the officers' brutality by being allowed to act brutally toward others; on another, brutal enforcement of will came to be seen as the norm;
cycle of brutality and a changing world view
-
Verwilderung
ask danial
-
atrocities have often also had a powerful unifying effect on the perpetrators.
cohesion in atrocities
-
Conversely, as compared with previous campaigns, soldiers on the Eastern Front became the target of an ever harsher policy of punishment for breaches of discipline related to actual combat activity, as the dramatic rise in long prison terms and executions demonstrates.
more punishment for battle offenses
-
In the Soviet Union, however, we no longer hear of soldiers being tried, let alone executed, for acts of violence and plunder against Soviet citizens. Indeed, according to the “Barbarossa” decree, such prosecution was legally possible only if it was shown that by committing these offenses a soldier had simultaneously breached military discipline
no longer a crime to do horrible things to civilians
-
camouflaging brutalities behind a series of euphemisms and pseudolegal terms. Ultimately, the army reverted to the crudest moral code of war, according to which everything which ensured one's survival was permitted (and thus considered moral), and everything even remotely suspect of threatening it must be destroyed (and was by definition immoral).
brutality working its way into the system itself
-
e generals reminded them of the image and honor of German arms, which their own orders had done so much to besmirch.
losing honor
-
By legalizing murder, robbery, torture, and destruction, these instructions put the moral basis of martial law, and thereby of military discipline, on its head. The army did not simply pretend not to notice the criminal actions of the regime, it positively ordered its own troops to carry them out, and was distressed when breaches of discipline prevented their more efficient execution.
ordering what would ordinarily be crimes
-
large soldiers who committed offenses against members of the civilian population were brought to justice and severely punished, on the other.29Close There were only two, though highly significant exceptions: those perceived as the Reich's political enemies, be they former German citizens, foreign political opponents, or resistance fighters, and those labelled as the German Volk's biological enemies, and especially Jews, were treated not only by the SS, but also by the Wehrmacht, in an entirely different manner and could not expect any legal protection. Here military discipline showed its capacity not only to prevent crimes, but also to legalize them.
no legal protection for "enemies"
-
as sexual attacks on women, children, and even intercourse with animals.
sexual frustration and masculinity
-
The corrupting effects of occupation, the example given by the SS in Poland, and the highly ideological context in which the war was being fought, all made it extremely difficult to control the troops
ideology + bad example make it very hard to control the troops
-
present conditions is the boundless brutalization and the moral depravity which in the shortest time will spread like an epidemic among the best German human material.
brutality as habit and contagion
-
Discipline was thus increasingly becoming a political issue. To be sure, long before the outbreak of war expressions of political criticism were a punishable offense in the army
politics of discipline
-
Anschauung
ask danial
-
that the troops should expect to be confronted with “inner enmity” from all civilians who are not “members of the German race.” Moreover, he went on to say, “The behavior toward Jews needs no special mention for the soldiers of the National Socialist Reich.
not saying the quiet part about jews out loud
-
. Moreover, the officer's wife too must not only be a good spouse to him and a devoted mother to his children, but should also assist him in his professional capacity by serving as an example to the rank and file.9
gender
-
the troops were rarely punished for unauthorized crimes against the enemy, both because of their commanders' underlying sympathy with such actions, and because they constituted a convenient safety valve for venting the men's anger and frustration caused by the rigid discipline demanded from the men and by the increasingly heavy cost and hopelessness of the war.
war crimes as a release for frustration
-
he new Wehrmacht was supposed to do away with the social barriers between officers and men, on the one hand, and to demand “blind” obedience and unquestioning loyalty from the troops, on the other.
blind obedience to officers and Führer
-
-
people.com
-
Smith would ask questions about his body and his transition that "hinted at being ... sexual, a little inappropriate," he said in Bad Influence.
Topic I can mention as bad side effects of channels
-
In Bad Influence: The Dark Side of Kidfluencing, Netflix sheds light on what many described as Tiffany Smith's abuse of underage content creators, who once collaborated with the single mom and her daughter, Piper Rockelle.
Briefly previews what the author will discuss through the rest of the article, possibly her main claim? (Not sure it would count as a claim, though.)
-
-
openpress.usask.ca
-
If you are geographically separated from your prospective employer, you may be invited to participate in a phone interview or web conference interview, instead of meeting face-to-face. Technology, of course, is a good way to bridge distances. The fact that you’re not there in person doesn’t make it any less important to be fully prepared, though. In fact, you may wish to be all the more “on your toes” to compensate for the distance barrier. Make sure your phone or computer is fully charged and your internet works (if possible, use an ethernet connection instead of wifi). If you’re at home for the interview, make sure the environment is quiet and distraction-free. If the interview is via web conference, try to make your background neat and tidy (ideally, the background should be a plain wall, but that isn’t always possible). Avoid using a simulated background, as they often look fake and the employer may feel that you are trying to hide something.
That’s a great reminder—technology definitely helps bridge the gap, but it’s true that virtual interviews require just as much, if not more, preparation. I always try to test everything at least 30 minutes beforehand to avoid any last-minute issues.
Have you ever had a virtual interview where something unexpected happened? How did you handle it?
-
-
viewer.athenadocs.nl
-
While it does not provide certainty like deduction
lacks explanatory power
-
-
-
Father of Parkland victim says: 'America is broken
This parent is not related to this shooting at all but has a similar experience and is sharing his sentiment with the public.
-
"We need to do something about it. It just can't keep happening. Like, no one should just think that they're going to school in a normal day, then ... their lives are in danger because there are not enough regulations in place."
A student vocalizing that it is common and shouldn't be adds to the narrative.
-
Reid Seybold
This is the third student to speak on the shooter's political views. The one who was shocked about his behavior also experienced Ikner in the same environment and group as this student.
-
Gov. DeSantis
He is the governor of Florida and a member of the Republican Party.
-
He would "go up to the line" in the meeting and then cross the line in comments made after the fact, Pusins said.
This student contradicts the student in the article above, stating that Ikner had a strong political opinion that was too far.
-
'shocked
This gives the perception that it is not Ikner who is responsible for his actions, but that other factors led him to this act. While some say his political views were strong and radical, this student believed he was normal and nice.
-
We are all Seminoles today
This is said, but what is actually being done to stand in community and prevent it from happening again?
-
12:28 PM EDT
The first message was written just 40 minutes after the school shooting started, and it comes in response to an X post by FSU.
-
-
www.kqed.org
-
Samuel Alito,
Would not have called this.
-
The other six justices all graduated from private Catholic schools.
that is fascinating
-
While she has left her job as an oral surgeon to home-school her daughter, she notes that many parents can’t do that and can’t afford private school
that doesn't mean you get to then discriminate against everyone else.
-
“These are topics that I’m just not ready to start on down the road with her.”
that sounds like a you problem and not a kid problem
-
On the other side is the notion that public schools should accommodate religious objections to some materials by allowing parents to opt their kids out of some classes.
Isn't this already a thing?
-
-
Local file
-
Match
clear-cut – sharp, well-defined (lit. "cleanly cut")
far-fetched – contrived, improbable
far-reaching – extensive, large-scale
hard-wired – innate, preprogrammed (as in psychology/biology)
life-long – lasting a lifetime
long-held – long-established (of an opinion or tradition)
time-honoured – hallowed by time, traditional
upcoming – forthcoming, approaching
widely-held – widespread, commonly accepted
-
glacialthaw
melting, thaw
-
endurance
feats of endurance
-
intervening
subsequent, in between
-
perspire
to sweat
-
shunts
to divert, shunt away
-
elevate
to raise, elevate
-
anticipatory thermogenesis',
A physiological process in which the body begins producing heat in advance, in anticipation of cold exposure.
-
baffling
to puzzle, baffle
-
ferocious
ferocious, terrible
-
excruciating
unbearable, excruciating
-
sterner
made of sterner stuff
-
endurance
extreme endurance swims
-
compelling evidence
compelling evidence
-
feat
achievement, feat
-
ever endured by someone who lived to tell the tale.
endured by a person who survived to tell of it
-
-
bookshelf.vitalsource.com
-
Schools do not create violence; in most cases, violence spills into schools from the surrounding society. In the wake of a number of school shootings in recent decades, many school districts have adopted zero-tolerance policies that require suspension or expulsion for serious misbehavior or bringing weapons on campus.
As I read this passage, a few thoughts came to my mind. Schools are often seen as isolated environments, but are deeply connected to their communities. When violence enters a school, it's rarely born within those walls; it usually reflects larger issues in society. While zero-tolerance policies aim to keep students safe, they often address the symptoms rather than the root causes. To create truly safe learning spaces, we must look beyond disciplinary actions and work toward building stronger, more supportive communities outside of school.
-
Schooling is shaped not just by patterns of inequality but also patterns of culture. In the United States, the educational system stresses the value of practical learning, knowledge that prepares people for their future jobs. This is in line with what the educational philosopher John Dewey (1859–1952) called progressive education, having the schools make learning relevant to people’s lives. Students seek out subjects of study that they believe will give them an advantage when they are ready to compete in the job market.
Do you think focusing so much on job skills in school leaves out other important things we should learn?
-
To most people, it may seem that family life is important, but so many other things—getting a good education, establishing a career and, in one way or another, making the world a better place—matter more. In fact, just 34 percent of respondents endorsed the first statement, and 64 percent endorsed the second. As we have seen, the rate of marriage in the United States recently hit a fifty-year low, and people who do marry are now doing so considerably later in life.
Interestingly, most people believe it has to be one or the other. You can want to be fully present for your family, to give them everything you've got, and at the same time want to chase your own goals, get a solid education, and build a meaningful career. It's not selfish to enjoy both. Pursuing your growth can make you an even better partner, parent, or role model. Life isn’t always about choosing one path; it’s often about finding a way to walk both. It can't be so black and white in discussions like these.
-
Cultural norms—and often laws—identify people as suitable or unsuitable marriage partners. Some marital norms promote endogamy, marriage between people of the same social category. Endogamy limits potential partners to people who share some trait that society considers important, including being of the same religion, social class, village, race, or age.
After reading this passage, a question that comes to my mind is: How do endogamous marriage norms evolve or weaken in multicultural societies where social boundaries are more fluid?
-
-
www.americanyawp.com
-
Truman, on March 12, 1947, announced $400 million in aid to Greece and Turkey, where “terrorist activities . . . led by Communists” jeopardized “democratic” governance.
-
-
www.linkedin.com
-
built through signal, relationship, and aligned action.
signal relationship and aligned action
-
We need to offer the world an alternative to doomscrolling and despair.
say no to doomscrolling, yes
-
silent for the past four years
4 years
-
-
www.biorxiv.org
-
To counteract this, we retroreflected the illumination line as shown in Figure 1C & D, such that the flowing sample was illuminated (pushed) from both sides.
Instead of adding the second beam to counteract the force of the light sheet, is it possible to add a 3D sheath flow that pushes cells toward the bottom of the channel? This paper explains this concept https://doi.org/10.1063/5.0033291
-
-
www.linkedin.com
-
clarity, consonance, coherence
clarity consonance coherence
vs
consistent complete
-
navigating the inner terrain of long-game work.
long game work
-
Turning complex visions into grounded, fundable action
vision to fundable action
-
navigate complexity without losing the thread of what matters.
without losing the thread of what matters
-
Making the Impossible Inevitable
-
-
www.technometria.com
-
Establishing First Person Digital Trust
-
-
blogs.ubc.ca
-
The Stauffer Life is a YouTube channel created in 2013,
Their form of evidence/reasoning to support their claims (a popular family YouTube channel with positive outcomes)
-
V
Start of conclusion; wrapping up main ideas and restating their claims
-
Vlogging allows audience from all around the world to witness the Stauffer family’s life journey, the audience may feel as if they know the family personally by following up their new uploads, and become part of their journey through online interactions.
Restating claims
-
Not only vlogging provides families an easily accessible way for life narrating, it also allows the family to gain recognition by interacting with enormously large number of audience.
Another claim: Vlogging can be helpful in providing a communicable space for not only the families creating the videos, but also viewers.
-
The Stauffer couple make part of their household income from vlogging.
Another reason vlogging can be important: it can provide an additional source of income for families.
-
it has become a new form of life narrative that allows families to quickly engage with audience from all around the world, and invite the “viewers” to witness meaningful moments and significant events in their lives.
Main claim: Saying how society can benefit from family YouTube channels because the viewers can experience new cultures and engage with people from around the world
-
-
social-media-ethics-automation.github.io
-
Emma Bowman. After Data Breach Exposes 530 Million, Facebook Says It Will Not Notify Users. NPR, April 2021. URL: https://www.npr.org/2021/04/09/986005820/after-data-breach-exposes-530-million-facebook-says-it-will-not-notify-users (visited on 2023-12-06).
This article discusses how hackers breached Facebook's database and posted 530 million users' information, including names and phone numbers, on a public server. Facebook responded by stating that it had no intention of notifying users individually that their data was breached, and instead made a blog post discussing the hack. This is an example of a social media company using unethical means to retain users it knows it would lose had it notified them.
-
-
social-media-ethics-automation.github.io
-
What incentives do social media companies have to protect privacy? What incentives do social media companies have to violate privacy?
Social media companies' incentives to protect user privacy include user satisfaction and security. By ensuring users' data is safe and sound, more people are likely to store their information on that platform, using it more often, posting to it more often, and saving things on it more often. Social media companies also have incentives to violate privacy; for example, if they have access to users' likes, dislikes, interests, etc., then they can advertise products or show content to that user catered specifically to their interests. This increases user engagement on the platform.
-
-
docdrop.org
-
I remember wearing my older brother David's suit for my senior pictures. It hung on me like a droopy Halloween king-sized ghost sheet. It was obvious that it was a borrowed suit of clothing.
It is very sad how in school many people are surrounded by wealthy families and are constantly reminded of their own financial situation by recurring school traditions.
-
Christmas was no better. I knew that our teacher would open her gifts in front of everyone. How could my hand-drawn picture of a snowman hold up against Crystal's store-bought sweater or the fancy bottle of perfume from Lois? Sometimes I would be "sick" on the day we had to bring our favorite holiday gift to school for show-and-tell.
Although a magical time of year, Christmas can be very tough for families who are struggling and often brings even more attention to those with less money.
-
-
Local file
-
September 23, 2019, was a very impactful day that shed light on global climate change and had people changing their entire perspective on planet earth
Cool way to start with an example to get the audience's attention.
-
-
social-media-ethics-automation.github.io
-
A teenager on TikTok disrupted thousands of scientific studies with a single video – The Verge [h22]
We often talk about the “era of big data” as if we can understand everything with data, but if the data itself is wrong, biased, or even maliciously manipulated, does it make sense to draw conclusions based on that data? This makes me question how much of the data analysis I usually see.
-
Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people can get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out with mostly one narrow demographic, preventing many of the datasets from being used as intended.
People often talk about datasets for social media apps and other issues (social, environmental, political...). After reading this, I wonder whether there are more situations than we realize, beyond this TikTok video, in which datasets get poisoned. And why isn't this something we address more often when discussing the flaws of data collection in our analyses?
-
-
superintelligence.gladstone.ai
-
for - report - America's Superintelligence Project - definition - ASI - Artificial Super Intelligence
summary - What is the cost of mistrust between nation states? - The mistrust between the US and China is reaching an all-time high and it has disastrous consequences for an AI arms race - It is driving each country to move fast and break things, which will become an existential threat to all humanity - Deep Humanity, with an important dimension of progress traps, can help us navigate ASI
-
A domestic superintelligence project would have a huge energy footprint that we wouldn’t be able to conceal from adversaries
for - ASI project - energy consumption - gWatt
-
we may be tempted to conduct our own activities aimed at introducing trojans or backdoors into adversary models. This could end up being necessary, but it could also trigger dangerous loss of control behaviors and runaway escalation.
for - progress trap - ASI - introducing trojan and backdoors in adversary ASI - can backfire
-
AI containment
for - definition - AI containment - progress trap - AI containment
-
former Mossad cyber operative warned, “The worst thing that could happen is that the U.S. develops an AI superweapon, and China or Russia have a trojan/backdoor inside the superintelligent model's weights because e.g. they had read/write access to the training data pipelines
for - ASI scenario - adversary with trojan backdoor access
-
example
for - example - AI unpredicted behavior
-
AI alignment researchers estimate less than a 10% chance that we lose control of superintelligent AI once it’s built. More typical estimates range from 10-80%, depending on who you ask.
for - stats - chances of losing control of ASI - 10 to 80%
-
highly capable and context-aware AI systems can invent dangerously creative strategies to achieve their internal goals that their developers never anticipated or intended them to pursue.
for - progress trap - ASI - progress trap - AGI
-
as we get closer to superintelligence, it will be seen more and more as an enabler and driver of weapon of mass destruction (WMD) capabilities, if not as a WMD in and of itself. Direct calls for a “Manhattan Project for AGI” are already starting.
for - quote - AGI - Weapon of Mass Destruction
quote - As we get closer to superintelligence, - it will be seen more and more as an enabler and driver of - weapon of mass destruction (WMD) capabilities, - if not as a WMD in and of itself. - Direct calls for a “Manhattan Project for AGI” are already starting.
-
To this day, if you know the right people, the Silicon Valley gossip mill is a surprisingly reliable source of information if you want to anticipate the next beat in frontier AI – and that’s a problem. You can’t have your most critical national security technology built in labs that are almost certainly CCP-penetrated
for - high security risk - US AI labs
-
AI is already augmenting important parts of the AI research process itself, and that will only accelerate
for - quote - AI - AI is accelerating AI research itself
-
at any given time, the CCP may have a better idea of what OpenAI’s frontier advances look like than the U.S. government does.
for - AI - Chinese know more than US government about latest US frontier AI research
-
Many in Silicon Valley believe we're less than a year away from AI that can automate most software engineering work
for - progress trap - AGI - one year away from automating software work
-
an AI-powered denial capability is useless if it behaves unpredictably, and if it executes on its instructions in ways that have undesired, high-consequence side-effects.
for - progress trap - AGI
-
The "move fast and break things" ethos of Silicon Valley is incompatible with the security demands of superintelligence
for - progress trap - AGI - Silicon Valley move fast and break things strategy - incompatible with security of AGI
Tags
- definition - AI containment
- ASI project - energy consumption - gWatt
- progress trap - AI containment
- Deep Humanity - progress traps - for navigating ASI
- Artificial Super Intelligence
- progress trap - AGI - Silicon Valley move fast and break things strategy - incompatible with security of AGI
- report - America's Superintelligence Project
- progress trap - ASI
- progress trap - AGI
- quote - AGI - Weapon of Mass Destruction
- ASI scenario - adversary with trojan backdoor access
- stats - chances of losing control of ASI - 10 to 80%
- progress trap - ASI - introducing trojan and backdoors in adversary ASI - can backfire
- AI arms race - US and China
- high security risk - US AI labs
- quote - AI - AI is accelerating AI research itself
- AI - Chinese know more than US government about latest US frontier AI research
- example - AI unpredicted behavior
- definition - ASI
- progress trap - AGI - one year away from automating software work
Annotators
URL
-
-
civileats.com
-
Gates Foundation, telling them exactly how to do it. . . . And with so much money on the table and African governments completely strapped for cash and investment, it captured not just the narrative, but the policy space.”
The Gates Foundation effectively monopolized this space. African governments didn't have many options, and Gates had the money.
-
Increasingly, global organizations see those trends and point to agroecology as the best solution.
Despite working for over 15 years and spending millions of dollars, AGRA was unable to meet its own goals, and there is no evidence that its tactics have helped reduce poverty or food insecurity. Agroecology is the best solution for this.
-
“the money that’s being spent [on agriculture in Africa] is not going to the right kinds of food systems.”
Millions of dollars being spent to address the hunger crisis in Africa is being funneled into damaging food systems.
-
Research like that is causing a shifting paradigm in many global development agencies, including the United Nations Food and Agriculture Organization (FAO), toward an emphasis on agroecology, especially as a strategy for confronting the climate crisis
Agroecology practices could be the solution to food sovereignty for places like Africa. Growing food with just increased productivity in mind does not account for the environmental damages that come with industrial systems. Agroecology practices address these issues and allows communities to have control over feeding themselves.
-
But critics say this approach to food production relies on expensive inputs, and that a lack of attention to environmental impact has gradually limited its successes.
Tactics for high yield growth like hybrid seeds and chemical fertilizers are undermined by their expensive costs and environmental impacts.
-
two new reports
The author discusses two reports about the failures of these modern green revolutions, for which there is no evidence that they are working.
-
still over 50 percent, and the prevalence of moderate or severe food insecurity has increased slightly, to 82 percent of the population
despite all this money people are still poor and hungry
-
increase productivity in their fields, using the same monocropping techniques embraced by commodity corn and soy growers in the U.S.
How could we address the problems of hunger without these problems of ecological and cultural damage?
-
-
en.wikipedia.org
-
we obtain the expression for terminal speed of a spherical object moving under creeping flow conditions
what is d in this equation?
the characteristic length scale, i.e., the diameter of the object
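For reference, the creeping-flow result being discussed can be sketched as follows. The symbol labels here (particle density, fluid density, dynamic viscosity) are my own, not taken from the annotated page:

```latex
% Balance Stokes drag on a sphere of diameter d against its net weight:
%   3\pi \mu d \, v_t = \tfrac{\pi}{6} d^{3} (\rho_p - \rho_f)\, g
% Solving for the terminal speed:
v_t = \frac{(\rho_p - \rho_f)\, g\, d^{2}}{18\,\mu}
% d: sphere diameter (the characteristic length scale in the question),
% \mu: fluid dynamic viscosity, \rho_p, \rho_f: particle and fluid densities.
% Valid only for creeping flow, i.e. Reynolds number Re << 1.
```

So the d asked about is the sphere's diameter; some texts instead write the same result with the radius r = d/2, which changes the prefactor from 1/18 to 2/9.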
-
-
social-media-ethics-automation.github.io
-
Nicole Nguyen. Here's Who Facebook Thinks You Really Are. September 2016. Section: Tech. URL: https://www.buzzfeednews.com/article/nicolenguyen/facebook-ad-preferences-pretty-accurate-tbh (visited on 2024-01-30).
I remember when the Cambridge Analytica scandal blew up, but now algorithms are everywhere. I wonder why algorithms and custom-tailored ads are now seen as neutral or accepted rather than a privacy issue like before?
-
Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-
The Vox article about Facebook collecting data on people who don’t even have accounts was honestly unsettling. I knew they collected a lot, but I didn’t realize it could go that far. It made me think about how privacy online doesn’t really exist anymore. Even choosing not to sign up for something doesn’t mean your data is safe. This reminded me of the section in the chapter that talked about how users are not always in control of their own information. It feels like no matter what you do, your actions online are still being watched and used for something.
-
Nicole Nguyen. Here's Who Facebook Thinks You Really Are. September 2016. Section: Tech. URL: https://www.buzzfeednews.com/article/nicolenguyen/facebook-ad-preferences-pretty-accurate-tbh (visited on 2024-01-30).
The article by Nicole Nguyen explores how Facebook's ad preferences system assigns interest categories to users based on their online behavior. I'm a little surprised by this, as programs such as Facebook Pixel pose a privacy concern to me since they track you on other websites, highlighting the extent Facebook monitors user activity to tailor advertisements.
-
Web tracking. October 2023. Page Version ID: 1181294364. URL: https://en.wikipedia.org/w/index.php?title=Web_tracking&oldid=1181294364 (visited on 2023-12-05).
The Wikipedia entry on web tracking (2023) is quite an eye-opener about the extent to which websites and third parties are able to monitor user activity via cookies, fingerprinting, and even invisible pixel tags. One feature that jumped out at me was fingerprinting, which can recognize users solely by their browser settings, without even using cookies. This completely upended my assumption that using private browsing modes or deleting cookies ensures anonymity online. It confirms how persistent and stealthy web tracking can be, and why more robust digital privacy measures are essential.
-
Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (visited on 2023-12-05).
One source that particularly caught my interest was Kurt Wagner's article in Vox, entitled "How Facebook collects your data even without an account." The article explores the concept of shadow profiles, in which Facebook gathers data about individuals who have never signed up for its platform, using information shared by friends or trackers embedded on other websites. I find this truly alarming, as it shows that our consent is not needed for data collection and storage; our information can still be collected simply because we exist within another person's digital circle. I find it hard to believe privacy can ever truly exist in today's hyperconnected world. I think this poses a unique challenge to the concept of informed consent in tech: how can we truly consent if we're unaware that data is being collected on us? This highlights the need for increased transparency and user control, particularly for non-users who don't even have an option to opt out.
-
Web tracking. October 2023. Page Version ID: 1181294364. URL: https://en.wikipedia.org/w/index.php?title=Web_tracking&oldid=1181294364 (visited on 2023-12-05).
This Wikipedia article discusses web tracking, the practice of collecting users' online behavior through tools like cookies and IP addresses, often via third parties. The data is used for targeted advertising and improving user experience. While this raises privacy concerns, users can reduce tracking by using browser extensions and VPNs.
-
Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (visited on 2023-12-05).
This article discusses how Facebook collects data on both users and non-users. Facebook gathers data on non-users when, for example, users share posts that non-users then view (capturing their browsing history). It is insane that there is no way to opt out of sharing your data with Facebook; the app keeps data from its users for 90 days and from non-users for 10 days. I believe this is an invasion of privacy, since many people, especially non-users, do not even know that their information is being collected by the app.
-
Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (visited on 2023-12-05).
This article discusses how Facebook collects data on its users and also tracks non-users (through phone contacts, likes, etc.). It also looks into how Facebook stores this data: although the maximum retention is 90 days, because the user is always on the app it constantly receives updated data to draw on. The user is usually not explicitly aware that this data collection is happening.
-
Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (visited on 2023-12-05).
I recall reading this piece years ago and being appalled. That Facebook can create shadow profiles of individuals who have never signed up was a real shift for me in terms of how I think about digital privacy. I think companies should be more transparent about this sort of data collection and give users more control over it, even if they’re not technically part of the platform.
-
Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL
I think it's a bit scary how easily Facebook can collect your data, especially if you aren't even on the platform. How is this even legal?
-
Karen Hao. How to poison the data that Big Tech uses to surveil you. MIT Technology Review, March 2021. URL: https://www.technologyreview.com/2021/03/05/1020376/resist-big-tech-surveillance-data/ (visited on 2023-12-05).
This source describes three methods researchers have suggested people use to keep tech companies in check: data strikes, data poisoning, and conscious data contribution. The first entails deleting your data from a site or platform; the second uses secondary applications to misdirect advertising algorithms; and the third provides authentic data to a company's competitor.
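Of the three methods, data poisoning is the most mechanical, and can be sketched in a few lines (the category list and function are invented for illustration, not from the article): mixing random decoy events into a real browsing history dilutes the signal a profiler can extract.

```python
import random

# Hypothetical interest categories an ad profiler might track.
CATEGORIES = ["shoes", "travel", "gambling", "cooking", "politics", "fitness"]

def poison_history(real_events, noise_ratio=1.0, rng=None):
    """Mix random decoy events into a real browsing history. With
    noise_ratio=1.0 there is one fake event per real one, diluting
    the signal an advertising model can extract from the stream."""
    rng = rng or random.Random(0)
    n_fakes = int(len(real_events) * noise_ratio)
    mixed = [("real", e) for e in real_events]
    mixed += [("decoy", rng.choice(CATEGORIES)) for _ in range(n_fakes)]
    rng.shuffle(mixed)
    return mixed

history = poison_history(["shoes", "travel", "cooking"])
```

Tools like the AdNauseam browser extension apply this idea in practice by clicking ads indiscriminately in the background.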
-
Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (visited on 2023-12-05).
I was shocked at how little control people have over their own online identity with regard to Facebook. That my information could end up on their servers because someone who knows me uploaded their contact list is a violation I never agreed to. What's more frustrating is the lack of transparency: even when Facebook claims to be deleting data, it is most likely collecting new data at the same time. It makes me question how much control we truly have on the internet, and reminds me that opting out of a platform is not the same as opting out of its reach.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
What was accurate, inaccurate, or surprising about your ad profile? How comfortable are you with Google knowing (whether correctly or not) those things about you?
Looking through my ad profile made me realize how much of my online activity is being tracked even when I forget about it. Some of the topics made sense and lined up with things I actually searched, but other parts felt completely random. What surprised me most was how confident the profile felt, even though some of it was way off. It made me think about how much companies like Google are assuming about me, and how little I really know about what data they collect or how they’re using it. I don’t think I’m comfortable with it, even if I already expected it. It just feels strange to be watched that closely, especially when it’s not always accurate.
-
After looking at your ad profile, ask yourself the following: What was accurate, inaccurate, or surprising about your ad profile? How comfortable are you with Google knowing (whether correctly or not) those things about you?
Honestly, I don't think it is very accurate. While I do get ads for products I have viewed or bought in the past, a good 75% of my ads are irrelevant or of zero interest to me. I am personally fine with Google knowing about these patterns and preferences, as it is fairly common knowledge that they track your searches. However, I do wish we could get more transparency on how the data is used.
-
Try this yourself and see what Google thinks of you!
I was really surprised to see that Google thinks I'm between 45 and 54 years old. I wonder what made them think I was this old.
-
-
subnautica.fandom.com subnautica.fandom.com
-
Fauna
-
-
boffosocko.com boffosocko.com
-
-
-
www.youtube.com www.youtube.com
-
19:20 optimism. gullibility. "someone will surely save us"
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Platforms also collect information on how users interact with the site. They might collect information like (they don’t necessarily collect all this, but they might):
Reading that platforms monitor even the little things, what people pause over, for example, feels intrusive to me personally. I get that there's a business justification, but as a user I feel like I'm under surveillance all the time. And I wonder just how much of what I do online is actually "my" behavior, and how much is influenced by what platforms assume I'm interested in seeing. It relates to what we discussed previously regarding algorithmic bias: these algorithms don't just monitor us, they shape us, sometimes without our knowing.
-
Online advertisers can see what pages their ads are being requested on, and track users [h1] across those sites. So, if an advertiser sees their ad is being displayed on an Amazon page for shoes, then the advertiser can start showing shoe ads to that same user when they go to another website.
Obviously, this behavior is very annoying when all kinds of ads follow you from site to site. Maybe I just want to use social media, but ads are everywhere.
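The cross-site tracking described in the quoted passage can be modeled as a toy sketch (the class, IDs, and page names are hypothetical, not a real ad network's code): a single third-party cookie links visits across unrelated sites.

```python
class AdNetwork:
    """Toy third-party ad network: one cookie ID, seen on every page
    that embeds its ads, links visits across unrelated sites."""

    def __init__(self):
        self.profiles = {}   # cookie_id -> set of pages seen
        self._next_id = 0

    def serve_ad(self, cookie_id, page):
        if cookie_id is None:                  # first visit anywhere
            cookie_id = f"uid-{self._next_id}"
            self._next_id += 1
        self.profiles.setdefault(cookie_id, set()).add(page)
        return cookie_id                       # browser stores this cookie

network = AdNetwork()
cookie = network.serve_ad(None, "amazon.com/shoes")        # shoe page
cookie = network.serve_ad(cookie, "news-site.com/article") # other site
# The network's single profile now ties both visits together, which is
# why shoe ads can follow a shopper onto an unrelated news site.
```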
-
Online advertisers can see what pages their ads are being requested on, and track users [h1] across those sites. So, if an advertiser sees their ad is being displayed on an Amazon page for shoes, then the advertiser can start showing shoe ads to that same user when they go to another website. Additionally, social media might collect information about non-users, such as when a user posts a picture of themselves with a friend who doesn’t have an account, or a user shares their phone contact list with a social media site, some of whom don’t have accounts (Facebook does this [h2]).
It's odd how these companies hyperfocus on their users, sort of cheating the information out of us for their own monetary benefit. It's also extremely manipulative in how it affects users' actions. So many people are addicted to social media, and it affects their mental health negatively, yet these companies aren't held accountable.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
2016 Trump campaign ‘target[ing] 3.5m black Americans to deter them from voting’ [h18].
This experience deeply disturbed me, as it highlights how data and targeted advertising can be misused to manipulate political behavior and suppress democratic participation. It caused me to reflect on how social media platforms, often presented as neutral tools, can become powerful agents of harm when left unregulated. We discussed algorithmic bias earlier in class, how systems designed for profit may unintentionally or intentionally perpetuate discrimination, which raises the question: should we have stronger regulations not just on what data is collected, but on how it is permitted to be used? Even when users give consent, they rarely understand all the consequences fully.
-
Social Media platforms use the data they collect on users and infer about users to increase their power and increase their profits. One of the main goals of social media sites is to increase the time users are spending on their social media sites. The more time users spend, the more money the site can get from ads, and also the more power and influence those social media sites have over those users. So social media sites use the data they collect to try and figure out what keeps people using their site, and what can they do to convince those users they need to open it again later. Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeting advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling, or the 2016 Trump campaign ‘target[ing] 3.5m black Americans to deter them from voting’ [h18].
To keep users on social media platforms longer, these platforms analyze the topics that receive the highest click-through rates from each user. They then push similar content that aligns with the user's interests to keep them engaged. Additionally, they place advertisements under certain posts based on this analysis, making the ads more relevant and increasing the likelihood that users will click on them.
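The click-through-rate ranking described above can be sketched as a toy model (the topics and numbers are made up for illustration, not any platform's actual algorithm):

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """Fraction of times a shown item was clicked (0.0 if never shown)."""
    return clicks / impressions if impressions else 0.0

def rank_feed(topics):
    """Order topics by the user's observed CTR, highest first, so the
    feed leads with whatever this user has clicked most readily."""
    return sorted(
        topics,
        key=lambda t: click_through_rate(t["clicks"], t["impressions"]),
        reverse=True,
    )

feed = rank_feed([
    {"topic": "sports",  "clicks": 5,  "impressions": 100},  # CTR 0.05
    {"topic": "cooking", "clicks": 30, "impressions": 120},  # CTR 0.25
    {"topic": "news",    "clicks": 2,  "impressions": 80},   # CTR 0.025
])
# "cooking" ranks first despite fewer total views than clicks elsewhere,
# because ranking rewards the click *rate*, not raw volume.
```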
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
For example, social media data about who you are friends with might be used to infer your sexual orientation [h9]. Social media data might also be used to infer people’s: Race Political leanings Interests Susceptibility to financial scams Being prone to addiction (e.g., gambling)
It's very true that for all of us, going online means both accessing the big database and making our own contribution to it. Big data pushes different content according to each person's preferences, and even different price points for products, like a press squeezing the last bit of value out of each person.
-
used to infer your sexual orientation [h9].
This part absolutely shocked me. The notion that platforms can infer deep things about us, such as political views, gender identity, or sexual orientation, simply by tracking what we click on or which people we follow feels maddeningly invasive. It had never occurred to me how predictive simple online activity could be. It really makes me wonder about our privacy online, and whether users actually realize what is happening behind the curtain with their data.
-
-
www.etymonline.com www.etymonline.com
-
fiction
early 15c., ficcioun, "that which is invented or imagined in the mind," from Old French ficcion "dissimulation, ruse; invention, fabrication" (13c.) and directly from Latin fictionem (nominative fictio) "a fashioning or feigning," noun of action from past participle stem of fingere "to shape, form, devise, feign," originally "to knead, form out of clay," from PIE root *dheigh- "to form, build."
-
-
www.etymonline.com www.etymonline.com
-
figment
"something invented or imagined, a myth or fable; deceitful practice; false doctrine," from Latin figmentum "something formed or fashioned, creation," related to figura "shape" (from PIE root *dheigh- "to form, build")
-
-
Local file Local file
-
By
conclusion
-
In
outcomes
-
This
describes their experiment
-
The
What is the ideal experiment? Find the marginal contribution of conditionality in a cash transfer program
-
The
debate over the relative merits of the two
-
The
Studies examining the impact of attaching conditions to cash transfer programs are limited
-
Conditional
why and where are CCTs and UCTs used
-
-
www.planalto.gov.br www.planalto.gov.br
-
Art. 455
OJ-SDI1-191
CONSTRUCTION CONTRACT (CONTRATO DE EMPREITADA). OWNER OF A CIVIL CONSTRUCTION PROJECT. LIABILITY (new wording) - Res. 175/2011, DEJT published on May 27, 30, and 31, 2011 - In the absence of a specific legal provision, a civil construction contract between the project owner and the contractor does not give rise to joint or subsidiary liability for the labor obligations incurred by the contractor, unless the project owner is a construction or real estate development company.
REPETITIVE REVIEW APPEAL INCIDENT. THEME No. 0006. CONSTRUCTION CONTRACT. PROJECT OWNER. LIABILITY. JURISPRUDENTIAL ORIENTATION No. 191 OF THE SbDI-1 OF THE TST VERSUS PRECEDENT (SÚMULA) No. 42 OF THE REGIONAL LABOR COURT OF THE THIRD REGION. 1. The exclusion of joint or subsidiary liability for labor obligations referred to in Jurisprudential Orientation No. 191 of the SbDI-1 of the TST is not restricted to natural persons or micro and small enterprises. It likewise covers medium and large companies and public entities. 2. The exceptional liability for labor obligations provided for in the final part of Jurisprudential Orientation No. 191 of the SbDI-1 of the TST, by analogous application of article 455 of the CLT, reaches cases in which the owner of the civil construction project is a builder or developer and therefore carries on the same economic activity as the contractor. 3. Regional Labor Court case law that broadens the project owner's labor liability, excepting only "natural persons or micro and small enterprises, as defined by law, that do not carry on an economic activity linked to the contracted object," is not compatible with the position adopted in Jurisprudential Orientation No. 191 of the SbDI-1 of the TST. 4. Except for public entities of the direct and indirect administration, if the labor obligations incurred by a contractor hired without economic and financial soundness are not met, the project owner will be subsidiarily liable for those obligations, by analogous application of art. 455 of the CLT and on grounds of culpa in eligendo.
-
two-year period (prazo de dois anos)
Intervening prescription (prescrição intercorrente) in labor proceedings is 2 years.
-
subsidiarily
The rule is subsidiary liability of the withdrawing partner, but only for a period of 2 years after the amendment of the articles of association.
-
jointly and severally
The withdrawing partner is jointly and severally liable if fraud is proven.
-
5 (five) days
-
The deadline for a motion to stay execution (embargos à execução) is the same as for a motion for clarification (embargos de declaração): 5 days, counted from the date the judgment debt is secured or assets are attached; the common 8-day deadline does not apply.
-
The 8-day deadline is the one set for the parties to respond to the liquidation ruling, on pain of preclusion of the right to file a motion to stay execution.
-
-
General Assembly specially convened for that purpose
The conclusion of a CCT or ACT depends on a resolution of the General Assembly, which must be convened specially for this purpose.
The voting quorum is: - For a CCT: attendance and vote of 2/3 of the members on first call; 1/3 on second call; - For an ACT: attendance and vote of 2/3 of the interested parties on first call; 1/3 on second call.
-
1/8 (one eighth)
If the union has more than 5,000 members, the first call must still gather more than 2/3 of the members.
The second call, however, may be limited to a quorum of 1/8 of the members.
-
§ 2o
- Collective agreements and conventions that, in view of negotiated sectoral adequacy, stipulate limitations on or waivers of labor rights are constitutional, regardless of an itemized specification of compensatory advantages, provided that absolutely non-waivable rights are respected. [ARE 1.121.633, rel. min. Gilmar Mendes, j. 2-6-2022, P, DJE de 28-4-2023, Tema 1.046, merits decided.]
-
art. 616, § 3º
Art. 616 - Unions representing economic or professional categories, and companies, including those without union representation, may not refuse to engage in collective bargaining when called upon.
- § 3 - Where a convention, agreement, or normative ruling is in force, the collective dispute must be filed within the 60 (sixty) days preceding its expiry date, so that the new instrument may take effect on the day immediately following that date.
-
Art. 859
The quorum for approving the representation to initiate proceedings is: - First call: 2/3 of the interested members; OR - Second call: 2/3 of those present.
-
written representation to the President of the Court
Proceedings in a collective dispute (dissídio coletivo) are initiated by a representation addressed to the President of the Court.
The representation may be filed by: - union associations; - the Labor Prosecution Office (Procuradoria da Justiça do Trabalho), always in the event of a work stoppage.
Proceedings may also be initiated at the initiative of the President of the Court himself.
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
This computational modeling study builds on multiple previous lines of experimental and theoretical research to investigate how a single neuron can solve a nonlinear pattern classification task. The study presents solid evidence that the location of synapses on dendritic branches, as well as synaptic plasticity of excitatory and inhibitory synapses, influences the ability of a neuron to discriminate combinations of sensory stimuli. The ideas in this work are very interesting, presenting an important direction in the computational neuroscience field about how to harness the computational power of "active dendrites" for solving learning tasks.
-
Reviewer #1 (Public review):
Summary:
This computational modeling study builds on multiple previous lines of experimental and theoretical research to investigate how a single neuron can solve a nonlinear pattern classification task. The authors construct a detailed biophysical and morphological model of a single striatal medium spiny neuron, and endow excitatory and inhibitory synapses with dynamic synaptic plasticity mechanisms that are sensitive to (1) the presence or absence of a dopamine reward signal, and (2) spatiotemporal coincidence of synaptic activity in single dendritic branches. The latter coincidence is detected by voltage-dependent NMDA-type glutamate receptors, which can generate a type of dendritic spike referred to as a "plateau potential." In the absence of inhibitory plasticity, the proposed mechanisms result in good performance on a nonlinear classification task when specific input features are segregated and clustered onto individual branches, but reduced performance when input features are randomly distributed across branches. Interestingly, adding inhibitory plasticity improves classification performance even when input features are randomly distributed.
Strengths:
The integrative aspect of this study is its major strength. It is challenging to relate low-level details such as electrical spine compartmentalization, extrasynaptic neurotransmitter concentrations, dendritic nonlinearities, spatial clustering of correlated inputs, and plasticity of excitatory and inhibitory synapses to high-level computations such as nonlinear feature classification. Due to high simulation costs, it is rare to see highly biophysical and morphological models used for learning studies that require repeated stimulus presentations over the course of a training procedure. The study aspires to prove the principle that experimentally-supported biological mechanisms can explain complex learning.
Weaknesses:
The high level of complexity of each component of the model makes it difficult to gain an intuition for which aspects of the model are essential for its performance, or responsible for its poor performance under certain conditions. Stripping down some of the biophysical detail and comparing it to a simpler model may help better understand each component in isolation.
-
Reviewer #2 (Public review):
Summary:
The study explores how single striatal projection neurons (SPNs) utilize dendritic nonlinearities to solve complex integration tasks. It introduces a calcium-based synaptic learning rule that incorporates local calcium dynamics and dopaminergic signals, along with metaplasticity to ensure stability of synaptic weights. Results show SPNs can solve the nonlinear feature binding problem and enhance computational efficiency through inhibitory plasticity in dendrites, emphasizing the significant computational potential of individual neurons. In summary, the study provides a more biologically plausible solution to single-neuron learning and gives further mechanistic insight into complex computations at the single-neuron level.
Strengths:
The paper introduces a novel learning rule for training a single multicompartmental neuron model to perform nonlinear feature binding tasks (NFBP), highlighting two main strengths: the learning rule is local, calcium-based, and requires only sparse reward signals, making it highly biologically plausible, and it applies to detailed neuron models that effectively preserve dendritic nonlinearities, contrasting with many previous studies that use simplified models.
-
Author response:
The following is the authors’ response to the original reviews
Public Reviews:
Reviewer #1 (Public Review):
Summary:
This computational modeling study builds on multiple previous lines of experimental and theoretical research to investigate how a single neuron can solve a nonlinear pattern classification task. The authors construct a detailed biophysical and morphological model of a single striatal medium spiny neuron, and endow excitatory and inhibitory synapses with dynamic synaptic plasticity mechanisms that are sensitive to (1) the presence or absence of a dopamine reward signal, and (2) spatiotemporal coincidence of synaptic activity in single dendritic branches. The latter coincidence is detected by voltage-dependent NMDA-type glutamate receptors, which can generate a type of dendritic spike referred to as a "plateau potential." The proposed mechanisms result in moderate performance on a nonlinear classification task when specific input features are segregated and clustered onto individual branches, but reduced performance when input features are randomly distributed across branches. Given the high level of complexity of all components of the model, it is not clear which features of which components are most important for its performance. There is also room for improvement in the narrative structure of the manuscript and the organization of concepts and data.
Strengths:
The integrative aspect of this study is its major strength. It is challenging to relate low-level details such as electrical spine compartmentalization, extrasynaptic neurotransmitter concentrations, dendritic nonlinearities, spatial clustering of correlated inputs, and plasticity of excitatory and inhibitory synapses to high-level computations such as nonlinear feature classification. Due to high simulation costs, it is rare to see highly biophysical and morphological models used for learning studies that require repeated stimulus presentations over the course of a training procedure. The study aspires to prove the principle that experimentally-supported biological mechanisms can explain complex learning.
Weaknesses:
The high level of complexity of each component of the model makes it difficult to gain an intuition for which aspects of the model are essential for its performance, or responsible for its poor performance under certain conditions. Stripping down some of the biophysical detail and comparing it to a simpler model may help better understand each component in isolation. That said, the fundamental concepts behind nonlinear feature binding in neurons with compartmentalized dendrites have been explored in previous work, so it is not clear how this study represents a significant conceptual advance. Finally, the presentation of the model, the motivation and justification of each design choice, and the interpretation of each result could be restructured for clarity to be better received by a wider audience.
Thank you for the feedback! We agree that the complexity of our model can make it challenging to intuitively understand the underlying mechanisms. To address this, we have revised the manuscript to include additional simulations and clearer explanations of the mechanisms at play.
In the revised introduction, we now explicitly state our primary aim: to assess to what extent a biophysically detailed neuron model can support the theory proposed by Tran-Van-Minh et al. and explore whether such computations can be learned by a single neuron, specifically a projection neuron in the striatum. To achieve this, we focus on several key mechanisms:
(1) A local learning rule: We develop a learning rule driven by local calcium dynamics in the synapse and by reward signals from the neuromodulator dopamine. This plasticity rule is based on the known synaptic machinery for triggering LTP or LTD in the corticostriatal synapse onto dSPNs (Shen et al., 2008). Importantly, the rule does not rely on supervised learning paradigms and neither is a separate training and testing phase needed.
(2) Robust dendritic nonlinearities: According to Tran-Van-Minh et al., (2015) sufficient supralinear integration is needed to ensure that e.g. two inputs (i.e. one feature combination in the NFBP, Figure 1A) on the same dendrite generate greater somatic depolarization than if those inputs were distributed across different dendrites. To accomplish this we generate sufficiently robust dendritic plateau potentials using the approach in Trpevski et al., (2023).
(3) Metaplasticity: Although not discussed much in more theoretical work, our study demonstrates the necessity of metaplasticity for achieving stable and physiologically realistic synaptic weights. This mechanism ensures that synaptic strengths remain within biologically plausible ranges during training, regardless of initial synaptic weights.
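As a toy illustration of points (1) and (3), and emphatically not the authors' actual model, a reward-gated calcium rule with a self-adjusting LTP threshold might be sketched as follows (all constants are invented for illustration):

```python
W_MIN, W_MAX = 0.0, 1.0

def plasticity_step(w, ca, dopamine, ltp_thresh):
    """One schematic update: calcium above ltp_thresh plus a dopamine
    reward drives LTP; moderate calcium without reward drives LTD.
    Soft bounds keep w inside [W_MIN, W_MAX]."""
    if dopamine and ca > ltp_thresh:
        w += 0.1 * (W_MAX - w)      # LTP, saturating toward W_MAX
    elif not dopamine and ca > 0.3:
        w -= 0.1 * (w - W_MIN)      # LTD, saturating toward W_MIN
    # Metaplasticity: stronger weights need more calcium for further
    # LTP, which stabilizes the weight distribution over training.
    ltp_thresh += 0.05 * (w - 0.5)
    return w, ltp_thresh

w, thresh = 0.5, 0.5
for _ in range(100):                 # repeated rewarded activations
    w, thresh = plasticity_step(w, ca=0.9, dopamine=True, ltp_thresh=thresh)
```

The key qualitative behavior is that repeated rewarded activation strengthens the synapse while raising its own LTP threshold, so weights converge rather than running away.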
We have also clarified our design choices and the rationale behind them, as well as restructured the interpretation of our results for greater accessibility. We hope these revisions make our approach and findings more transparent and easier to engage with for a broader audience.
Recommendations for the authors:
Reviewer #1 (Recommendations For The Authors):
This study extends three previous lines of work:
(1) Prior computational/phenomenological work has shown that the presence of dendritic nonlinearities can enable single neurons to perform linearly non-separable tasks like XOR and feature binding (e.g. Tran-Van-Minh et al., Front. Cell. Neurosci., 2015).
Prior computational and phenomenological work, such as Tran-Van-Minh et al. (Front. Cell. Neurosci., 2015), directly inspired our study, as we now explicitly state in the introduction (page 4, lines 19-22). While Tran-Van-Minh theoretically demonstrated that these principles could solve the NFBP, it remains untested to what extent this can be achieved quantitatively in biophysically detailed neuron models using biologically plausible learning rules - which is what we test here.
(2) This study and a previous biophysical modeling study (Trpevski et al., Front. Cell. Neurosci., 2023) rely heavily on the finding from Chalifoux & Carter, J. Neurosci., 2011 that blocking glutamate transporters with TBOA increases dendritic calcium signals. The proposed model thus depends on a specific biophysical mechanism for dendritic plateau potential generation, where spatiotemporally clustered inputs must be co-activated on a single branch, and the voltage compartmentalization of the branch and the voltage-dependence of NMDARs is not enough, but additionally glutamate spillover from neighboring synapses must activate extrasynaptic NMDARs. If this specific biophysical implementation of dendritic plateau potentials is essential to the findings in this study, the authors have not made that connection clear. If it is a simple threshold nonlinearity in dendrites that is important for the model, and not the specific underlying biophysical mechanisms, then the study does not appear to provide a conceptual advance over previous studies demonstrating nonlinear feature binding with simpler implementations of dendritic nonlinearities.
We appreciate the feedback on the hypothesized role of glutamate spillover in our model. While the current manuscript and Trpevski et al. (2023) emphasize glutamate spillover as a plausible biophysical mechanism for generating sufficiently robust and supralinear plateau potentials, we acknowledge that supralinear dendritic integration might not depend solely on this specific mechanism in other types of neurons. In Trpevski et al. (2023) we found, however, that if we allowed too "graded" dendritic plateaus, using the quite shallow Mg-block reported in experiments, it was difficult to solve the NFBP. The conceptual advance of our study lies in demonstrating that sufficiently nonlinear dendritic integration is needed, and that this can be accounted for by assuming spillover in SPNs; regardless of its biophysical source (e.g. NMDA spillover, steeper NMDA Mg-block activation curves, or other voltage-dependent conductances that cause supralinear dendritic integration), it enables biophysically detailed neurons to solve the nonlinear feature binding problem. To address this point and clarify the generality of our conclusions, we have revised the relevant sections in the manuscript to state this explicitly.
(3) Prior work has utilized "sliding-threshold," BCM-like plasticity rules to achieve neuronal selectivity and stability in synaptic weights. Other work has shown coordinated excitatory and inhibitory plasticity. The current manuscript combines "metaplasticity" at excitatory synapses with suppression of inhibitory strength onto strongly activated branches. This resembles the lateral inhibition scheme proposed by Olshausen (Christopher J. Rozell, Don H. Johnson, Richard G. Baraniuk, Bruno A. Olshausen; Sparse Coding via Thresholding and Local Competition in Neural Circuits. Neural Comput 2008; 20 (10): 2526-2563. doi: https://doi.org/10.1162/neco.2008.03-07-486). However, the complexity of the biophysical model makes it difficult to evaluate the relative importance of the additional complexity of the learning scheme.
We initially tried solving the NFBP with only excitatory plasticity, which worked reasonably well, especially if we assume a small population of neurons collaborates under physiological conditions. However, we observed that plateau potentials from distally located inputs were less effective, and we now explain this limitation in the revised manuscript (page 14, lines 23-37).
To address this, we added inhibitory plasticity inspired by mechanisms discussed in Castillo et al. (2011), Ravasenga et al., and Chapman et al. (2022), as now explicitly stated in the text (page 32, lines 23-26). While our GABA plasticity rule is speculative, it demonstrates that distal GABAergic plasticity can enhance nonlinear computations. These results are particularly encouraging, as they show that implementing these mechanisms at the single-neuron level produces behavior consistent with network-level models such as BCM-like plasticity rules and those proposed by Rozell et al. We hope this will inspire further experimental work on inhibitory plasticity mechanisms.
P2, paragraph 2: Grammar: "multiple dendritic regions, preferentially responsive to different input values or features, are known to form with close dendritic proximity." The meaning is not clear. "Dendritic regions" do not "form with close dendritic proximity."
Rewritten (current page 2, line 35)
P5, paragraph 3: Grammar: I think you mean "strengthened synapses" not "synapses strengthened".
Rewritten (current page 14, line 36)
P8, paragraph 1: Grammar: "equally often" not "equally much".
Updated (current page 10, line 2)
P8, paragraph 2: "This is because of the learning rule that successively slides the LTP NMDA Ca-dependent plasticity kernel over training." It is not clear what is meant by "sliding," either here or in the Methods. Please clarify.
We have updated the text and removed the word “sliding” throughout the manuscript to clarify that the calcium dependence of the kernels is in fact updated.
P10, Figure 3C (left): After reading the accompanying text on P8, para 2, I am left not understanding what makes the difference between the two groups of synapses that both encode "yellow," on the same dendritic branch (d1) (so both see the same plateau potentials and dopamine) but one potentiates and one depresses. Please clarify.
Some "yellow" and "banana" synapses are initialized with weak conductances, limiting their ability to learn due to the relatively slow dynamics of the LTP kernel. These weak synapses fail to reach the calcium thresholds necessary for potentiation during a dopamine peak, yet they remain susceptible to depression under LTD conditions. Initially, the dynamics of the LTP kernel does not allow significant potentiation, even in the presence of appropriate signals such as plateau potentials and dopamine (page 10, lines 22–26). We have added a more detailed explanation of how the learning rule operates in the section “Characterization of the Synaptic Plasticity Rule” on page 9 and have clarified the specific reason why the weaker yellow synapses undergo LTD (page 11, lines 1–7).
As shown in Supplementary Figure 6, during subthreshold learning, the initial conductance is also low, which similarly hinders the synapses' ability to potentiate. However, with sufficient dopamine, the LTP kernel adapts by shifting closer to the observed calcium levels, allowing these synapses to eventually strengthen. This dynamic highlights how the model enables initially weak synapses to "catch up" under consistent activation and favorable dopaminergic conditions.
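The “catch-up” dynamic described above can be sketched as follows (a hypothetical simplification with illustrative rate and sign conventions, not the manuscript's actual kernel equations): a dopamine peak pulls the LTP kernel's calcium midpoint toward the calcium level the synapse actually experienced, while a pause shifts it toward higher calcium.

```python
def update_kernel_midpoint(midpoint, ca_observed, dopamine, rate=0.1):
    """Toy metaplasticity update: a dopamine peak (dopamine > 0) slides the
    LTP kernel's midpoint toward the observed calcium level, so initially
    weak synapses can eventually potentiate; a pause shifts the midpoint
    toward higher calcium. Rate and sign conventions are illustrative."""
    if dopamine > 0:
        return midpoint + rate * (ca_observed - midpoint)
    return midpoint + rate
```

Under a rule of this shape, repeated dopamine peaks make a weak but consistently activated synapse progressively easier to potentiate.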
P9, paragraph 1: The phrase "the metaplasticity kernel" is introduced here without prior explanation or motivation for including this level of complexity in the model. Please set it up before you use it.
A sentence introducing metaplasticity has been added to the introduction (page 3, lines 36-42) as well as on page 9, where the kernel is introduced (page 9, lines 26-35)
P10, Figure 3D: "kernel midline" is not explained.
We have replotted Figure 3 to make it easier to understand what is shown. An explanation of the kernel midpoint has also been added to the legend (current page 12, line 19).
P11, paragraph 1; P13, Fig. 4C: My interpretation of these data is that clustered connectivity with specific branches is essential for the performance of the model. Randomly distributing input features onto branches (allowing all 4 features to innervate single branches) results in poor performance. This is bad, right? The model can't learn unless a specific pre-wiring is assumed. There is not much interpretation provided at this stage of the manuscript, just a flat description of the result. Tell the reader what you think the implications of this are here.
Thanks for the suggestion - we have updated this section of the manuscript, adding the interpretation that the model often fails to learn both relevant stimuli when all four features are clustered onto the same dendrite (page 13, lines 31-42).
In summary, when multiple feature combinations are encoded in the same dendrite with similar conductances, the ability to determine which combination to store depends on the dynamics of the other dendrite. Small variations in conductance, training order, or other stochastic factors can influence the outcome. This challenge, known as the symmetry-breaking problem, has been previously acknowledged in abstract neuron models (Legenstein and Maass, 2011). To address this, additional mechanisms such as branch plasticity—amplifying or attenuating the plateau potential as it propagates from the dendrite to the soma—can be employed (Legenstein and Maass, 2011).
P12, paragraph 2; P13, Figure 4E: This result seems suboptimal, that only synapses at a very specific distance from the soma can be used to effectively learn to solve a NFBP. It is not clear to what extent details of the biophysical and morphological model are contributing to this narrow distance-dependence, or whether it matches physiological data.
We have added Figure 5—figure supplement 1A to clarify why distal synapses may not optimally contribute to learning. This figure illustrates how inhibitory plasticity improves performance by reducing excessive LTD at distal dendrites, thereby enhancing stimulus discrimination. Relevant explanations have been integrated into Page 18, Lines 25-39 in the revised manuscript.
P14, paragraph 2: Now the authors are assuming that inhibitory synapses are highly tuned to stimulus features. The tuning of inhibitory cells in the hippocampus and cortex is controversial but seems generally weaker than excitatory cells, commensurate with their reduced number relative to excitatory cells. The model has accumulated a lot of assumptions at this point, many without strong experimental support, which again might make more sense when proposing a new theory, but this stitching together of complex mechanisms does not provide a strong intuition for whether the scheme is either biologically plausible or performant for a general class of problem.
We acknowledge that it is not currently known whether inhibitory synapses in the striatum are tuned to stimulus features. However, given that the striatum is a purely inhibitory structure, it is plausible that lateral inhibition from other projection neurons could be tuned to features, even if feedforward inhibition from interneurons is not. Therefore, we believe this assumption is reasonable in the context of our model. As noted earlier, the GABA plasticity rule in our study is speculative. However, we hope that our work will encourage further experimental investigations, as we demonstrate that if GABAergic inputs are sufficiently specific, they can significantly enhance computations (This is discussed on page 17, lines 8-15.).
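As a rough sketch of the speculative winner-takes-all GABA rule (our own illustrative formulation, not the manuscript's equations), inhibition could be depressed on the branch receiving the strongest excitatory drive and potentiated elsewhere:

```python
def update_inhibition(gaba_weights, exc_activity, lr=0.05):
    """Hypothetical winner-takes-all GABA plasticity: the branch with the
    strongest excitatory drive has its inhibitory weights depressed
    (disinhibiting the 'winning' dendrite), while inhibition on the other
    branches is potentiated. Purely illustrative."""
    winner = max(range(len(exc_activity)), key=lambda i: exc_activity[i])
    return [w * (1 - lr) if i == winner else w * (1 + lr)
            for i, w in enumerate(gaba_weights)]
```

If lateral inhibition from other projection neurons is indeed feature-tuned, a rule of this shape would sharpen branch specialization over training.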
P16, Figure 5E legend: The explanation of the meaning of T_max and T_min in the legend and text needs clarification.
The abbreviations T<sub>min</sub> and T<sub>max</sub> have been updated to CTL and CTH to better reflect their role in calcium threshold tracking. The Figure 5E legend and relevant text have been revised for clarity. Additionally, the Methods section has been reorganized for better readability.
P16, Figure 5B, C: When the reader reaches this point in the paper, the conundrums presented in Figure 4 are resolved. The "winner-takes-all" inhibitory plasticity both increases the performance when all features are presented to a single branch and increases the range of somatodendritic distances where synapses can effectively be used for stimulus discrimination. The problem, then, is in the narrative. A lot more setup needs to be provided for the question of whether dendritic nonlinearity and synaptic inhibition can be used to perform the NFBP. The authors may consider consolidating the results of Fig. 4 and 5 so that the comparison is made directly, rather than presenting them serially without much foreshadowing.
In order to facilitate readability, we have updated the following sections of the manuscript to clarify how inhibitory plasticity resolves challenges from Figure 4:
Figure 5B and Figure 5–figure supplement 1B: Two new panels illustrate the role of inhibitory plasticity in addressing symmetry problems.
Figure 5–figure supplement 1A: Shows how inhibitory plasticity extends the effective range of somatodendritic distances.
P18, Figure 6: This should be the most important figure, finally tying in all the previous complexity to show that NFBP can be partially solved with E and I plasticity even when features are distributed randomly across branches without clustering. However, now bringing in the comparison across spillover models is distracting and not necessary. Just show us the same plateau generation model used throughout the paper, with and without inhibition.
Figure updated. Accumulative spillover and no-spillover conditions have been removed.
P18, paragraph 2: "In Fig. 6C, we report that a subset of neurons (5 out of 31) successfully solved the NFBP." This study could be significantly strengthened if this phenomenon could (perhaps in parallel) be shown to occur in a simpler model with a simpler plateau generation mechanism. Furthermore, it could be significantly strengthened if the authors could show that, even if features are randomly distributed at initialization, a pruning mechanism could gradually transition the neuron into the state where fewer features are present on each branch, and the performance could approach the results presented in Figure 5 through dynamic connectivity.
Modeling structural plasticity is a good suggestion that should be investigated in future work; however, we feel it goes beyond what we can do in the current manuscript. We now acknowledge that structural plasticity might play a role. For example, we show that if we assume ‘branch-specific’ spillover, leading to sufficient development of local dendritic nonlinearities, one can also learn with distributed inputs. In reality, structural plasticity is likely important here, as we now state (current page 22, lines 35-42).
P17, paragraph 2: "As shown in Fig. 6B, adding the hypothetical nonlinearities to the model increases the performance towards solving part of the NFBP, i.e. learning to respond to one relevant feature combination only. The performance increases with the amount of nonlinearity." This is not shown in Figure 6B.
Sentence removed. We have added a Figure 6 - figure supplement 1 to better explain the limitations.
P22, paragraph 1: The "w" parameter here is used to determine whether spatially localized synapses are co-active enough to generate a plateau potential. However, this is the same w learned through synaptic plasticity. Typically LTP and LTD are thought of as changing the number of postsynaptic AMPARs. Does this "w" also change the AMPAR weight in the model? Do the authors envision this as a presynaptic release probability quantity? If so, please state that and provide experimental justification. If not, please justify modifying the activation of postsynaptic NMDARs through plasticity.
This is an important remark. Our plasticity model differs from classical LTP models as it depends on the link between LTP and increased spillover as described by Henneberger et al. (2020).
We have updated the Methods section (page 27, lines 6-11). We acknowledge, however, that in a real cell, learning might first strengthen the AMPA component, while after learning the NMDA/AMPA ratio is unchanged (Watt et al., 2004). This re-balancing between NMDA and AMPA might be a slower process.
Reviewer #2 (Public Review):
Summary:
The study explores how single striatal projection neurons (SPNs) utilize dendritic nonlinearities to solve complex integration tasks. It introduces a calcium-based synaptic learning rule that incorporates local calcium dynamics and dopaminergic signals, along with metaplasticity to ensure stability of synaptic weights. Results show SPNs can solve the nonlinear feature binding problem and enhance computational efficiency through inhibitory plasticity in dendrites, emphasizing the significant computational potential of individual neurons. In summary, the study provides a more biologically plausible solution to single-neuron learning and gives further mechanistic insights into complex computations at the single-neuron level.
Strengths:
The paper introduces a novel learning rule for training a single multicompartmental neuron model to perform nonlinear feature binding tasks (NFBP), highlighting two main strengths: the learning rule is local, calcium-based, and requires only sparse reward signals, making it highly biologically plausible, and it applies to detailed neuron models that effectively preserve dendritic nonlinearities, contrasting with many previous studies that use simplified models.
Weaknesses:
I am concerned that the manuscript was submitted too hastily, as evidenced by the quality and logic of the writing and the presentation of the figures. These issues may compromise the integrity of the work. I would recommend a substantial revision of the manuscript to improve the clarity of the writing, incorporate more experiments, and better define the goals of the study.
Thanks for the valuable feedback. We have now gone through the whole manuscript updating the text, and also improved figures and added some supplementary figures to better explain model mechanisms. In particular, we state more clearly our goal already in the introduction.
Major Points:
(1) Quality of Scientific Writing: The current draft does not meet the expected standards. Key issues include:
i. Mathematical and Implementation Details: The manuscript lacks comprehensive mathematical descriptions and implementation details for the plasticity models (LTP/LTD/Meta) and the SPN model. Given the complexity of the biophysically detailed multicompartment model and the associated learning rules, the inclusion of only nine abstract equations (Eq. 1-9) in the Methods section is insufficient. I was surprised to find no supplementary material providing these crucial details. What parameters were used for the SPN model? What are the mathematical specifics for the extra-synaptic NMDA receptors utilized in this study? For instance, Eq. 3 references [Ca2+]-does this refer to calcium ions influenced by extra-synaptic NMDARs, or does it apply to other standard NMDARs? I also suggest the authors provide pseudocodes for the entire learning process to further clarify the learning rules.
The model is quite detailed but builds on previous work. For this reason, for model components used in earlier published work (and where models are already available via model repositories, such as ModelDB), we refer the reader to these resources in order to improve readability and to highlight what is novel in this paper - the learning rule itself. The learning rule is now explained in detail. For modelers who want to run the model, we have also provided a GitHub link to the simulation code. We hope this is a reasonable compromise for all readers, i.e., those who only want to understand what is new here (the learning rule) and those who also want to test the model code. We explain this to the readers at the beginning of the Methods section.
ii. Figure quality. The authors seem not to carefully typeset the images, resulting in overcrowding and varying font sizes in the figures. Some of the fonts are too small and hard to read. The text in many of the diagrams is confusing. For example, in Panel A of Figure 3, two flattened images are combined, leading to small, distorted font sizes. In Panels C and D of Figure 7, the inconsistent use of terminology such as "kernels" further complicates the clarity of the presentation. I recommend that the authors thoroughly review all figures and accompanying text to ensure they meet the expected standards of clarity and quality.
Thanks for directing our attention to these oversights. We have gone through the entire manuscript, updating the figures where needed, and we are making sure that the text and the figure descriptions are clear and adequate and use consistent terminology for all quantities.
iii. Writing clarity. The manuscript often includes excessive and irrelevant details, particularly in the mathematical discussions. On page 24, within the "Metaplasticity" section, the authors introduce the biological background to support the proposed metaplasticity equation (Eq. 5). However, much of this biological detail is hypothesized rather than experimentally verified. For instance, the claim that "a pause in dopamine triggers a shift towards higher calcium concentrations while a peak in dopamine pushes the LTP kernel in the opposite direction" lacks cited experimental evidence. If evidence exists, it should be clearly referenced; otherwise, these assertions should be presented as theoretical hypotheses. Generally, Eq. 5 and related discussions should be described more concisely, with only a loose connection to dopamine effects until more experimental findings are available.
The “Metaplasticity” section (pages 30-32) has been updated to be more concise, and the abundant references to dopamine have been removed.
(2) Goals of the Study: The authors need to clearly define the primary objective of their research. Is it to showcase the computational advantages of the local learning rule, or to elucidate biological functions?
We have explicitly stated our goal in the introduction (page 4, lines 19-22). Please also see the response to reviewer 1.
i. Computational Advantage: If the intent is to demonstrate computational advantages, the current experimental results appear inadequate. The learning rule introduced in this work can only solve for four features, whereas previous research (e.g., Bicknell and Hausser, 2021) has shown capability with over 100 features. It is crucial for the authors to extend their demonstrations to prove that their learning rule can handle more than just three features. Furthermore, the requirement to fine-tune the midpoint of the synapse function indicates that the rule modifies the "activation function" of the synapses, as opposed to merely adjusting synaptic weights. In machine learning, modifying weights directly is typically more efficient than altering activation functions during learning tasks. This might account for why the current learning rule is restricted to a limited number of tasks. The authors should critically evaluate whether the proposed local learning rule, including meta-plasticity, actually offers any computational advantage. This evaluation is essential to understand the practical implications and effectiveness of the proposed learning rule.
Thank you for your feedback. To address the concern regarding feature complexity, we extended our simulations to include learning with 9 and 25 features, achieving accuracies of 80% and 75%, respectively (Figure 6—figure supplement 1A). While our results demonstrate effective performance, the absence of external stabilizers—such as error-modulated functions used in prior studies like Bicknell and Häusser (2021)—means that the model's performance can be more sensitive to occasional incorrect outcomes. For instance, while accuracy might reach 90%, a few errors can significantly affect overall performance due to the lack of mechanisms to stabilize learning.
In order to clarify the setup of the rule, we have added pseudocode in the revised manuscript (Pages 31-32) detailing how the learning rule and metaplasticity update synaptic weights based on calcium and dopamine signals. Additionally, we have included pseudocode for the inhibitory learning rule on Pages 34-35. In future work, we also aim to incorporate biologically plausible mechanisms, such as dopamine desensitization, to enhance stability.
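In the same spirit as that pseudocode, a minimal three-factor sketch (thresholds, rates, and gating here are illustrative placeholders, not the fitted model equations): a dopamine peak combined with sufficient NMDA calcium potentiates, while a dopamine pause with moderate calcium depresses.

```python
def three_factor_update(w, ca, da, ltp_thresh=0.5, ltd_thresh=0.2,
                        lr=0.02, w_max=1.0):
    """Sketch of a local calcium- and dopamine-gated weight update:
    dopamine peak (da > 0) plus high calcium -> LTP toward w_max;
    dopamine pause (da < 0) plus moderate calcium -> LTD.
    All thresholds and rates are illustrative."""
    if da > 0 and ca > ltp_thresh:
        w += lr * (w_max - w)      # soft-bounded potentiation
    elif da < 0 and ca > ltd_thresh:
        w -= lr * w                # multiplicative depression
    return min(max(w, 0.0), w_max)
```

In the full model the calcium thresholds are themselves plastic (the metaplasticity kernels), which is what allows initially silent synapses to enter the eligible calcium range over training.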
ii. Biological Significance: If the goal is to interpret biological functions, the authors should dig deeper into the model behaviors to uncover their biological significance. This exploration should aim to link the observed computational features of the model more directly with biological mechanisms and outcomes.
As now clearly stated in the introduction, the goal of the study is to see whether and to what quantitative extent the theoretical solution of the NFBP proposed in Tran-Van-Minh et al. (2015) can be achieved with biophysically detailed neuron models and with a biologically inspired learning rule. The problem has so far been solved with abstract and phenomenological neuron models (Schiess et al., 2014; Legenstein and Maass, 2011) and also with a detailed neuron model but with a precalculated voltage-dependent learning rule (Bicknell and Häusser, 2021).
We have also tried to better explain the model mechanisms by adding supplementary figures.
Reviewer #2 (Recommendations For The Authors):
Minor:
(1) The [Ca]NMDA in Figure 2A and 2C can have large values even when very few synapses are activated. Why is that? Is this setting biologically realistic?
The elevated [Ca²⁺]NMDA with minimal synaptic activation arises from high spine input resistance, small spine volume, and NMDA receptor conductance, which scales calcium influx with synaptic strength. Physiological studies report spine calcium transients typically up to ~1 μM (Franks and Sejnowski 2002, DOI: 10.1002/bies.10193), while our model shows ~7 μM for 0.625 nS and ~3 μM for 0.5 nS, exceeding this range. The calcium levels of the model might therefore be somewhat high compared to biologically measured levels; however, this does not impact the learning rule, as its functional dynamics remain robust across calcium variations.
(2) In the distributed synapses session, the study introduces two new mechanisms "Threshold spillover" and "Accumulative spillover". Both mechanisms are not basic concepts but quantitative descriptions of them are missing.
Thank you for your feedback. Based on the recommendations from Reviewer 1, we have simplified the paper by removing the "Accumulative spillover" and focusing solely on the "Thresholded spillover" mechanism. In the updated version of the paper, we refer to it only as glutamate spillover. However, we acknowledge (page 22, lines 40-42) that to create sufficient non-linearities, other mechanisms, like structural plasticity, might also be involved (although testing this in the model will have to be postponed to future work).
(3) The learning rule achieves moderate performance when feature-relevant synapses are organized in pre-designed clusters, but for more general distributed synaptic inputs, the model fails to faithfully solve the simple task (with its performance of ~ 75%). Performance results indicate the learning rule proposed, despite its delicate design, is still inefficient when the spatial distribution of synapses grows complex, which is often the case on biological neurons. Moreover, this inefficiency is not carefully analyzed in this paper (e.g. why the performance drops significantly and the possible computation mechanism underlying it).
The drop in performance when using distributed inputs (to a mean performance of 80%) is similar to the mean performance in the same situation in Bicknell and Häusser (2021); see their Fig. 3C. The drop in performance occurs because: i) the relevant feature combinations are often not colocalized on the same dendrite, so they cannot be strengthened together, and ii) even when they are, there may not be enough synapses to trigger the supralinear response from the branch spillover mechanism, i.e. the inputs are not summated in a supralinear way (Fig. 6B, most input configurations only reach 75%).
Because of this, at most one relevant feature combination can be learned. In the several cases where the random distribution of synapses is favorable for learning both relevant feature combinations, the NFBP is solved (Fig. 6B, where some performance lines reach 100%, and Fig. 6C, which shows an example of such a case). We have extended the relevant sections of the paper to highlight the above-mentioned mechanisms.
Further, the theoretical results in Tran-Van-Minh et al. (2015) already show that solving the NFBP with supralinear dendrites requires features to be pre-clustered in order to evoke the supralinear dendritic response that activates the soma. The same number of synapses distributed across the dendrites i) would not excite the soma as strongly, and ii) would summate in the soma as in a point neuron, i.e. no supralinear events, which are necessary to solve the NFBP, could be triggered. Hence, one does not expect distributed synaptic inputs to solve the NFBP with any kind of learning rule.
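The point-neuron argument can be illustrated with a toy version of the task (weights, thresholds, and branch assignments below are illustrative; the relevant combinations follow the paper's yellow-banana/red-strawberry example): with pre-clustered features and supralinear branches, the two relevant combinations drive the soma above threshold while the irrelevant ones do not, whereas purely linear summation gives all four patterns an identical somatic response.

```python
def branch_drive(weights, active):
    """Summed weight of a branch's active inputs."""
    return sum(w for feat, w in weights.items() if feat in active)

def soma_response(branches, active, supralinear):
    """Sum branch outputs at the soma; with supralinear integration a
    branch whose drive reaches 1.0 produces a plateau (output 2.0)."""
    total = 0.0
    for weights in branches:
        d = branch_drive(weights, active)
        total += 2.0 if (supralinear and d >= 1.0) else d
    return total

# Pre-clustered connectivity: one relevant combination per branch.
branches = [{"yellow": 0.5, "banana": 0.5},
            {"red": 0.5, "strawberry": 0.5}]
relevant = [{"yellow", "banana"}, {"red", "strawberry"}]
irrelevant = [{"yellow", "strawberry"}, {"red", "banana"}]
```

With `supralinear=True` only the relevant patterns reach a somatic response of 2.0; with linear summation all four patterns produce exactly the same response, so no somatic threshold can separate them (the XOR-like structure of the NFBP).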
(4) Figure 5B demonstrates that on average adding inhibitory synapses can enhance the learning capabilities to solve the NFBP for different pattern configurations (2, 3, or 4 features), but since the performance for excitatory-only setup varies greatly between different configurations (Figure 4B, using 2 or 3 features can solve while 4 cannot), can the results be more precise about whether adding inhibitory synapses can help improve the learning with 4 features?
In response to the question, we added a panel to Figure 5B showing that without inhibitory synapses, 5 out of 13 configurations with four features successfully learn, while with inhibitory synapses, this improves to 7 out of 13. Figure 5—figure supplement 1B provides an explanation for this improvement (page 18, lines 10-24).
(5) Also, in terms of the possible role of inhibitory plasticity in learning, as only on-site inhibition is studied here, can other types of inhibition be considered, like on-path or off-path? Do they have similar or different effects?
This is an interesting suggestion for future work. We observed relevant dynamics in Figure 6A, where inhibitory synapses increased their weights on-site when randomly distributed. Previous work by Gidon and Segev (2012) examined the effects of different inhibitory types on NMDA clusters, highlighting the role of on-site and off-path inhibition in shunting. In our context, on-site inhibition in the same branch appears more relevant for maintaining compartmentalized dendritic processing.
(6) Figure 6A is mentioned in the context of excitatory-only setup, but it depicts the setup when both excitatory and inhibitory synapses are included, which is discussed later in the paper. A correction should be made to ensure consistency.
We have updated the figure and the text to make it clearer that simulations are run both with and without inhibition in this context (page 21, lines 4-13).
(7) In the "Ca and kernel dynamics" plots (Fig 3,5), some of the kernel midlines (solid line) are overlapped by dots, e.g. the yellow line in Fig 3D, and some kernel midlines look like dots, which leads to confusion. Suggest to separate plots of Ca and kernel dynamics for clarity.
The design of the figures has been updated to improve the visibility of the calcium and kernel dynamics during training.
(8) The formulations of the learning rule are not well-organized, and the naming of parameters is kind of confusing, e.g. T_min, T_max, which by default represent time, means "Ca concentration threshold" here.
The abbreviations of the thresholds (T<sub>min</sub>, T<sub>max</sub> in the initial version) have been updated to CTL and CTH, respectively, to better reflect their role in tracking calcium levels. The mathematical formulations have further been reorganized for better readability. The revised Methods section now follows a more structured flow, first explaining the learning mechanisms, followed by the equations and their dependencies.
Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.
Reply to the reviewers
We prefer not to post our response to reviewers on bioRxiv, as it is optional.
Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.
Referee #3
Evidence, reproducibility and clarity
This study applies cellular and molecular assays, together with transcriptome analysis, to dissect how certain heterochromatin-based epimutations can confer resistance to caffeine and other drugs in fission yeast cells. The findings indicate that compromising the function of two mitochondrial proteins, Cup1 and Ppr4, leads to increased oxidants and the activation of the mito-nuclear retrograde response, which in turn causes the activation of the Pap1-mediated oxidative stress response, including the induction of transmembrane transporters to increase the efflux of drugs. This provides mechanistic insights into how the chromatin-mediated silencing of mitochondrial factors can result in fungal drug resistance. The authors also show that these phenotypes are variable within a cell population, allowing phenotypic plasticity to changing environments. This is a straightforward and clearly presented study, and the conclusions are generally justified based on the experiments presented.
Minor comments:
- Fig. 3A: The legend needs more information to understand what is shown here. Does this show the normalized read counts (cpm?) for each gene scaled per average counts in all samples? Another possibility would be to show relative data for the two mutants compared to wild-type. Also, the labels for the bottom two clusters seem the wrong way round, i.e. the last cluster should be cup1-tt only. How many genes are shown here which made the cutoff?
- To strengthen some of the conclusions, it would be meaningful to calculate the significance of overlaps between key gene lists, given the size of the lists involved and the background gene list (Fig. 3B; Fig. 4).
- The font size indicating the significance of differences is too tiny in some bar plots (Fig. 5C-E; Fig. 6D; Fig. 7C).
Referees cross-commenting
In response to issues raised by Reviewer 1:
In my opinion, the growth, TPF and ROS assays applied are robust and diagnostic to show a mitochondrial dysfunction. Additional assays, like Seahorse, would provide more specific insights about particular aspects of mitochondrial dysfunction, but this is not really relevant to this study. The key point is that the epimutations compromise mitochondrial function by downregulating mitochondrial proteins, which, in turn, are exploited by the cell to trigger a stress response that protects against antifungal compounds. The exact nature of the mitochondrial dysfunction, any changes in morphology, or details of differentially expressed genes are not critical for this mechanism, as it relies on downstream processes like the retrograde response that is activated by diverse mitochondrial problems.
The question of whether heterochromatin-mediated resistance phenotypes are prevalent in human fungal pathogens is interesting and an important avenue for future study. But it is not evident to me how this could be addressed bioinformatically.
Significance
This manuscript builds on a previous study by the same group, which showed that different heterochromatin-based epimutations can provide cellular resistance to caffeine (Torres-Garcia et al., Nature, 2020). Here they use the UR1 and UR2 epimutations to highlight an example of how such mutations can generate antifungal resistance and phenotypic plasticity by exploiting side effects of mitochondrial dysfunction. Epimutations are an interesting case of cellular adaptation that lasts longer than gene-expression responses but is more readily reversible and flexible than genetic mutations, allowing bet-hedging by generating variable phenotypes in a clonal cell population. This study provides fresh insights into the downstream effects of epimutations causing altered cellular traits, thus complementing previous studies focusing on the patterns and mechanisms of establishing heterochromatin-based genomic islands. The current study is of interest to researchers working on genome regulation, mitochondrial function, and cellular adaptation/evolution, and it has possible applications to combat antifungal resistance.
Field of expertise: genome regulation, gene function, fission yeast, stress response
-
Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.
Learn more at Review Commons
Referee #2
Evidence, reproducibility and clarity
This very interesting manuscript describes the impact of heterochromatin in triggering down-regulation of mitochondrial respiratory activity in S. pombe, and thereby causing increased efflux and consequently increased resistance to several compounds, including the azole class of antifungal drugs. The authors performed detailed mechanistic studies, focusing on two mitochondrial genes, cup1 and ppr4, which are under heterochromatin-dependent repression. Based on their findings, they conclude that reduced mitochondrial respiration causes increased levels of reactive oxygen species (ROS), which activate the transcription factor Pap1. Pap1 then upregulates several genes, including efflux pumps. The authors performed an excellent set of experiments to address heterogeneous cell populations in the epimutants, which are described in the second part of the Results section and provide strong evidence of plastic drug resistance phenotypes.
The manuscript is beautifully written and the data are presented well. Overall, the conclusions are supported by the data.
I have a reservation with one particular conclusion, which I discuss below under point 1. This can be addressed by modifications to the text. Under point 2, I suggest an easy-to-do experiment that would strengthen the conclusion that ROS produced due to mitochondrial dysfunction are driving the drug resistance phenotype. This is an interesting mechanism, and the data in the manuscript support it, but the authors have not demonstrated it directly. They could do so by using antioxidants, as I suggest below.
1. Most of the mechanistic analysis is centred around the transcription factor Pap1. The authors performed experiments to connect the production of ROS in mitochondrial mutants with higher nuclear localisation of Pap1 and its activation of several genes, including the membrane transporter Cas5 and, to a lesser extent, Bfr1, which might be responsible for increased efflux. There is no question that efflux is elevated in mitochondrial mutants (a phenotype consistent with previous work in other yeast models). The authors also present data to show that inhibition of efflux reverses drug resistance. The data for Pap1's involvement are good in the cup1 mutant (one of the mitochondrial mutants that was studied) but not so much in the ppr4 mutant (the other mitochondrial mutant that was studied). There was little enrichment of Pap1 on the Cas5 promoter in the ppr4 mutant, and no effect of Pap1 on the expression of Cas5 in the ppr4 mutant (Fig 5C and 5D). While the pap1 mutation reduced resistance of the ppr4 mutant to drugs, the authors acknowledge that this could be due to increased sensitivity of the pap1 mutant to drugs. The enrichment of Pap1 on the bfr1 promoter was also modest in the mitochondrial mutants.
I would therefore suggest that another transcription factor might be responsible for the upregulation of these efflux pumps and/or other efflux pumps are involved in Pap1's contribution to drug resistance. The authors should consider modifying their conclusions on Pap1-dependent targets that are responsible for drug resistance in the mitochondrial mutants.
2. The authors' conclusion that increased ROS levels upon dysfunctional respiration might be driving the drug resistance phenotype in S. pombe (via Pap1 but perhaps other mechanisms too) presents a novel mechanistic link between mitochondria and drug resistance. I would suggest solidifying this conclusion by asking whether antioxidants can reduce ROS levels and thereby decrease drug resistance in S. pombe. N-acetyl-L-cysteine could be used for this purpose.
Significance
That mitochondrial dysfunction causes drug resistance has been known for over 20 years. This manuscript describes a new mechanism, which relies on the formation of semi-stable epimutants, whereby the expression of genes encoding key mitochondrial proteins is down-regulated. As the authors propose, the beauty of epimutations is that they cause a heterogeneous phenotype and are reversible, which would create an opportunity for the organism to use a bet-hedging strategy under drug selection. The ability to reverse the phenotype would be particularly important when using mitochondrial dysfunction as a strategy to increase drug resistance, because mitochondrial dysfunction lowers metabolic flexibility and growth rates for the organism. Therefore, it is only beneficial in the presence of drugs. This is, to my knowledge, one of the first logical mechanistic explanations for how fungal cells (but likely applicable more broadly) might use mitochondrial dysfunction to their advantage when needed, and then reverse back to respiratory competence to maintain metabolic flexibility when drug selection is no longer present.
This study will be of high interest to researchers studying drug resistance and how phenotypic plasticity and bet-hedging mechanisms are used by cells to survive toxic compounds. This is applicable across fields. This study will further be very interesting to the fields of antifungal drug resistance and fungal pathogenesis, and will provide the foundation for studying similar mechanisms in relevant fungal pathogens of animals and plants.
My expertise is in metabolism and mitochondrial roles in fungal pathogens. I really enjoyed reading the manuscript.
-
Referee #1
Evidence, reproducibility and clarity
The manuscript under review appears to present findings on heterochromatin-mediated antifungal resistance, specifically focusing on the role of mitochondrial dysfunction in the model organism Schizosaccharomyces pombe. However, there are several significant concerns regarding the novelty and robustness of the conclusions drawn by the authors.
The key conclusions of the paper lack sufficient convincing evidence. While the authors attribute resistance phenotypes to heterochromatin-mediated repression, the evidence presented does not strongly support these claims. Significant claims should be qualified as preliminary or speculative, particularly those that extend beyond the experimental results provided. For example, asserting a definitive link between heterochromatin status and antifungal resistance mechanisms requires more comprehensive empirical data.
Additional experiments are crucial to bolster the claims made in the manuscript. The authors rely heavily on growth assays on nonfermentable carbon sources to supposedly elucidate respiratory function. However, this approach is outdated, and advancements in the field should be employed for a more robust assessment of mitochondrial integrity and function. Techniques such as the Seahorse assay could provide critical insights into the respiration capacity of the mutants. Furthermore, the use of electron transport chain (ETC) inhibitors like antimycin A would offer stronger evidence regarding mitochondrial dysfunction. The current use of generalized DCF staining to assess reactive oxygen species (ROS) lacks specificity. MitoSox and MitoTracker should be utilized to measure mitochondrial ROS levels and examine mitochondrial morphology effectively.
The authors claim that mitochondrial dysfunction correlates with significant changes in the transcriptome related to aerobic respiration, yet this crucial aspect lacks adequate elaboration in their analysis. Given that mitochondrial function is a primary theme of the manuscript, in-depth discussion and interpretation of the differentially expressed aerobic respiratory genes in the transcriptome data are necessary to validate their conclusions.
Additionally, as a pathogenic fungal microbiologist, I express interest in investigating whether heterochromatin-mediated resistance phenotypes are prevalent in human fungal pathogens, including Candida albicans and Cryptococcus neoformans. A bioinformatic analysis could help address this inquiry and potentially broaden the relevance of the findings.
Lastly, in the section "Cup1 and Ppr4 deficiencies...retrograde gene repression," the conclusions are made primarily based on transcriptome analysis and lack empirical confirmation through molecular biology techniques. This section should be revised to include comprehensive molecular evidence supporting the claims.
Significance
General Assessment: Strengths and Limitations
Strengths:
The study introduces a potentially novel mechanism of antifungal resistance in Schizosaccharomyces pombe through heterochromatin-mediated epimutations. This is particularly relevant in the context of rising antifungal resistance globally.
The focus on mitochondrial dysfunction as a contributor to drug resistance provides a deeper understanding of how fungi adapt to environmental stressors.
The manuscript raises important questions about the epigenetic factors influencing fungal resistance, which could inspire subsequent investigations in the field.
Limitations:
The research relies heavily on traditional methods for assessing respiratory function, which may not fully characterize the complexities of mitochondrial integrity and function. This may weaken the overall conclusions regarding mitochondrial dysfunction.
The evidence supporting key claims is not robust enough to confidently assert a direct link between heterochromatin changes and antifungal resistance. The lack of confirmatory experiments using more advanced techniques limits the study's impact.
The analysis of transcriptome data is insufficiently detailed, leaving significant gaps in understanding the specific mechanisms at play.
Advance: Comparison to Existing Published Knowledge
This study contributes to the existing literature by exploring the role of epimutations in antifungal resistance, aligning with emerging interests in epigenetic mechanisms in microbial adaptation. While previous studies have focused on genetic mutations and efflux mechanisms, this research attempts to link heterochromatin dynamics to resistance pathways, thereby filling a conceptual gap in understanding how eukaryotic microorganisms may adapt to antifungal treatments.
However, the advances made by the study appear to be incremental rather than groundbreaking. While it does shed light on the potential role of heterochromatin in drug resistance, further empirical evidence and a stronger methodological approach are required to substantiate these findings convincingly.
-
80 mass shootings in the U.S., including the most recent incident at Florida State University, according to the Gun Violence Archive.
Using data and numbers makes this look like something that occurs all the time and is "normal". While the frequency itself may be accurate, it is something that should not be normalized.
-
Thursday's incident was not the first time there was a shooting at the school.
FSU has had its second school shooting
-
Things like this take place.
"Things like this take place" is not something that should be said about school shootings.
-
FSU student Blake Leonard
A student was interviewed and his view on the shooting is shared, a pattern common throughout these articles.
-
-
www.suhrkamp.de
-
-
lsbjordao.github.io
-
Polluter Pays Principle.
prevention.
-
Knowledge of Biodiversity:
lists compiled in a coordinated manner.
-
-
www.biorxiv.org
-
eLife Assessment
This important study by Jeong and Choi examined neural activity in the medial prefrontal cortex (mPFC) while rats performed a foraging paradigm in which they forage for rewards in the absence or presence of a threatening object (Lobsterbot). The authors present interesting observations suggesting that mPFC population activity switches between distinct functional modes conveying distinct task variables, such as the distance to the reward location and the type of threat-avoidance behavior, depending on the location of the animal. Although the specific information represented by individual neurons remains to be clarified through further investigation, the reviewers thought that this study is solid, appreciated the value of studying neural coding in naturalistic settings, and felt that this work offers significant insights into how the mPFC operates during foraging behavior involving reward-threat conflict.
-
Reviewer #1 (Public review):
Summary:
In this study, Jeong and Choi examine neural correlates of behavior during a naturalistic foraging task in which rats must dynamically balance resource acquisition (foraging) with the risk of threat. Rats first learn to forage for sucrose reward from a spout, and when a threat is introduced (an attack-like movement from a "LobsterBot"), they adjust their behavior to continue foraging while balancing exposure to the threat, adopting anticipatory withdrawal behaviors to avoid encounters with the LobsterBot. Using electrode recordings targeting the medial prefrontal cortex (PFC), they identify heterogeneous encoding of task variables across prelimbic and infralimbic cortex neurons, including correlates of distance to the reward/threat zone and correlates of both anticipatory and reactionary avoidance behavior. Based on analysis of population responses, they show that prefrontal cortex switches between different regimes of population activity to process spatial information or behavioral responses to threat in a context-dependent manner. Characterization of the heterogeneous coding scheme by which frontal cortex represents information in different goal states is an important contribution to our understanding of brain mechanisms underlying flexible behavior in ecological settings.
Strengths:
As many behavioral neuroscience studies employ highly controlled task designs, relatively less is generally known about how the brain organizes navigation and behavioral selection in naturalistic settings, where environment states and goals are more fluid. Here, the authors take advantage of a natural challenge faced by many animals - how to forage for resources in an unpredictable environment - to investigate neural correlates of behavior when goal states are dynamic. Related to this, they also investigate how prefrontal cortex (PFC) activity is structured to support different functional "modes" (here, between a navigational mode and a threat-sensitive foraging mode) for flexible behavior. Overall, an important strength and real value of this study is the design of the behavioral experiment, which is trial-structured, permitting strong statistical methods for neural data analysis, yet still rich enough to encourage natural behavior structured by the animal's volitional goals. The experiment is also phased to measure behavioral changes as animals first encounter a threat and then learn to adapt their foraging strategy to its presence. Characterization of this adaptation process is itself quite interesting and sets a foundation for further study of threat learning and risk management in the foraging context. Finally, the characterization of single-neuron and population dynamics in PFC in this naturalistic setting with fluid goal states is an important contribution to the field. Previous studies have identified neural correlates of spatial and behavioral variables in frontal cortex, but how these representations are structured, or how they are dynamically adjusted when animals shift their goals, has been less clear. The authors synthesize their main conclusions into a conceptual model for how PFC activity can support mode switching, which can be tested in future studies with other task designs and functional manipulations.
Weaknesses:
While the task design in this study is intentionally stimulus-rich and places minimal constraint on the animal to preserve naturalistic behavior, this also introduces confounds that limit interpretability of the neural analysis. For example, some variables which are the target of neural correlation analysis, such as spatial/proximity coding and coding of threat and threat-related behaviors, are naturally entwined. To their credit, the authors have included careful analyses and control conditions to disambiguate these variables and significantly improve clarity.
The authors also claim that the heterogeneous coding of spatial and behavioral variables in PFC is structured in a particular way that depends on the animal's goals or context. As the authors themselves discuss, the different "zones" contain distinct behaviors and stimuli, and since some neurons are modulated by these events (e.g., licking sucrose water, withdrawing from the LobsterBot, etc.), differences in population activity may to some extent reflect behavior/event coding. The authors have included a control analysis, removing timepoints corresponding to salient events, to substantiate the claim that PFC neurons switch between different coding "modes." While this significantly strengthens evidence for their conclusion, this analysis still depends on relatively coarse labeling of only very salient events. Future experiment designs, which intentionally separate task contexts (e.g. navigation vs. foraging), could serve to further clarify the structure of coding across contexts and/or goal states.
Finally, while the study includes many careful, in-depth neural and behavioral analyses to support the notion that modal coding of task variables in PFC may play a role in organizing flexible, dynamic behavior, the study still lacks functional manipulations to establish any form of causality. This limitation is acknowledged in the text, and the report is careful not to overinterpret suggestions of causal contribution, instead setting a foundation for future investigations.
-
Reviewer #2 (Public review):
Summary:
Jeong & Choi (2023) use a semi-naturalistic paradigm to tackle the question of how the activity of neurons in the mPFC might continuously encode different functions. They offer two possibilities: either there are separate dedicated populations encoding each function, or cells alter their activity depending on the current goal of the animal. In a threat-avoidance task, rats procured sucrose in an area of a chamber where, after they remained there for some amount of time, a 'Lobsterbot' robot attacked. In order to initiate the next trial, rats had to move through the arena to another area before returning to the robot encounter zone. The task therefore has two key components: threat avoidance and navigating through space. Recordings in the IL and PL of the mPFC revealed encoding that depended on what stage of the task the animal was currently engaged in. When animals were navigating, neuronal ensembles in these regions encoded distance from the threat. However, whilst animals were directly engaged with the threat and simultaneously consuming reward, it was possible to decode from a subset of the population whether animals would evade the threat. The authors therefore claim that neurons in the mPFC switched between two functional modes: representing allocentric spatial information, and representing egocentric information pertaining to the reward and threat. Finally, the authors propose a conceptual model based on these data whereby this switching of population encoding is driven by either bottom-up sensory information or top-down arbitration.
Strengths:
Whilst these multiple functions of activity in the mPFC have generally been observed in tasks dedicated to the study of a singular function, less work has been done in contexts where animals continuously switch between different modes of behaviour in a more natural way. Being able to assess whether previous findings of mPFC function apply in natural contexts is very valuable to the field, even outside of those interested in the mPFC directly. This also speaks to the novelty of the work; although mixed selectivity encoding of threat assessment and action selection has been demonstrated in some contexts (e.g. Grunfeld & Likhtik, 2018) understanding the way in which encoding changes on-the-fly in a self-paced task is valuable both for verifying whether current understanding holds true and for extending our models of functional coding in the mPFC.
The authors are also generally thoughtful in their analyses and use a variety of approaches to probe the information encoded in the recorded activity. In particular, they use relatively close analysis of behaviour as well as manipulating the task itself by removing the threat to verify their own results. The use of such a rich task also allows them to draw comparisons, e.g. in different zones of the arena or different types of responses to threat, that a more reduced task would not otherwise allow. Additional in-depth analyses in the updated version of the manuscript, particularly the feature importance analysis, as well as complementary null findings (a lack of cohesive place cell encoding, and no difference in location coding dependent on direction of trajectory) further support the authors' conclusion that populations of cells in the mPFC are switching their functional coding based on task context rather than behaviour per se. Finally, the authors' updated model schematic proposes an intriguing and testable implementation of how this encoding switch may be manifested by looking at differentiable inputs to these populations.
Weaknesses:
The main existing weakness of this study is that its findings are correlational (as the authors highlight in the discussion). Future work might aim to verify and expand the authors' findings - for example, whether the elevated response of Type 2 neurons directly contributes to the decision-making process or just represents fear/anxiety motivation/threat level - through direct physiological manipulation. However, I appreciate the challenges of interpreting data even in the presence of such manipulations, and some of the additional analyses of behaviour, for example the stability of animals' inter-lick intervals in the E-zone, go some way towards ruling out alternative behavioural explanations. The ideal version of this analysis would be to use a pose estimation method such as DeepLabCut to more fully measure behavioural changes. This, in combination with direct physiological manipulation, would allow the authors to fully validate that the switching of encoding by this population of neurons in the mPFC has the functional attributes claimed here.
-
Reviewer #3 (Public review):
Summary:
This study investigates how various behavioral features are represented in the medial prefrontal cortex (mPFC) of rats engaged in a naturalistic foraging task. The authors recorded electrophysiological responses of individual neurons as animals transitioned between navigation, reward consumption, avoidance, and escape behaviors. Employing a range of computational and statistical methods, including artificial neural networks, dimensionality reduction, hierarchical clustering, and Bayesian classifiers, the authors sought to predict from neural activity distinct task variables (such as distance from the reward zone and the success or failure of avoidance behavior). The findings suggest that mPFC neurons alternate between at least two distinct functional modes, namely spatial encoding and threat evaluation, contingent on the specific location.
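As an illustration of the decoding approach summarized above, a Bayesian (naive Bayes) classifier can be trained to predict a binary task variable, such as avoidance success, from trial-wise population firing rates. The data below are fully synthetic, and the trial and unit counts are arbitrary assumptions:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

gen = np.random.default_rng(3)
n_trials, n_units = 200, 30
labels = gen.integers(0, 2, n_trials)      # 0 = failed, 1 = successful avoidance
shift = gen.normal(0.0, 1.0, n_units)      # per-unit rate change with outcome
rates = gen.normal(5.0, 1.0, (n_trials, n_units)) + np.outer(labels, shift)

# 5-fold cross-validated decoding accuracy of avoidance outcome
acc = cross_val_score(GaussianNB(), rates, labels, cv=5).mean()
```

Cross-validation guards against overfitting; chance level here is 0.5, so accuracy well above that indicates the population carries outcome information.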
Strengths:
This study attempts to address an important question: understanding the role of the mPFC across multiple dynamic behaviors. The authors highlight the diverse roles attributed to the mPFC in previous literature and seek to explain this apparent heterogeneity. They designed an ethologically relevant foraging task that facilitated the examination of complex dynamic behavior, collecting comprehensive behavioral and neural data. The analyses conducted are both sound and rigorous.
Weaknesses:
Because the study still lacks experimental manipulation, the findings remain correlational. The authors have appropriately tempered their claims regarding the functional role of the mPFC in the task. The nature of the switch between functional modes encoding distinct task variables (i.e., distance to reward, and threat-avoidance behavior type) is not established. Moreover, the evidence presented to dissociate movement from these task variables is not fully convincing, particularly without single-session video analysis of movement. Specifically, while the new analyses in Figure 7 are informative, they may not fully account for all potential confounding variables arising from changes in context or behavior.
-
Author response:
The following is the authors’ response to the original reviews
Reviewer 1 (Public Review):
Thank you for the helpful comments. Below, we have quoted the relevant sections from the revised manuscript as we respond to the reviewer’s comments item-by-item.
Weaknesses:
While the task design in this study is intentionally stimulus-rich and places a minimal constraint on the animal to preserve naturalistic behavior, this is, unfortunately, a double-edged sword, as it also introduces additional variables that confound some of the neural analysis. Because of this, a general weakness of the study is a lack of clear interpretability of the task variable neural correlates. This is a limitation of the task, which includes many naturally correlated variables - however, I think with some additional analyses, the authors could strengthen some of their core arguments and significantly improve clarity.
We acknowledge the weakness and have included additional analyses to compensate for it. The details are as follows in our reply to the subsequent comments.
For example, the authors argue, based on an ANN decoding analysis (Figure 2b), that PFC neurons encode spatial information - but the spatial coordinate that they decode (the distance to the active foraging zone) is itself confounded by the fact that animals exhibit different behavior in different sections of the arena. From the way the data are presented, it is difficult to tell whether the decoder performance reflects a true neural correlate of distance, or whether it is driven by behavior-associated activity that is evoked by different behaviors in different parts of the arena. The authors' claim that PFC neurons encode spatial information could be substantiated with a more careful analysis of single-neuron responses to supplement the decoder analysis. For example, 1) They could show examples of single neurons that are active at some constant distance away from the foraging site, regardless of animal behavior, and 2) They could quantify how many neurons are significantly spatially modulated, controlling for correlates of behavior events. One possible approach to disambiguate this confound could be to use regression-based models of neuron spiking to quantify variance in neuron activity that is explained by spatial features, behavioral features, or both.
First of all, we would like to point out that while the recording was made during naturalistic foraging with minimal behavioral constraints, a well-trained rat displayed an almost fixed sequence of actions within each zone. The behavioral repertoires performed in the zones were very different from each other: exploratory behaviors in the N-zone, navigating back and forth in the F-zone, and licking sucrose while avoiding attacks in the E-zone. Therefore, the entire arena is divided not only by its geographical features but also by the distinct set of behaviors performed in each zone. This is evident in the data showing a higher decoding accuracy of spatial distance in the F-zone than in the N- or E-zone. In this sense, the heterogeneous encoding reflects the heterogeneous distribution of dominant behaviors (navigation in the F-zone and attack avoidance while foraging in the E-zone) and hence corroborates the reviewer's comment at a macroscopic scale encompassing the entire arena.
Having said that, the more critical question is whether the neural activity is more correlated with microscopic behaviors at every moment than with the location decoded in the F-zone. As the reviewer suggested, the first step is to analyze single-neuron activity to identify whether direct neural correlates of location exist. To this end, traditional place maps were constructed for individual neurons. Most neurons did not show cohesive place fields across different regions, indicating little-to-no direct place coding by individual neurons. Only a few neurons displayed recognizable place fields in a consistent manner. However, even these place fields were irregular and patchy, and therefore nothing comparable to the place cells or grid cells found in the hippocampus or entorhinal cortex. Some example firing maps have been added to Figure 2 and are characterized in the text as below.
“To determine whether location-specific neural activity exists at the single-cell level in our mPFC data, a traditional place map was constructed for individual neurons. Although most neurons did not show cohesive place fields across different regions in the arena, a few neurons modulated their firing rates based on the rat’s current location. However, even these neurons were not comparable to place cells in the hippocampus (O’Keefe & Dostrovsky, 1971) or grid cells in the entorhinal cortex (Hafting et al., 2005) as the place fields were patchy and irregular in some cases (Figure 2B; Units 66 and 125) or too large, spanning the entire zone rather than a discrete location within it (Units 26 and 56). The latter type of neuron has been identified in other studies (e.g., Kaefer et al., 2020).”
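The occupancy-normalized place maps described in the quoted passage can be sketched as follows; the arena dimensions, bin sizes, and tracking interval below are assumptions for illustration, and the session data are simulated:

```python
import numpy as np

def rate_map(spike_xy, occ_xy, arena=(100.0, 50.0), bins=(20, 10), dt=0.02):
    """Occupancy-normalized firing-rate map (Hz): spike counts / seconds per bin."""
    extent = [[0, arena[0]], [0, arena[1]]]
    spikes, _, _ = np.histogram2d(spike_xy[:, 0], spike_xy[:, 1], bins=bins, range=extent)
    occ, _, _ = np.histogram2d(occ_xy[:, 0], occ_xy[:, 1], bins=bins, range=extent)
    occ_s = occ * dt                          # time spent in each bin (s)
    # unvisited bins become NaN; np.maximum avoids a divide-by-zero warning
    return np.where(occ_s > 0, spikes / np.maximum(occ_s, dt), np.nan)

# simulated session: uniform occupancy (~1000 s at 50 Hz tracking),
# one unit firing around the hypothetical location (12.5, 12.5)
gen = np.random.default_rng(0)
occ = gen.uniform([0, 0], [100, 50], size=(50000, 2))
spk = gen.normal([12.5, 12.5], 2.0, size=(200, 2))
rmap = rate_map(spk, occ)
```

A localized peak in `rmap` marks a candidate place field; the patchy, irregular maps described above would instead show scattered high-rate bins.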
Next, to verify whether the location decoding reflects neuronal activity driven by external features or a particular type of action, predicted location was compared between the opposite directions of travel within the F-zone, inbound and outbound with reference to the goal area (Lobsterbot). If the encoding were specifically tied to a particular action or environmental stimulus, there should be a discrepancy when the ANN decoder trained on the outbound trajectory is tested for predictions on the inbound path, and vice versa. However, the results showed no significant difference between the two trajectories, suggesting that the decoded distance was not simply reflecting neural responses to location-specific activities or environmental cues during navigation.
“To determine whether the accuracy of the regressor varied depending on the direction of movement, we compared the decoding accuracy of the regressor for outbound (from the N- to E-zone) vs. inbound (from the E- to N- zone) navigation within the F-zone. There was no significant difference in decoding accuracy between outbound vs. inbound trips (paired t-test; t(39) = 1.52, p =.136), indicating that the stability of spatial encoding was maintained regardless of the moving direction or perceived context (Figure 2E).”
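The paired comparison reported in the quoted passage, outbound vs. inbound decoding error across sessions, has the following shape; the per-session MAE values below are simulated for illustration (only the session count of 40 is taken from the reported degrees of freedom):

```python
import numpy as np
from scipy.stats import ttest_rel

gen = np.random.default_rng(1)
mae_out = gen.normal(12.0, 3.0, 40)            # outbound MAE per session (cm)
mae_in = mae_out + gen.normal(0.0, 1.0, 40)    # inbound MAE, same sessions

# paired t-test: each session serves as its own control
t, p = ttest_rel(mae_out, mae_in)
```

Because the two MAEs come from the same recorded population in each session, the paired test is the appropriate choice here rather than an independent-samples test.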
Additionally, we applied the same regression analysis to a subset of data recorded while the door to the robot compartment was closed during the Lobsterbot sessions. This makes it possible to test decoding accuracy when the most salient spatial feature, the Lobsterbot, is blocked from sight. The subset represents, on average, 38.92% of each session. Interestingly, the decoding accuracy with this subset was higher than with the entire dataset, indicating that the neural activity was not driven by a single salient landmark. This finding supports our conclusion that location information can be decoded from a population of neurons rather than from individual neurons associated with environmental or proprioceptive cues. We have added the following description of the results to the manuscript.
“Previous analyses indicated that the distance regressor performed robustly regardless of movement direction, but there is a possibility that the decoder detects visual cues or behaviors specific to the E-zone. For example, neural activity related to Lobsterbot confrontation or licking behavior might be used by the regressor to decode distance. To rule out this possibility, we analyzed a subset of data collected when the compartment door was closed, preventing visual access to the Lobsterbot and sucrose port and limiting active foraging behavior. The regressor trained on this subset still decoded distance with a MAE of 12.14 (± 3.046) cm (paired t-test; t(39) = 12.17, p <.001). Notably, the regressor's performance was significantly higher with this subset than with the full dataset (paired t-test; t(39) = 9.895, p <.001).”
As for the comment on “using regression-based models of neuron spiking to quantify variance in neuron activity that is explained by spatial features, behavioral features, or both”, it is difficult to isolate a particular behavioral event, let alone timestamp it, because the rat’s location was monitored within a constantly moving, naturalistic stream of behaviors. However, as mentioned above, a new section entitled “Overlapping populations of mPFC neurons adaptively encode spatial information and defensive decision” argues against a single-neuron-based account by performing a feature importance analysis. The results showed that even when the top 20% of the most informative neurons were excluded, the remaining neural population could still decode both distance and events. This analysis supports the idea of a population-wide mode shift rather than distinct subgroups of neurons specialized in processing different sensory or motor events. This idea is also expressed in the schematic diagrams featured in Figure 8 of the revision.
To substantiate the claim that PFC neurons really switch between different coding "modes," the authors could include a version of this analysis where they have regressed out, or otherwise controlled for, these confounds. Otherwise, the claim that the authors have identified "distinctively different states of ensemble activity," as opposed to simple coding of salient task features, seems premature.
A key argument in our study is that mPFC neurons encode different abstract internal representations (distance and avoidance decision) at the population level. This has been emphasized in the revision with additional analyses and discussion. First, we performed single neuron-based analyses for both spatial encoding (place fields for individual neurons) and avoidance decision (PETHs for head entry and head withdrawal) and contrasted the results with the population analysis. Although some individual neurons displayed fractured “place cell-like” activity, and others showed modulated firing at the head-entry and head-withdrawal events, ensemble decoding extracted distance information for the animal’s current location at much higher accuracy. Furthermore, the PCA identified abstract feature dimensions, especially regarding activity in the E-zone, that cannot be attributed to a small number of sensory- or motor-related neurons.
To mitigate the possibility that the PCA is driven primarily by a small subset of units responsive to salient behavioral events, we also applied PCA to the dataset after excluding activity in the 2-second time window surrounding head entry and head withdrawal. While this approach does not eliminate all cue- or behavior-related activity within the E-zone, it does remove the neural activity associated with emotionally significant events, such as entry into the E-zone, the first drop of sucrose, head withdrawal, and the attack. Even without these events, the PCs identified in the E-zone were still separated from those in the F-zone and N-zone. This result again argues for distinct states of ensemble activity formed in accordance with the different categories of behavior performed in different zones. Finally, a Naïve Bayesian classifier trained on ensemble activity in the E-zone was able to predict the success or failure of avoidance occurring a few seconds later, indicating that the same population of neurons encodes the avoidance decision rather than the animal’s location.
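The classifier step described above can be illustrated with a minimal sketch. Everything here is simulated and hypothetical (trial counts, unit counts, and the size of the class separation are assumptions); it only shows the shape of the analysis: a Gaussian Naive Bayes classifier trained on pre-withdrawal E-zone ensemble vectors, cross-validated on the trial labels.

```python
# Illustrative sketch (not the authors' exact pipeline): predict avoidance
# vs. escape withdrawal from E-zone ensemble activity with Naive Bayes.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_units = 120, 40
labels = rng.random(n_trials) < 0.5                   # True = avoidance (AW)
# Simulated ensemble vectors: a modest mean shift separates the two outcomes.
activity = rng.normal(0, 1, (n_trials, n_units)) + labels[:, None] * 0.8

clf = GaussianNB()
scores = cross_val_score(clf, activity, labels, cv=5)  # 5-fold accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```

With a genuine population-level difference between upcoming avoidance and escape trials, cross-validated accuracy rises well above the 0.5 chance level, which is the signature reported for the real data.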
Reviewer 1 (Recommendations):
The authors include an analysis (Figure 4) of population responses using PCA on session-wide data, which they use to support the claim that PFC neurons encode distinctive neural states, particularly between the encounter zone and nesting/foraging zones. However, because the encounter zone contains unique stimulus and task events (sucrose, threat, etc.), and the samples for PCA are drawn from the entire dataset (including during these events), it seems likely that the Euclidean distance measures analyzed in Figure 4b are driven mostly by the neural correlates of these events rather than some more general change in "state" of PFC dynamics. This does not invalidate this analysis but renders it potentially redundant with the single neuron results shown in Figure 5 - and I think the interpretation of this as supporting a state transition in the coding scheme is somewhat misleading. The authors may consider performing a PCA/population vector analysis on the subset of timepoints that do not contain unique behavior events, rather than on session-wide data, or otherwise equalizing samples that correspond to behavioral events in different zones. Observing a difference in PC-projected population vectors drawn from samples that are not contaminated by unique encounter-related events would substantiate the idea that there is a general shift in neural activity that is more related to the change in context or goal state, and less directly to the distinguishing events themselves.
Thank you for the comments. Indeed, this is a recurring theme: the reviewers expressed concerns and doubts about heterogeneous encoding of different functional modes. Beyond the systematic presentation of the results in the manuscript, from PETH to ANN to Bayesian classifier, we argue that the activity of mPFC neurons is better characterized as a population code than as a loose collection of stimulus- or event-related neurons.
The PCA results that we included as evidence of distinct functional separation might indeed reflect activity driven by a small number of event-coding neurons in different zones. As mentioned in the public review, we conducted the same analysis on a subset of data that excluded neural activity potentially influenced by significant events in the E-zone. The critical times were defined as ± 1 second from these events and excluded from the neural data. Despite these exclusions, the results continued to show population differences between zones, reinforcing the notion that the neurons encode abstract behavioral states (the decision to avoid or stay) independently of sensory- or motor-related activity. Although this analysis does not completely eliminate all possible confounding factors arising in different external and internal contexts, it provides additional support for a population-level switch occurring across zones.
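The event-exclusion step can be sketched as follows. This is a simulated stand-in, not the authors' data: bin width, unit counts, and the way zone identity shifts the population vector are all assumptions. The point is the mechanics: drop every bin within ± 1 s of an event timestamp, run PCA on what remains, and check that zone centroids still separate in PC space.

```python
# Sketch of PCA on event-excluded data (simulated, illustrative only).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
fs = 10                                    # 10 bins per second (0.1-s bins)
n_bins, n_units = 3000, 50
rates = rng.normal(0, 1, (n_bins, n_units))
zone = rng.integers(0, 3, n_bins)          # 0 = N, 1 = F, 2 = E (assumed labels)
rates += np.eye(3)[zone] @ rng.normal(0, 1, (3, n_units))  # zone-dependent shift

event_times = rng.uniform(0, n_bins / fs, 30)   # e.g., head entries/withdrawals
keep = np.ones(n_bins, bool)
bin_t = np.arange(n_bins) / fs
for t in event_times:                      # drop bins within +/- 1 s of an event
    keep &= np.abs(bin_t - t) > 1.0

pcs = PCA(n_components=3).fit_transform(rates[keep])
centroids = np.array([pcs[zone[keep] == z].mean(axis=0) for z in range(3)])
d_EF = np.linalg.norm(centroids[2] - centroids[1])   # E vs. F separation
print(f"E-F centroid distance in PC space: {d_EF:.2f}")
```

If the zone separation survives with the event windows removed, it cannot be carried solely by activity time-locked to those events, which is the argument made in the response.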
In Figure 7, the authors include a schematic that suggests that the number of neurons representing spatial information increases in the foraging zone, and that they overlap substantially with neurons representing behaviors in the encounter zone, such as withdrawal. They show in Figure 3 that location decoding is better in the foraging zone, but I could not find any explicit analysis of single-neuron correlates of spatial information as suggested in the schematic. Is there a formal analysis that lends support to this idea? It would be simple, and informative, to include a quantification of the fraction of spatial- and behavior-modulated neurons in each zone to see if changes in location coding are really driven by "larger" population representations. Also, the authors could quantify the overlap between spatial- and behavior-modulated neurons in the encounter zone to explicitly test whether neurons "switch" their coding scheme.
Figure 7 (now Figure 8) has been completely revised. The schematic diagram now shows spatial and avoidance-decision encoding by an overlapping population of mPFC neurons (Figure 8a). Most notably, there are very few neurons that encode location but not the avoidance decision, or vice versa. This is indicated by the differently colored units in the F-zone vs. the E-zone. The model also includes units that are not engaged in any type of encoding, or engaged in only one type of encoding, although they are not the majority.
We have also added a schematic of hypothetical switching mechanisms (Figure 8b) to describe the conceptual scheme for the initiation of encoding-mode switching (sensory-driven vs. arbitrator-driven process).
“Two main hypotheses could explain this switch. A bottom-up hypothesis suggests sensory inputs or upstream signals dictate encoding priorities, while a top-down hypothesis proposes that an internal or external “arbitrator” selects the encoding mode and coordinates the relevant information (Figure 8B). Although the current study is only a first step toward finding the regulatory mechanism behind this switch, our control experiment, where rats reverted to a simple shuttling task, provides evidence that might favor the top-down hypothesis. The absence of the Lobsterbot degraded spatial encoding rather than enhancing it, indicating that simply reducing the task demand is not sufficient to activate one particular type of encoding mode over another. The arbitrator hypothesis asserts that the mPFC neurons are called on to encode heterogeneous information when the task demand is high and requires behavioral coordination beyond automatic, stimulus-driven execution. Future studies incorporating multiple simultaneous tasks and carefully controlling contextual variables could help determine whether these functional shifts are governed by top-down processes involving specific neural arbitrators or by bottom-up signals.”
Related to this difference in location coding throughout the environment, the authors suggest in Figure 3a-b that location coding is better in the foraging zone compared to the nest or encounter zones, evidenced by better decoder performance (smaller error) in the foraging zone (Figure 3b). The authors use the same proportion of data from the three zones for setting up training/test sets for cross-validation, but it seems likely that overall, there are substantially more samples from the foraging zone compared to the other two zones, as the animal traverses this section frequently, and whenever it moves from the nest into the encounter zone (based on the video). What does the actual heatmap of animal location look like? And, if the data are down-sampled such that each section contributes the same proportion of samples to decoder training, does the error landscape still show better performance in the foraging zone? It is important to disambiguate the effects of uneven sampling from true biological differences in neural activity.
Thank you for the comment. We agree with the concern regarding uneven data sizes from different sections of the arena. Indeed, as the heatmap below indicates, the rats spent most of their time in two locations: a transition area between the N- and F-zones, and the area near the sucrose port. This imbalance needs to be corrected, and in fact we had already included a methodology to correct for this biased sampling. In the results section “Non-navigational behavior reduces the accuracy of decoded location” we report the following.
Author response image 1.
Heatmap of the animal’s position during one example session. (Left) Unprocessed occupancy plot. Each dot represents 0.2 seconds. (Right) Smoothed occupancy plot using a Gaussian filter (sigma: 10 pixels, filter size: 1001 pixels). The white line indicates a 10 cm length.
“To correct for the unequal distribution of location visits (more visits to the F- than to other zones), the regressor was trained using a subset of the original data, which was equalized for the data size per distance range (see Materials and Methods). Despite the correction, there was a significant main effect of zone (F(1.16, 45.43) = 119.2, p <.001), and the post hoc results showed that the MAEs in the N-zone (19.52 ± 4.46 cm; t(39) = 10.45; p <.001) and the E-zone (26.13 ± 7.57 cm; t(39) = 11.40; p <.001) were significantly higher than in the F-zone (14.10 ± 1.64 cm).”
Also in the method section, we have stated that:
“In the dataset adjusted for uneven location visits, we divided distance values into five equally sized bins. Then, a sub-dataset was created that contains an equal number of data points for each of these bins.”
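The equalization described in the methods quote can be sketched in a few lines. This is a generic reconstruction under stated assumptions (a skewed occupancy distribution, five equal-width distance bins, and down-sampling every bin to the size of the rarest one); the actual bin edges and sampling scheme are the authors' own.

```python
# Sketch of occupancy-balanced subsampling: equal data points per distance bin.
import numpy as np

rng = np.random.default_rng(3)
distance = rng.exponential(20, 5000).clip(0, 99)   # skewed occupancy (cm), simulated

n_dist_bins = 5
edges = np.linspace(0, 100, n_dist_bins + 1)
bin_idx = np.digitize(distance, edges[1:-1])       # bin label 0..4 per sample
counts = np.bincount(bin_idx, minlength=n_dist_bins)
n_per_bin = counts.min()                           # equalize to the rarest bin

selected = np.concatenate([
    rng.choice(np.where(bin_idx == b)[0], n_per_bin, replace=False)
    for b in range(n_dist_bins)
])
balanced = distance[selected]
bal_counts = np.bincount(np.digitize(balanced, edges[1:-1]),
                         minlength=n_dist_bins)
print(bal_counts)                                  # identical count in every bin
```

Training the regressor on indices drawn this way removes the bias toward heavily visited distances, so remaining zone differences in MAE cannot be attributed to uneven sampling.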
Why do the authors choose to use a multi-layer neural network (Figure 2b-c) to decode the animal's distance to the encounter zone?(…) The authors may consider also showing an analysis using simple regression, or maybe something like an SVM, in addition to the ANN approach.
We began with a simple linear regression model and progressed to more advanced methods, including SVM and multi-layer neural networks. As shown below, simpler methods could decode distance to some extent, but neural networks and random forest regressors outperformed others (Neural Network: 16.61 cm ± 3.673; Linear Regression: 19.85 cm ± 2.528; Quadratic Regression: 18.68 cm ± 4.674; SVM: 18.88 cm ± 2.676; Random Forest: 13.59 cm ± 3.174).
We chose the neural network model for two main reasons: (1) previous studies demonstrated its superior performance compared to Bayesian regressors commonly used for decoding neural ensembles, and (2) its generalizability and robustness against noisy data. Although the random forest regressor achieved the lowest decoding error, we avoided using it due to its tendency to overfit and its limited generalization to unseen data.
Overall, we expect similar results with other regressors, albeit with different statistical power for decoding accuracy. We further speculate that the neural network’s use of multiple nodes contributes to its robustness against noise in single-unit recordings and enables the network to capture distributed processing within neural ensembles.
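A model comparison of the kind reported above can be sketched with scikit-learn. The data here are simulated and the hyperparameters are assumptions; the sketch only mirrors the structure of the comparison (linear regression, SVM, random forest, and a multi-layer network, scored by cross-validated MAE).

```python
# Sketch of the decoder comparison on simulated firing-rate data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(0, 1, (500, 30))                       # simulated firing rates
y = X @ rng.normal(0, 1, 30) + rng.normal(0, 2, 500)  # distance-like target

models = {
    "linear": LinearRegression(),
    "svm": SVR(),
    "forest": RandomForestRegressor(n_estimators=50, random_state=0),
    "mlp": MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}
results = {}
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=3,
                           scoring="neg_mean_absolute_error").mean()
    results[name] = mae
    print(f"{name}: MAE = {mae:.2f}")
```

On real ensembles, the relative ordering of the models is an empirical question; the response reports that the network and random forest performed best, with the forest set aside over overfitting concerns.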
In Figure 6c, the authors show a prediction of withdrawal behavior based on neural activity seconds before the behavior occurs. This is potentially very interesting, as it suggests that something about the state of neural dynamics in PFC is potentially related to the propensity to withdraw, or to the preparation of this behavior. However, another possibility is that the animal behaves differently, in more subtle ways, while it is anticipating threat and preparing withdrawal behavior - since PFC neurons are correlated with behavior, this could explain decoder performance before the withdrawal behavior occurs. To rule out this possibility, it would be useful to analyze how well, and how early, withdrawal success can be decoded only on the basis of behavioral features from the video, and then to compare this with the time course of the neural decoder. Another approach might be to decode the behavior on the basis of video data as well as neural data, and using a model comparison, measure whether inclusion of neural features significantly increases decoder performance.
We appreciate this important point, as mPFC activity might indeed reflect motor preparation preceding withdrawal behavior. Another reviewer raised a similar concern regarding potential micro-behavioral influences on mPFC activity prior to withdrawal responses. However, our behavioral analysis suggests that highly trained rats engage in sucrose licking with little variability, regardless of the subsequent behavioral decision. In support of this, 95% of inter-lick intervals were shorter than 0.25 seconds, which does not leave enough time to perform any additional behavior during encounters.
Author response image 2.
To further clarify this, we have included an additional video showing both avoidance and escape withdrawals at close range. This video was recorded during the development of the behavioral paradigm; we did not routinely collect this view, as animals consistently exhibited stable licking behavior in the E-zone. As demonstrated in the video, the rat remains highly focused on the lick port with minimal body movement during encounters. Therefore, we believe that the neural ensemble dynamics observed in the mPFC are unlikely to be driven by micro-behavioral changes.
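The inter-lick interval check behind this argument is simple to express. The lick train below is simulated (the ~6.3 Hz rate comes from the methods quote elsewhere in this response; the jitter value is an assumption), and the computation is just the fraction of intervals under the 0.25-s criterion.

```python
# Sketch of the inter-lick interval analysis on a simulated lick train.
import numpy as np

rng = np.random.default_rng(6)
# Assumed: licking at ~6.3 Hz with small timing jitter (illustrative values).
ilis = rng.normal(1 / 6.3, 0.02, 1000).clip(min=0.05)
licks = np.cumsum(ilis)                    # lick timestamps in seconds

observed_ilis = np.diff(licks)
frac_fast = (observed_ilis < 0.25).mean()  # fraction below the 0.25-s criterion
print(f"{frac_fast:.0%} of inter-lick intervals are < 0.25 s")
```

A distribution this tight leaves essentially no gaps long enough for intervening movements, which is the basis for the claim that pre-withdrawal decoding is not explained by micro-behavior.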
Reviewer 2 (Public Review):
Thank you for the positive comment on our behavior paradigm and constructive suggestions on additional analysis. We came to think that the role of mPFC could be better portrayed as representing and switching between different encoding targets under different contexts, which in part, was more clearly manifested by the naturalistic behavioral paradigm. In the revision we tried to convey this message more explicitly and provide a new perspective for this important aspect of mPFC function.
It is not clear what proportion of each of the ensembles recorded is necessary for decoding distance from the threat, and whether it is these same neurons that directly 'switch' to responding to head entry or withdrawal in the encounter phase within the total population. The PCA gets closest to answering this question by demonstrating that activity during the encounter is different from activity in the nesting or foraging zones, but in principle this could be achieved by neurons or ensembles that did not encode spatial parameters. The population analyses are focused on neurons sensitive to behaviours relating to the threat encounter, but even before dividing into subtypes etc., this is at most half of the recorded population.
In our study, the key idea we aim to convey is that mPFC neurons adapt their encoding schemes based on the context or functional needs of the ongoing task. Other reviewers also suggested strengthening the evidence that the same neurons directly switch between encoding two different tasks. The competing hypothesis to "switching functions within the same neurons" posits that there are dedicated subsets of neurons that modulate behavior, either by driving decisions and behaviors themselves or by being driven by computations from other brain regions.
To test this idea, we included an additional analysis chapter in the results section titled Overlapping populations of mPFC neurons adaptively encode spatial information and defensive decision. In this section, we directly tested this hypothesis by examining each neuron's contribution to the distance regressor and the event classifier. The results showed that the histogram of feature importance (the contribution to each task) is highly skewed toward zero for both decoders, and that removing neurons with high feature importance does not impair the decoders' performance. These findings suggest that 1) there is no clear division of labor among neurons across the two tasks, and 2) information about spatial and defensive behavior is distributed across the population.
Furthermore, we tested whether there is a negative correlation between the feature importance of spatial encoding and avoidance encoding. Even if there were no “key neurons” that transmit a significant amount of information about either spatial or defensive behavior, it is still possible that neurons with higher information in the navigation context might carry less information in the active-foraging context, or vice versa. However, we did not observe such a trend, suggesting that mPFC neurons do not exhibit a preference for encoding one type of information over the other.
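The feature-importance analysis described above can be sketched as follows. This is an illustrative reconstruction on simulated data in which the distance signal is distributed across all units by construction; network size, permutation-importance settings, and the 20% cutoff are assumptions modeled on the description in the text.

```python
# Sketch: rank units by permutation importance, drop the top 20%, re-decode.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_units = 50
X = rng.normal(0, 1, (600, n_units))
y = X @ rng.normal(0, 1, n_units)          # distributed code: every unit
                                           # carries some distance signal
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(Xtr, ytr)
imp = permutation_importance(model, Xte, yte, n_repeats=5,
                             random_state=0).importances_mean

top20 = np.argsort(imp)[-n_units // 5:]    # most informative 20% of units
keep = np.setdiff1d(np.arange(n_units), top20)

reduced = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
reduced.fit(Xtr[:, keep], ytr)
mae_full = mean_absolute_error(yte, model.predict(Xte))
mae_reduced = mean_absolute_error(yte, reduced.predict(Xte[:, keep]))
print(f"full: {mae_full:.2f}, top-20% removed: {mae_reduced:.2f}")
```

If decoding survives removal of the highest-importance units, the information cannot reside in a small dedicated subset, which is the population-distribution argument made in the response.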
Lastly, another reviewer raised the concern that the PCA results, which we used as evidence of functional separation of different ensemble functions, might be driven by a small number of event-coding neurons. To address this, we conducted the same analysis on a subset of data that excluded neural activity potentially influenced by significant events in the E-zone. In the Peri-Event Time Histogram (PETH) analysis, we observed that some neurons exhibit highly-modulated activity upon arrival at the E-zone (head entry; HE) and immediately following voluntary departure or attack (head withdrawal; HW). We defined 'critical event times' as ± one second from these events and excluded neural data from these periods to determine if PCA could still differentiate neural activities across zones. Despite these exclusions, the results continued to show populational differences between zones, reinforcing the notion that neurons adapt their activity according to the context. We acknowledge that this analysis still cannot eliminate all of the confounding factors due to the context change, but we confirmed that excluding two significant events (delivery onset of sucrose and withdrawal movement) does not alter our result.
To summarize, these additional results further support the conclusion that spatial and avoidance information is distributed across the neural population rather than being handled by distinct subsets. The analyses revealed no negative correlation between spatial and avoidance encoding, and excluding event-driven neural activity did not alter the observed functional separation, confirming that mPFC neurons dynamically adjust their activity to meet contextual demands.
A second concern is also illustrated by Fig. 7: in the data presented, separate reward and threat encoding neurons were not shown - in the current study design, it is not possible to dissociate reward and threat responses as the data without the threat present were only used to study spatial encoding integrity.
Thank you for this valuable feedback. Other reviewers have also noted that Figure 7 (now Figure 8) was misleading and contained assertions not supported by our experiments. In response, we have revised the model to more accurately reflect our findings. We have eliminated the distinction between reward-coding and threat-coding neurons, simplifying it to focus on spatial-encoding and avoidance-encoding neurons. The updated figure aligns more appropriately with our findings and claims. The revised legend reads: A. Distinct functional states (spatial vs. avoidance decision) encoded by the same neuronal population are separable by region (F- vs. E-zone). B. Hypothetical control models by which mPFC neurons assume different functional states.
Thirdly, the findings of this work are not mechanistic or functional but are purely correlational. For example, it is claimed that analyzing activity around the withdrawal period allows for ascertaining their functional contributions to decisions. But without a direct manipulation of this activity, it is difficult to make such a claim. The authors later discuss whether the elevated response of Type 2 neurons might simply represent fear or anxiety motivation or threat level, or whether they directly contribute to the decision-making process. As is implicit in the discussion, the current study cannot differentiate between these possibilities. However, the language used throughout does not reflect this.
We acknowledge that our experiments are purely correlational, and that this is a weakness of the study. Although we tried to choose wording that was not deterministic, we agree that some of the language might mislead readers into thinking we had demonstrated a direct functional contribution. We have therefore changed expressions as shown below.
“We then further analyzed the (functional contribution ->)correlation between neural activity and success and failure of avoidance behavior. If the mPFC neurons (encode ->)participate in the avoidance decisions, avoidance withdrawal (AW; withdrawal before the attack) and escape withdrawal (EW; withdrawal after the attack) may be distinguishable from decoded population activity even prior to motor execution.”
We have also added the following passage to the discussion section to clarify the limitations of the study.
“Despite this interesting conjecture, any analysis based on recording data is only correlational, mandating further studies with direct manipulation of the subpopulation to confirm its functional specificity.”
Fourthly, the authors mention the representation of different functions in 'distinct spatiotemporal regions' but the bulk of the analyses, particularly in terms of response to the threat, do not compare recordings from PL and IL although - as the authors mention in the introduction - there is prior evidence of functional separation between these regions.
Thank you for bringing this point to our attention. As we mentioned in the introduction, we acknowledge the functional differences between the PL and IL regions. Although differences in spatial encoding between these two areas were not deeply explored, we anticipated finding differences in event encoding, given the distinct roles of the PL and IL in fear and threat processing. However, our initial analysis revealed no significant differences in event encoding between the regions, and as a result, we did not emphasize these differences in the manuscript. To address this point, we have reanalyzed the data separately by region and included the following findings in the manuscript.
“However, we did not observe a difference in decoding accuracy between the PL and IL ensembles, and there were no significant interactions between regressor type (shuffled vs. original) and regions (mixed-effects model; regions: p=.996; interaction: p=.782). These results indicate that the population activity in both the PL and IL contains spatial information (Figure 2D, Video 3).
[…]
Furthermore, we analyzed whether there is a difference in prediction accuracy between sessions with different recorded regions, the PL and the IL. A repeated two-way ANOVA revealed no significant difference between recorded regions, nor any interaction (regions: F(1, 38) = 0.1828, p = 0.671; interaction: F(1, 38) = 0.1614, p = 0.690).
[…]
We also examined whether there is a significant difference between the PL and IL in the proportion of Type 1 and Type 2 neurons. In the PL, among 379 recorded units, 143 units (37.73%) were labeled as Type 1, and 75 units (19.79%) were labeled as Type 2. In contrast, in the IL, 156 units (61.66%) and 19 units (7.51%) of 253 recorded units were labeled as Type 1 and Type 2, respectively. A Chi-square analysis revealed that the PL contains a significantly higher proportion of Type 2 neurons (χ²(1, 632) = 34.85, p < .001), while the IL contains a significantly higher proportion of Type 1 neurons compared to the other region (χ²(1, 632) = 18.07, p < .001).”
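The chi-square comparison quoted above can be recomputed directly from the reported unit counts. The counts come from the text; the exact construction of the contingency tables (each cell type vs. all remaining units, by region, without continuity correction) is our assumption, so the statistics need not reproduce the reported values exactly.

```python
# Recomputing the PL/IL cell-type proportion tests from the reported counts.
from scipy.stats import chi2_contingency

# Counts reported in the response: units per region and type.
pl_total, pl_type1, pl_type2 = 379, 143, 75
il_total, il_type1, il_type2 = 253, 156, 19

# Type 2 vs. non-Type 2 units by region (2x2 contingency table).
table_t2 = [[pl_type2, pl_total - pl_type2],
            [il_type2, il_total - il_type2]]
chi2, p, dof, _ = chi2_contingency(table_t2, correction=False)
print(f"Type 2: chi2({dof}) = {chi2:.2f}, p = {p:.3g}")

# Type 1 vs. non-Type 1 units by region.
table_t1 = [[pl_type1, pl_total - pl_type1],
            [il_type1, il_total - il_type1]]
chi2_1, p1, dof1, _ = chi2_contingency(table_t1, correction=False)
print(f"Type 1: chi2({dof1}) = {chi2_1:.2f}, p = {p1:.3g}")
```

Under this framing, both regional differences in cell-type proportions are highly significant, consistent with the conclusion drawn in the quoted passage.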
To summarize our additional results, we did not observe performance differences in distance decoding or event decoding. The only difference we observed was the proportional variation of Type 1 and Type 2 neurons when we separated the analysis by brain region. These results are somewhat counterintuitive, considering the distinct roles of the two regions—particularly the PL in fear expression and the IL in extinction learning. However, since the studies mentioned in the introduction primarily used lesion and infusion methods, this discrepancy may be due to the different approach taken in this study. Considering this, we have added the following section to the discussion.
“Interestingly, we found no difference between the PL and IL in the decoding accuracy of distance or avoidance decision. This is somewhat surprising considering the distinct roles of these regions in the long line of fear conditioning and extinction studies, where the PL has been linked to fear expression and the IL to fear extinction learning (Burgos-Robles et al., 2009; Dejean et al., 2016; Kim et al., 2013; Quirk et al., 2006; Sierra-Mercado et al., 2011; Vidal-Gonzalez et al., 2006). On the other hand, more Type 2 neurons were found in the PL and more Type 1 neurons were found in the IL. To recap, typical Type 1 neurons increased their activity briefly after head entry and then remained inhibited, while Type 2 neurons showed a burst of activity at head entry and sustained increased activity thereafter. One study employing a context-dependent fear discrimination task (Kim et al., 2013) also identified two distinct types of PL units: short-latency CS-responsive units, which increased firing during the initial 150 ms of tone presentation, and persistently firing units, which maintained firing for up to 30 seconds. Given the temporal dynamics of Type 2 neurons, it is possible that our unsupervised clustering method may have merged the two types of neurons found in Kim et al.’s study.
While we did not observe decreased IL activity during dynamic foraging, prior studies have shown that IL excitability decreases after fear conditioning (Santini et al., 2008), and increased IL activity is necessary for fear extinction learning. In our paradigm, extinction learning was unlikely, as the threat persisted throughout the experiment. Future studies with direct manipulation of these subpopulations, particularly examining head withdrawal timing after such interventions, could provide insight into how these subpopulations guide behavior.”
Additionally, we made some changes to the introduction, mainly replacing PL/IL with mPFC for consistency with the main results and conclusions, and specifying the correlational nature of the recording study.
“Machine learning-based populational decoding methods, alongside single-cell analyses, were employed to investigate the correlations between neuronal activity and a range of behavioral indices across different sections within the foraging arena.”
Reviewer 2 (Recommendations):
The authors consistently use parametric statistical tests throughout the manuscript. Can they please provide evidence that they have checked whether the data are normally distributed? Otherwise, non-parametric alternatives are more appropriate.
Thank you for raising this important analytical issue. We re-ran tests of normality for all our data using the Shapiro-Wilk test (α = .05) and found that the data sets summarized in Author response table 1 below require non-parametric tests. For analyses that did not pass the normality test, we used a non-parametric alternative instead and updated the methods section accordingly. For instance, the repeated-measures ANOVAs for supplementary figure S1 and the PCA results were changed to Friedman tests with Dunn’s multiple comparison test.
Author response table 1.
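The normality-gated test selection described above follows a standard pattern, sketched here on simulated paired measurements (the effect size and sample size are illustrative assumptions): check the paired differences with Shapiro-Wilk, then fall back to a non-parametric alternative when normality is rejected.

```python
# Sketch: choose parametric vs. non-parametric paired test via Shapiro-Wilk.
import numpy as np
from scipy.stats import shapiro, ttest_rel, wilcoxon

rng = np.random.default_rng(7)
a = rng.normal(10, 2, 40)                  # e.g., MAE per session, condition A
b = a + rng.normal(0.5, 1, 40)             # condition B (paired with A)

diff = b - a
w, p_norm = shapiro(diff)                  # normality of the paired differences
if p_norm > 0.05:
    stat, p = ttest_rel(a, b)              # parametric: paired t-test
    chosen = "paired t-test"
else:
    stat, p = wilcoxon(a, b)               # non-parametric alternative
    chosen = "Wilcoxon signed-rank"
print(f"{chosen}: p = {p:.3f}")
```

The same gate applies to the repeated-measures designs mentioned above, with the Friedman test replacing the repeated-measures ANOVA when normality fails.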
Line 107: it is not clear here or in the methods whether a single drop of sucrose solution is delivered per lick or at some rate during the encounter, both during the habituation or in the final task. This is important information in order to understand how animals might make decisions about whether to stay or leave and how to interpret neural responses during this time period. Or is it a large drop, such that it takes multiple licks to consume? Please clarify.
The apparatus incorporated a solenoid valve controlled by an IR beam sensor. Because the beam sensor was located directly in front of the pipe, the rat’s tongue activated the sensor, so each lick opened the valve briefly, releasing a small amount of liquid; the rat had to lick continuously to obtain sucrose. We carefully regulated the flow of the liquid and installed a small sink connected to a vacuum pump so that any sucrose not consumed by the rat was instantly removed from the port. We have clarified how sucrose was delivered in both the methods and results sections.
Method:
“The sucrose port has an IR sensor that is activated by a single lick. The rat typically stays in front of the lick port and licks continuously, at rates of up to 6.3 licks per second, to obtain sucrose. Any sucrose droplets falling into the bottom sink were immediately removed by negative pressure so that the rat’s behavior remained focused on licking.”
Result:
“The lick port was activated by an IR-beam sensor, triggering the solenoid valve when the beam was interrupted. The rat gradually learned to obtain rewards by continuously licking the port.”
However, I'm not sure I understand the authors' logic in the interpretation: does the S-phase not also consist of goal-directed behaviour? To me, the core difference is that one is mediated by threat and the other by reward. In addition, it would be helpful to visualize the behaviour in the S-phase, particularly the number of approaches. This difference in the amount of 'experience' so to speak might drive some of the decrease in spatial decoding accuracy, even if travel distance is similar (it is also not clear how travel distance is calculated - is this total distance?) Ideally, this would also be included as a predictor in the GLM.
We agree that the behaviors observed during the shuttling phase can also be considered goal-directed, as the rat moves purposefully toward explicit goals (the sucrose port and the N-zone during the return trip). However, we argue that there is a significant difference in the level of complexity of these goals.
During the L-phase, the rat not only has to navigate successfully to the E-zone for sucrose but also has to attend to the robot, either avoiding an attack from the robot's forehead or escaping the fast-striking motion of the claw. When running toward the E-zone, the rat typically takes a side-approaching path, similar to Kim and Choi (2018), and exhibits defensive behaviors such as a stretched posture, which were not observed in the S-phase. This differs from the S-phase, in which the rat adopted a highly stereotyped navigation pattern fairly quickly (within 3 sessions), as evidenced by more than 50 shuttling trajectories per session. In this phase, the rat exhibited more stimulus-response behavior, simply repeating the same actions over time without deliberate optimization.
In our additional experiment with two different levels of goal complexity (reward-only vs. reward/threat conflict), we used a between-subject design in which both groups experienced both the S-phase and L-phase before surgery and underwent only one type of session afterward. This approach ruled out the possibility of differences in contextual experience. Additionally, since we initially designed the S-phase as extended training, behaviors in the apparatus tended to stabilize after rats completed both the S-phase and L-phase before surgery. As a result, we compared the post-surgery Lobsterbot phase to the post-surgery shuttling phase to investigate how different levels of goal complexity shape spatial encoding strength.
To clarify our claim, we edited the paragraph below.
“This absence of spatial correlates may result from a lack of complex goal-oriented navigation behavior, which requires deliberate planning to acquire more rewards and avoid potential threats.
[…]
After the surgery, unlike the Lob-Exp group, the Ctrl-Exp group returned to the shuttling phase, during which the Lobsterbot was removed. With this protocol, both groups experienced sessions with the Lobsterbot, but the Ctrl-Exp group's task became less complex, as it was reduced to mere reward collection.
Given these observations, along with the mPFC’s lack of consistency in spatial encoding, it is plausible that the mPFC operates in multiple functional modes, and the spatial encoding mode is preempted when the complexity of the task requires deliberate spatial navigation.”
Additionally, we added behavioral data from the initial S-phase to Supplementary Figure 1.
It is a good point that the amount of experience might drive the decrease in spatial decoding accuracy. To test this hypothesis, we added a new variable, the number of Lobsterbot sessions after surgery, to the previous GLM analysis. The updated model predicted the outcome variable with significant accuracy (F(4,44) = 10.31, p < .001) and an R-squared value of 0.4838. The regression coefficients were as follows: presence of the Lobsterbot (2.76, standard error [SE] = 1.11, t = 2.42, p = .020), number of recorded cells (-0.43, SE = .08, t = -5.22, p < .001), recording location (0.90, SE = 1.11, p = .424), and number of L sessions (0.002, SE = 0.11, p = .981). These results indicate that the number of exposures to Lobsterbot sessions, as a measure of experience, did not affect spatial decoding accuracy.
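To make the structure of this model concrete, it can be sketched as an ordinary least-squares fit on a design matrix with the four predictors named above. Everything below is simulated; the coefficient values used to generate the toy outcome merely echo the reported estimates for illustration.

```python
# Toy version of the GLM with the added session-count predictor.
import numpy as np

rng = np.random.default_rng(1)
n = 49  # observations, consistent with the reported F(4, 44)

# hypothetical design: intercept, Lobsterbot presence, cell count,
# recording location, and number of post-surgery L sessions
X = np.column_stack([
    np.ones(n),
    rng.integers(0, 2, n).astype(float),
    rng.integers(10, 30, n).astype(float),
    rng.integers(0, 2, n).astype(float),
    rng.integers(1, 12, n).astype(float),
])
# simulate an outcome whose true coefficients echo the reported estimates
y = X @ np.array([20.0, 2.76, -0.43, 0.90, 0.0]) + rng.normal(0.0, 2.0, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
```

A near-zero fitted coefficient on the last column, as in the reported model, is what supports the conclusion that session count does not explain decoding accuracy.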
As a minor edit, we changed the term to “total travel distance”.
Relating to the previous point, it should be emphasized in both sections on removing the Lobsterbot and on non-navigational behaviours that the spatial decoding is all in reference to distance from the threat (or reward location). The language in these sections differs from the previous section where 'distance from the goal' is mentioned. If the authors wish to discuss spatial decoding per se, it would be helpful to perform the same analysis but relative to the animals' own location which might have equal accuracy across locations in the arena. Otherwise, it is worth altering the language in e.g. line 258 onwards to state the fact that distance to the goal is only decodable when animals are actively engaged in the task.
Thank you for this comment. We changed the term to “distance from the conflict zone” or “distance of the rat to the center of the E-zone” to clarify our experimental setup.
In Fig. 5, why is the number of neurons shown in the PETHs less than the numbers shown in the pie charts?
The difference in the number of neurons between the PETHs and the pie charts in Figure 5 arises because PETHs are drawn only for 'event-responsive' units. For visualization, we selectively included neurons that met the criteria described in the Methods section (Behavior-responsive unit analysis). We have updated the caption for Figure 5 as follows to minimize confusion.
“Multiple subpopulations in the mPFC react differently to head entry and head withdrawal.
(A) Top: The PETH of head entry-responsive units is color-coded based on the Z-score of activity.
(C) The PETH of head withdrawal-responsive units is color-coded based on the Z-score of activity.”
I appreciate the amount of relatively unprocessed data plotted in Figure 5, but it would be great to visualize something similar for AW vs. EW responses within the HW2 population. In other words, what is there that's discernably different within these responses that results in the findings of Fig. 6?
To visualize the difference in neural activity between AW and EW, we included an additional supplementary figure (Supplementary Figure 5). We divided the neurons into Type 1 and Type 2 and plotted PETHs during Avoidance Withdrawal (AW) and Escape Withdrawal (EW). Consistent with the results shown in Figure 6d, we could visually observe increased activity in Type 2 neurons before the execution of AW compared to EW. However, we could not find a similar pattern in Type 1 neurons.
On a related note, it would add explanatory power if the authors were able to more tightly link the prediction accuracy of the ensemble (particularly the Type 2 neurons) to the timing of the behaviour. Earlier in the manuscript it would be helpful to show latency to withdraw in AW trials; are animals leaving many seconds before the attack happens, or are they just about anticipating the timing of the attack? And therefore when using ensemble activity to predict the success of the AW, is the degree to which this can be done in advance (as the authors say, up to 6 seconds before withdrawal) also related to how long the animal has been engaged with the threat?
We agree that the timing of head withdrawal, particularly in AW trials, is a critical factor in describing the rat's strategy toward the task. To test whether the rat uses a precise timing strategy—for instance, leaving several seconds before the attack or exploiting the discrete 3- and 6-second attack durations—we plotted all head withdrawal timepoints during the 6-second trials. The distribution was fairly even, without distinguishable peaks (e.g., at the very initial period or at the 3- or 6-second mark), indicating that the rats did not use a precise temporal strategy. We included these additional data in a supplementary figure (Supplementary Figure 6) and added the following to the results section.
“We monitored all head withdrawal timepoints to assess whether rats developed a temporal strategy to differentiate between the 3-second and 6-second attacks. We found no evidence of such a strategy, as the timings of premature head withdrawals during the 6-second attack trials were evenly distributed (see Supplementary Figure S6).”
As depicted in the new supplementary figure, head withdrawal times during avoidance behavior vary from sub-second latencies to the 3- or 6-second attack timepoints. Prompted by the reviewer’s comment, we became curious whether decoding accuracy differs depending on how long the animal engaged with the threat. We selected all 6-second attack and avoidance withdrawal trials and checked whether correctly classified trials (AW trials classified as AW) had different head withdrawal times—perhaps shorter durations—than misclassified trials (AW trials classified as EW). As shown in Author response image 3 below, there was no significant difference between the two types, indicating that the latency of head withdrawal does not affect prediction accuracy.
Author response image 3.
Finally, there remain some open questions. One is how much encoding strength - of either space or the decision to leave during the encounter - relates to individual differences in animal performance or behaviour, particularly because this seems so variable at baseline. A second is how stable this encoding is. The authors mention that the distance encoding must be stable to an extent for their regressor to work; I am curious whether this stability is also found during the encounter coding, and also whether it is stable across experience. For example, in a session when an individual has a high proportion of anticipatory withdrawals, is the proportion of Type 2 neurons higher?
Thank you for these questions. To recap the animal numbers: we used five rats in the Lobsterbot experiments and three rats in the control experiment in which the Lobsterbot was removed after training. Indeed, there were individual differences in performance (i.e., avoidance success rate), the number of recorded units (related to recording quality), and baseline behaviors. To clarify these differences, see Author response image 4 below.
Author response image 4.
We used a GLM to measure how much of the decoders’ accuracy was explained by individual differences. The results showed that 38.96% of the distance regressor’s performance and 12.14% of the event classifier’s performance were explained by individual differences. Since recording quality was highly dependent on the animal, the high subject variability detected in the distance regression might be attributed to the number of recorded cells: Rat00, which had the lowest average mean absolute error, also had the highest number of recorded cells, averaging 18. Compared to the distance regression, there was less subject variability in event classification; indeed, the GLM results showed that the variability explained by the number of cells was only 0.62% for event classification.
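The “variance explained by individual differences” figures above amount to treating subject identity as a categorical predictor and reading off the R-squared of a subject-only model. A toy version of that computation, with simulated per-session accuracies and hypothetical subject effects, looks like this:

```python
# Variance in decoder accuracy explained by subject identity (toy data).
import numpy as np

rng = np.random.default_rng(5)

# hypothetical per-session decoder accuracies from 5 rats (IDs 0-4)
subject = rng.integers(0, 5, 40)
accuracy = np.array([0.60, 0.70, 0.50, 0.80, 0.65])[subject] \
    + rng.normal(0.0, 0.05, 40)

# dummy-code the subject factor (subject 4 as baseline) plus an intercept
X = np.column_stack(
    [np.ones(40)] + [(subject == s).astype(float) for s in range(4)]
)
beta, *_ = np.linalg.lstsq(X, accuracy, rcond=None)
resid = accuracy - X @ beta
explained = 1.0 - resid @ resid / np.sum((accuracy - accuracy.mean()) ** 2)
```

`explained` is the fraction of between-session variance attributable to which rat contributed the session, the quantity reported as 38.96% and 12.14% above.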
The reason we mentioned that "distance encoding must be stable for our regressor to work" is entirely based on the population-level analysis. Because we used neural data and behaviors from entire trials within a session, the regressor or classifier would have low accuracy if encoding dynamics changed within the session. In other words, if the way neurons encode avoidance/escape predictive patterns changed within a training set, the classifier would fail to generate an optimized separation function that works well across all datasets.
To further investigate whether changes in experience affect event classification results over time, we plotted an additional graph below. Although there are individual and daily fluctuations in decoding accuracy, there was no observable trend throughout the experiments.
Author response image 5.
Regarding the correlation between the proportion of avoidance withdrawals and the proportion of Type 2 neurons, we were also curious and analyzed the data. Across 40 sessions, the correlation was -0.0716; for Type 1 neurons, it was slightly higher at 0.1459. We believe this indicates no meaningful relationship between the two variables.
Minor points:
I struggled with the overuse of acronyms in the paper. Some might be helpful but F-zone/N-zone, for example, or HE/HW, AW/EW are a bit of a struggle. After reading the paper a few times I learned them but a naive reader might need to often refer back to when they were first defined (as I frequently had to).
To increase readability, we removed acronyms that are not often used and changed HE/HW to head-entry/head-withdrawal.
I have a few questions about Figure 1F: in the text (line 150) it says that 'surgery was performed after three L sessions when the rats displayed a range of 30% to 60% AW'. This doesn't seem consistent with what is plotted, which shows greater variability in the proportion of AW behaviours both before and after surgery. It also appears that several rats only experienced two days of the L1 phase; please make clear if so. And finally, what is the line at 50% indicating? Neither the text nor the legend discuss any sort of thresholding at 50%. Instead, it would be best to make the distinction between pre- and post-surgery behaviour visually clearer.
Thank you for pointing out this issue. We acknowledge there was an error in the text description. As noted in the Methods section, we proceeded with surgery after three Lobsterbot sessions. We have removed the incorrect part from the Results section and revised the Methods section for clarity.
“After three days of Lobsterbot sessions, the rats underwent microdrive implant surgery, and recording data were collected from subsequent sessions, either Lobsterbot or shuttling sessions, depending on the experiment. For all post-surgery sessions, those with fewer than 20 approaches in 30 minutes were excluded from further analysis.”
Among the five rats, Rat2 and Rat3 did not approach the robot during the entire Lob2 session, which is why these two rats have no Lob2 data points. We updated the caption to clarify this issue.
Initially, we added a 50% reference line, but we agree it is unnecessary as we do not discuss this reference. We have updated the figure to include the surgery point, as shown in Supplementary Figure 1.
Fig. 2C: each dot is an ensemble of simultaneously recorded neurons, i.e. a subset of the total 800-odd units if I understand correctly. How many ensembles does each rat contribute? Similarly, is this evenly distributed across PL and IL?
Yes, each dot represents a single session, with a total of 40 sessions. The five rats contributed 11, 9, 8, 7, and 5 sessions, respectively. Although each rat initially had more than 10 sessions, we discarded sessions with a low unit count (fewer than 10 units; as detailed in Materials and Methods - Data Collection). We collected 25 sessions from the PL and 15 sessions from the IL; our goal was to collect more than 200 units per region.
Please show individual data points for Fig. 2D.
We updated the figure with individual data points.
Is there a reason why the section on removing the Lobsterbot (lines 200 - 215) does not have associated MAE plots? Particularly the critical comparison between Lob-Exp and Ctl-Exp.
We intentionally removed some graphs to create a more compact figure, but we appreciate your suggestion and have included the graph in Figure 2.
Some references to supplementary materials are not working, e.g. line 333.
The submitted version of our manuscript contained reference errors. In the current version, we used plain text for these references, and they now resolve correctly.
The legend for Supp. Fig. 2B is incorrect.
We greatly appreciate this point. We changed the caption to match the figure.
Reviewer 3 (Public Review):
Thank you for recognizing our efforts in designing an ethologically relevant foraging task to uncover the multiple roles of the mPFC. While we acknowledge certain limitations in our methodology—particularly that we only observed correlations between neural activity and behavior without direct manipulation—we have conducted additional analyses to further strengthen our findings.
Weakness:
The primary concern with this study is the absence of direct evidence regarding the role of the mPFC in the foraging behavior of the rats. The ability to predict heterogeneous variables from the population activity of a specific brain area does not necessarily imply that this brain area is computing or using this information. In light of recent reports revealing the distributed nature of neural coding, conducting direct causal experiments would be essential to draw conclusions about the role of the mPFC in spatial encoding and/or threat evaluation. Alternatively, a comparison with the activity from a different brain region could provide valuable insights (or at the very least, a comparison between PL and IL within the mPFC).
Thank you for the comment. Indeed, the fundamental limitation of a recording study is that it is only correlational, and any causal relationship between neural activity and behavioral indices remains speculative. We made this clearer in the revision and refrained from expressing speculative ideas suggesting causality. While we did not provide direct evidence that the mPFC computes or utilizes spatial/foraging information, we based our assertion on previous studies that directly demonstrated the mPFC's role in complex decision-making tasks (Martin-Fernandez et al., 2023; Orsini et al., 2018; Zeeb et al., 2015) and in certain types of spatial tasks (De Bruin et al., 1994; Sapiurka et al., 2016). We would like to emphasize that, to the best of our knowledge, no previous study has investigated mPFC function while an animal solves multiple heterogeneous problems in a semi-naturalistic environment. Therefore, although our recording study supports only speculative causal inference, it provides a foundation for investigating mPFC function. Future studies employing more sophisticated, cell-type-specific manipulations could confirm the hypotheses raised here.
One of the key questions of this manuscript is how multiple pieces of information are represented in the recorded population of neurons. Most of the studies mentioned above use highly structured experimental designs, which allow researchers to study only one function of the mPFC. In the current study, the semi-naturalistic environment allows rats to freely switch between multiple behavioral sets, and our decoding analysis quantitatively assesses the extent to which spatial/foraging information is embedded during these sets. Our goal is to demonstrate that two different task hyperspaces are co-expressed in the same region and that the degree of this expression varies according to the rat’s current behavior (See Figure 8(b) in the revised manuscript).
In addition, we added multiple analyses. First, we included a single-unit-level analysis examining place cell-like properties to contrast with the ensemble decoding. Most neurons did not show well-defined place fields, although there were some indications of place cell-like properties; for example, some neurons displayed fragmented place fields or unusually large place fields only at particular spots in the arena (mostly around the gates). The accuracy obtained from this place information at the single-neuron level is much lower than that obtained from population decoding. Likewise, although some neurons showed modulated firing around the time of particular behaviors (head entry and withdrawal), the overall prediction accuracy for avoidance decisions was much higher when the ensemble-based classifier was applied.
Moreover, given that high-dimensional movement has been shown to be reflected in the neural activity across the entire dorsal cortex, more thorough comparisons between the neural encoding of task variables and movement would help rule out the possibility that the heterogeneous encoding observed in the mPFC is merely a reflection of the rats' movements in different behavioral modes.
Thank you for the comment. We acknowledge that the neural activity may reflect various movement components across the different zones of the arena. We performed several analyses to test this idea. First, we would like to recap our run-and-stop event analysis, which provides insight into whether mPFC neurons encode location despite significant motor events. The rats typically move across the F-zone fairly routinely and swiftly (as if “running”) to reach the E-zone, at which point they reduce their speed to almost a halt (“stopping”). The PETHs around these critical motor events, however, did not show any significant modulation of neural activity, indicating that most neurons we recorded from the mPFC did not respond to movement.
We added this analysis to demonstrate that these sudden stops did not evoke the characteristic activation of Type 1 and Type 2 neurons observed during head entry into the E-zone. When we isolated these sudden stops outside the E-zone, we did not observe this neural signature (Supplementary Figure 2).
Second, our PCA results showed that population activity in the E-zone during dynamic foraging behavior was distinct from the activity observed in the N- and F-zones during navigation. However, there is a possibility that the two behaviorally significant events—entry into the E-zone and voluntary or sudden exit—might be driving the differences observed in the PCA results. To account for this, we designated ±1 second from head entry and head withdrawal as "critical event times," excluded the corresponding neural data, and reanalyzed the data. This method removed neural activity associated with sudden movements in specific zones. Despite this exclusion, the PCA still revealed distinct population activity in the E-zone, different from the other zones (Supplementary Figure 4). This result reduces the likelihood that the observed heterogeneous neural activity is merely a reflection of zone-specific movements.
Lastly, the main claim of the paper is that the mPFC population switches between different functional modes depending on the context. However, no dynamic analysis or switching model has been employed to directly support this hypothesis.
Thank you for this comment. Since we did not conduct a manipulation experiment, there is a clear limitation in uncovering how switching occurs between the two task contexts. To make the most of our population recording data, we added an additional results section that examines how individual neurons contribute to both the distance regressor and the event classifier. Our findings support the idea that distance and dynamic foraging information are distributed across neurons, with no distinct subpopulations dedicated to each context. This suggests that mPFC neurons adjust their coding schemes based on the current task context, aligning with Duncan’s (2001) adaptive coding model, which posits that mPFC neurons adapt their coding to meet the task's current demands.
Reviewer 3 (Recommendations):
The evidence for spatial encoding is relatively weak. In the F-zone (50 x 48 cm), the average error was approximately 17 cm, constituting about a third of the box's width and likely not significantly smaller than the size of a rat's body. The errors in the shuffled data are also not substantially greater than those in the original data. An essential test indicates that spatial decoding accuracy decreases when the Lobsterbot is removed. However, assessing the validity of the results is difficult in the current state. There is no figure illustrating the results, and no statistics are provided regarding the test for matching the number of neurons.
We acknowledge that the average error (~17 cm) measured in our study is relatively large, even though it is significantly smaller than that of the shuffled control model (22.6 cm). Previous studies reported smaller prediction errors, but under different experimental conditions: 16 cm in Kaefer et al. (2020) and less than 10 cm in Ma et al. (2023) and Mashhoori et al. (2018). Most notably, the average number of units used in our study (15.8 units per session) is much smaller than in these previous works, which used 63, 49, and 40 units, respectively. As our GLM results demonstrated, the number of recorded cells significantly influenced decoding accuracy (β = -0.43 cm/neuron); with a comparable number of recorded cells, we would likely have achieved comparable decoding accuracy. In addition, unlike other studies that employed a dedicated maze such as a virtual track or an 8-shaped maze, we exposed rats to a semi-naturalistic environment where they exhibited a variety of behaviors beyond simple navigation. As argued throughout the manuscript, we believe that the spatial information represented in the mPFC is susceptible to disruption when the animal engages in other activities. A similar phenomenon was reported by Mashhoori et al. (2018), where a decoder that typically showed a median error of less than 10 cm exhibited a much higher error—nearly 100 cm—near the feeder location.
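The shuffled-control comparison referenced here (17 cm vs. 22.6 cm) follows a standard recipe: fit the decoder on the real pairing of neural activity and position, then on pairings broken by circular shifts of the behavioral trace. The following is a self-contained toy version with synthetic Poisson rates and a plain least-squares readout standing in for the actual regressor; none of the numbers correspond to the real data.

```python
# Shuffled-control baseline for a distance decoder (toy data).
import numpy as np

rng = np.random.default_rng(2)

# hypothetical session: 500 time bins x 16 units, and a "distance" signal
# that is a noisy linear readout of the population
rates = rng.poisson(5, size=(500, 16)).astype(float)
dist = rates @ rng.normal(0.0, 1.0, 16) + rng.normal(0.0, 1.0, 500)

def decode_mae(X, y):
    """Fit a least-squares readout and return mean absolute error (no CV)."""
    D = np.column_stack([np.ones(len(X)), X])
    w, *_ = np.linalg.lstsq(D, y, rcond=None)
    return np.mean(np.abs(D @ w - y))

mae = decode_mae(rates, dist)
# shuffle control: circularly shift the behavior to break the pairing
# while preserving its temporal statistics
shuffled_mae = np.mean([
    decode_mae(rates, np.roll(dist, rng.integers(50, 450)))
    for _ in range(20)
])
```

The gap between `mae` and `shuffled_mae` is the analogue of the 17 cm vs. 22.6 cm comparison: it quantifies how much of the decoder's accuracy depends on the true alignment of activity and position.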
As for the reviewer’s request for comparing spatial decoding without the Lobsterbot, we added a new figure to illustrate the spatial decoding results, including statistical details. We also applied a Generalized Linear Model to regress out the effect of the number of recorded neurons and statistically assess the impact of Lobsterbot removal. This adjustment directly addresses the reviewer's request for a clearer presentation of the results and helps contextualize the decoding performance in relation to the number of recorded neurons.
As indicated in the public review, drawing conclusions about the role of the mPFC in navigation and avoidance behavior during the foraging task is challenging due to the exclusively correlational nature of the results. The accuracy in AW/EW discrimination increases a few seconds before the response, implying that changes in mPFC activity precede the avoidance/escape response. However, one must question whether this truly reflects the case. Could this phenomenon be attributed to rats modifying their "micro-behavior" (as evidenced by changes in movement observed in the video) before executing the escape response, and subsequently influencing mPFC activity?
We appreciate the reviewer's thoughtful observation regarding the correlational nature of our results and the potential influence of pre-escape micro-behaviors on mPFC activity. We acknowledge that the increased accuracy in AW/EW discrimination preceding the response could also be correlated with micro-behaviors. However, there is very little room for extraneous behavior other than licking the sucrose delivery port within the E-zone, as the rats are highly trained to perform this stereotypical behavior. To support this, we measured the time delays between licking events (inter-lick intervals). The results show a sharp distribution, with 95% of the intervals falling within a quarter second, indicating that the rats were stable in the E-zone, consistently licking without altering their posture.
To complement the data presented in Author response image 2, a video clip showing a rat engaged in licking behavior was included. We carefully designed the robot compartment and adjusted the distance between the Lobsterbot and the sucrose port to ensure that rats could exhibit only limited behaviors inside the E-zone. The video confirms that no significant micro-behaviors were observed during the rat’s activity in the E-zone.
If mPFC activity indeed switches mode, the results do not clearly indicate whether individual cells are specifically dedicated to spatial representation and avoidance or if they adapt their function based on the current goal. Figure 7, presented as a schematic illustration, suggests the latter option. However, the proportion of cells in the HE and HW categories that also encode spatial location has not been demonstrated. It has also not been shown how the switch is manifested at the level of the population.
Thank you for this comment. As the reviewer pointed out, we suggest that mPFC neurons do not diverge based on their functions, but rather adapt their roles according to the current goal. To support this assertion, we added an additional results section that calculates the feature importance of the decoders. This analysis allows us to quantitatively measure each neuron’s contribution to both the distance regressor and the event decoder. Our results indicate that distance and defensive behavior are not encoded by a small subset of neurons; instead, the information is distributed across the population. Shuffling the neural data of a single neuron resulted in a median increase in decoding error of 0.73 cm for the distance regressor and 0.01% for the event decoder, demonstrating that the decoders do not rely on a specific subset of neurons that exclusively encode spatial and/or defensive information.
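The per-neuron shuffle described above is essentially permutation feature importance. A compact sketch on simulated data is given below; the reported 0.73 cm and 0.01% figures come from the actual decoders, not from this toy, and the linear readout here merely stands in for whatever model is being probed.

```python
# Permutation feature importance for a linear decoder (toy data).
import numpy as np

rng = np.random.default_rng(3)

# 400 time bins x 12 hypothetical units, and a target that is a noisy
# linear mixture of all units (so information is distributed, not localized)
rates = rng.normal(0.0, 1.0, (400, 12))
w_true = rng.normal(0.0, 1.0, 12)
target = rates @ w_true + rng.normal(0.0, 0.5, 400)

# fit the readout once on the intact data
w, *_ = np.linalg.lstsq(rates, target, rcond=None)
base_mae = np.mean(np.abs(rates @ w - target))

# shuffle one unit's trace at a time and measure how much the error grows
importance = []
for i in range(rates.shape[1]):
    shuffled = rates.copy()
    shuffled[:, i] = rng.permutation(shuffled[:, i])
    importance.append(np.mean(np.abs(shuffled @ w - target)) - base_mae)
```

When, as in our data, every per-neuron increase is small and no single neuron dominates the `importance` list, the decoder's information is distributed across the population rather than carried by a dedicated subset.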
Although we found supporting evidence that mPFC neurons encode two different types of information depending on the current context, we acknowledge that we could not go further in answering how this switch is manifested. One simple explanation is that the function is driven by current contextual information and goals—in other words, a bottom-up mechanism. However, in our control experiment, simplifying the navigation task worsened the encoding of spatial information in the mPFC. Therefore, we speculate that an external or internal arbitrator circuit determines what information to encode. A precise temporal analysis of the timepoint when the switch occurs in more controlled experiments might answer these questions. We have added this discussion to the discussion section.
PL and IL are two distinct regions; however, there is no comparison between the two areas regarding their functional properties or the representations of the cells. Are the proportions of cell categories (HE vs HW or HE1 vs HE2, spatial encoding vs no spatial encoding) different in IL and PL? Are areas differentially active during the different behaviors?
Thank you for bringing up this issue. As mentioned in our response to the public review, we included a comparison between the PL and IL regions. While we did not observe any differences in spatial encoding (feature importance scores), the only distinction was in the proportion of Type 1 and Type 2 neurons, as the reviewer suggested. We have incorporated our interpretation of these results into the discussion section.
The results and interpretations of the cluster analysis appear to be highly dependent on the parameters used to define a cluster. For example, the HE2 category includes cells with activity that precedes events and gradually decreases afterward, as well as cells with activity that only follows the events.
We strongly agree that dependency on hyperparameters is a crucial concern when using unsupervised clustering methods. To eliminate subjective criteria in defining clusters, we deliberately chose a clustering approach that requires only two hyperparameters: the number of initial clusters (set to 8) and the minimum number of cells required for a valid cluster (cutoff limit, 50). The rationale behind these choices was that: 1) a higher number of initial clusters would fail to generalize neural activity; 2) clusters with fewer than 50 neurons would be difficult to analyze; and 3) the cutoff prevents the separation of clusters that show only noisy responses to the event.
Author response table 2 shows the differences in the number of cell clusters when we varied these two parameters. As demonstrated, changing these two variables does result in different numbers of clusters. However, when we plotted each cluster type’s activity around head entry (HE) and head withdrawal (HW), an increased number of clusters resulted in the addition of small subsets with low variation in activity around the event, without affecting the general activity patterns of the major clusters.
The example mentioned by the reviewer (the possible separation of HE2) appears when using a hyperparameter set that results in 4 clusters, not 3. In this result, 83 units that were labeled HE2 under the 3-cluster hyperparameter set form a new group, HE3 (Group 3). These units show increased activity after head entry and exhibit characteristics similar to HE2, with most of them classified as HW2, maintaining high activity until head withdrawal. Of the 83 HE3 units, 36 were further classified as HW2, 44 as non-significant, and 3 as HW1. We therefore believe this does not affect our analysis, as we observed the separation of the two major groups, Type 1 (HE1-HW1) and Type 2 (HE2-HW2), and focused our subsequent analysis on these groups.
Despite this validation, there remains a strong possibility that our method might not fully capture small yet significant subpopulations of mPFC units. As a result, we have included a sentence in the methods section addressing the rationale and stability of our approach.
“(Materials and Methods) To compensate for the limited number of neurons recorded per session, the hyperparameter set was chosen to generalize their activity and categorize them into major types, allowing us to focus on neurons that appeared across multiple recording sessions. Although changes in the hyperparameter sets resulted in different numbers of clusters, the major activity types remained consistent (Supplementary Figure S8). However, there is a chance that this method may not differentiate smaller subsets of neurons, particularly those with fewer than 50 recorded neurons.”
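For readers unfamiliar with the cutoff step, the logic can be sketched as follows. This is a hypothetical illustration, not the authors' actual pipeline: a toy k-means pass over synthetic peri-event activity vectors, after which clusters smaller than the minimum-size cutoff are discarded.

```python
import numpy as np

def cluster_with_cutoff(X, k, min_size, n_iter=50, seed=0):
    """Toy k-means; clusters with fewer than `min_size` members are
    discarded (their units relabeled -1), mirroring the idea of a
    minimum-cell cutoff for a valid cluster."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # assign each unit to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned units
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    # apply the minimum-size cutoff
    counts = np.bincount(labels, minlength=k)
    labels = np.where(counts[labels] >= min_size, labels, -1)
    return labels

# two well-separated synthetic "activity types" (120 units x 5 time bins each)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (120, 5)),
               rng.normal(3.0, 0.1, (120, 5))])
labels = cluster_with_cutoff(X, k=2, min_size=50)
```

The point of the sketch is that the two hyperparameters (`k` and `min_size`) fully determine which groups survive; all other steps are deterministic given the data.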
Author response table 2.
Minor points:
Line 333: Error! Reference source not found. This was probably the place for citing Figure S2?
Lines 339, 343: Error! Reference source not found.
Thank you for pointing these out. In the new version, all Word cross-reference fields have been replaced with plain text.
-
-
ekkobit.gitlab.io
-
The above findings are qualified by Blesse and Baskaran (2013), who found a significant cost reduction associated with forced mergers, but not with voluntary mergers, in a merger reform in the federal state of Brandenburg in eastern Germany (post-reunification). Their result indicates a differential outcome as a function of secondary characteristics of the reform, i.e., voluntariness. Given a correlation between administrative, economic efficiency and economic activity in
Here is a comment
-
-
pubs.acs.org
-
octanol–water partition coefficient (Kow)
This is a measure of the relationship between a pesticide's lipophilicity (fat solubility) and hydrophilicity (water solubility): basically, how soluble these pesticides are in the creek water. This test shows that the pesticides that are least water soluble and most fat soluble (higher Kow) are the most prevalent in soil. This makes sense!
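As a hypothetical numeric illustration (the values below are not from the article): the partition coefficient is the ratio of a compound's equilibrium concentration in octanol to its concentration in water, usually reported as log Kow.

```python
import math

def log_kow(conc_octanol, conc_water):
    """log10 of the octanol-water partition coefficient: the ratio of a
    compound's equilibrium concentration in octanol (a fat-like phase)
    to that in water. Higher values mean more lipophilic."""
    return math.log10(conc_octanol / conc_water)

# a compound 1,000x more concentrated in octanol than in water
print(log_kow(1000.0, 1.0))  # 3.0
```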
-
aldrin, dieldrin
Aldrin breaks down into dieldrin in the environment; both are known carcinogens and were banned in 1978.
-
-
found.ward.bay.wiki.org
-
www.sciencedirect.com
-
In historical research, the general issue of geocoding accuracy and the identification of locations becomes even more complex due to issues such as variant spelling, the change of location names over time, and various intricacies of historical sources.
This statement made me realize that geocoding in historical research is far more complex than I thought. Variations in the spelling of place names and the uncertainty of historical information can really challenge locational accuracy, especially when dealing with ambiguous or dated data.
-
-
yagmurrmez.github.io
-
This piece is about perfectionism in the form that arises from the high expectations of those around us.
Aaand if you can see this, I can also comment from my phone! But you do need to log in to Hypothesis.
-
-
upstream.force11.org
-
In this blog post the author argues that problematic incentive structures have led to a rapid increase in the publication of low-quality research articles and that stakeholders need to work together to reform incentive structures. The blog post has been reviewed by three reviewers. Reviewer 3 considers the blog post to be a ‘great piece’, and Reviewer 1 finds it ‘compellingly written and thought provoking’. According to Reviewer 2, the blog post does not offer significant new insights for readers already familiar with the topic. All three reviewers provide recommendations for clarifications. Reviewers 2 and 3 also suggest the blog post could be more critical toward publishers. Reviewers 1 and 3 suggest taking a broader perspective on incentives, for instance by also considering incentives related to teaching and admin or incentives for funders, libraries, and other organizations.
Competing interests: none
-
This op-ed addresses the issue with the exponential increase in publications and how this is leading to a lower quality of peer review which, in turn, is resulting in more bad science being published. It is a well-written article that tackles a seemingly eternal topic. This piece focussed more on the positives and potential actions which is nice to see as this is a topic that can become stuck in the problems. There are places throughout that would benefit from more clarity and at times there appears to be a bias towards publishers, almost placing blame on researchers. Very simple word changes or headings could immediately resolve any doubt here as I don't believe this is the intention of the article at all.
Additionally, this article is very focussed on peer review (a positive) but I think that it would benefit from small additions throughout that zoom out from this and place the discussion in the context of the wider issues - for example, you cannot change peer review incentives without changing the entire incentive structure around "service" activities, including teaching, admin, etc. This occurs to a degree with the discussion of other outputs, including preprints and data. Moreover, when discussing service-type activities, there is data revealing that certain demographics deliberately avoid this work. Adding this element to the article would provide a much stronger argument for change (and do some good in the current political climate).
Overall, I thought this was a great piece when it was first posted online and does exactly what a good op-ed should - provoke thought and discussion. Below are some specific comments, in reading order. I do not believe that there are any substantial or essential changes required, particularly given that this is an op-ed article.
-----
Quote: "Academia is undergoing a rapid transformation characterized by exponential growth of scholarly outputs."
Comment: There's an excellent paper providing evidence to this: https://direct.mit.edu/qss/article/5/4/823/124269/The-strain-on-scientific-publishing which would be a very positive addition
Quote: "it’s challenging to keep up with the volume at which research publications are produced"
Comment: Might be nice to add that this was a complaint dating back since almost the beginning of sharing research via print media, just to reinforce that this is a very old point.
Quote: "submissions of poor-quality manuscripts"
Comment: The use of "poor quality" here is unnecessary. That a submission is not accepted is no reflection on its "quality"; as such, this needlessly diminishes work rejected by one journal.
Quote: "Maybe there are too many poor quality journals too - responding to an underlying demand to publish low quality papers."
Comment: This misses the flip side - poor quality journals encourage and actively drive low quality & outright fraudulent submissions due to the publisher dominance in the assessment of research and academics.
Quote: "even after accounting for quality,"
Comment: Quality is mentioned here but has yet to be clearly defined. What is "quality"? - how many articles a journal publishes? The "prestige" of a journal? How many people are citing the articles?
Quote: "Researchers can – and do – respond to the availability by slicing up their work (and their data) into minimally publishable units"
Comment: I fully agree that some researchers do exactly this. However, again, this seems to blame researchers for creating the firehose problem. I think this point could be reworded to place less blame, or be substantiated with evidence that the practice is widespread. My own experience has been very mixed: I've worked for people who do this almost to the extreme (and have very high self-citations) and also for people who focus on the science and on making it as high quality and robust as possible. I agree many respond to the explosion of journals and varied quality in a negative manner, but the journals, not researchers, are the drivers here.
Quote: "least important aspect of the expected contributions of scholars."
Comment: I think it may be worth highlighting here that sometimes specific demographics (white males) actively avoid these kinds of service activities - there's a good study on this providing data in support of this. It adds an extra dimension into the argument for appropriate incentives and the importance & challenges of addressing this.
Quote: "high quality peer review"
Comment: Just another comment on the use of "quality'. This is not defined and I think when discussing these topics it is vital to be clear what one means by "high quality". For example, a high quality peer review that is designed as quality control would be detecting gross defects and fraud, preventing such work from being published (peer review does not reliably achieve this). In contrast, a high quality peer review designed to help authors improve their work and avoid hyperbole would be very detailed and collegial, not requesting large numbers of additional experiments.
Quote: "conferring public trust in the oversight of science"
Comment: I'm not convinced of this. Conveying peer review as a stamp of approval or QC leads to reduced trust when regular examples emerge with peer review failures - just look at Hydroxychloroquine and how peer review was used to justify that during COVID or the MMR/autism issues that are still on-going even after the work was retracted. I think this should be much more carefully worded, removed or expanded on to provide this perspective - this occurs slightly in the following sentence but it is very important to be clear on this point.
Quote: "Researchers hold an incredible amount of market power in scholarly publishing"
Comment: I like the next few paragraphs but, again, this seems to blame researchers when they in fact hold little or no power. I agree that researchers *could* use market pressure, but this is entirely unrealistic when their careers depend on publishing X papers in X journal. An argument as to why science feels increasingly non-collaborative, perhaps. Funders can effect immediate and significant changes. Institutions adopting reward structures that value teaching, for example, would have significant impacts on researcher behaviour. Researchers are adapting to the demands the publication system creates - more journals, greater quantity and reduced quality - whilst publishers maintain control over assessment: eLife being removed from WoS/Scopus is a prime example of publishers (via their parent companies) preventing innovation or even rather basic improvements.
Quote: "With preprint review, authors participate in a system that views peer review not as a gatekeeping hurdle to overcome to reach publication but as a participatory exercise to improve scholarship."
Comment: This is framing that I really like; improving scholarship, not quality control.
Quote: "buy"
Comment: typo
Quote: "adoption of preprint review can shift the inaccurate belief that all preprints lack review"
Comment: Is this the right direction for preprints though? If we force all preprints to be reviewed and only value reviewed-preprints, then we effectively dismantle the benefits of preprints and their potential that we've been working so hard to build. A recent op-ed by Alice Fleerackers et al provided an excellent argument to this effect. More a question than a suggestion for anything to change.
Quote: "between all of those stakeholders to work together without polarization"
Comment: I disagree here - publishers have repeatedly shown that their only real interest is money. Working with them risks undermining all of the effort (financial, careers, reputation, time) that advocates for change put in. The OA movement also highlights perfectly why this is such a bad route to go down (again). Publishers' grip on preprint servers is a great example - those servers are hard to use as a reader, lack APIs and access to data, and are not innovative or interacting with independent services. The community should make the rules and then publishers abide by and within them. Currently the publishers make all of the rules and dominate. Indeed, this is possibly the biggest omission from this article - the total dominance of publishers across the entire ecosystem. You can't talk about change without highlighting that the publishers don't just own journals but also the reference managers, the assessment systems, the databases, etc. I may be an outlier on this point but for all of the people I interact with (often those at the bottom of the ladder) this is a strong feeling. Again, not a suggestion for anything to change - indeed, the point of an op-ed is to stimulate thought and discussion, so dissent is positive.
Note that these annotations were made in hypothes.is and are available here, linked in-text for ease - comments are duplicated in this review.
-
Summary of the essay
In this essay, the author seeks to explain the ‘firehose’ problem in academic research: the rapid growth in the number of articles alongside a seemingly concurrent decline in quality. The explanation, he concludes, lies in the ‘superstructure’ of misaligned incentives and feedback loops that primarily drive publisher and researcher behaviour, with the current publish-or-perish evaluation system at the core. On the publisher side, these include commercial incentives driving both higher acceptance rates in existing journals and the launch of new journals with higher acceptance rates. At the same time, publishers seek to retain reputational currency by maintaining the consistency, and therefore brand power, of scarcer, legacy-prestige journals. The emergence of journal cascades (automatic referrals from one journal to another within the same publisher) and the introduction of APCs (especially for special issues) also contribute to commercial incentives driving article growth. On the researcher side, he argues that there is an apparent demand from researchers for more publishing outlets and simultaneous salami slicing by researchers, because authors feel they have to distribute relatively more publications among journals that are perceived to be of lower quality (higher acceptance rates) in order to gain prestige equivalent to that of a higher-impact paper. The state of peer review also feeds the firehose. The drain of PhD-qualified scientists out of academia, compounded by a lack of recognition for peer review, further contributes to the problem because there are insufficient reviewers in the system, especially for legitimate journals. Moreover, the peer review that is done is no guarantee of quality (in highly selective journals as well as ‘predatory’ ones). One of his conclusions is that there is a crisis not just in scholarly publishing but in peer review specifically, and it is this crisis that will undermine science the most.
Add AI into the mix of this publish or perish culture, and he predicts the firehose will burst.
He suggests that the solution lies in researchers taking back power themselves by writing more but ‘publishing’ less. By writing more he means outputs beyond traditional journal publications, such as policy briefs, blogs, preprints, data, code and so on, and that these should count as much as peer-reviewed publications. He places special emphasis on the potential role of preprints and on open and more collegiate preprint review acting as a filter upstream of the publishing firehose. He ends with a call for more collegiality across all stakeholders to align the incentives and thus alleviate the pressure causing the firehose in the first place.
General Comment
I enjoyed reading the essay and think the author does a good job of exposing multiple incentives and competing interests in the system. Although discussion of perverse incentives has been raised in many articles and blog posts, the author specifically focuses on some of the key commercial drivers impacting publishing and the responses of researchers to those drivers. I found the essay compellingly written and thought provoking although it took me a while to work through the various layers of incentives. In general, I agree with the incentives and drivers he has identified and especially his call for stakeholders to avoid polarization and work together to repair the system. Although I appreciate the need to have a focused argument I did miss a more in-depth discussion about the equally complex layers of incentives for institutions, funders and other organisations (such as Clarivate) that also feed the firehose.
I note that my perspective comes from a position of being deeply embedded in publishing for most of my career. This will have also impacted what I took away from the essay and the focus of my comments below.
Main comments
-
I especially liked the idea of a ‘superstructure’ of incentives as I think that gives a sense of the size and complexity of the problem. At the same time, by focusing on publisher incentives and researchers’ response to them he has missed out important parts of the superstructure contributing to the firehose, namely the role of institutions and funders in the system. Although this is implicit, I think it would have been worth noting more, in particular:
-
He mentions institutions and the role of tenure and promotion towards the end but not the extent of the immense and immobilizing power this wields across the system (despite initiatives such as DORA and CoARA).
-
Most review panels (researchers) assessing grants for funders are also still using journal publications as a proxy for quality, even if the funder policy states journal name and rank should not be used
-
Many institutions/universities still rely on the number and venue of publications. Although some notable institutions are moving away from this, the impact factor/journal rank is still largely relied on. This seems especially the case in China and India, for example, which have shown huge growth in research output. Although the author discusses the firehose, it would have been interesting to see a regional breakdown of this.
-
Libraries also often negotiate with publishers based on the volume of articles - i.e. they want evidence that they are getting more articles as they renegotiate a specific contract (e.g. transformative agreements), rather than also considering, for example, the quality of service.
-
Institutions are also driven by rankings in a parallel way to researchers being assessed based on journal rank (or impact factor). How University Rankings are calculated is also often opaque (apart from the Leiden rankings) but publications form a core part. This further incentivises institutions to select researchers/faculty based on the number and venue of their publications in order to promote their own position in the rankings (and obtain funding)
-
-
The essay is also about power dynamics and where power in the system lies. The implication in the essay is that power lies with the publishers and this can be taken back by researchers. Publishers do have power, especially those in possession of high prestige journals and yet publishers are also subject to the power of other parts of the system, such as funder and institutional evaluation policies. Crucially, other infrastructure organisations, such as Clarivate, that provide indexing services and citation metrics also exert a strong controlling force on the system, for example:
-
Only a subset of journals are ever indexed by Clarivate. And funders and Institutions also use the indexing status of a journal as a proxy of quality. A huge number of journals are thus excluded from the evaluation system (primarily in the arts and humanities but also many scholar-led journals from low and middle income countries and also new journals). This further exacerbates the firehose problem because researchers often target only indexed journals. I’d be interested to see if the firehose problem also exists in journals that are not traditionally indexed (although appreciate this is also likely to be skewed by discipline)
-
Indexers also take on the role of arbiters of journal quality and can choose to delist or list journals accordingly. Listing or delisting has a huge impact on the submission rates to journals that can be worth millions of dollars to a publisher, but it is often unclear how quality is assessed and there seems to be a large variance in who gets listed or not.
-
Clarivate are also paid large fees by publishers to use their products, which creates a potential conflict of interest for the indexer as delisting journals from major publishers could potentially cause a substantial loss of revenue if they withdraw their fees. Also Clarivate relies on publishers to create the journals on which their products are based which may also create a conflict if Clarivate wishes to retain the in-principle support of those publishers.
-
The recent delisting of eLife, even though it is an innovator and of established quality, shows the precariousness of journal indexing.
-
-
All the stakeholders in the system seem to be essentially ‘following the money’ in one way or another – it’s just that the currency for researchers, institutions, publishers and others varies. Publishers – both commercial and indeed most not-for profit - follow the requirements of the majority of their ‘customers’ (and that’s what authors, institutions, subscribers etc are in this system) in order to ensure both sustainability and revenue growth. This may be a legacy of the commercialisation of research in the 20th Century but we should not be surprised that growth is a key objective for any company. It is likely that commercial players will continue to play an important role in science and science communication; what needs to be changed are the requirements of the customers.
-
The root of the problem, as the author notes, is what is valued in the system, which is still largely journal publications. The author’s solution is for researchers to write more – and for value to be placed on this greater range of outputs by all stakeholders. I agree with this sentiment – I am an ardent advocate for Open Science. And yet, I also think the focus on outputs per se and not practice or services is always going to lead to the system being gamed in some way in order to increase the net worth of a specific actor in the system. Preprints and preprint review itself could be subject to such gaming if value is placed on e.g. the preprint server or the preprint-review platform as a proxy of preprint and then researcher quality.
-
I think the only way to start to change the system is to start placing much more value on both the practices of researchers (as well as outputs) and on the services provided by publishers. Of course saying this is much easier than implementing it.
Other comments
-
A key argument is that higher acceptance rates actually create a perverse incentive for researchers to submit as many manuscripts as possible because they are more likely to get accepted in journals with higher acceptance rates. I disagree that higher acceptance rates per se are the main incentive for researchers to publish more. More powerful is the fact that those responsible for grants and promotion continue to use quantity of journal articles as a proxy for research quality.
-
Higher acceptance rates are not necessarily an indicator of low quality or a bad thing if it means that null, negative and inconclusive results are also published
-
The author states that Journal Impact Factors might have been an effective measure of quality in the past. I take issue with this because the JIF has, as far as I know, always been driven by relatively few outliers (papers with very high citations) and I don’t know of evidence to show that this wasn’t also true in the past. It also makes the assumption that citations = quality.
-
The author asks at one point “Why would field specialization need a lower threshold for publication if the merits of peer review are constant? ” I can see a case for lower thresholds, however, when the purpose of peer review is primarily to select for high impact, rather than rigour, of the science conducted. A similar case might be made for multidisciplinary research, where peer reviewers tend to assess an article from their discipline’s perspective and reject it because the part that is relevant to them is not interesting enough… Of course, this all points to the inherent problems with peer review (with which I agree with the author)
-
The author puts his essay in appropriate context, drawing on a range of sources to support his argument. I particularly like that he tried to find source material that was openly available.
-
He cites 2 papers by Bjoern Brembs to substantiate the claim that there is potentially poorer review in higher prestige journals than in lower ranked journals. These papers were published in 2013 and 2018 and the conclusions relied, in part, on the fact that higher ranked journals had more retractions. Apart from a potential reporting bias, given the flood of retractions across multiple journals in more recent years, I doubt this correlation now exists?
-
The author works out submission rates from the published acceptance rates of journals. The author acknowledges this is only approximate and discusses several factors that could inflate or deflate it. I can add a few more variables that could impact the estimate, including: 1) the number of articles a publisher/journal rejects before they are assigned to any editor (e.g. because of plagiarism, reporting issues or other research integrity issues); 2) the extent to which articles are triaged and rejected by editors before peer review (e.g. because they are out of scope or not considered sufficiently interesting to peer review); 3) the number of articles rejected after peer review; and 4) the extent to which authors independently withdraw an article at any stage of the process. When publishers publish acceptance rates, they don't make clear what goes into the numerator or the denominator, and there are no community standards around this. The author rightly notes this process is too opaque.
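The back-of-envelope relationship behind such estimates can be made explicit. A minimal sketch with hypothetical numbers (and ignoring the confounds listed above):

```python
def estimated_submissions(published, acceptance_rate):
    """If a journal publishes P articles at acceptance rate r, it saw
    roughly P / r submissions. Desk rejects, post-review rejections and
    author withdrawals all make the true figure uncertain."""
    if not 0.0 < acceptance_rate <= 1.0:
        raise ValueError("acceptance rate must be in (0, 1]")
    return published / acceptance_rate

# e.g. 1,000 published articles at a 40% acceptance rate
print(estimated_submissions(1000, 0.40))  # 2500.0
```

The opacity the reviewer notes corresponds to uncertainty about both inputs: what counts as "published" (the numerator) and which rejections enter the rate (the denominator).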
Catriona J. MacCallum
As is my practice, I do not wish to remain anonymous. Please also note that I work for a large commercial publisher and am writing this review in an independent capacity such that this review reflects my own opinion, which are not necessarily those of my employer.
-
-
This is a well written and clear enough piece that may be helpful for a reader new to the topic. To people familiar with the field there is not so much which is new here. The final recommendation is not well expressed. As currently put it is, I think, wrong. But it is a provocative idea. I comment section by section below.
The first paragraphs repeat well established facts that there are too many papers. Seppelt et al’s contribution is missing here. It also reproduces the disingenuous claim, by a publisher’s employee, that publishers ‘only’ respond to demand. I do not think that is true. They create demand. They encourage authors to write and submit papers, as anyone who has been emailed by MDPI recently can testify. Why repeat something which is so inaccurate?
The section on ‘upstream of the nozzle’ is rather confusing. I think the author is trying to establish if more work is being submitted. But this cannot be deduced from the data presented. No trends are given. Rejection rates will be a poor guide if the same paper is being rejected by several journals. I was also confused by the sources used to track growth in papers – why not just use Dimensions data? The final paragraph again repeats well known facts about the proliferation of outlets and salami slicing. Thus far the article has not introduced new arguments.
Minor points in this section:
-
there are some unsupported claims, e.g. ‘This is a practice that is often couched within the seemingly innocuous guise of field specialty journals.’
-
I also do not understand the logic of this rather long sentence: ‘The expansion of journals with higher acceptance rates alters the rational calculus for researchers - all things being equal higher acceptance rates create a perverse incentive to submit as many manuscripts as possible since the underlying probability of acceptance is simply higher than if those same publications were submitted to a journal with a lower acceptance rate, and hence higher prestige.’ I suggest it be rephrased
The section on peer review (Who’s testing the water) is mostly a useful review of the issues. But there are some problems which need addressing. Bizarrely, when discussing whether there are enough scientists, it fails to mention Hanson et al’s global study, despite linking to its preprint in the opening lines. Instead the author adopts a parochial North American approach and refers only to PhDs coming from the US. It is not adequate to take trends in one country to explain an international publishing scene. These are not the ‘good data’ the author claims. Likewise, the value of data on doctorates not going on to a post-doc hinges on how many post-docs there are. That trend is not supplied. The statement ‘Almost everyone getting a doctorate goes into a non-university position after graduation’ may be true, but no supporting data are supplied to justify it. Nor do we know what country, or countries, the author is referring to.
The section ‘A Sip from the Spring’ makes the mistaken claim that researchers hold market power. This is not true. Researchers’ institutions, their libraries and governments are the main source of publisher income. It is here that the key proposal for improvement is made: researchers can write more and publish less. But if the problem is that there is too much poorly reviewed literature then this cannot be the solution. Removing all peer review would mean there is even more material to read, whose appearance is not slowed by peer review at all. If peer review is becoming inadequate, evading it entirely is hardly a solution.
This does not mean we should not release pre-prints. The author is right to advocate them, but the author is mistaken to think that this will reduce publishing pressures. The clue is in their name ‘pre-print’. Publication is intended.
Missing from the author’s argument is recognition of the important communities that researchers form, and the roles that journals play in providing venues for conversation, disagreement and discussion. They provide a filter. Yes, researchers produce material other than publications, as the author states: ‘grant proposals, editorials, policy briefs, blog posts, teaching curricula and lectures, software code and documentation, dataset curation, and labnotes and codebooks.’ I would add email and WhatsApp messages to that list. But adding all that to our reading lists will not reduce the volume of things to be read. It must increase it. And it would make it harder to marshal and search all those words.
But the idea is provocative nonetheless. Running through this paper, and occasionally made explicit, is the fact that publishers earn billions from their ‘service’ to academia. They have a strong commercial interest in our publishing more, and in competing with each other to capture a larger share of the market. If writing more and publishing less means finding ways of directing our thoughts so that they earn less money for publishers, then that could bring real change to the system.
A minor point: the fire hose analogy is fully exploited and rather laboured in this paper. It is also a North American term and image that does not travel easily.
-
-
August 27, 2024
-
February 27, 2025
-
March 13, 2025
-
Authors:
- Christopher Marcum christopher.steven.marcum@gmail.com
-
-
10.54900/r8zwg-62003
-
Drinking from the Firehose? Write More and Publish Less
-
-
medium.com
-
Heatmaps: Show the density of data points on a map. For example, here is a heatmap of San Francisco bike rentals:
This makes me think that heat maps are really intuitive and make it clear where the data is concentrated. I do wonder, though: if the amount of data is particularly large, will it affect the accuracy or readability of the heat map?
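The core of a density heatmap like the one the article shows is simple binning: count how many points fall in each cell of a regular grid, then map counts to colour. As a minimal sketch of that idea, assuming nothing about the article's actual bike-rental dataset, this Python snippet bins synthetic (longitude, latitude) points; the coordinates, cluster centres, and bin count are all illustrative, not from the source.

```python
import numpy as np

# Synthetic stand-in for rental coordinates (the article's bike-rental
# data is not reproduced here): two Gaussian clusters of points.
rng = np.random.default_rng(42)
cluster_a = rng.normal(loc=(-122.42, 37.77), scale=0.010, size=(500, 2))
cluster_b = rng.normal(loc=(-122.39, 37.79), scale=0.005, size=(300, 2))
points = np.vstack([cluster_a, cluster_b])

# Bin the points into a 2-D density grid; this is the heart of a heatmap.
# A plotting library then colours the grid, e.g. with matplotlib:
#   plt.imshow(density.T, origin="lower", cmap="hot")
density, xedges, yedges = np.histogram2d(points[:, 0], points[:, 1], bins=50)

print(density.shape)       # one count per grid cell: (50, 50)
print(int(density.sum()))  # every point lands in exactly one cell: 800
```

This also speaks to the annotation's question about very large datasets: because the map aggregates points into a fixed grid, readability is governed by the bin count (the grid resolution), not the raw number of points. More data actually sharpens the density estimate; it is choosing too fine a grid that makes the picture noisy.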
-
-
jeffreypalermo.com
-
faculty.washington.edu
-
Even in a community, everyone is different: coming to agreement on who is being served, why they are being served, and what one believes is causing the problem, and how it impacts a particular group, is key to focusing design efforts.
I think this is a very valid statement. I see this diversity within my own community. There is a stereotype of my community being very opinionated, and while stereotypes are harmful, I find that this one captures the diversity within my community. Even the "niche" community within my community has different needs and wants than the broader group. We are lumped together as one group, when truthfully, our culture of diaspora has created a lot of diversity. While we've had shared themes, such as persecution, our cuisines are different, our hymns sound different, etc. I am designing for a subcommunity of my community, and while we have problems and desires that overlap with the broader community, we have our own as well.
-
-
www.planalto.gov.br
-
VII
-
The exercise of a constitutional right is a fundamental guarantee to be protected by this Court, provided it is not exercised abusively. (...) By treating the exercise of the right to strike as serious misconduct or as conduct discrediting the servant, for purposes of probationary-period evaluation, entailing the immediate dismissal of a non-tenured public servant, the challenged provision violates the right to strike granted to public servants by art. 37, VII, CF/1988, insofar as it unconstitutionally includes the non-abusive exercise of the right to strike among the factors of probationary-period evaluation. [ADI 3.235, voto do red. do ac. min. Gilmar Mendes, j. 4-2-2010, P, DJE de 12-3-2010]. See RE 226.966, red. do ac. min. Cármen Lúcia, j. 11-11-2008, 1ª T, DJE de 21-8-2009
-
The ordinary Federal or State courts have jurisdiction to rule on the abusiveness of strikes by public servants employed under the labour-law (CLT) regime in the direct administration, autarchies and public-law foundations. [RE 846.854, red. do ac. min. Alexandre de Moraes, j. 1º-8-2017, P, DJE de 7-2-2018, Tema 544, com mérito julgado.]
- The exercise of the right to strike, in any form or modality, is forbidden to civil police officers and to all public servants who work directly in the area of public security. Participation by the public authorities is mandatory in mediation instituted by the class associations of the public-security careers, under art. 165 of the CPC, so that the category's interests may be voiced. [ARE 654.432, red. do ac. min. Alexandre de Moraes, j. 5-4-2017, P, DJE de 11-6-2018, Tema 541, com mérito julgado.]
- The public administration must deduct the days of stoppage resulting from public servants' exercise of the right to strike, given the suspension of the employment bond that it entails, with compensation permitted in case of agreement. The deduction is, however, not appropriate if it is shown that the strike was provoked by unlawful conduct of the public authorities. [RE 693.456, rel. min. Dias Toffoli, j. 27-10-2016, P, DJE de 19-10-2017, Tema 531, com mérito julgado.]
-
-
-
social-media-ethics-automation.github.io
-
Do you think there is information that could be discovered through data mining that social media companies shouldn’t seek out (e.g., social media companies could use it for bad purposes, or they might get hacked and others could find it)?
I think that deeply personal information such as one's mental state, financial hardship, or relationship problems could be used for targeted advertising or manipulation, which would be invasive and wrong. Even if businesses have no intention of using it negatively, keeping that type of sensitive information on file heightens the potential for catastrophic damage in the case of a data breach. I believe there should be some off-limits boundaries, especially when those providing the data have no idea how it is going to be used. That data can be mined does not mean that it should be.
-
-
lsbjordao.github.io
-
Tutela e Proteção (Guardianship and Protection)
'Tutela' (guardianship) is an institutional concept: it denotes who would bear the responsibility for ensuring preservation.
-
-
faculty.washington.edu
-
Every single solution will meet some people’s needs while failing to meet others. And moreover, solutions will meet needs in different degrees, meaning that every solution will require some customization to accommodate the diversity of needs inherent to any problem. The person you’re designing for is not like you and really not like anyone else. The best you can do is come up with a spectrum of needs to design against, and then decide who you’re going to optimize for.
I think it's easy to forget that there is no such thing as an "average user". I find that many of the digital designs in our everyday lives create a bubble or filter that only pushes content curated for people like ourselves. I have never heard problem-solving framed in this way, and the insight that stuck with me from reading this chapter is to come up with a spectrum of needs before deciding who to design for.
-
-
minnstate.pressbooks.pub
-
Where God blesses any branch of any noble or generous family with a spirit and gifts fit for government, it would be a taking of God’s name in vain to put such a talent under a bushel and a sin against the honor of magistracy to neglect such in our public elections
I’ve heard about this ideology before
-
-
viewer.athenadocs.nl
-
Zehentner v. Austria concerned a woman with a psychiatric disorder who failed to pay her bills and as a result lost her home. A guardian was appointed only after the appeal period had expired, by which point it was too late to contest the forced sale. The ECtHR found this a violation of her rights: the strict application of the time limit took no account of her vulnerable situation.
The Court emphasized that, when applying procedural rules, states must exercise particular care where vulnerable persons are concerned, such as people with psychiatric disorders.
-
In Stagno v. Belgium, two girls inherited a sum from a life-insurance policy after their father's death. Their mother squandered the money, but because the law required them to bring a claim within three years of the death, they could do nothing later. The ECtHR held that, in their case, this time limit was disproportionate and violated their right of access to a court.
In this case, two minor girls were entitled to a sum of money from a life-insurance policy after their father's death. The mother, who was their legal representative, used the money for other purposes.
What was legally interesting and problematic here is that, under Belgian law, the claim against the mother had to be brought within three years of the father's death. This period began to run while the girls were still minors, and thus effectively unable to act on their own behalf or to understand what was happening to their inheritance. By the time they were old enough to take action, the limitation period had expired.
The European Court of Human Rights held that the rigid application of this limitation period violated the right of access to a court. The Court stressed that, while procedural rules are necessary for legal certainty, they must not be so strict as to render the essential right of access to a court illusory, especially in the case of vulnerable persons such as minors.
-
-
apnews.com
-
Tuesday to block an effort to reverse a law passed after the 2018 Parkland school shooting that raised the state’s gun-buying age from 18 to 21.
A law passed after the Parkland shooting raised the gun-buying age, and people are currently trying to reverse it. They want to lower the gun-buying age back to 18, but students who experienced the violence on April 17th are working to block this effort.
-
It wasn’t the first time they have been traumatized by a school shooting.
For some of the students on the FSU campus, it wasn't their first school shooting. No one should have to experience this even once, but the fact that they have experienced it twice shows how such violence is coming to be treated as normal.
-
-
www.nytimes.com
-
taped off
The tape is a boundary keeping the students and community out of the active crime scene, but it is a boundary that wasn't there before and did nothing to stop the shooter from endangering the people there. It's a boundary after the fact; there was no protection before the event.
-