7,306 Matching Annotations
  1. Sep 2018
    1. What if we design our games to be more socially meaningful?

      Even better: why don't we design our social networks to be more socially meaningful? But first we need to consider deconstructing and re-engineering them to be robust against the predatory environment of corporate and political exploitation.

      There is so much left wanting in the areas of privacy, autonomy, anti-fragility, and economic rationalization, and yes, user relevance and psycho/socio-ergonomics would fall into line in and around all that. But I do wish we could see that undermining the existing tyranny of greed and power requires restructuring and winding back the entrenched dependencies on existing infrastructure and modalities: walled gardens, phishing scams, and non-hierarchical network topologies held captive by DNS and similarly hierarchical protocols. Websites are forever usurping desktops and re-inventing the center, and now apps are all vying to be the new centrist portal/platform and gateway, each trying to corner a market if not a monopoly.

      If these fundamentals are not addressed, there seems little point in meddling with top-down procedural micro-management strategies designed to preempt the behavioral dynamics of players, bloating the front-end interface codebase with more to manage in the imminent transition. I think we are overdue for a universal back-end overhaul, and the time to dial back interface development and the codebases of existing infrastructure dependency is somewhere between imminent and well overdue.

      There may be many futile efforts put into redundancies far too late in the day. Cutting our losses and looking for efficiency in fundamentals may call for broad deprecation of 'legacy' code still in service. A few years down the track, the dawning of a new paradigm will reward us and pay back our sacrifice.

      Having said that, I do actually like these ideas, and not just for the game environment but for the plethora of disjointed, incompatible social apps/networks that avoid cross-pollination and standardization of integration (compatibility/interoperability/profile-portability), and that devour user contributions and conscript users' extended contact base as a commodity to exploit, rather than treating it as a commodity users should be rewarded for providing to the platform.

      Open platforms of interoperable networking could be made to usurp the walled gardens, or to force them to segregate their platform and account code from the profile/user database, which they would hold only in an encrypted and non-permanent form: revoke the key and you destroy their record of the user profile. They would then have to behave as responsible, competitive providers serving independent users, with rewards conditional and revocable. All these transaction provisions describing the environment would become relevant to developers, and the need to stay feature- and compatibility-relevant across platforms while retaining profile-agnostic interoperability would drive an arms race in this customer-centric marketplace; a social experience of true value might rise from the source that matters.
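      The "hold the profile in encrypted, non-permanent form; revoke the key and destroy the record" idea is essentially crypto-shredding. Here is a minimal sketch in Python using the `cryptography` package; the `ProfileVault` class and its method names are hypothetical illustrations, not any real platform's API:

      ```python
      # Crypto-shredding sketch: the platform keeps only an encrypted profile
      # blob, while the key lives in a separate store. Deleting the key makes
      # the remaining ciphertext permanently unreadable.
      from cryptography.fernet import Fernet

      class ProfileVault:
          def __init__(self):
              self._blobs: dict[str, bytes] = {}  # platform side: ciphertext only
              self._keys: dict[str, bytes] = {}   # separate custody (user/escrow)

          def store(self, user_id: str, profile: bytes) -> None:
              key = Fernet.generate_key()
              self._keys[user_id] = key
              self._blobs[user_id] = Fernet(key).encrypt(profile)

          def read(self, user_id: str) -> bytes:
              return Fernet(self._keys[user_id]).decrypt(self._blobs[user_id])

          def revoke(self, user_id: str) -> None:
              # "Revoke the key and destroy the record": the ciphertext that
              # remains is useless without the key.
              del self._keys[user_id]

      vault = ProfileVault()
      vault.store("alice", b'{"handle": "alice", "contacts": ["bob", "carol"]}')
      assert b"carol" in vault.read("alice")
      vault.revoke("alice")
      # vault.read("alice") would now raise KeyError: the record is gone for good.
      ```

      In a real deployment the key store would sit outside the platform's control, with the user or an escrow service, which is what would turn "non-permanent storage" from a promise into something enforceable.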

    1. the self may be seen as a social actor, who enacts roles and displays traits by performing behaviors in the presence of others. Second,

      I feel like this is somewhat related to what most of us said in reference to the inner and outer self. We enact different roles and behaviors depending on who we're with and how comfortable we are with that person. In the presence of others, we all have some social morals and behaviors we uphold, which I think is what this idea is trying to say.

    2. the self may be seen as a social actor, who enacts roles and displays traits by performing behaviors in the presence of others.

      I find this very intriguing, because although someone may be putting on a persona in the presence of others, it is very rarely intentional. Our bodies and minds snap into a certain act when we encounter situations and immediately react in whatever way we think will protect us best. If it's a group of mean girls, the best form of protection is to put on a fake nice act and comply with what they say. With someone in a position of power, people will adjust their disposition to be more submissive.

    1. When it happens: Have you ever felt proud of yourself for bringing lunch from home all week and then gone out for expensive meals over the weekend? What about working out in the morning only to binge on late-night snacks? The progress bias explains our tendency to overestimate the effects of our positive or goal-supportive actions (like exercising) and underestimate the effects of negative actions (like eating poorly), which can lead to making poor choices as we think we’re in a better position than we are.

      I understood this to mean that eating home-made food during the weekdays and then going out for expensive meals on weekends doesn't benefit you much. As the author said, it's like working out in the morning and then binging on late-night snacks: it may save some money, but in the longer run it doesn't benefit you much.
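      A toy back-of-the-envelope calculation (with entirely made-up numbers) shows how the weekly ledger can come out negative even when the week feels frugal:

      ```python
      # Progress bias, illustrated with invented figures: the "good" weekday
      # behaviour feels like it outweighs the weekend splurge, but the net says no.
      weekday_savings = 5 * 8      # ~$8 saved per brought-from-home lunch, 5 days
      weekend_splurge = 2 * 35     # two expensive weekend meals at ~$35 each
      net = weekday_savings - weekend_splurge
      print(f"Net for the week: ${net}")  # -> Net for the week: $-30
      ```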

    1. Communication technology, social media, electronic parenting, and many other recent technological advances may reduce social behaviors

      It’s very common in today’s society to see everyone sitting in a room on their phones. Even if two people are having a conversation, the moment there’s an awkward silence someone will automatically grab their phone. I think this generation and younger generations are finding it harder and harder to communicate with others face to face, and that could be a possible reason why so many young people suffer from loneliness and depression. Even though we can sit on our phones and text or message multiple people, there is still a lack of the connection you get from talking to someone face to face.

    1. "We have to start changing the climate of schools," Mr. Rose said, "and when we change the climate of schools, it takes time."

      Even though it may take a long time to change school climates and the ways we view those different than us, I absolutely think it's worth it. It's only natural to want a more immediate change, but I think it's going to take a lot of hard work and a lot of patient teachers, involved parents, and willing children to make the changes we need within our schools and society as a whole. I absolutely think it's possible that knowledge and compassion are the best ways to unite us all as people, no matter what our differences are.

    1. We are marching fast and steadily towards free trade. We must meet the views of the people of the Lower Provinces, who are hostile to high tariffs, and the demand of the Imperial authorities that we should not tax their manufactures so heavily as—in their phrase—almost to deprive them of our market. It was distinctly and officially stated the other day, in Newfoundland, that assurance had been given to the Government of Newfoundland that the views of the Canadian Government are unmistakably in this direction. And I do not think there is any mistake about that, either. To show how people at home, too, expect our tariff to come down, I may refer to the speech of Mr. HAMBURY TRACY, in seconding the Address in answer to the Speech from the Throne, in the House of Commons the other day. He could not stop, after saying generally that he was pleased with this Confederation movement, without adding that he trusted it would result in a very considerable decrease in the absurdly high and hostile tariff at present prevailing in Canada.

      §.121 of the Constitution Act, 1867.

    1. Our point is that this perspective is itself an implicit valuation. It is simply arguing that nature is more valuable than any possible alternative. While in many cases this may be true, society has made decisions that imply it is not always the case (Russell-Smith et al., 2015). Every time we build homes, schools, and hospitals, which are essential for human wellbeing, we appropriate ecosystems and impact our natural capital. Thus, being more explicit about the value of ES and NC can help society make better decisions in the many cases in which trade-offs exist

      Very important. If we refuse to value ecosystems, on the argument that nature should always be protected, what we are saying is that nature has a higher value than anything else. Even though this might sound convincing, we as a society, and as individuals, obviously do not think so: we build houses and hospitals with everyone's agreement, and with the obvious consequence of running over nature.

  2. inst-fs-iad-prod.inscloudgate.net
    1. Convinced that reality has no inherent nature, which he might hope to identify as the truth about things, he devotes himself to being true to his own nature.

      I don't buy this. I don't think most people have the time to be thinking about the nature of their reality or be solipsistic. It is a rather privileged position to be in to be able to spend one's time pondering about the social construction of reality.

      The author also fails to provide examples of what he means as "anti-realist" doctrines. So I am just speaking on my interpretation of his writing.

      In practice, perhaps what the author is talking about is that those who believe in "alternative facts" are anti-realists. I think it takes a lot of time, education, and intellectual engagement to be able to triangulate data and be media literate -- i.e., to be privileged -- in order to really have an 'objective' view of the world. While the author does state that objectivity may never be achieved and is only a goal, he fails to consider what type of society/culture, education system, values, institutions, and resources are needed to create a democracy of "informed" citizens.

      I think for many people in the world, their sense of reality is based on patterns they see, socialization, and personal experiences, in addition to appeals to authority, etc. So I think we should be more understanding of where and why anti-realist doctrines come from in social life.

    2. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says

      Getting away with what he says or getting what he wants.

      I think there may be something here in the relationship of the parties. We wouldn't be able to "bullshit" someone who knew us well or intimately. That would be lying.

      Also, we would definitely try to bullshit our way through an interview or an exam essay, but would not lie. Unless we were stating an untrue opinion because we were afraid our true feelings would be judged negatively, which again comes back to the relationship question.

    3. think of him.

      Similar to mawaters' comment above, I feel like there's a bit of a leap being taken here, while other rhetorical motives are being ignored. I'm not really buying the 'self-impression' motive. He may believe what he is saying, or he may not, but what does not change is the position from which he is speaking, and the audience he is speaking to. I feel as though there is definitely room in between those two objective points to recognize the humbuggery -- that being a statement and a motive which we cannot fully discern, yet one we are certain holds an ulterior agenda -- and to come up with a number of reasons as to the intentions of the orator.

    1. Everything is not provided for, because a great deal is trusted to the common sense of the people. I think it is quite fair and safe to assert that there is not the slightest danger that the Federal Parliament will perpetrate any injustice upon the local legislatures, because it would cause such a reaction as to compass the destruction of the power thus unjustly exercised. The veto power is necessary in order that the General Government may have a control over the proceedings of the local legislatures to a certain extent. The want of this power was the great source of weakness in the United States, and it is a want that will be remedied by an amendment in their Constitution very soon. So long as each state considered itself sovereign, whose acts and laws could not be called in question, it was quite clear that the central authority was destitute of power to compel obedience to general laws. If each province were able to enact such laws as it pleased, everybody would be at the mercy of the local legislatures, and the General Legislature would become of little importance. It is contended that the power of the General Legislature should be held in check by a veto power with reference to its own territory, resident in the local legislatures, respecting the application of general laws to their jurisdiction. All power, they say, comes from the people and ascends through them to their representatives, and through the representatives to the Crown. But it would never do to set the Local above the General Government. The Central Parliament and Government must, of necessity, exercise the supreme power, and the local governments will have the exercise of power corresponding to the duties they have to perform. The system is a new and untried one, and may not work so harmoniously as we now anticipate, but there will always be power in the British Parliament and our own to remedy any defects that may be discovered after the system is in operation. Altogether, I regard the scheme as a magnificent one, and I look forward to the future with anticipations of seeing a country and a government possessing great power and respectability, and of being, before I die, a citizen of an immense empire built up on our part of the North American continent, where the folds of the British flag will float in triumph over a people possessing freedom, happiness and prosperity equal to the people of any other nation on the earth.

      §.90 of the Constitution Act, 1867.

    2. And, besides that, we have provision made for extending the representation east or west, as occasion may require, according to the increase of our population shown at the decennial periods for taking the census. Any thing fairer than that could not possibly be demanded. And if Lower Canada increases more rapidly in population than Canada West, she will obtain representation accordingly. For, although the number of her members cannot be changed from sixty-five, the proportion of that number to the whole will be changed relatively to the progress of the various colonies. On the other hand if we extend, as I have no doubt we will do, westward, towards the centre of the continent, we will obtain a large population for our Confederation in the west. In that quarter we must look for the largest increase of our population in British America, and before many years elapse the centre of population and power will tend westward much farther than most people now think. The increase in the representation is therefore almost certain to be chiefly in the west, and every year will add to the influence and power of Western Canada, as well as to her trade and commerce. The most important question that arises relates to the constitution of the Upper House. It is said that in this particular the scheme is singularly defective—that there has been a retrograde movement in going back from the elective to the nominative system. I admit that this statement is a fair one from those who contended long for the application of the elective principle to the Upper House; but it can have no weight with another large class, who, like myself, never believed in the wisdom of electing the members of two Houses of Parliament with coordinate powers. I have always believed that a change from the present system was inevitable, even with our present political organization. (Hear, hear.) The constitution of an Upper House or Senate seems to have originated in the state of society which prevailed in feudal times ; and from being the sole legislative body—or at least the most powerful—in the State, it has imperceptibly become less powerful, or secondary in importance to the lower chamber, as the mass of the people became more intelligent, and popular rights became more fully understood. Where there is an Upper House it manifestly implies on the part of its members peculiar duties or peculiar rights. In Great Britain, for instance, there is a large class of landed proprietors, who have long held almost all the landed property of the country in their hands, and who have to pay an immense amount of taxes. The fiscal legislation of Britain for many years has tended to the reduction of impost and excise duties on articles of prime necessity, and to the imposition of heavy taxes on landed property and incomes. Under such a financial system, there are immense interests at stake, and the House of Lords being the highest judicial tribunal in the kingdom, there is a combination of peculiar rights and peculiar duties appertaining to the class represented which amply justify its maintenance. We have no such interests, and we impose no such duties, and hence the Upper House becomes a mere court of revision, or one of coordinate jurisdiction ; as the latter it is not required ; to become the former, it should be constituted differently from the House of Assembly. The United States present the example of a community socially similar to ourselves,

      §§.24 and 51 of the Constitution Act, 1867.

    1. “There’s worry that you can’t remove the plastic without removing marine life at the same time,” said George Leonard, chief scientist at the Ocean Conservancy. “We know from the fishing industry if you put any sort of structure in the open ocean, it acts as a fish-aggregating device.”

      This is a great risk they will have to take. I think if they work with marine biologists they may be able to find a solution that will be potentially less harmful to the wildlife.

    1. As I saw that they were very friendly to us, and perceived that they could be much more easily converted to our holy faith by gentle means than by force, I presented them with some red caps, and strings of beads to wear upon the neck, and many other trifles of small value, wherewith they were much delighted, and became wonderfully attached to us. Afterwards they came swimming to the boats, bringing parrots, balls of cotton thread, javelins, and many other things which they exchanged for articles we gave them, such as glass beads, and hawk’s bells; which trade was carried on with the utmost good will. But they seemed on the whole to me, to be a very poor people. They all go completely naked, even the women, though I saw but one girl. All whom I saw were young, not above thirty years of age, well made, with fine shapes and faces; their hair short, and coarse like that of a horse’s tail, combed toward the forehead, except a small portion which they suffer to hang down behind, and never cut. Some paint themselves with black, which makes them appear like those of the Canaries, neither black nor white; others with white, others with red, and others with such colors as they can find. Some paint the face, and some the whole body; others only the eyes, and others the nose. Weapons they have none, nor are acquainted with them, for I showed them swords which they grasped by the blades, and cut themselves through ignorance. They have no iron, their javelins being without it, and nothing more than sticks, though some have fish-bones or other things at the ends. They are all of a good size and stature, and handsomely formed. I saw some with scars of wounds upon their bodies, and demanded by signs the cause of them; they answered me in the same way, that there came people from the other islands in the neighborhood who endeavored to make prisoners of them, and they defended themselves. I thought then, and still believe, that these were from the continent. It appears to me, that the people are ingenious, and would be good servants and I am of opinion that they would very readily become Christians, as they appear to have no religion. They very quickly learn such words as are spoken to them. If it please our Lord, I intend at my return to carry home six of them to your Highnesses, that they may learn our language. I saw no beasts in the island, nor any sort of animals except parrots.” These are the words of the Admiral.

      The first concern is whether a war will have to break out, but the Admiral sees this, finally, as a conversion opportunity.

      He then goes into detail, in the next passages, about the dress and customs of the Taíno people. https://en.wikipedia.org/wiki/Ta%C3%ADno

      With this stroke, indigenous history and customs are erased.

      I think it's important to keep in mind that this ideological work - suppressing the cultures of the native peoples - is just as necessary a part of the eventual conquest of the Caribbean and North and South America as the diseases and the military and economic might of the settling forces.

    1. This suggests that displacement of physical activity may not be a strong link between screen time and obesity.

      This was very interesting to read, and I am going to have to say I disagree with this statement. As stated in the article, there are difficulties when it comes to measuring screen media exposure and physical activity. But based on my own experience babysitting, once kids are glued to a show or game, there is no way I am getting them to go outside. A 3-year-old that I babysat threw a tantrum when I told him we were done watching TV and it was time to go on a walk. I think physical activity and screen time directly correlate with obesity.

    1. The first way you may think to use Wikipedia is as a source—that is, as a text you can quote or paraphrase in a paper.

      Schools I've attended never mentioned that you could do this. I like how this challenges our perceptions of what makes a good source. Our past teachers said it was not a good way to research, but now we are doing a complete 180 and it is acceptable.

    1. "More availability means more usage and honestly, I don't think Utah voters … understand that this is really a whole new system of distribution," he said, leading to a major influx of dispensaries in a newly opened market. "We don't use Rite Aid, we don't use Walgreens." Plumb and the Utah Medical Association have each argued that giving patients marijuana through a dispensary is playing fast and loose with a potent substance, skipping the pharmaceutical safeguards required for the distribution of other drugs.

      In this passage, I feel that Plumb is trying to instill fear in the community about a "new system of distribution." Creating separate dispensaries is not a bad thing, and I believe he is blowing the issue out of proportion. Having a separate building may actually be safer for the community, since the dispensaries in other states won't let anyone in the door who is under the age of 21 or without a licensed medical card. At Rite Aid and Walgreens, everyone in the public is welcome inside no matter their age. There are no security guards or other precautionary measures taken to guard the prescriptions in these pharmacies, and these pharmacies stock far more addictive and life-threatening drugs than marijuana. If anything, there need to be tighter measures taken at pharmacies if he is afraid of theft. If someone steals a bottle of opioids from Walgreens, there is a good chance that the person could die, while if someone stole marijuana, there is no chance of causing death. Marijuana would not skip "pharmaceutical safeguards" as he claims. Other states test and monitor the plants to make sure they are clean and consumable. Marijuana, unlike the other medications, does not require chemicals and labs to produce the product; it requires sunlight, soil and water, just like vegetables. He is trying to instill doubt about something that is far safer than the prescription drugs pumped out of labs.

    1. Large computer networks (and their associated users) may “wake up” as superhumanly intelligent entities.

      The great "AI" has been around for a while now; we humans are largely working on a computing machine that can think for "itself". As fascinating as it sounds, aren't we just being lazy, depending on a robot to do the work for us? What will happen to the human race if these AIs start producing more, and better-equipped, AIs? We have a brain that can produce so much if we just decide to do things on our own.

    1. So please do read the Code and Google’s values, and follow both in spirit and letter, always bearing in mind that each of us has a personal responsibility to incorporate, and to encourage other Googlers to incorporate, the principles of the Code and values into our work. And if you have a question or ever think that one of your fellow Googlers or the company as a whole may be falling short of our commitment, don’t be silent. We want – and need – to hear from you.

      This is the main purpose of the code of ethics: everyone is included, and everyone must be held to the standards that the company sets.

    1. This is unquestionably a grave and serious subject of consideration, and especially so to the minority in this section of the province, that is the English-speaking minority to which I and many other members of this House belong, and with whose interests we are identified. I do not disguise that I have heard very grave and serious apprehensions by many men for whose opinions I have great respect, and whom I admire for the absence of bigotry and narrow-mindedness which they have always exhibited. They have expressed themselves not so much in the way of objection to specific features of the scheme as in the way of apprehension of something dangerous to them in it— apprehensions which they cannot state explicitly or even define to themselves. They seem doubtful and distrustful as to the consequences, express fears as to how it will affect their future condition and interests, and in fact they almost think that in view of this uncertainty it would be better if we remained as we are. Now, sir, I believe that the rights of both minorities—the French minority in the General Legislature and the English-speaking minority in the Local Legislature of Lower Canada—are properly guarded. I would admit at once that without this protection it would be open to the gravest objection ; I would admit that you were embodying in it an element of future difficulty, a cause of future dissension and agitation that might be destructive to the whole fabric ; and therefore it is a very grave and anxious question for us to consider —especially the minorities in Lower Canada —how far our mutual rights and interests are respected and guarded, the one in the General and the other in the Local Legislature. With reference to this subject, I think that I , and those with whom I have acted—the English speaking members from Lower Canada—may in some degree congratulate ourselves at having brought about a state of feeling between the two races in this section of the province which has produced some good effect. (Hear, hear.)

      §.93 of the Constitution Act, 1867.

    2. of the scheme, without which it would certainly, in my opinion, have been open to very serious objection. (Hear, hear.) I will not now criticize any other of the leading features of the resolutions as they touch the fundamental conditions and principles of the union. I think there has been throughout a most wise and statesmanlike distribution of powers, and at the same time that those things have been carefully guarded which the minorities in the various sections required for their protection, and the regulation of which each province was not unnaturally desirous of retaining for itself. So far then as the objection is concerned of this union being federative merely in its character, and liable to all the difficulties which usually surround federal governments, I think we may fairly consider that there has been a proper and satisfactory distribution of power, which will avert many of those difficulties. (Hear, hear.)

      §§.91 and 92 of the Constitution Act, 1867.

    1. There has been considerable concern that television may negatively influence young children’s executive function, especially the ability to focus and sustain attention in task situations.

      I feel like technology in general has positive and negative aspects, just like anything else. I believe that it is important for children to have a balance with technology. I think I grew up at a nice time, because I know how to use technology but it was not as dominant. I played outside and created special bonds with people, but also got a Game Boy and electronics for birthdays and special occasions. I think society today is a lot more dependent on technology; I know that from high school until now I have become a lot more attached. I think moderation is the biggest thing we should focus on when it comes to technology.

    1. Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose. If the aggregate time spent in writing scholarly works and in reading them could be evaluated, the ratio between these amounts of time might well be startling. Those who conscientiously attempt to keep abreast of current thought, even in restricted fields, by close and continuous reading might well shy away from an examination calculated to show how much of the previous month's efforts could be produced on call. Mendel's concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential.

      The results of research that are finalized now quickly become outdated, because knowledge continues to grow until there is no longer a purpose for that older research.

    2. Certainly progress in photography is not going to stop. Faster material and lenses, more automatic cameras, finer-grained sensitive compounds to allow an extension of the minicamera idea, are all imminent.

      I find this quote very relatable to today's society, because every day new research and experiments are being conducted to improve technology in some way, whether it's to make it faster, smaller, or more efficient. Technology is constantly growing and becoming a part of our daily lives.

    3. There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends.

      This reminds me of how people say technology is making us more stupid, because it is becoming so advanced it does so much for us. It should stay simple and to the point.

    4. For years inventions have extended man's physical powers rather than the powers of his mind

      Inventions are created to make tasks easier; however, there is no technological supplement that could enhance brain power (e.g., a device to help you absorb information while studying for a test).

    5. For the biologists, and particularly for the medical scientists, there can be little indecision, for their war has hardly required them to leave the old paths. Many indeed have been able to carry on their war research in their familiar peacetime laboratories. Their objectives remain much the same.

      In the medical field, there is great repetition in terms of objectives. There is more innovation coming out in the medical field; however, it remains associated with the same ideas of traditional medicine rather than new medical research. Medicine is a field that relies on old facts to create new discoveries. Functional medicine and holistic/alternative medicine are becoming more popular today.

    6. They have given him increased knowledge of his own biological processes so that he has had a progressive freedom from disease and an increased span of life. They are illuminating the interactions of his physiological and psychological functions, giving the promise of an improved mental health.

      Technology has given us great insight into our internal/mental well-being. With the technology of today we have more knowledge of how our body works, thereby increasing one's lifespan and promoting better quality of life. Technology betters not only the physical aspects of life (communication, education, etc.) but also the intangible aspects (mental health).

    1. I’m going to assume most people in the room here have read Vannevar Bush’s 1945 essay As We May Think. If you haven’t read it yet, you need to.

      I seem to run across references to this every couple of months. Interestingly, it is never in relation to information theory or Claude Shannon, which is somehow what I most closely relate it to.

    1. We have heard much about the proposed new constitution of the Legislative Council. We have been told it was political necessity that first forced the elective system upon minds that were by no means enamoured of it, and this, I think, has been fully established. Now, it would ill become me, as an elected member, to dwell on any merits or excellences the elective system may have possessed as applied to this branch of the Legislature— it is a subject we can none of us touch upon with the same freedom which we might if we were not ourselves elected—but I may call the attention of the House to this, that none of the evils that were dreaded, as likely to flow from the elective system, have yet shown themselves, and I do not think it at all reasonable, much less necessary, that they should be anticipated in time to come. My own views were in perfect accord with those of hon. gentlemen who protested against the system when it was first introduced. I did not then consider it an improvement, and my views have not changed since ; I have, consequently, no personal predilections for an Elective Council, but far prefer a Chamber nominated by the Crown.

      §.24 of the Constitution Act, 1867.

    1. And yes, we must be ready to receive their guidance as well.

      I like this; I think people may overlook the fact that there is a lot that can be learned from students too, not just adults/professors/elders.

    1. Students who receive encouraging feedback from teachers may feel more personally efficacious and work harder to succeed

      I believe that this is extremely important to note as future teachers. At an early age, positive feedback is going to work much more effectively than anything else. My fourth grade teacher was very strict about what we could write about during our writing time. Other teachers allowed more creativity than mine did, so I think that tarnished my view of writing as a whole when I was younger.

    1. 'Filthy Lucre' Is A Modern Remix Of The Peacock Room's Wretched Excess (May 21, 2015, 3:34 AM ET; heard on Morning Edition; by Susan Stamberg) [Image: James McNeill Whistler lavishly decorated the Peacock Room — an actual London dining room — for shipping magnate Frederick Leyland in 1876. Freer Gallery of Art, Smithsonian] An artist has just converted a legendary piece of 19th-century art into an utter ruin. And two Smithsonian institutions — the Freer and Sackler galleries of Asian art — have given their blessings. The Peacock Room at the Freer Gallery is an actual dining room from London, decorated by James McNeill Whistler in 1876. Its blue-green walls are covered with golden designs and painted peacocks. Gilded shelves hold priceless Asian ceramics. It's an expensive, lavish cocoon, rich in beauty with a dab of menace. Freer security guard Shaquan Harper spends hours at a time in the Peacock Room — and says it's a peaceful, meditative experience. "Blue is my favorite color, and whenever I wear jewelry it's gold," he says. "So I kind of make a personal connection with the room. This is one of my favorite galleries in the Smithsonian." Curator Lee Glazer agrees that the Peacock Room is a completely immersive experience. "Even though it's a room, it's really a six-sided painting that you literally walk into," she says. The Peacock Room is a gorgeous, gilded cage. "You have no sense whatsoever of the outside world," says Glazer. "It's a world in which art has completely overtaken life." It was shipping magnate Frederick Leyland's world. It was created in the Victorian era when self-made men with new fortunes were buying their way into British society through fine houses and important works of art. Whistler paints his wealthy patron as a golden peacock, at one end of the dining room. Nearby, another peacock — representing the "poor" artist. "They're actually in a face-off," Glazer says. Fighting, for reasons to be revealed in a bit. It's a dispute about art and money — although Whistler named the room Harmony in Blue and Gold. Next door, in the Sackler Museum of Asian Art, painter Darren Waterston has reproduced and re-interpreted Whistler's dining room in an installation called Filthy Lucre — which means "dirty money." This "Peacock Room Remix" looks as if a wrecking ball has been slammed into Whistler's work. The priceless Asian vases in the original are smashed — their shards litter the floor. "The shelves are all broken," Waterston says. "The gold gild is either melting off or puddling on the floor." [Image: In Darren Waterston's Filthy Lucre it looks as if a wrecking ball has been slammed into Whistler's lavish work. Hutomo Wicaksono/Freer Sackler Gallery] The original room feels claustrophobic in its excess. The remix feels scary as if there's been an earthquake and another tremor is coming any minute. "There's a sense of danger," says Waterston. He seems cheerful and sweet, but don't be fooled: "My work absolutely has a perversity," he says. "There's always an underbelly to it." [Image: Shards of smashed Asian vases litter the floor of Waterston's Filthy Lucre. Amber Gray/Freer Sackler Gallery] Here, Waterston says he wanted to show the volatility of beauty. The big, cancerous, gilded cysts he's blobbed onto Whistler's reproduced golden shelves, the spilled paint oozing onto the rug — these are his reactions to what's happening between art and money these days. "This is what it means to be a living artist in this contemporary art world," Waterston says. "It is so filled with excess and this incredible consumption, this insatiable consumption of the object and of aesthetics." The most vivid, even yuck-making example is what Waterston's done to Whistler's two golden peacocks; in this remix, the birds aren't just fighting, they're eviscerating each other. They're "literally disemboweling each other," he describes. "One has the other's entrails being pulled out — talons are out." [Image: The golden peacocks in Filthy Lucre are "literally disemboweling each other," Waterston says. Hutomo Wicaksono/Freer Sackler Gallery] They hate each other's guts! Which is exactly what happened between Whistler and Leyland. The patron asked the artist to just make some modest adjustments in his new dining room. Glazer says Whistler put a few wavy dabs of gold paint here, some metal color there, "and everyone was very happy with that." Leyland and his family left London for the summer. And that, Glazer says, is when Whistler's imagination took flight. He transformed the room, covering every surface with blue and gold paint. He worked like a madman. "Whistler talks about being up on the scaffolding at 6 in the morning and not coming down until 9 at night," says Glazer. " 'I'm blind with sleep and blue peacock feathers,' he says." He kept his friend and patron more or less informed about what he was doing: "All through the summer Leyland received letters from Whistler talking about the gorgeous surprise that Whistler was preparing for him and the family," Glazer explains. Well, Leyland comes home, sees the extent of work — and the 2,000 pounds that Whistler wanted to be paid for it (about a quarter of a million dollars today) — and, as they used to say in Victorian days: Leyland blew a gasket. In the middle of the dispute, with Leyland paying half of what Whistler requested, the artist went back to the dining room to finish up. "And that was really when he exacted his vengeance," says Glazer. [Image: James McNeill Whistler's mother — immortalized in his 1871 painting Arrangement in Grey and Black No. 1: Portrait of the Artist's Mother — worried about all the time and energy her son was pouring into the Peacock Room. "A gentleman's house isn't an exhibition," she told him. Detroit Institute of Arts via Getty Images] He painted those fighting peacocks — the just-plain-angry ones, not Waterston's gut-wrenching birds — and laid on even more blue paint. Then Whistler left, and never saw the Peacock Room again. Now, we can't end this story without talking about Whistler's mother — that iconic profiled figure in gray and black. What did Mama Whistler think of the whole thing — the frenzied work, the manic effort? Glazer reports that Anna Whistler was worried about her son; she thought he was working too hard, not eating, not sleeping: "She chides him about that and says, 'You know, Jimmy, a gentleman's house isn't an exhibition' — meaning: Get out there and make some money and make some things that are going to sell," says Glazer. "And so, always listening to his mother — Whistler was kind of a Momma's boy — he did invite the press in to watch him work in the Peacock Room." Yet another thing he forgot to tell Frederick Leyland! The results of this delicious dispute can be seen on the National Mall in Washington, D.C. — at the Freer, site of the original Peacock Room, and the Sackler, where Filthy Lucre, Darren Waterston's remix, is on display until January 2017.

      The struggle of an artist who is appreciated by the audience but unable to receive the reward that is supposed to follow. The article also shows the corruption of the buyers and the lack of fair exchange for the artist.

    2. 'Filthy Lucre' Is A Modern Remix Of The Peacock Room's Wretched Excess

      Filthy Lucre is composed of fragments from the original Peacock Room, and overall the article explains not a comparison but an interpretation of why Filthy Lucre is the way it is, along with additional research.

    1. The elective principle, kept within proper bounds, is very good indeed, and hitherto, no doubt, has worked well in this House. But I doubt whether, in the course of time, this House would not lose its present high status if the elective principle was continued in it for ever. As regards this, however, I merely state my own opinion, and other honorable gentlemen may hold contrary opinions, as they are perfectly entitled to do. (Hear, hear.) Having thus, honorable gentlemen, explained the reasons which induced the Government, in 1856, to propose that the elective principle should be extended to this House, with the concomitant circumstances which assisted in bringing that about—and having also explained the reasons which have induced the Government now to look for another state of political existence, as we may call it, by Confederation with the Maritime Provinces, I think I am clear from any imputation of inconsistency or levity of purpose.

      §.24 of the Constitution Act, 1867.

    2. I think that the engrafting of this system of government upon the British Constitution has a tendency to at least introduce the republican system. It is republican so far as it goes, and that is another reason why I do not approve of it. If we commence to adopt the republican system, we shall perhaps get the idea of continuing the system until we go too far. It is also said that we are to have a new nationality. I do not understand that term, honorable gentlemen. If we were going to have an independent sovereignty in this country, then I could understand it. I believe honorable gentlemen will agree with me, that after this scheme is fully carried into operation, we shall still be colonies. HON. SIR E. P. TACHÉ—Of course. HON. MR. MOORE—Now, that being the case, I think our Local Government will be placed in a lower position than in the Government we have now. Every measure resolved upon in the Local Government will be subject to the veto of the Federal Government—that is, any measure or bill passing the Local Legislature may be disallowed within one year by the Federal Government. HON. SIR E. P. TACHÉ—That is the case at present as between Canada and the Imperial Government. HON. MR. MOORE—I beg to differ slightly with the honorable gentleman. Any measure passed by this province may be disallowed within two years thereafter by the Imperial Government. But the local governments, under Confederation, are to be subjected to having their measures vetoed within one year by the Federal Government, and then the Imperial Government has the privilege of vetoing anything the Federal Government may do, within two years. The veto power thus placed in the hands of the Federal Government, if exercised frequently, would be almost certain to cause difficulty between the local and general governments. I observe that my honorable friend, Sir ETIENNE P. TACHÉ, does not approbate that remark. HON. SIR E. P. TACHÉ—You understand me correctly.

      Preamble and §.90 of the Constitution Act, 1867.

    1. It envisioned a 1km stretch of dual carriageway between Salford University and Manchester city centre as a 4-lane linear Park. One lane is grassed, another a water channel, another sand and the last a running track. Commuters leave their cars in a multi-storey Car (P)Ark. The interchange also incorporates a suburban train station, cycle docking station, stables, and a boathouse and changing rooms. From the Car (P)Ark commuters head east into Manchester walking, jogging, cycling, rollerblading, horse riding, swimming or rowing. The Park terminates at a Suit Park where commuters can shower, change and get a coffee. (The word “suit” refers to the business suit). Eight hours later, on their way home, commuters deposit their clothes and return through the Park, to the interchange to collect their car or catch a train. The scheme could be extended to each of the radial routes into Manchester and at intervals these Parks could link, completing a comprehensive green commuter infrastructure. Save this picture! The Park + Jog proposal, 1998. Image Courtesy of Henley Halebrown Rorrison Architects Save this picture! Rendering of the Park + Jog proposal, 1998. Image Courtesy of Henley Halebrown Rorrison Architects What is striking about these parks is the positive impact they can have on their surrounding neighbourhoods, particularly when one considers the alternative. With roads, be it a dual carriageway or a street, comes heavy traffic, noise and pollution, at the expense of those who live and work around it. In the case of a High Street we forego certain types of shops, cafés and restaurants that engender a street life. At the scale of the dual-carriageway the A40 that tears through west London illustrates beautifully how dramatic the blight on homes can be, as this Mid-20th Century residential avenue has been transformed into a slum wrapped around a congested commuter road. These zones lack the 'density' of the city centre and the space of the suburb. And, each successive wave of Greenfield development adds to the expanse of this grey space.Active transportation routes and linear parks, on the other hand, regenerate their surroundings, bringing activity and value to blighted sections of the city. They also radically alter the political situation for the suburb and its inevitable commute. Of course, the creation of these green networks need not be at the expense of the motorist. On the 10th July London’s Transport Commissioner Peter Hendy launched a study for London that envisaged burying sections of the North and South Circular ring roads, and stretches of road close to the Thames. The initiative would create linear parks overhead, much as the Big Dig did for Boston. Save this picture! The Olympic Sculpture Park in Seattle, Washington, designed by Weiss Manfredi. Image © Benjamin Benschneider Although originally conceived for Manchester, I believe that Park+Jog may be adapted to any city worldwide and serve as an example for how Cycle Space could lay the ideological foundation to change our cities for the better. Combining new transportation methods that encourage the principles of a healthy life style with traditional roads can raise land values, attract investment and activate the urban environment. The social revolution that Bazalgette offered London in the 19th Century, Cycle Space might just bring to London and our world’s cities in the 21st. 
Simon Henley is a teacher, author of the well-received book The Architecture of Parking, and co-founder of London-based studio Henley Halebrown Rorrison (HHbR). His column, London Calling, looks at London’s every-day reality, its architectural culture, and its role as a global architectural hub; above all, it will explore how London is influencing design everywhere, whilst being forever challenged from within. You can follow him @SiHenleyHHbR and be a fan of his Facebook page, HHbR Architecture. Further Reading: Park+Jog · London’s answer to Boston’s Big Dig · Rogers 80th Birthday retrospective at the Royal Academy · The Lidoline, YN Studio's "Swim to Work" Proposal. Cite: Simon Henley. "Why Cycle Cities Are the Future" 06 Aug 2013. ArchDaily. Accessed 3 Sep 2018. <https://www.archdaily.com/409556/why-cycle-cities-are-the-future/> ISSN 0719-8884

      This shows that the architect has a clear plan for the bicycle-sharing scheme.

    1. had the right of making selections from all over the country. If that had been proposed, I think many honorable gentlemen would have found fault with it. (Hear, hear.) It was due to courtesy that the members of this House should not be overlooked, and not only that, but there were acquired rights which had to be respected. My honorable friend appears to dissent from this statement. Well, the last choice of the people are now in this House, and by the fact of their election they have acquired a right to a seat ; and I think those gentlemen who have been appointed for life have gained rights which should not be overlooked. (Hear, hear.) HON. MR. CURRIE—The honorable and gallant gentleman says we have an acquired right. I admit we have a right to sit here during the term for which we have been elected ; but what right have we to seat ourselves here for the remainder of our lives ? The people did not send us here to make this change in the composition of this House. (Hear, hear.) And what right even have the appointed members of this House to seats here during their lifetime? I have a despatch here, written by the late Duke of NEWCASTLE, who will be considered pretty good authority upon the point, to the Lieutenant-Governor of Prince Edward Island, on this very question. I need not read the words of the despatch, but the sense of it is, that legislative councillors have no right of property in their position, but simply a naked trust which the Legislature may at any time call upon them to surrender to other hands, if, in their opinion, the public interest shall require such transfer. HON. SIR E. P. TACHÉ—That is merely a matter of opinion. That may for a time have been the view of the Imperial authorities, but previous to 1856 they held and said directly the contrary. (Hear, hear.) They then said that they had granted certain privileges to certain gentlemen for life, and that they would not commit the injustice of withdrawing those privileges when the gentlemen had done nothing to forfeit them. (Hear, hear.) HON. MR. CURRIE—I am surprised at the honorable and gallant Premier questioning the ability of the distinguished gentleman who wrote the despatch to which I have just referred. Whatever may have been the opinion of the Colonial Office in 1856, this is a later opinion, for the despatch is dated the 4th of February, 1862. The honorable and gallant gentleman says they do not propose to take from any honorable gentleman the rights he now enjoys. I could understand this argument if they did not propose to take away the rights of any honorable member of this House ; but I cannot understand it when you propose to drive from this House faithful subjects who have served their country honestly in the Legislature, and I am afraid we have not yet had from the gallant Premier that explanation to which the House is entitled. (Hear, hear.) Why is it that the legislative councillors from Prince Edward Island are excepted ? In that province, as we know, the Legislative Council is elective, and it is an elected Chamber that is now in existence there, but the members of it are excepted from the provisions that apply to the legislative councils of the other provinces. Why is this ? I think there must be some reason, in the first place, for breaking the good rule that in no way shall the prerogative of the Crown be restricted ; and, in the second, for making an exception in regard to one that does not apply to the others.
I think a reason may be found for this in the fact, that it was doubted whether the resolutions in a different shape would have passed through some of the chambers that compose the legislatures of the different provinces. (Hear, hear.) I would like to know what justice will be done if this change is carried out ? What, for instance, will be done with regard to two honorable members who come from the city of Hamilton ? One of them (the Hon. Mr. MILLS) is an appointed member ; the other (the Hon. Mr. BULL) was the almost unanimous choice of the people only a few months since. Under the working of the resolutions, one of these honorable gentlemen will forfeit his seat. HON. MR. ROSS—Why ? (Hear, hear.) HON. MR. CURRIE—If it does not follow that one of these honorable gentlemen will lose his seat, it must follow that some other portion of Upper Canada will be unrepresented in this House. (Hear, hear.) Let honorable gentlemen take either horn of the dilemma they please. It may be quite true that the gentlemen who have been sent here possess the confidence of their constituents, but it does not follow that they will be retained in their seats. It is plain that a great injustice will be done these honorable gentlemen, some of whom have served their country faithfully, without, in any way trenching upon the rights of the Crown or infringing on those of the people; and I think the conclusion this House and the country, as well as the other branch of the Legislature, will arrive at, is that those re-

      §§.24, 25 and 146 of the Constitution Act, 1867.

    1. Teachers not only like attractive children better but also perceive them as less likely to misbehave, more intelligent, and even more likely to get advanced degrees.

      This statement is surprising to me! I feel like this is a classic example that can apply to the saying mentioned above: "Don't judge a book by its cover." Out of all the members of society, I would not think teachers would judge their students' potential by their attractiveness. That honestly seems pretty biased and unethical. The attractive students may catch their eye (not in a creepy way), but to perceive them as less troublesome behaviorally, more intelligent, or more successful is uncalled for, in my opinion. Every child should be treated the same in regards to their education. But we all know that hardly ever happens, even though it is what is right.

  3. Aug 2018
    1. Design history has long overlooked women in our narrative, despite continuously having a large group of women active in the field of graphic design over the past century. Lucinda Hitchcock is a professor in Graphic Design at the Rhode Island School of Design, as well as a member of the Design Office in Providence, Rhode Island. “For me, it has to do with the imbalance of genders in the educational environment and in the framework of the design history that is being taught,” Hitchcock explains. Careful to point out it may not be the same situation in all design schools, Hitchcock adds, “Why does design history still teach about male designers 80% more than women designers? Why do we have 80% women in the student body (in our [RISD] department) and 80% men in the faculty?”

      I think it is strange that there are more female students than male students, but more male designers than female designers. I have many female design tutors at my school, and I know a lot of famous female designers. I think this issue is better than before.

    2. Forty or fifty years ago, the workforce was overwhelmingly a man’s world. In the design field, many women may have been assistants or “office girls” and so few held the top titles, such as art director or creative director. In a basic sense, women’s careers have rarely followed the same path of men’s, since there has historically been immense pressure placed on women to be solely homemakers and nurture families (see: Beyond The Glass Ceiling: an open discussion, Astrid Stavro, Elephant #6) with more sinister pressures of socially-accepted sexism and segregation discouraging, or even disqualifying, the career ambitions of capable women.

      Because of social issues, women suffer unfair treatment, and I think that is a loss for the development of design, because female designers have many ideas that differ from those of male designers. Design should be varied and creative; it should not be limited by gender.

    3. As discussed earlier, the US design profession is not predominantly male— just over half of the profession is female— yet with celebrity designers so often male, the representation is primarily male. In “Type Persons Who Happen to be Female” Susanne Dechant explains that despite many typographic achievements, women remained underrepresented at type conferences. “TypoBerlin (2009: 5% female presenters) or Atypl (2009: 12%), as well as in various type foundries (Linotype 2005: 12.3%; Myfonts.com 2008: 14%). Today an equal number of women and men are studying type design—so we can expect or at least hope for a levelling of the playing field.”

      Discrimination happens to both genders, not only to women (contrary to what a person may think when first reading the article).

    1. But almost all arguments about student privacy, whether those calling for more restrictions or fewer, fail to give students themselves a voice, let alone some assistance in deciding what to share online.

      I think students' voices need to be heard since they have grown up with technology their whole lives. They are being represented by adults who may or may not use technology as frequently and have definitely not grown up from a young age with the advanced technology there is today.

  4. instructure-uploads.s3.amazonaws.com
    1. learning as a social undertaking and accomplishment shaped by political, cultural, historical and economic contexts

      I am especially interested in this aspect of the class, as we consider intersectionality, and the inability to untangle social systems of oppression - I think solutions may lie in finding where they tangle.

    1. HON. MR. MCCREA—Does the honorable member from Grandville not remember the increase of members in the representation of the other House, in 1853, and the amendment of the constitution of this House in 1856, the very question I am now debating ? Surely these measures were amendments of that act, and who knows but under the new Constitutional Act—the favorite measure of my honorable friend—the election of members of this House, may not again be resorted to, if the nominative principle shall not be found to work well ? But let us examine for a moment what the amendment of my honorable friend from Wellington is intended to effect. It will be seen by referring to the amendment itself, that the honorable gentleman proposes that the members of this House from Canada and from the Maritime Provinces shall have a different origin or, as it were, a different parentage, elected by the people with us, and appointed by the Crown from the eastern provinces. I take it that it is very desirable that in whatever way the members of this House may be chosen, there should be uniformity in the system. By the honorable gentleman’s plan we shall have one-third of the members from below representing the Crown, and two-thirds from above, representing the people ; a curious sort of incongruity which I think should by all means be avoided. I may be answered that our present House is constituted in that very way ; but honorable gentlemen must remember that the life members are not the sole representatives of any particular section of the province, but are chosen indiscriminately from all parts of the province. This is not likely to lead to a sectional collision like the scheme of my honorable friend, and besides that, the appointment of life members in this House is not to be continued after the seats of the present members shall have become vacant from any cause whatever. I think the scheme of my honorable friend the most objectionable of all.

      §.24 of the Constitution Act, 1867.

    2. HON. MR. REESOR—Well, there it is. The honorable gentleman acknowledges his determination to reward his political supporters. Is this the way to obtain an independent branch of the Legislature, one that will operate as a wholesome check on hasty legislation? Those who receive favors from a political party are not likely to turn their backs upon that party. I think we are not likely, under any circumstances, to have a more independent House under the proposed system than we now have, or one which will better advance the interests of the country. If you wish to raise the elective franchise, for elections to the Upper House—if you would confine their election to voters on real estate of $400 assessed value, and tenants holding a lease-hold of $100 annual value, and thus place these elections out of the reach of a mere money influence that may sometimes operate upon the masses—if you think this body is not sufficiently conservative—let them be elected by a more conservative portion of the community— that portion which has the greatest stake in the community—but do not strike out the elective principle altogether.

      §.24 of the Constitution Act, 1867.

    1. HON. MR. REESOR—There are several other provisions in the proposed Constitution which seem to be ambiguous in their meaning, and before discussion upon them it would be well to have them fully explained. In the eleventh clause of the twenty-ninth resolution, for instance, it is declared that the General Parliament shall have power to make laws respecting “all such works as shall, although lying wholly within any province, be specially declared by the acts authorizing them to be for the general advantage.” It would appear from this, that works like the Welland canal, which yield a very large revenue, will be given over to the General Government; and this being the case, surely this is a sufficient setoff, five times over, for the railways given by New Brunswick, without the annual subsidy proposed to be given to that province of $63,000. HON. MR. MACPHERSON—The cost of these works forms part of the public debt of Canada, which is to be borne in part by the Lower Provinces under the Confederation. HON. MR. CAMPBELL—The honorable gentleman will see that there are some works which, although local in their geographical position, are general in their character and results. Such works become the property of the General Government. The Welland canal is one of them, because, although it is local in its position, it is a work in which the whole country is interested, as the chief means of water communication between the western lakes and the sea. Other works, in the Lower Provinces, may be of the same character, and it is not safe to say that because a certain work lies wholly in one province, it is not to belong to the General Government. HON. MR. REESOR—I do not object to the General Government having the control of these works. It is, I believe, a wise provision to place them under such control. But I do say that it is unfair that an express stipulation should be made to pay one province a large sum per annum for certain works, while, at the same time, we throw in our public works, such as the Welland and St. Lawrence canals, without any consideration whatever. This, I think, is paying quite too much for the whistle. Then the answer of the Commissioner of Crown Lands about the export duty on minerals in Nova Scotia is not at all satisfactory. Whatever dues may be levied on minerals in Canada—and Canada, although it may contain no coal, is rich in gold, silver, copper, iron, and other ores—in the shape of a royalty or otherwise, go to the General Government, while in Nova Scotia they accrue for the benefit of the Local Government. HON. MR. ROSS—No, they will not go to the General Government. HON. MR. REESOR—Well, there is nothing to the contrary in the resolutions, and you may depend upon it that whatever revenues the General Government may claim, under the proposed Constitution, will be fully insisted upon.

      §.92(10) of the Constitution Act, 1867.

    2. HON. MR. AIKINS—The honorable gentleman says they will have the power, through their representatives, to make their appointments. Well, after reading the fourteenth resolution, it does appear to me that, after the first election of the Chamber, the people will have nothing at all to do with it. (Hear, hear.) The honorable gentleman says, however, that the representatives of the people will have the power of making these appointments. Who are the representatives of the people he refers to? The members of the Government, who will have this power ; or, in other words, the Crown will make the appointments. HON. MR. MACPHERSON—With the advice of the representatives of the people. HON. MR. AIKINS—Yes, undoubtedly; but the people, nevertheless, will have nothing at all to do with the matter ; we advert again, in fact, to the old principle when the Crown made all the appointments. (Hear, hear.) Now, with regard to this question, I feel myself in this position, that although I may be in favour of the Crown making these appointments— upon which principle I express no opinion at this moment—if I voted for these resolutions I would give a vote, and every member of this House would give a vote, by which they would give themselves seats in this House as long as Providence thought fit to let them remain. (Hear, hear.) I came here, honorable gentlemen, to conserve certain interests, to represent certain classes, and to reflect the views of those who sent me here so far as they accorded with my own judgment. But they did not send me here to change the Constitution under which I was appointed, and to sweep away at one dash the privileges they possess, one of which is, to give a seat in this House to him in whom they have confidence. It does not appear right to me that the members of this House should declare, by their own votes, that we shall remain here for all time to come. (Hear, hear.) The reasons given for the proposed change are various, and to some extent conflicting. We find one member of the Government telling us that it is because the Maritime Provinces are opposed to an elective Chamber, and hence we in Canada—the largest community and the most influential—give way to them, and set aside a principle that was solemnly adopted here, and so far has worked without prejudice to our interests. We find another gentleman, who, when the question came up years ago, strongly opposed the elective principle, quite as strongly opposes it now, because since then certain municipalities have borrowed more than they are able to pay ! These are somewhat extraordinary reasons, and I trust the House will give them their due weight. I think, honorable gentlemen, that prior to the proposed change taking place, we ought not to declare by our own votes that we are entitled to permanent seats in this House,— without, at any rate, knowing whether the people consent to it or not ; and I do not think I am wrong in using this line of argument, when we have reason to believe that, even if the Crown-appointed members remain here, a large number of the elected members will also remain.

      §.24 of the Constitution Act, 1867.

    1. Think of regular media as a one-way street where you can read a newspaper or listen to a report on television, but you have very limited ability to give your thoughts on the matter. Social media, on the other hand, is a two-way street that gives you the ability to communicate too.

      Very interesting; I would've never thought of there being two completely different media outlets. Being a staff member of Met Media, specifically with the Metropolitan, it prompts questions about how we as the newspaper can encourage more active users of the traditional media outlets. I think of how we also upload the newspaper on the web so that students may access it more quickly and conveniently.

    1. It is said they have not [Page 89] the power. But what is to prevent them from enforcing it? Suppose we had a conservative majority here, and a reform majority above— or a conservative majority above and a reform majority here—all elected under party obligations,—what is to prevent a dead-lock between the chambers ? It may be called unconstitutional—but what is to prevent the Councillors (especially if they feel that in the dispute of the hour they have the country at their back) from practically exercising all the powers that belong to us ? They might amend our money bills, they might throw out all our bills if they liked, and bring to a stop the whole machinery of government. And what could we do to prevent them ? But, even supposing this were not the case, and that the elective Upper House continued to be guided by that discretion which has heretofore actuated its proceedings,—still, I think, we must all feel that the election of members for such enormous districts as form the constituencies of the Upper House has become a great practical inconvenience. I say this from personal experience, having long taken an active interest in the electoral contests in Upper Canada. We have found greater difficulty in inducing candidates to offer for seats in the Upper House, than in getting ten times the number for the Lower House. The constituencies are so vast, that it is difficult to find gentlemen who have the will to incur the labor of such a contest, who are sufficiently known and popular enough throughout districts so wide, and who have money enough — (hear) — to pay the enormous bills, not incurred in any corrupt way,—do not fancy that I mean that for a moment—but the bills that are sent in after the contest is over, and which the candidates are compelled to pay if they ever hope to present themselves for re-election. (Hear, hear.) But honorable gentlemen say—“This is all very well, but you are taking an important power out of the hands of the people, which they now possess.” Now this is a mistake. We do not propose to do anything of the sort. What we propose is, that the Upper House shall be appointed from the best men of the country by those holding the confidence of the representatives of the people in this Chamber. It is proposed that the Government of the day, which only lives by the approval of this Chamber, shall make the appointments, and be responsible to the people for the selections they shall make. (Hear, hear.) Not a single appointment could be made, with regard to which the Government would not be open to censure, and which the representatives of the people, in this House, would not have an opportunity of condemning. For myself, I have maintained the appointed principle, as in opposition to the elective, ever since I came into public life, and have never hesitated, when before the people, to state my opinions in the broadest manner ; and yet not in a single instance have I ever found a constituency in Upper Canada, or a public meeting declaring its disapproval of appointment by the Crown and its desire for election by the people at large. When the change was made in 1855 there was not a single petition from the people asking for it—it was in a manner forced on the Legislature. The real reason for the change was, that before Responsible Government was introduced into this country, while the old oligarchical system existed, the Upper House continuously and systematically was at war with the popular branch, and threw out every measure of a liberal tendency.
The result was, that in the famous ninety-two resolutions the introduction of the elective principle into the Upper House was declared to be indispensable. So long as Mr. ROBERT BALDWIN remained in public life, the thing could not be done ; but when he left, the deed was consummated. But it is said, that if the members are to be appointed for life, the number should be unlimited— that, in the event of a dead lock arising between that chamber and this, there should be power to overcome the difficulty by the appointment of more members. Well, under the British system, in the case of a legislative union, that might be a legitimate provision. But honorable gentlemen must see that the limitation of the numbers in the Upper House lies at the base of the whole compact on which this scheme rests. (Hear, hear.) It is perfectly clear, as was contended by those who represented Lower Canada in the Conference, that if the number of the Legislative Councillors was made capable of increase, you would thereby sweep away the whole protection they had from the Upper Chamber. But it has been said that, though you may not give the power to the Executive to increase the numbers of the Upper House, in the event of a dead-lock, you might limit the term for which the members are appointed. I was myself in favor of that proposition. I thought it would be well to provide for a more frequent change in the composition of the Upper House, and lessen the danger of the chamber being largely composed of gentlemen whose advanced years might forbid the punctual and vigorous discharge of their public [Page 90] duties. Still, the objection made to this was very strong. It was said : “Suppose you appoint them for nine years, what will be the effect ? For the last three or four years of their term they would be anticipating its expiry, and anxiously looking to the Administration of the day for re-appointment ; and the consequence would be that a third of the members would be under the influence of the Executive.” The desire was to render the Upper House a thoroughly independent body—one that would be in the best position to canvass dispassionately the measures of this House, and stand up for the public interests in opposition to hasty or partisan legislation. It was contended that there is no fear of a dead-lock. We were reminded how the system of appointing for life had worked in past years, since Responsible Government was introduced ; we were told that the complaint was not then, that the Upper Chamber had been too obstructive a body—not that it had sought to restrain the popular will, but that it had too faithfully reflected the popular will. Undoubtedly that was the complaint formerly pressed upon us—(hear, hear)—and I readily admit that if ever there was a body to whom we could safely entrust the power which by this measure we propose to confer on the members of the Upper Chamber, it is the body of gentlemen who at this moment compose the Legislative Council of Canada. The forty-eight Councillors for Canada are to be chosen from the present chamber. There are now thirty-four members from the one section, and thirty-five from the other. I believe that of the sixty-nine, some will not desire to make their appearance here again, others, unhappily, from years and infirmity, may not have strength to do so ; and there may be others who will not desire to qualify under the Statute.
It is quite clear that when twenty-four are selected for Upper Canada and twenty-four for Lower Canada, very few indeed of the present House will be excluded from the Federal Chamber ; and I confess I am not without hope that there may be some way yet found of providing for all who desire it, an honorable position in the Legislature of the country. (Hear, hear.) And, after all, is it not an imaginary fear—that of a dead-lock ? Is it at all probable that any body of gentlemen who may compose the Upper House, appointed as they will be for life, acting as they will do on personal and not party responsibility, possessing as they must, a deep stake in the welfare of the country, and desirous as they must be of holding the esteem of their fellow-subjects— would take so unreasonable a course as to imperil the whole political fabric ? The British House of Peers itself does not venture, à l’outrance, to resist the popular will, and can it be anticipated that our Upper Chamber would set itself rashly against the popular will? If any fear is to be entertained in the matter, is it not rather that the Councillors will be found too thoroughly in harmony with the popular feeling of the day ? And we have this satisfaction at any rate, that, so far as its first formation is concerned—so far as the present question is concerned—we shall have a body of gentlemen in whom every confidence may be placed.

      §§.24, 26, and 29 of the Constitution Act, 1867.

    1. In this context, the rather linear practice of stratigraphic excavation with its institutional, disciplinary, and performative underpinnings gives way to the raucous and uneven performance of punk rock music which often eschews expertise, barriers to access, and specialized knowledge (see Gnecco 2013)

      I like to think of the practice of stratigraphic excavation as our attempt to rein in the chaos that inevitably results as we are continually faced with and misunderstand the remains of the past. I would argue that disciplined stratigraphic excavation and a punk-spirited practice are not entirely incompatible. You can rebel against the system in every aspect of your practice as long as you stand in the dole queue (and fill out your paperwork and context sheets properly). This may be slightly off your main argument.

    1. The grosser feeds the purer, Earth the Sea, Earth and the Sea feed Air, the Air those Fires Ethereal, and as lowest first the Moon; Whence in her visage round those spots, unpurg'd Vapours not yet into her substance turnd. [ 420 ] Nor doth the Moon no nourishment exhale From her moist Continent to higher Orbes. The Sun that light imparts to all, receives From all his alimental recompence In humid exhalations, and at Even [ 425 ] Sups with the Ocean: though in Heav'n the Trees Of life ambrosial frutage bear, and vines Yield Nectar, though from off the boughs each Morn We brush mellifluous Dewes, and find the ground Cover'd with pearly grain: yet God hath here [ 430 ] Varied his bounty so with new delights, As may compare with Heaven; and to taste Think not I shall be nice. So down they sat, And to thir viands fell,

      Where does this concept of hierarchical feeding come from? Is it purely Milton's invention?

    1. These possibilities are more likely to be seen if we think of large crises as the outcome of smaller scale enactments. When the enactment perspective is applied to crisis situations, several aspects stand out that are normally overlooked. To look for enactment themes in crises, for example, is to listen for verbs of enactment, words like manual control, intervene, cope, probe, alter, design, solve, decouple, try, peek and poke (Perrow, 1984, p. 333), talk, disregard, and improvise. These verbs may signify actions that have the potential to construct or limit later stages in an unfolding crisis

      Curious why temporality is never mentioned as a dynamic of enactment. It's somewhat implied in the idea of acting in the moment or responding after the fact, but sensemaking and social construction are inherently temporal.

    1. I made him just and right, Sufficient to have stood, though free to fall.

      Yes, and God also made mankind ignorant, with strong sensory appetites (for fruit like apples), with a desire for pleasure etc. etc. (I'm thinking of my mother, who was so unattuned to childrearing that she expected me to act like an adult when I was 2 years old and punished me for acting by impulse rather than according to reason). Just how much time did God spend teaching Adam and Eve how to control their desires, or role-modeling such behavior for them?

      It seems to me that anyone who is authoritarian and makes strong rules - especially for someone who is not yet really adult, experienced and knowledgeable - is asking for rebellion. The gestalt therapists speak of Topdog and Underdog. When there is an authoritarian Topdog, there's bound to be an Underdog who rebels. What's needed is to assimilate Topdog (integrating some facets of our SHOULDs and throwing out others that are not necessary), building in the process a self that is NOT split in two. In Freudian terms, we're talking about a healthy ego that can help us integrate our id and superego, rather than a strict superego that is authoritarian with a rebellious id. But the root of the Old Testament is such a split.

      Adam and Eve were just born, right, though born as adults? (Personally, I think we can get beyond the split of Creationism vs. Evolution too. Why not view God as having given a lightning blast to chimpanzees which quickly led to their evolving into humans?) So they weren't likely to have a lot of experience or to have become very mature yet. Of course they needed to go through the rebellious terrible twos!

      In Greek mythology too, we have the first female Pandora who almost immediately after she is created is left in a room with a box and told that she must not open it. So she does, of course. Her curiosity gets the better of her. And so she is blamed for all the evil in the world, as Eve is blamed. Unfair!

      Both of these situations are "set ups". What I don't understand is why God set up a test which Adam and Eve were bound to fail. So that he could fully assert His power over them?

      The Old Testament seems to me to be based on a split consciousness with a Topdog God and an Underdog mankind. This is a kind of parent/child, authority /subordinate setup. But it is not the only way to live.

      Yes, I'm trying to understand Milton, but in the process I am clarifying my own attitude toward his interpretation of The Fall AND that of the Bible and Christianity. As a Gnostic deeply influenced by Elaine Pagels's Gnostic Gospels and her Adam, Eve and the Serpent, I highly recommend these two books. To me, they make much more sense than the Fall in the Old Testament or the Miltonian interpretation of it.

      Those of us who are expressing our own views here and criticizing Milton and the Bible (and certainly I'm doing a lot of it) may be at odds with those who are dedicated believers in the Bible and take Genesis literally. But I'd be happy to hear a variety of views.

    1. Consult how we may henceforth most offendOur Enemy, our own loss how repair,How overcome this dire Calamity,What reinforcement we may gain from Hope, [ 190 ]If not what resolution from despare.

      Ironic, I think. The Fallen Angel has just rejected Satan's proposal of outright war as a bad idea. It would end in a disaster all over again. So he calls for a consultation of the fallen demons; one might say that he is calling for a Parliament, which can come up with a better idea! I would rather suspect that Milton has in mind the "success" of the recent English Parliament, as I might have of the US Congress, as he will unfold this grand consult. Love it!

    1. not conclusive

      I think "not conclusive" is still too certain based on the actual level of confidence in the paper. I'd have said "tentative" would reflect the paper more accurately, e.g. the paper uses the term "risk averse approach" for the proposed 2C threshold and says "we cannot exclude the risk...". It also uses caveated language such as "could" and "may" a lot, and talks of "probability ... difficult to quantify".

    1. Why should a republic be small?  What happens, according to Rousseau, when a republic is too large?

      I think Rousseau's argument makes sense in the context of his era, but as Julie suggested, modern transportation and communication may have shrunk our nation to some degree. I also think Rousseau fails to address the other component that our Founders incorporated--Federalism. Some of Rousseau's issues are addressed through Federalism. We maintain smaller republics within a whole republic.

    1. It should be in miniature an exact portrait of the people at large. It should think, feel, reason, and act like them.

      This is the foundation of republicanism, as the government should be based on popular sovereignty. However, a discussion question for students may be: how can we expect the government to act like the people when there are so many other influences, such as money? Or is the government truly a reflection of the people if only 58% of eligible voters vote? What is a solution to this problem?

    1. More importantly non-symbolic expressions of reality are traditionally understood to be outside the disciplinary boundaries of the human social sciences. These reasons may explain the convention but they cannot justify it. We can accept that for us to be able to talk and think about time necessitates our putting it into words. If this is all that is being expressed, it is not very much; if it equates reality with the symbol, it goes too far. There is no need to deny that all humans formulate meanings symbolically or that this is a fundamentally social process. There is an urgent need, however, to appreciate that time is an aspect of nature, and that nature encompasses the symbolic universe of human society. Once we recognise ourselves as bearers of all the multiple times of nature, and once we allow for nature to include symbolic expression, the gulf between the symbolic knower and nature as an external (unknowable) object can be dispensed with. The mutually exclusive dichotomies of nature and culture, subject and object become irrelevant.

      This is pretty dense but I think Adam is arguing that "social time" can exist without symbols and that "natural time" can itself be symbolic. If this is true, then conceptualizing time can be more holistic and rely less on dichotomy.

  5. Jul 2018
    1. Scholars have known for decades that people tend to search for and believe information that confirms what they already think is true. The new elements are social media and the global networks of friends who use it. People let their guard down on online platforms such as Facebook and Twitter, where friends, family members, and coworkers share photos, gossip, and a wide variety of other information. That’s one reason why people may fall for false news, as S. Shyam Sundar, a Pennsylvania State University communication professor, explains in The Conversation. Another reason: People are less skeptical of information they encounter on platforms they have personalized — through friend requests and “liked” pages, for instance — to reflect their interests and identity.
    1. There was, after all, public humanities before there (quite recently) was the phrase “public humanities”, and those of us for whom the term has meaning know that there are still many more public humanists than the very small proportion who now claim the name explicitly.

      And I think we still have a common conception of a "public intellectual" that may be a humanist, or someone like Neil DeGrasse Tyson. Is it useful to claim and apply a term that has baggage to describe working with the public in mind as an audience?

  6. course-computational-literary-analysis.netlify.com
    1. The moment he saw me, he pulled out the pocket-book and pencil, and obstinately insisted on taking notes of everything that I said to him.

      In Mr. Jennings's narrative, the word "obstinate" has been used to describe Betteredge several times, and from his descriptions of their interactions Betteredge does indeed seem an obstinate old man; but a quick search through the story reveals a peculiarly high frequency of this particular adjective. I think we could run an analysis of which characters are described as "obstinate" most often, the details of their character traits, and what led to the comment. Or do some narrators use the word more often, perhaps as a particular way of marking how their own personality differs from other people's?
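      As a rough starting point, here is a minimal sketch of such a frequency analysis in Python. The filename ("moonstone.txt", a plain-text copy of the novel, e.g. from Project Gutenberg) and the "NARRATIVE" section-heading convention are assumptions; adjust both for the edition you use.

      ```python
      import re
      from collections import Counter

      # Hypothetical filename: a plain-text copy of The Moonstone saved locally.
      with open("moonstone.txt", encoding="utf-8") as f:
          text = f.read()

      # Assumed convention: each narrator's section opens with a heading line
      # containing the word "NARRATIVE"; adjust the pattern for your edition.
      sections = re.split(r"\n(?=[^\n]*NARRATIVE)", text)

      counts = Counter()
      for section in sections:
          lines = section.strip().splitlines()
          title = lines[0][:60] if lines else "front matter"
          counts[title] += len(re.findall(r"\bobstinate(?:ly)?\b", section, re.I))

      for title, n in counts.most_common():
          print(f"{n:3d}  {title}")
      ```

      Grouping counts by narrative section only approximates grouping by narrator or described character; a finer pass could count hits within a window of words around each character's name.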

    2. “What do you mean by pitying me?” she asked in a bitter whisper, as she passed to the door. “Don’t you see how happy I am? I’m going to the flower-show, Clack; and I’ve got the prettiest bonnet in London.” She completed the hollow mockery of that address by blowing me a kiss–and so left the room.

      Rachel's moodiness seems bizarre and creepy to me, and I suspect that such a huge disturbance of one's mind might be one verification of the curse of the Moonstone. Of course, I also think the description of Rachel's abnormal behavior here may be because of Miss Clack's prejudice against her, which magnifies those negative qualities. I have to say, we can vividly feel from this part that the strong subjectivity of the narrative content is a significant feature of using a first-person perspective to tell a story.

    3. And what of that?–you may reply–the thing is done every day. Granted, my dear sir. But would you think of it quite as lightly as you do, if the thing was done (let us say) with your own sister?

      Mathew Bruff carefully anticipates the reader's objections, and tries to persuade him ("my dear sir") to reconsider his assessment of Godfrey Ablewhite. To better understand how and why The Moonstone's various narrators directly address readers, we could run a word collocation analysis and/or a sentiment analysis on each moment that features a narrator addressing a reader. Then, we would be informed enough to speculate about the extent to which such addresses prove effective.
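      A minimal sketch of the sentiment half of that suggestion, assuming NLTK and its VADER lexicon are available; the filename and the list of address markers are assumptions to be extended:

      ```python
      import re
      import nltk
      from nltk.sentiment import SentimentIntensityAnalyzer

      nltk.download("punkt", quiet=True)          # sentence tokenizer model
      nltk.download("vader_lexicon", quiet=True)  # sentiment lexicon

      # Same hypothetical filename as the sketch above.
      with open("moonstone.txt", encoding="utf-8") as f:
          text = f.read()

      # Assumed markers of direct reader address; extend the list as needed.
      address = re.compile(r"\b(dear sir|dear madam|you may|let us say|the reader)\b", re.I)

      sia = SentimentIntensityAnalyzer()
      for sentence in nltk.sent_tokenize(text):
          if address.search(sentence):
              score = sia.polarity_scores(sentence)["compound"]
              print(f"{score:+.2f}  {sentence[:80]!r}")
      ```

      The same filtered sentences could then feed the collocation half, for instance with NLTK's BigramCollocationFinder.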

    4. For a wonder, he had had a good night’s rest at last; and the unaccustomed luxury of sleep had, as he said himself, apparently stupefied him.

      I think this action (a good night's rest) is abnormal. We know Mr. Franklin cares about the Moonstone very much and that he has fallen in love with Ms. Rachel. How could he have a good night's rest after the loss? It is so strange that something may have happened to him at night.

    1. In these analyses of plasticity we see how, like clock time, digital time is not simply a property of technologies, nor does it straightforwardly emerge as a sociotechnical convention associated with their use. Rather, it has coevolved with broader shifts in the temporality of everyday life, such as the emergence of fractured rhythms, and the associated need to fill the gaps between them.

      Digital time is a type of sociotemporality that has co-evolved through influence of technology and its influence on technology AND rhythms/trajectories/horizons of modern life. See Rattenbury above.

      Think more about how Reddy's and Pschetz's work may be important here re: social coordination.

    1. “I support a social transition for a kid who is in distress and needs to live in a different way. And I do so because I am very focused on what the child needs at that time,” said Johanna Olson-Kennedy, medical director of the Center for Transyouth Health and Development at Children’s Hospital Los Angeles, the largest transgender youth clinic in the United States with some 750 patients. A social transition to the other gender helps children learn, make friends, and participate in family activities. Some will decide later they are not transgender, but Olson-Kennedy says the potential harm in such cases may be overstated.

      This is one of the major problems in how so many people approach this whole issue, whether as a topic or in deciding a course of action for their own child. Furthermore, the possibility of that happiness now rests either on secrecy and passing or, as is more often the case today, on the cooperation and orchestration of a comprehensive enough segment of the people with whom your child interacts to support this transition. What if we did that for gay kids? How different would things be if Title IX applied to all gender-nonconforming kids, even those who identified as gay? What if 12 states didn't have laws against speaking positively about gay as an identity in schools? What if parents were expected to do the work to ensure that a self-identified gay student was provided a social network of similarly identified adults and young people? And for just about any teen, how might life be different, emotionally speaking, if we had been chemically castrated during our teen years? What if gay kids had the same wealth of support materials, public discourse, etc.? The reason they don't is that we cannot deal with their difference, and we cannot deal with it being about their sexual desire, because we are unnerved by the fact that children can identify, feel, and act on sexual interests at a very young age. Gay kids know this, and that is a big hurdle to coming out. I wished so much to have a boyfriend; then I felt I could come out, because it wouldn't mean telling my parents that I think about boys in a sexual way, but that I love this boy and won't deny him to anyone. Now, sad to say, as was noted when opposition was initially raised among APA members over the introduction of GID to the DSM, it may just be that gay is a normal, healthy, worthy course of human development that, as part of that process, involves being in some way emotionally maimed; by which they meant that there are certain painful encounters with being different from one's own parents and most people in one's community that gay people by definition must endure, and until society changes, being gay will be known as a bad, undesirable thing by children at a tremendously young age. So to be and develop as a person who is homosexual is not going to happen without certain pains and obstacles that others can easily avoid and mostly do.

    1. Do you think that this question can be substantively answered via your report-generating processes?

      As an assessment person, my desired learning outcome for instructors is that they can "align their instructional approaches and assessments with their desired student learning outcomes". This skill is a basic instructional design skill to ensure that we're teaching what we think we're teaching and testing what we think we're testing. There are a number of ways that instructors can demonstrate this. Looking at how instructors articulate their alignment in a report (possibly using a rubric) is an efficient way for a single assessment person to identify the areas of the college that are more and less in need of a deeper dive.

      Philosophically, I see assessment as a formative process rather than a summative one. I'm here to help, not to punish. The assessment reports give instructors an opportunity to show me what they understand about teaching and learning principles and share how they're systematically tackling their departmental problems. This gives me a starting point for a conversation with a department about areas of strength and weakness and to then assist them where needed. Sometimes the assistance is training and other times it may be to help them make a case for more resources. (Many professors in higher education get no training in teaching and learning as part of their graduate education. How can we expect people to be good at something without having had opportunities for training and practice with feedback?)

    1. On 2017 Oct 19, Polina Vishnyakova commented:

      Dear colleagues, in order to maintain a healthy scientific debate we need to clarify the statement which you provided in the Discussion of this paper. You wrote: «These findings are inconsistent with the findings of Vishnyakova et al. who reported that OPA1 was upregulated in PE placentas. […] The findings of Vishnyakova et al. were mainly based on gene expression. Levels of DNA or RNA may be unable to predict protein levels accurately, as they do not account for post-transcriptional/translational modifications». It is important to note that in our work we observed changes both at the mRNA level and in the protein content of OPA1, and these findings surely disagree with your data. But we think that this could be explained by a difference in patient characteristics: we divided patients with preeclampsia based on gestational age, while you included only women with severe preeclampsia. In addition, in the current paper you analyzed only the OPA1-L form, not both forms (full-length OPA1-L and cleaved OPA1-S). The difference between our findings could be explained by these points. Best regards.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 25, Jason R Richardson commented:

      Drs. Blakely and Melikian, thank you both for your insightful comments. In working with these cells over a number of years, we have found that while the cell line expresses all of the necessary components to be identified as dopaminergic, it neither synthesizes a significant amount of dopamine nor has functional dopamine uptake. This is likely because of the diffuse nature of the protein expression identified by Dr. Melikian, which may be because it was generated by immortalizing cells from embryonic day 12 rat. I believe when I was a postdoc, I did some co-labeling and observed that the DAT was primarily present in the ER and golgi in these cells, suggesting that the intracellular machinery may not be mature enough to fully generate a functional and fully glycosylated DAT. I should note that the original group that made the N27 line recently re-cloned it and purified cells from this new clone had higher expression levels of both TH and DAT (Gao et al., 2016). Although, there were no functional studies for DAT-mediated uptake with the re-cloned line, they did show a modest increase in susceptibility to 6-OHDA and MPP+. Our primary goal for this paper was to better characterize the role of histone acetylation and transcription factor binding in the epigenetic regulation of DAT expression based on our previous studies in SK-N-AS cells (Green et al., 2015) in a rat cell line that we could then translate to in vivo studies. I certainly agree that additional studies in cells that display a more mature phenotype that allow for determination of function are warranted. I think both comments bring out a very important point regarding the study of transporter regulation. That is, cell context and system are critical to the interpretation and translation of mechanisms regulating the DAT to in vivo systems.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 21, Lydia Maniatis commented:

      (Comment #2)

      The three sentences of the conclusion, which I annotate below in their entirety, reflect the article's utter lack of content:

      1. *"Motion information generated by moving specularities across a surface is used by human observers when judging the bumpiness of 3D shapes." *

      The boundaries of specularities are effectively contour lines. It would be thoroughly unrealistic to predict that they would not play a role, moving or not. And the observation had already been made.

      1. "In the presence of specular motion, observers tend to not rely on the motion parallax information generated by the matte-textured reflectance component."

      The two parts of this sentence seem to be a non-sequitur - how could observers of specular motion employ information generated by matte-textured objects (i.e. objects other than the ones they were observing)? What the authors mean to say is that observers don't use the motion parallax info generated by the specular stimulus. While they frame this as though it were an actual finding, it is, as discussed above, a purely speculative attempt to explain the poorer performance with specular objects.

      1. *"This study further highlights how 3D shape, surface material, and object motion interact in dynamic scenes." *

      It really doesn't, given the mixed results and failed predictions. It couldn't for a number of other reasons, discussed below.

      1. All of the heavy lifting in this article is done by computer programmers, whose renderings are supposed to qualify as "specular", "specular motion", "matte-textured", etc. These renderings rest on theoretical assumptions, most of which are never made explicit. They are, however, inadequate; we learn that observers sometimes saw the moving specular stimuli as non-rigid. This is a problem. There is no objective description of the phenomenon "specular object in motion around an axis" other than "objects generated by this particular program." Is there any doubt that results would have been different if the renderings had accurately mimicked the physical phenomenon? If the surface of the object is seen as changing, doesn't this affect the "motion parallax" hypothesis? The speed with which a particular point on a surface is moving optically is confounded with the speed with which it is moving on its own.

      2. The so-called matte-textured objects appeared purely reflective when not in motion. The apparent specularities were "stuck-on" so that they moved with the surface. I have never seen a matte surface with this characteristic. I would be curious to see the in-motion renderings, because I cannot imagine what they look like. What is clear is that a simple reference to matte-textured objects is not appropriate. We are talking about a different phenomenon, which may not correspond to any physically actualizable one. This latter fact wouldn't matter if the theoretical framework were tight enough that such stimuli allowed isolation of some particular factor of interest. Here, however, it just means that "matte" doesn't mean what it is normally thought to mean.

      3. Observers were confused about the meaning of the term "bumpiness." Stimuli involve hills and ridges of various extents as well as varying apparent heights. The authors were interested in height. They instructed observers who asked for clarification (not the others) that they were interested in "the amplitude not the frequency." I would say a large hill or a wide ridge could qualify as more ample for people not thinking in terms of graphs with height on the ordinate. In other words, I think there is an observational confound between extent and height of the bumps.

      4. In the introduction, the authors refer to previous papers which came to opposite conclusions. Presumably, this means that some relevant factors/confounds were not considered. But the authors don't attempt to analyze these conflicted citations, which thus merely function as window-dressing. They move on to their experiments, on the slightest and vaguest of pretexts, with poorly described stimuli and poorly controlled tasks.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jun 14, Youhe Gao commented:

      I found two "overexpression"s in the paper. "Although the extent of bait overexpression is difficult to judge and varies across IP's, previous experimentation has shown that over-expression has little effect on identification of true interacting partners (Sowa et al., 2009)" "VAPBWT overexpression strongly increased the association of EGFP-LSG1 and OSBP with the ER (Figure 7E,G)" Personally, I am not sure whether those are enough. In a system, increasing [A] or [B] will lead to more [AB]. As we know more about protein interaction now, this kind of systematic false positive should not be ignored any more. In cells, overexpression with a tag may even change the location of the protein. That is why I think the next generation of massive protein interaction studies should start from in vivo crosslinking. I do not want to overemphasize the problem. Most of the protein interactions identified are probably true in cells. The amount of work done is very impressive and deserves respect. But I hope that users who rely on a particular interaction in these data as the only clue for their future experimental design will start with in vivo crosslinking as a confirmation of that interaction. It may make them more confident to proceed.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 22, Lydia Maniatis commented:

      Part 1: This publication is burdened with an unproductive theoretical approach as well as methodological problems (including intractable sampling problems). Conclusions range from trivial to doubtful.

      Contemporary vision science seems determined to take the organization of the retinal stimulation out of the picture and replace it with raw numbers, whether neural firing rates or statistics. This is a fundamental error. A statistical heuristic strategy doesn’t work in any discipline, including physics. For example, a histogram of the relative heights of all the point masses in a particular patch of the world wouldn’t tell us anything about the mechanical properties of the objects in that scene, because it would not tell us about the distribution and cohesiveness of masses. (Would it tell us anything of interest?)

      In perception, it is more than well established that the appearance of any point in the visual field –with respect to lightness, color, shape, etc - is intimately dependent on the intensities/spectral compositions of the points in the surrounding (the entire) field (specifically their effects on the retina) and on the principles of organization that the visual process effectively applies to the stimulation. Thus, a compilation of, for example, the spectral statistics of Purves’ colored cube would not allow us either to explain or predict the appearance of colored illumination or transparent overlays. Or, rather, it wouldn’t allow us to predict these things unless we employed a very special sample of images, all of which produced such impressions of colored illumination. Then we might get a relatively weak correlation. This is because, within this sample, a preponderance of certain wavelengths would tend to correlate with e.g. a yellow, illumination impression, rather than being due, as might be true for the general case, to the presence of a number of unified apparently yellow and opaque surfaces. Thus, we see how improper sampling can allow us to make better (and, I would add, predictable) predictions without implying explanatory power. In perception, explanatory power strictly requires we take into account principles of organization.

      In contrast, the authors here take the statistics route. They want to show, or rather, don’t completely fail to corroborate, the observation that when surfaces are wet, their colors look deeper and more vivid, and also to corroborate the fact that changes in perception are linked to changes in the retinal stimulation. Using a set of ready-made images (criteria for the selection of which are not provided), they apply to them a manipulation (among others) that has the general effect of increasing the saturation of the colors perceived (a generic sketch of such a saturation boost follows below). One way to ascertain whether this manipulation causes a surface to appear wet would be to simply ask observers to describe the surface, without any clues to what was expected. Would the surface spontaneously be described as “wet” or “moist”? This would be the more challenging test, but it is not the approach taken.
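
      Purely for concreteness, here is a minimal sketch of the kind of saturation-boosting manipulation just described. It is a generic stand-in, not the authors’ actual “WET” transformation (whose parameters are not specified here); the function name and the scaling factor are illustrative assumptions.

      ```python
      import numpy as np
      from skimage import color  # any RGB<->HSV conversion would serve

      def boost_saturation(rgb_image, factor=1.5):
          # Generic stand-in for a manipulation with "the general effect of
          # increasing the saturation of the colors perceived". The factor
          # is an illustrative assumption, not a value from the paper.
          hsv = color.rgb2hsv(rgb_image)  # expects floats in [0, 1]
          hsv[..., 1] = np.clip(hsv[..., 1] * factor, 0.0, 1.0)
          return color.hsv2rgb(hsv)
      ```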

      Instead, observers are first trained on images (examples of which are not provided - I have requested examples) that we are told appear very wet (and the dry versions), and include shape-based cues, such as drops of water or puddles. They are told to use these as a guide to what counts as very wet, or a rating of 5. They are then shown a series of images containing both original and manipulated images (with more saturated colors, but lacking any shape-based cues), and asked to rate wetness from 1 to 5.

      The results are messy, with some transformed images getting higher ratings than the originals and others not, though on average they are more highly rated. But the ratings for all the images are relatively low; and we have to ask, how have the observers understood their task? Are they reporting an authentic perception of wetness or moistness, or are they trying to guess at how wet a surface actually is, based on a rule of thumb adopted during the training phase, in which, presumably, the wet images were also more color-saturated? (In other words, is the task authentically perceptual, or is it more cognitive guesswork?) What does it mean to rate the wetness of a surface at, e.g., the “2” level?

      The cost of ignoring the factor of shape/structure is evident in the authors’ attempt to explain why the ratings for all images were so low, reaching 4 in only one case. They explain that it may be because their manipulation didn’t include areas that looked like drops or puddles. Does this mean that the presence of drops or puddles actually changes the appearance of the surrounding areas, and/or that perhaps those very different training images included other organized features that were overlooked and that affected perception? Did the training teach observers to apply a cue in practice that by itself produces somewhat different perceptual outcomes? I suppose we could ask the observers about their strategy, but this would muddy the facade of quantitative purity.

      At any rate, the manipulation (like most ad hoc assumptions) fails as a tool for prediction, leading the authors to acknowledge that “The image transformation greatly increased the wetness rating for some images but not for others…” (Again, it isn’t clear that the “wetness rating” correlates with an authentically perceptual scale). Thus, the relative success or failure of the transformation is image-specific, and hence sample-specific; some samples and sample sets would very likely not reach statistical significance. Thus the decision to investigate further (Experiment 1b) using (if I’m reading this correctly) only a single custom-made image that was not part of the original set (on what basis was this chosen?) seems unwise. (This might seem to worsen the sampling problem, but the problem is intractable anyway. As there is no possible sample that would allow the researchers to generate reliable statistics-based predictions for the individual case, any generalization would be instantly falsifiable, and thus lack explanatory power.)

      The degree to which any conclusions are tied to the specific (and unrationalized) sample is illustrated by the fact that the technical manipulations were tailored to it (from Experiment 1a): “In deciding [the] parameters of the WET transformation, we preliminarily explored a range of parameters and chose ones that did not disturb the apparent naturalness of all the images used in Experiment 1a.” (Note the lack of objective criteria for “naturalness.” We’re also not told on what basis the parameters in Experiment 1b were chosen.) In short, I don’t think this numbers game can tell us anything more from a theoretical point of view than casual observation and, e.g., trial and error by artists already have.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 06, Hilda Bastian commented:

      The conclusion that implicit bias in physicians "does not appear to impact their clinical decision making" would be good news, but this systematic review does not support it. Coming to any conclusion at all on this question requires a strong body of high-quality evidence, with representative samples across a wide range of representative populations, using real-life data, not hypothetical situations. None of these conditions pertains here. I think the appropriate conclusion is that we still do not know what role implicit racial bias, as measured by this test, has on people's health care.

      The abstract reports that "The majority of studies used clinical vignettes to examine clinical decision making". In this instance, "majority" means "all but one" (8 out of 9). And the single exception has a serious limitation in that regard, according to Table 1: "pharmacy refills are only a proxy for decision to intensify treatment". The authors' conclusions are thus related, not to clinical decision making, but to hypothetical decision making.

      Of the 9 studies, Table 1 reports that 4 had a low response rate (37% to 53%), and in 2 studies the response rate was unknown. As this is a critical point, and an adequate response rate was not defined in the report of this review, I looked (albeit briefly) at the 3 remaining studies. I could find no response rate in any of the 3. In 1 of these (Haider AH, 2014), 248 members of an organization responded. That organization currently reports having over 2,000 members (EAST, accessed 6 May 2017), which, if membership was similar at the time, would imply a response rate on the order of 12%. (The authors report that only 2 of the studies had a sample size calculation.)

      It would be helpful if the authors could provide the full scoring: given the limitations reported, it's hard to see how some of these studies scored so highly. This accepted manuscript version reports that the criteria themselves are available in a supplement, but that supplement was not included.

      It would have been helpful if additional important methodological details of the included studies had been reported. For example, 1 of the studies I looked at (Oliver MN, 2014) included an element of random allocation of race to patient photos in the vignettes: design elements such as this were not included in the data extraction reported here. Along with the use of a non-validated quality assessment method (9 of the 27 components of the instrument that was modified), these issues leave too many questions about the quality rating of included studies. Other elements missing from this systematic review (Shea BJ, 2007) are a listing of the excluded studies and an assessment of the risk of publication bias.

      The search strategy appears to be incompletely reported: it ends with an empty bullet point, and none of the previous bullet points refer to implicit bias or the implicit association test.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 24, Lydia Maniatis commented:

      This article’s casual approach to theory is evident in the first few sentences. After noting, irrelevantly, that “Since their introduction (Wilkinson, Wilson, & Habak, 1998), RF patterns have become a popular class of stimuli in vision science, commonly used to study various aspects of shape perception,” the authors immediately continue to say that “Theoretically, RF pattern detection (discrimination against a circle) could be realized either by local filters matched to the parts of the pattern, or by a global mechanism that integrates local parts operating on the scale of the entire pattern.” No citation is offered for this vague and breezy assertion, which raises a number of questions.

      1. How did we jump from “shape perception” to “RF detection against a circle”? How is the latter related to the former?

      2. Is the popularity of a pattern sufficient reason to assume that there exist special mechanisms – special detectors, or filters – tailored to its characteristics? Is there any basis whatsoever for this assertion?

      3. Given that we know that the whole does determine the parts perceived, why are we talking about integration of “local” elements? And how do we define local? Doesn’t a piece of a shape also consist of smaller pieces, etc? What is the criterion for designating part and whole in a stimulus pattern (as opposed to the fully-formed percept)?

      Apparently, there have been many ‘models’ proposed for special mechanisms for “RF detection against a circle,” addressing the question in these local/local-to-global terms. Could the mechanism involve maximum curvature integration, tangent orientations at inflection points, etc.? These simply take for granted the underlying assumption that there are special “filters” for “RF discrimination against a circle.” The only question is to what details of the figure are these mechanisms attuned.

      What if we were dealing with different types of shapes? What if the RF boundary shape were formed by different sized dots, or dashes, or rays of different lengths radiating from a center? Would we be talking about dot filters, or line length filters? Why put RF patterns in general, and RF patterns of this type in particular, on such an explanatory pedestal?

      More critically, how is it possible to leverage such patterns to dissect the neural processes underlying perception? When I look at one of these patterns, I don’t have any trouble distinguishing it from a circle. What can this tell me about the underlying process?

      A subculture of vision science has opted to uncritically embrace the view that underlying processes can be inferred quite straightforwardly on the basis of certain procedures that mimic the general framework of signal detection. This view is labeled “signal detection theory” or SDT, but “theory” is overstating it. As noted in my earlier comment, Schmidtmann and Kingdom (2017) never explain why they make what, to a naïve observer, must seem very arbitrary methodological choices, nor does their main reference, Wilkinson, Wilson and Habak (1998). So we have to go back further to find some suggestion of a rationale.

      The founding fathers of the aforementioned subculture include Swets, Tanner and Birdsall (e.g. 1961). As may be seen from a quote from that article (below), the framing of the problem is artificial; major assumptions are adopted wholesale; “perception” is casually converted to “detection” (in order to fit the analogy of a radar observer attempting to guess which blip is the object of interest).

      “In the fundamental detection problem, an observation is made of events occurring in a fixed interval of time and a decision is made; based on this observation, whether the interval contained only the background interference or a signal as well. The interference, which is random, we shall refer to as noise and denote as N; the other alternative we shall term signal plus noise, SN. In the fundamental problem, only these two alternatives exist…We shall, in the following, use the term observation to refer to the sensory datum on which the decision is based. We assume that this observation may be represented as varying continuously along a single dimension…it may be helpful to think of the observation as…the number of impulses arriving at a given point in the cortex within a given time.” Also “We imagine the process of signal detection to be a choice between Gaussian variables….The particular decision that is made depends on whether or not the observation exceeds a criterion value….This description of the detection process is an almost direct translation of the theory of statistical decision.”
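
      To make the quoted framework concrete, here is a minimal simulation of the equal-variance Gaussian model Swets et al. describe: the “observation” is a draw from one of two overlapping normal distributions (N or SN), and the “decision” is a comparison of that draw with a fixed criterion. The values of d′ and the criterion below are illustrative assumptions.

      ```python
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)

      d_prime = 1.0    # separation of the N and SN means, in noise-SD units
      criterion = 0.5  # respond "signal" when the observation exceeds this

      n_trials = 100_000
      noise = rng.normal(0.0, 1.0, n_trials)       # N: background interference only
      signal = rng.normal(d_prime, 1.0, n_trials)  # SN: signal plus noise

      hit_rate = np.mean(signal > criterion)
      false_alarm_rate = np.mean(noise > criterion)

      # Standard equal-variance recovery of d' from the observed rates
      d_prime_est = norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)
      ```

      Note how much the model takes for granted: a single continuous decision variable, Gaussian noise, and a fixed criterion – exactly the assumptions questioned below.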

      In what sense does the above framework relate to visual perception? I think we can easily show that, in concept and application, it is wholly incoherent and irrational.

      I submit, first, that when I look around me, I don’t see any noise, I just see things. I’m also not conscious of looking for a signal to compare to noise; I just see whatever comes up. I don’t have a criterion for spotting what I don’t know will come up, and I don’t feel uncertain of – I certainly hardly ever have to guess at – what I’m seeing. The very effortlessness of perception is what made it so difficult to discern the fundamental theoretical problems. This is not, of course, to say that what the visual system does in constructing the visual percept from the retinal stimulation isn’t guesswork; but the actual process is light years more complex and subtle than a clumsy and artificial “signal detection” framework.

      Given the psychological certainty of normal perceptual experience, it’s hard to see how to apply this SDT framework. The key seems to be to make conditions of observation so poor as to impede normal perception, making the observer so unsure of what they saw or didn’t see that they must be forced to choose a response, i.e. to guess. One way to degrade viewing conditions is to make the image of interest very low contrast, so that it is barely discernible; another way is to flash it for very brief intervals. Now, in these presentations, the observer presumably sees something; so these manipulations don’t necessarily produce an uncertain perceptual situation (though the brevity of the presentation may make the recollection of that impression mnemonically challenging). Where the uncertainty comes in is in the demand by investigators that observers decide whether the impression is consistent with a quick, degraded glimpse of a particular figure, in this case an RF of a certain type or a circle. I don’t see how one can defend the notion put forth by Swets et al (1961) that this decision, which is more a conscious, cognitive one than a spontaneous perceptual one, is based on a continuously varying criterion. The decision, for example, may be based on a glimpse of one diagnostic feature or another, or on where, by chance, the fovea happens to fall in the 180ms (Schmidtmann and Kingdom, 2017) or 167ms (Wilkinson et al., 1998) interval allowed. But the forced noisiness (due to the poor conditions), the Gaussian presumptions, the continuous variable assumption, and the binary forced choice outputs are needed for the SDT framework to be laid on top of the data.

      For the rest of the comment (here limited by comment size limits), please see PubPeer.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 06, John Greenwood commented:

      (cross-posted from PubPeer; comment numbers refer to that discussion but the content is the same)

      To address your comments in reverse order -

      Spatial vision and spatial maps (Comment 19):

      We use the term “spatial vision” in the sense defined by Russell & Karen De Valois: “We consider spatial vision to encompass both the perception of the distribution of light across space and the perception of the location of visual objects within three-dimensional space. We thus include sections on depth perception, pattern vision, and more traditional topics such as acuity." De Valois, R. L., & De Valois, K. K. (1980). Spatial Vision. Annual Review of Psychology, 31(1), 309-341. doi:10.1146/annurev.ps.31.020180.001521

      The idea of a “spatial map” refers to the representation of the visual field in cortical regions. There is extensive evidence that visual areas are organised retinotopically across the cortical surface, making them “maps”. See e.g. Wandell, B. A., Dumoulin, S. O., & Brewer, A. A. (2007). Visual field maps in human cortex. Neuron, 56(2), 366-383.

      Measurement of lapse rates (Comments 4, 17, 18):

      There really is no issue here. In Experiment 1, we fit a psychometric function in the form of a cumulative Gaussian to responses plotted as a function of (e.g.) target-flanker separation (as in Fig. 1B), with three free parameters: midpoint, slope, and lapse rate. The lapse rate is 100-x, where x is the asymptote of the curve. It accounts for lapses (keypress errors etc.) when performance is otherwise high - i.e. it is independent of the chance level. In this dataset it is never above 5%. However, its inclusion does improve the estimate of the slope (and therefore the threshold), which we are interested in. Any individual differences are therefore better estimated by factoring out individual differences in lapse rate. Its removal does not qualitatively affect the pattern of results in any case. You cite Wichmann and Hill (2001) and that is indeed the basis of this three-parameter fit (though ours is custom code that doesn’t apply the bootstrapping procedures etc. that they use).
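
      For readers unfamiliar with the procedure, a minimal sketch of such a three-parameter fit (in the spirit of Wichmann & Hill, 2001, rather than our actual custom code) might look as follows; the data values here are invented for illustration.

      ```python
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import norm

      def psychometric(x, midpoint, slope, lapse, chance=0.5):
          # Cumulative Gaussian whose upper asymptote is (1 - lapse):
          # lapses (keypress errors etc.) cap performance below 100%,
          # independently of the chance level. The "slope" parameter is
          # the Gaussian SD, so smaller values mean a steeper curve.
          return chance + (1.0 - chance - lapse) * norm.cdf(x, loc=midpoint, scale=slope)

      # Invented example data: proportion correct vs. target-flanker separation
      separation = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
      p_correct = np.array([0.52, 0.61, 0.78, 0.90, 0.94, 0.95])

      # With p0 of length 3, curve_fit estimates midpoint, slope, and lapse;
      # the chance level stays fixed at its default value.
      (midpoint, slope, lapse), _ = curve_fit(
          psychometric, separation, p_correct,
          p0=[1.5, 0.5, 0.02],
          bounds=([0.0, 0.01, 0.0], [5.0, 5.0, 0.1]))
      ```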

      Spatial representations (comment 8):

      We were testing the proposal that crowding and saccadic preparation might depend on some degree of shared processes within the visual system. Specific predictions for shared vs distinct spatial representations are made on p E3574 and in more detail on p E3576 of our manuscript. The idea comes from several prior studies arguing for a link between the two, as we cite, e.g.: Nandy, A. S., & Tjan, B. S. (2012). Saccade-confounded image statistics explain visual crowding. Nature Neuroscience, 15(3), 463-469. Harrison, W. J., Mattingley, J. B., & Remington, R. W. (2013). Eye movement targets are released from visual crowding. The Journal of Neuroscience, 33(7), 2927-2933.

      Bisection (Comments 7, 13, 15):

      Your issue relates to biases in bisection. This is indeed an interesting area, mostly studied for foveal presentation. These biases are however small in relation to the size of thresholds for discrimination, particularly for the thresholds seen in peripheral vision where our measurements were made. An issue with bias for vertical judgements would lead to higher thresholds for vertical vs. horizontal judgements, which we don’t see. The predominant pattern in bisection thresholds (as with the other tasks) is a radial/tangential anisotropy, so vertical thresholds are worse than horizontal on the vertical meridian, but better than horizontal thresholds on the horizontal meridian. The role of biases in that anisotropy is an interesting question, but again these biases tend to be small relative to threshold.

      Vernier acuity (Comment 6):

      We don’t measure vernier acuity, for exactly the reasons you outline (stated on p E3577).

      Data analyses (comment 5):

      The measurement of crowding/interference zones follows conventions established by others, as we cite, e.g.: Pelli, D. G., Palomares, M., & Majaj, N. J. (2004). Crowding is unlike ordinary masking: Distinguishing feature integration from detection. Journal of Vision, 4(12), 1136-1169.

      Our analyses are certainly not post-hoc exercises in data mining. The logic is outlined at the end of the introduction for both studies (p E3574).

      Inclusion of the authors as subjects (Comment 3):

      In what way should this affect the results? This can certainly be an issue for studies where knowledge of the various conditions can bias outcomes. Here this is not true. We did of course check that data from the authors did not differ in any meaningful way from other subjects (aside from individual differences), and it did not. Testing (and training) experienced psychophysical observers takes time, and authors tend to be experienced psychophysical observers.

      The theoretical framework of our experiments (Comments 1 & 2):

      We make an assumption about hierarchical processing within the visual system, as we outline in the introduction. We test predictions that arise from this. We don’t deny that feedback connections exist, but I don’t think their presence would alter the predictions outlined at the end of the introduction. We also make assumptions regarding the potential processing stages/sites underlying the various tasks examined. Of course we can’t be certain about this (and psychophysics is indeed ill-suited to test these assumptions) and that is the reason that no one task is linked to any specific neural locus, e.g. crowding shows neural correlates in visual areas V1-V4, as we state (e.g. p E3574). Considerable parts of the paper are then devoted to considering whether some tasks may be lower- or higher-level than others, and we outline a range of justifications for the arguments made. These are all testable assumptions, and it will be interesting to see how future work then addresses this.

      All of these comments are really fixated on aspects of our theoretical background and minor details of the methods. None of this in any way negates our findings. Namely, there are distinct processes within the visual system, e.g. crowding and saccadic precision, that nonetheless show similarities in their pattern of variations across the visual field. We show several results that suggest these two processes to be dissociable (e.g. that the distribution of saccadic errors is identical for trials where crowded targets were correctly vs incorrectly identified). If they’re clearly dissociable tasks, how then to explain the correlation in their pattern of variation? We propose that these properties are inherited from earlier stages in the visual system. Future work can put this to the test.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Apr 05, Lydia Maniatis commented:

      You say that what you mean by “reduced ability to resolve figure-ground competition…is an open question.” But the language is clear, and regardless of whether concave or convex regions are seen as figure, the image is still being resolved into figure and ground. In other words, your experiments in no sense provide evidence that older people are not resolving images into figure and ground, only that convexity may not be as dispositive a factor as in younger people. Perhaps they are influenced more by the location of the red dot, as I believe that it is more likely that fixated regions will be seen as figure, all other things being equal.

      In your response you specify that ‘failure to resolve’ may be interpreted in the sense of “decreased stability of the dominant percept and increased flipping.” However, in your discussion you note that, on the contrary, other researchers have found increased stability of the initial percept and difficulty in reversing ambiguous stimuli in older adults. If your inhibition explanation is consistent with BOTH increased flipping and greater stability, then it’s clearly too flexible to be testable. And, again, an increased flipping rate is not really the same thing as an “inability to resolve.”

      The second alternative you propose is that stimuli are “not perceived to have figure ground character, perhaps being perceived as flat patterns.” This is obviously also in conflict with the other studies cited above. If the areas are perceived as adjacent rather than as having a figure-ground relationship, this also involves perceptual organization. For normal viewers, such a percept – e.g. simultaneously seeing both faces and vase in the Rubin vase – is very difficult, so it is hard to imagine it occurring in older viewers, but who knows. If such an idea is testable, then you should test it.

      You say the logic of your hypothesis is sound and your interpretations parsimonious, but in fact it isn’t clear what your hypothesis is (what ‘failure to resolve’ means). If your results are replicable, you may have demonstrated that, under the conditions of your experiment, convexity is a less dispositive factor in older adults. But in no sense have you properly formed or tested any explanatory hypotheses as to why this occurred.

      In addition, I don’t think it’s fair to say that you’ve excluded the possible effect of the brevity of the stimulus. 250ms is still pretty short, considering that saccades typically take about 200ms to initiate. We know that older people generally respond more slowly at any task. The fact that practical considerations make it hard to work with longer exposure times doesn’t make this less of a problem.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Apr 04, Jordan W Lass commented:

      The interpretation that there are “differences in the ability to resolve the competition between alternative figure-ground interpretations of those stimuli” comes from the combination of results across experiments, and from the literature on figure-ground and convexity context effects in particular. Given that we used a two-alternative forced choice paradigm, which has been commonly used to measure perception even when stimuli are presented below threshold, chance performance is P(convex=figure) = .5. Our observation was regressions to chance in the older group in both convexity bias and CCEs, which is consistent with the interpretation that the older group showed a reduced ability to resolve figure-ground competition. Interestingly, as you may be getting at, what “reduced ability to resolve figure-ground” means is an open question: Could it be decreased stability of the dominant percept and increased flipping between the alternatives, or time spent in transition states? Could it be that the stimuli are not perceived to have figure-ground character, perhaps being perceived as flat patterns? These are interesting questions indeed. Your idea of adding another response option, “no figure-ground observed,” is one way of addressing them, although it comes with its own set of limitations.

      Alternatively, as you propose, it may be the case that the older adults are resolving equally well as younger adults, but with increased tendency of perceiving concave figures compared to the younger group, which would also bring P(convex=figure) closer to .5. However, I can think of no literature or reasoning as to why that would be the case, so I see that as a less parsimonious interpretation. I am intrigued though, and if you are able to develop a hypothesis as to why this would be the case, it could make for an interesting experiment that might shed light into the nature of figure-ground organization in healthy aging.

      Critically, the results of Experiment 4 showed a strong CCE in older adults when only concave regions were homogeneously coloured, which is a stimulus class that has been shown to be processed more quickly in younger adults (e.g., Salvagio and Peterson, 2012). Since no conCAVity-context effects were observed when only convex regions were homogeneously coloured (the opposite stimulus properties of the reduced competition stimuli), the Experiment 4 results are strongly supportive of the notion that older adults do show the CCE pattern well-characterized in younger adults, but that the high competition stimuli used in Experiment 1 are particularly difficult for them to resolve.

      The logic of our hypothesis is sound, and our interpretation is the most parsimonious we are aware of based on all the results. Thank you for your question; I would be happy to discuss further if you would like clarification or are interested in pursuing some of these follow-ups.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Mar 04, Lydia Maniatis commented:

      What Ratnasingam and Anderson are doing here is analogous to this imaginary example: Let’s say that I have a strong allergy to food x, a milder one to food y, none to food z, and so on, and that my allergies produce various symptoms. Let’s assume also that some of these effects can be interpreted fairly straightforwardly in terms of formal structural relationships between my immune system and the molecular components of the foods, and others not. For these others, we can assume either a functional rationale or perhaps consider them a side effect of structure or function. We don’t know yet. For other individuals, other allergy/food combinations have corresponding effects. Again, if we know something about the individual we can predict some of the allergic reactions based on known principles.

      How much sense would it make now, to conduct a study whose goal is: “to articulate general principles that can predict when the size of an allergic reaction will be large or small for arbitrarily chosen food/patient combinations…. What (single) target food generates the greatest allergic difference when ingested by two arbitrarily chosen patients?” (“Our goal is to articulate general principles that can predict when the size of induction will be large or small for arbitrarily chosen pairs of center-surround displays…. What (single) target color generates the greatest perceptual difference when placed on two arbitrarily chosen surround colors?”)

      Furthermore, having gotten their results, our researchers now decline to attempt to interpret them in terms of the nuanced understanding already available.

      The most striking thing about the present study is that a researcher who has done (unusually) good work in studying the role of structure and chromatic/lightness relationships in the perception of color is now throwing all this insight overboard, ignoring what is known about these factors and lumping them all together, in the hope of arriving at some magic, universal formula for “simultaneous contrast” that is blind to them. Obviously the effort is bound to fail, and the title – framed as a question, not an answer – is evidence of this. Here is a sample, revealing caveat:

      “Finally, it should also be noted that although some of our comparisons involved target–surround combinations in which some targets can appear as both an increment and decrement relative to the two surrounds, which would induce differences in both hue and saturation (e.g., red and green). Such pairs may be rated as more dissimilar than two targets of the same hue (e.g., red and redder), but it could be argued that this does not imply that the size of simultaneous contrast is larger in these conditions. However, it should be noted that such conditions are only a small subset of those tested herein.” Don’t bother us with specifics, we’re lumping.

      As the authors discuss in their introduction, studies (treating “simultaneous contrast” in a crude, structure-and-relationship-blind way) produce conflicting results: “The conflicting empirical findings make it difficult to articulate a general model that predicts when simultaneous contrast effects will be large or small, since there is currently no model that captures how the magnitude of induction varies independently of method used…. “ Of course. When you don’t take into account relevant principles, and control for relevant factors, your results will always mystify you.

      The conflation between, or refusal to distinguish explicitly, cases in which transparency arises and in which it does not arise is really inexplicable.

      "The suggestion that the strongest forms of simultaneous contrast arise in conditions that induce the perception of transparency gains conceptual support from evidence showing that transparency can generate dramatic transformations in both perceived lightness and color..." But the contextual conditions that produce transparency are really quite...transparent...There's no clear reason to lump these with situations that are perceptually and logically distinct.

      Also: "In simultaneous contrast displays, the targets and surrounds are also texturally continuous, in the sense that they are both uniform, but there are no strong geometric cues for the continuation of the surround through the target region of the kind known to give rise to vivid percepts of transparency (such as contours or textures). It is therefore difficult to generate a prediction for when transparency should be induced in homogeneous center-surround patterns, or how the induction of transparency should modulate the chromatic appearance of a target as a function of the chromatic difference between a target and its surround."

      First, I’ll pay him the compliment of saying that I don’t think it would be that difficult for Anderson to generate predictions for when transparency should occur... (I think even I could do it). Second, if this theoretical gap really exists, then this is the problem that should be addressed, not “what happens if we test a lot of random combinations and average the results.” It might be useful to take into consideration a demo devised by Soranzo, Galmonte & Agostini (2010), which is a case of a transparency effect that lacks the “cues” mentioned here – and thus by these authors’ criteria qualifies as a basic simultaneous contrast display. (I don’t think it’s that difficult to explain, but maybe I haven’t thought about it enough.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 08, Cicely Saunders Institute Journal Club commented:

      We selected and discussed this paper at our monthly journal club on 1st November 2017.

      The paper generated a lot of discussion and we felt that this was an important concept, especially for clinicians, to think about. The topic of QALYs was unfamiliar to some of us and we found that the authors explained it very clearly in the paper. We were intrigued by the use of an integrative review method and discussed this at length. It would have been helpful to have more explanation of this method and how it differs from other types of review. We also wondered about some of the inclusion/exclusion criteria, such as the exclusion of reviews, and about the decision-making process for the theoretical papers included. We enjoyed discussing the themes which emerged from this paper and the wider debate around the most appropriate measures for palliative care populations, particularly in light of the recent paper by Dzingina et al. 2017 (https://www.ncbi.nlm.nih.gov/pubmed/28434392). We feel this paper will be a useful educational resource.

      Commentary by Dr. Nilay Hepgul & Dr. Deokhee Yi on behalf of researchers at Cicely Saunders Institute of Palliative Care, Policy & Rehabilitation, King’s College London.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 15, Lydia Maniatis commented:

      I don’t think the authors have addressed the unusual aspects of the dress in saying that “The perceived colors of the dress are due to (implicit) assumptions about the illumination.” As they note themselves, “This is exactly what would be predicted from classical color science…”

      The spectrum of light reflected to our eye is a function of the reflectance properties of surfaces and the spectrum of the illumination; both are disambiguated on the basis of implicit assumptions, and both are represented in the percept. The two perceptual features (surface color and illumination) are two sides of the same coin: Just as we can say that seeing a surface as having color x of intensity y is due to assumptions about the color and intensity of the illuminants, so we can say that seeing illumination of color x and intensity y is due to implicit assumptions about the reflectance (how much light they reflect) and the chromaticity (which wavelengths they reflect/absorb) of the viewed surfaces. We haven’t explained anything unless we can explain both things at the same time.
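
      The two-sided inference described above can be stated compactly (a standard formalisation, not taken from the paper under discussion): the light L(λ) reaching the eye from a surface point is, at each wavelength λ, the product of the surface reflectance R(λ) and the illumination I(λ),

      ```latex
      L(\lambda) = R(\lambda)\, I(\lambda)
      ```

      so infinitely many (R, I) pairs are consistent with the same L. Any assumption that fixes one factor simultaneously fixes the other, which is why perceived surface colour and perceived illumination are two sides of the same coin.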

      The authors are choosing one side of the perceptual coin – the apparent illumination – and claiming to have explained the other. Again, it’s a truism to say that seeing a patch of the dress as color “x” implies we are seeing it as being under illumination “y,” while perceiving the patch as a different color means perceiving a different illumination. This doesn’t explain what makes the dress unusual - why it produces different color/illumination impressions in different people.

      The authors seem to want to take the “experience” route (“prior experiences may influence this perception”); this is logically and empirically untenable, as has been shown and argued innumerable times in the vision literature. For one thing, such a view is circular, since what we see in the first place is a product of the assumptions implicit in the visual process. It’s not as though we see things first, and then adopt assumptions that allow us to see them… In addition, why would such putative experience influence only the dress, and not each and every percept? (The same objection applies to explanations in terms of physiological differences.) Again, the question of what makes the dress special is left unaddressed.

      It’s odd that, for another example of such a phenomenon, vision researchers need to turn to “poppunkblogger.” If they understood it in principle, then they would be able to construct any number of alternative versions. Even if they could show the perception of the dress to be experience-based (which, again, is highly unlikely to impossible), this would not help; they would still be at a loss to explain why different people see different versions of one image and not most others. To understand the special power of the dress, they need at a minimum to analyze its structure, not only in terms of color but in terms of shape, which is the primary mediator of all aspects of perception. “Scene interpretation” and “the particular color distributions” are only placeholders for all the things the authors don’t understand.

      The construction of images that show that the dress itself can produce consistent percepts is genuinely interesting, but it is a problem that the immediate backgrounds are not the same (e.g. arm placements). This produces confounds. The claim that these confounds are designed to produce the opposite effect of what is seen, based on contrast effects, is not convincing, since the idea that illusions involving transparency/illumination are based on local contrast effects is a claim that is easy to falsify empirically, and has been falsified. So we are dealing with unanalyzed confounds, and one has to wonder how much blind trial and error was involved in generating the images.

      Finally, I’m wondering why a cutout of the dress wasn’t also placed against a plain background as a control; what happens in this case? Has this been done yet?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Nov 03, Elisabeth Schramm commented:

      In reply to a comment by Falk Leichsenring

      Allegiance effects controlled

      Elisabeth Schramm, PhD; Levente Kriston, PhD; Ingo Zobel, PhD; Josef Bailer, PhD; Katrin Wambach, PhD; Matthias Backenstrass, PhD; Jan Philipp Klein, MD; Dieter Schoepf, MD; Knut Schnell, MD; Antje Gumz, MD; Paul Bausch, MSc; Thomas Fangmeier, PhD; Ramona Meister, MSc; Mathias Berger, MD; Martin Hautzinger, PhD; Martin Härter,MD, PhD

      Corresponding Author: Elisabeth Schramm, PhD, Department of Psychiatry, Faculty of Medicine, University of Freiburg, Hauptstrasse 5, 79104 Freiburg, Germany (elisabeth.schramm@uniklinik-freiburg.de)

      We acknowledge the comment of Drs. Steinert and Leichsenring (1) on our study (2) reasoning that our findings may at least in part be attributed to allegiance effects. Unfortunately, they provide neither a clarification of what exactly they refer to with the term “allegiance effects” nor a specific description of the presumed mechanisms (chain of effects) through which they think allegiance may have influenced our results. In fact, as specified both in the trial protocol (3) and the study report (2), we took a series of carefully implemented measures to minimize bias. Contrary to what is stated in the comment, training and supervision of the study therapists and the center supervisors were performed by qualified and renowned experts for both investigated approaches (Martin Hautzinger for Supportive Psychotherapy and Elisabeth Schramm for the Cognitive Behavioral Analysis System of Psychotherapy). Moreover, none of them has been involved in treating any study patients in this trial. We are confident that any possible allegiance of the participating researchers, therapists, supervisors, or other involved staff towards any, both, or none of the investigated interventions is very unlikely to have been able to surmount all of the implemented measures against bias and to affect the results substantially.

      References

      (1) Steinert C, Leichsenring F. The need to control for allegiance effects in psychotherapy research. PubMed Commons. Sep 08 2017

      (2) Schramm E, Kriston L, Zobel I, Bailer J, Wambach K, Backenstrass M, Klein JP, Schoepf D, Schnell K, Gumz A, Bausch P, Fangmeier T, Meister R, Berger M, Hautzinger M, Härter M. Effect of Disorder-Specific vs Nonspecific Psychotherapy for Chronic Depression: A Randomized Clinical Trial. JAMA Psychiatry. Mar 01 2017; 74(3): 233-242

      (3) Schramm E, Hautzinger M, Zobel I, Kriston L, Berger M, Härter M. Comparative efficacy of the Cognitive Behavioral Analysis System of Psychotherapy versus supportive psychotherapy for early onset chronic depression: design and rationale of a multisite randomized controlled trial. BMC Psychiatry. 2011;11:134


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 08, Lydia Maniatis commented:

      To make clear what Adamian and Cavanagh (2017) do, and what they don’t do, in this publication: What they don’t do is to test a hypothesis. What they do is present a casual, ad hoc explanation of the Frohlich effect based on the results of past experiments, which they replicate here. The proposal remains untested. Even the ad hoc, untested assumptions (“we assume that the critical delay in producing the Fröhlich effect is not just the delay of attention in arriving at the target but also the time a saccade would then need to land on the target, if one were executed;”) can’t explain the results of their experiments, requiring more ad hoc proposals about complex processes: “The results suggest that the simultaneous onsets may be held in iconic memory and the cued motion trajectory can be retrieved if the cue arrives soon enough;” “A late SOA implies a longer memory retention period, and that means that the reported shifts could arise from working memory limitations and might not be perceptual in nature.”

      Is Adamian and Cavanagh’s assumption that “the critical delay is not just the delay of attention….but also the time a saccade would then need to land on the target…” testable?

      How would one go about testing it, as well as the additional assumptions the authors feel obliged to make with respect to memory?

      Why didn’t the authors attempt to test their proposal to begin with, rather than simply performing replications that, even if successful, could do no more than leave the issue unresolved? They have not even proposed possible tests.

      Obviously, replication was the safer choice, but one, again, that is essentially uninformative vis a vis an ad hoc proposal. It should be clear that the subject of eye movements and their role in perception is extremely complex and that casual speculations are unlikely to be borne out, if properly tested.

      I think Adamian and Cavanagh’s proposal is so vague, the confounds so many, and (least of all, at present) the technical demands so great, that it cannot be tested. If all of the main and subsidiary assumptions, and their implications, were clarified enough to allow them to be critically assessed for logical coherence and consistency with other known facts, it might well fail at this stage, obviating the need for experimental tests.

      Of course, I could be wrong in the present case; the authors may intend, post-replication, to attempt to concretize and subject their proposal to a genuine test; that would be genuinely refreshing.

      I would note, as an afterthought, the uninformative nature of the title of the article, which is typical of many vision science articles and reflects the essentially uninformative nature of the work itself. The title tells us what the article is about, but not what it concluded or implied.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Aug 05, Jon-Patrick Allem commented:

      There are at least five problems with this paper: First, the authors simply assume that the pro-e-cigarette tweets are wrong and need their corrective input. What if users are right to be positive? The authors have not demonstrated any material risk from vapour aerosol. To the extent that there is evidence of exposure, the levels are so low as to be very unlikely to be a health concern. The presence of a hazardous agent does not in itself imply a risk to health; there has to be sufficient exposure to be toxicologically relevant.

      This critique is misguided. The goal of this paper was to characterize public perception of e-cigarette aerosol by using a novel data source (tweets) and not to demonstrate any material risk from e-cigarette aerosol.

      Second, they have also not considered what harmful effect their potentially misleading 'health education messages' may have. For example, by exaggerating a negligible risk they may be discouraging people from e-cigarette use, and potentially causing relapse to smoking and reducing the incentive to switch - thus doing more harm than had they not intervened. We already know the vast majority of smokers think e-cigarettes are much more dangerous than the toxicological profile of the aerosol suggests - see the National Cancer Institute HINTS data. The authors' ideas would aggravate these already highly damaging misperceptions of risk.

      This critique is misguided. This study did not design educational messages. It described people’s perceptions about e-cigarette aerosol.

      Third, as so often happens with tobacco control research, the authors make a policy proposal for which their paper comes nowhere close to providing an adequate justification: "Public health and regulatory agencies could use social media and traditional media to disseminate the message that e-cigarette aerosol contains potentially harmful chemicals and could be perceived as offensive." They have not even studied the effects of the messages they are recommending on the target audience or tested such messages through social media. If they did, they would discover that users are not passive or compliant recipients of health messages, especially if they suspect they are wrong or ill-intentioned. Social media creates two-way conversations in which often very well-informed users will respond persuasively to what they find to be poorly informed or judgemental health messages. Until the authors have tested a campaign of the type they have in mind, they have no basis for recommending that agencies spend public money in this way.

      This critique is misguided. No policy proposal was made in the passage highlighted here. The suggestion that social media platforms can be used as a communication channel is not a policy; it is a communication strategy. The idea that social media can be used to obtain information and later communicate messages is completely in line with the work presented in this paper. The expectation that every paper will answer every research question pertaining to a topic is unreasonable.

      Fourth, the authors suggest that users should be warned by public health agencies that "e-cigarette aerosol ... could be perceived as offensive". If there were warnings from public health and regulatory agencies about everything that could be perceived as offensive by someone, then we would be inundated with warnings. This is not a reliable basis or priority for public health messaging. Given the absence of any demonstrable material risk from e-cigarette aerosol, the issue is one of etiquette and nuisance. This does not require government intervention of any sort. Vaping policy in any public or private place should be a matter for the owners or managers, who may not find it offensive nor wish to offend their clientele. It is not a matter for legislators, regulators or health agencies.

      This critique is based on one’s own opinion about the role of government and could be debated with no clear stopping point.

      Fifth (and with thanks to Will Moy's tweet), the work is pointless and wasteful. Who cares what people are saying on twitter about e-cigarettes and secondhand aerosol exposure? Why is this even a subject worthy of study, and what difference could it make to any outcomes that are important for health or any other policy? What is the rationale for spending research funds on this form of vaguely creepy social media surveillance?

      Big social media data (Twitter, Instagram, Google Web Search) can be used to fill certain knowledge gaps quickly. While one study using one data source is by no means definitive, one study based on timely data can provide an important starting point for addressing an issue of great import to public health. This paper describes why understanding public sentiment toward e-cigarette aerosol is relevant, and utilizes a data source that allowed people to report organically on their sentiment toward e-cigarette aerosol, unprimed by a researcher, without instrument bias, and at low cost. Also, policy development and communication campaigns are two distinct areas of research. The goal of this study was to inform the latter.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2017 Jan 21, Clive Bates commented:

      There are at least five problems with this paper:

      First, the authors simply assume that the pro-e-cigarette tweets are wrong and need their corrective input. What if users are right to be positive? The authors have not demonstrated any material risk from vapour aerosol. To the extent that there is evidence of exposure, the levels are so low as to be very unlikely to be a health concern. The presence of a hazardous agent does not in itself imply a risk to health; there has to be sufficient exposure to be toxicologically relevant.

      Second, they have also not considered what harmful effect their potentially misleading 'health education messages' may have. For example, by exaggerating a negligible risk they may be discouraging people from e-cigarette use, and potentially causing relapse to smoking and reducing the incentive to switch - thus doing more harm than had they not intervened. We already know the vast majority of smokers think e-cigarettes are much more dangerous than the toxicological profile of the aerosol suggests - see the National Cancer Institute HINTS data. The authors' ideas would aggravate these already highly damaging misperceptions of risk.

      Third, as so often happens with tobacco control research, the authors make a policy proposal for which their paper comes nowhere close to providing an adequate justification.

      Public health and regulatory agencies could use social media and traditional media to disseminate the message that e-cigarette aerosol contains potentially harmful chemicals and could be perceived as offensive.

      They have not even studied the effects of the messages they are recommending on the target audience or tested such messages through social media. If they did, they would discover that users are not passive or compliant recipients of health messages, especially if they suspect they are wrong or ill-intentioned. Social media creates two-way conversations in which often very well-informed users will respond persuasively to what they find to be poorly informed or judgemental health messages. Until the authors have tested a campaign of the type they have in mind, they have no basis for recommending that agencies spend public money in this way.

      Fourth, the authors suggest that users should be warned by public health agencies that "e-cigarette aerosol ... could be perceived as offensive". If there were warnings from public health and regulatory agencies about everything that could be perceived as offensive by someone, then we would be inundated with warnings. This is not a reliable basis or priority for public health messaging. Given the absence of any demonstrable material risk from e-cigarette aerosol, the issue is one of etiquette and nuisance. This does not require government intervention of any sort. Vaping policy in any public or private place should be a matter for the owners or managers, who may not find it offensive nor wish to offend their clientele. It is not a matter for legislators, regulators or health agencies.

      Fifth (and with thanks to Will Moy's tweet), the work is pointless and wasteful. Who cares what people are saying on twitter about e-cigarettes and secondhand aerosol exposure? Why is this even a subject worthy of study and what difference could it make to any outcomes that are important for health or any other policy? What is the rationale for spending research funds on this form of vaguely creepy social media surveillance?

      Updated 21-Jan-17 with fifth point.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jan 21, Clive Bates commented:

      How did the author manage to publish a paper with the title "E-cigarettes: Are they as safe as the public thinks?", without citing any data on what the public actually does think? There is data in the National Cancer Institute's HINTS survey 2015. This is what it says:

      Compared to smoking cigarettes, would you say that electronic cigarettes are…

      • 5.3% say much less harmful
      • 20.6% say less harmful
      • 32.8% say just as harmful
      • 2.7% say more harmful
      • 2.0% say much more harmful
      • 1.2% have never heard of e-cigarettes
      • 33.9% don’t know enough about these products

      Which brings me to the main issue with the paper. The author claims that there is insufficient knowledge to determine if these products are safer than cigarettes. This is an extraordinary and dangerous claim given what is known about e-cigarettes and cigarettes. It is known with certainty that there are no products of combustion of organic material (i.e. tobacco leaf) in e-cigarette vapour - this is a function of the physical and chemical processes involved. We also know that products of combustion cause almost all of the harm associated with smoking. There is also extensive measurement of harmful and potentially harmful constituents of cigarette smoke and e-cigarette aerosol, showing that many are not detectable, or are present at levels two orders of magnitude lower, in the vapour aerosol (e.g. see Farsalinos KE, 2014, Burstyn I, 2014). So the emissions are dramatically less toxic and exposures much lower.

      The author provides a familiar non-sequitur: "There are no current studies that prove that e-cigarettes are safe". There never will be. Firstly because it is impossible to prove something to be completely safe, and almost nothing is. Secondly, no serious commentators claim they are completely safe, just very much safer than smoking. Hence the term 'harm reduction' to describe the benefits of switching to these products.

      This view commands support in the expert medical profession. The Royal College of Physicians (London) assessed the toxicology evidence in its 2016 report Nicotine without smoke: tobacco harm reduction and concluded:

      Although it is not possible to precisely quantify the long-term health risks associated with e-cigarettes, the available data suggest that they are unlikely to exceed 5% of those associated with smoked tobacco products, and may well be substantially lower than this figure. (Section 5.5 page 87)

      This is a carefully measured statement that aims to provide useful information to both users of the products and health and medical professionals while reflecting residual uncertainty. It contrasts with the author's information leaflet for patients, which even suggests there is no basis for believing e-cigarettes to be safer than smoking:

      If you are smoking and not planning to quit, we don't know if e-cigarettes are safer. Talk to your health care provider.

      But we do know beyond any reasonable doubt that e-cigarettes are very much safer - the debate is whether they are 90% safer or 99.9% safer than smoking. Regrettably, only 5.3% of American adults correctly believe that e-cigarettes are very much less harmful than smoking, while 37% incorrectly think they are as harmful or more harmful (see above). The danger with these misperceptions of risk is that they affect behaviour, causing people to continue to smoke when they might otherwise switch to much safer vaping. The danger with a paper like this and its patient-facing leaflet is that it nurtures these harmful risk misperceptions and becomes, therefore, a vector for harm.

      To return to the author's title question, "E-Cigarettes: Are They as Safe as the Public Thinks?" The answer is: "No, they are very much safer than the public thinks".


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 04, DAVID LUDWIG commented:

      Boiling down his comment of 3 Feb 2017, Hall disputes that the metabolic process of adapting to a high-fat/low-carbohydrate diet confounds interpretation of his and other short-term feeding studies. If we can provide evidence that this process could take ≥ 1 week, the last leg of his attack on the Carbohydrate-Insulin Model collapses. Well, a picture is worth a thousand words, and here are 4:

      For convenience, these figures can be viewed at this link:

      Owen OE, 1983 Figure 1. Ketones are, of course, the hallmark of adaptation to a low-carbohydrate ketogenic diet. Generally speaking, the most potent stimulus of ketosis is fasting, since the consumption of all gluconeogenic precursors (carbohydrate and protein) is zero. As this figure shows, the blood levels of each of the three ketone species (BOHB, AcAc and acetone) continue to rise for ≥3 weeks. Indeed, the prolonged nature of adaptation to complete fasting has been known since the classic starvation studies of Cahill GF Jr, 1971. It stands to reason that this process might take even longer on standard low-carbohydrate diets, which inevitably provide ≥ 20 g carbohydrate/d and substantial protein.

      Yang MU, 1976 Figure 3A. Among men with obesity on an 800 kcal/d ketogenic diet (10 g/d carbohydrate, 50 g/d protein), urinary ketones continued to rise for 10 days through the end of the experiment, and by that point had achieved levels equivalent only to those on day 4 of complete fasting. Presumably, this process would be even slower with a non-calorie restricted ketogenic diet (because of inevitably higher carbohydrate and protein content).

      Vazquez JA, 1992 Figure 5B. On a conventional high-carbohydrate diet, the brain is critically dependent on glucose. With acute restriction of dietary carbohydrate (by fasting or a ketogenic diet), the body obtains gluconeogenic precursors by breaking down muscle. However, with rising ketone concentrations, the brain becomes adapted, sparing glucose. In this way, the body shifts away from protein to fat metabolism, sparing lean tissue. This phenomenon is clearly depicted among women with obesity given a calorie-restricted ketogenic diet (10 g carbohydrate/d) vs a nonketogenic diet (76 g carbohydrate/d), both with 50 g protein/d. For 3 weeks, nitrogen balance was strongly negative on the ketogenic diet compared to the non-ketogenic diet, but this difference was completely abolished by week 4. What would subsequently happen? We simply can’t know from the short-term studies.

      Hall KD, 2016 Figure 2B. Hall’s own study shows a transient decrease in the rate of fat loss upon initiation of the ketogenic diet, with fat loss accelerating again after 2 weeks.

      The existence of this prolonged adaptive process explains why metabolic advantages for low-fat diets are consistently seen in very short metabolic studies. But after 2 to 4 weeks, advantages for low-carbohydrate diets begin to emerge, as summarized in my comment of 3 Feb 2017, below.

      Fat adaptation on low-carbohydrate diets has admittedly not been thoroughly studied, and its duration may differ among individuals and between experimental conditions. Nevertheless, there is strong reason to think that short feeding studies (i.e., < 3 to 4 weeks) have no relevance to the long-term effects of macronutrients on metabolism and body composition.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Dec 28, Lydia Maniatis commented:

      Cherniawsky and Mullen’s (2016) article lies well within the perimeter of a school of thought that, despite its obvious intellectual and empirical absurdity, is popular within the vision science community.

      The school persists, and is relentlessly prolific, because it has insulated itself from the possibility of falsification, mainly by ignoring both fact and reason.

      Explanatory schemes are concocted with respect to a narrow set of stimuli and conditions. Data generated under this narrow set of conditions are always interpreted in terms of the narrow scheme of assumptions, via permissive post hoc modeling. When, as here, results contradict expectation, additional ad hoc assumptions are made with reference to the specific, narrow type of stimuli used, which then, of course, may subsequently be corroborated, more or less, using those same stimuli or mild variants thereof.

      The process continues ad infinitum via the same ad hoc route. This is the reason that, as Kingdom (2011) has noted, the study of lightness, brightness and transparency (and I would add, vision science in general) is divided into camps “each with its own preferred stimuli and methodology” and characterized by “ideological divides.” The term “ideological” is highly appropriate here, as it indicates a refusal to face facts and arguments that contradict or challenge the preferred view. It is obviously antithetical to the scientific attitude and, unfortunately, very typical of virtually all of contemporary vision science.

      The title of this paper, “The whole is other than the sum...”, indicates that a prediction of “summation” failed even under the gentle treatment it received. The authors don’t quite know what to make of their results, but a conclusion of “other” is enough by today’s standards.

      The ideological camp to which this article belongs is a scandal on many counts. First, it adopts the view that there are certain figures whose retinal projections trigger visual processes such that the ultimate percept directly reflects local “low-level” processes. More specifically, it reflects “low-level” processes as they are currently (and crudely) understood. The figures supposed to have this quality are those for which the appropriate “low-level” story du jour has been concocted.

      The success of the method is well-described by Graham (1997, discussed in PubPeer), who notes that countless experiments were "consistent" with the behavior of V1 neurons at a time when V1 had only begun to be explored and when researchers were unaware not only of the complexities of V1 but also of the many hierarchically higher-level processes that intervene between retina and percept. This amazing success is rationalized (if we may use the term loosely) by Graham, who with magical thinking reckons that under certain conditions the brain becomes “transparent” down to the initial processing levels. Teller (1984) had earlier (to no apparent effect) described such a view as “the nothing mucks it up proviso,” and pointed out the obvious logical problems.

      Cherniawsky and Mullen premise their article on this view with their opening sentence: “Two-dimensional orthogonal gratings (plaids) are a useful tool in the study of complex form perception, as early spatial vision is well described by responses to simple one-dimensional sinusoidal gratings…” In fact, the “one-dimensional sinusoidal gratings” in question typically produce 3D percepts of light and shadow, and the authors’ plaids in Figure 1 appear curved and partially obscured by a foggy overlay. So as illogical as the transparent brain hypothesis is to begin with, the stimuli supposed to tap into lower level processes aren’t even consistent with a strictly “low-level” interpretive process.

      The uninitiated might wonder why the authors use the term “spatial vision.” It is because they have uncritically adopted the partner of the transparent brain hypothesis, the view that the early visual processes perform a Fourier analysis on the retinal projection. It is not clear that this is at all realistic at the physiological level, but there is also no apparent functional reason for such a challenging process, as it would in no way further the achievement of the goal of organizing the incoming light into figures and grounds as the basis for further interpretation leading to a (usually) veridical representation of the environment. The Fourier conceit is, of course, maintained by employing sinusoidal gratings while ignoring their actual perceptual effects. That is, the sinusoidal gratings and combinations thereof are said to tap into the low-level frequency channels, which then determine contrast via summation, inhibition, etc. (whatever post hoc interpretation the data of any particular experiment seem to require). These contrast impressions, though experienced in the context of, e.g., impressions of partially-shadowed tubes, are never considered with respect to these complex 3D percepts. Lacking necessary interpretive assumptions, investigators are reduced to describing their results in terms of “other,” precisely described, but theoretically unintelligible and tangled effects.

      The idea that “summation” of local neural activities can explain perception is contradicted by a million cases, and counting, including the much-loved sinusoidal gratings and their shape-from-shading effects. But ideology is stronger and, apparently, good enough for vision science today.

      Finally, the notion of “detectors” is a staple of this school and of the authors’ discussion; for why this concept is untenable, please see Teller (1984).

      p.s. As usual, I’ll ask why it’s OK for an author to be one of a small number of subjects, the rest of whom are described as “naïve.” If it’s important to be naïve, then…

      Also, why use forced choices, and thus inject more uncertainty than necessary into the results? It’s theoretically possible that observers never see what you think they’re seeing…Obviously, if you’re committed to interpreting results a certain way, it’s convenient to force the data to look a certain way…

      Also, no explanation is given for methodological choices, e.g. the (very brief) presentation times.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 22, Lydia Maniatis commented:

      Part 1 This paper is all too similar to a large proportion of the vision literature, in which fussy computations thinly veil a hollow theoretical core, comprised of indefensible hypotheses asserted as fact (and thus implicitly requiring no justification), sometimes supported by citations that only weakly support them, if at all. The casual yet effective (from a publication point of view) fashion in which many authors assert popular (even if long debunked) fallacies and conjure up other pretexts for what are, in fact, mere measurements without actual or potential theoretical value is well on display here.

      What is surprising in, perhaps, every case, is the willful empirical agnosia and lack of common sense, on every level – general purpose, method, data analysis – necessary to enable such studies to be conducted and published. A superficial computational complexity adds insult to injury, as many readers may wrongly feel they are not competent to understand and evaluate the validity of a study whose terms and procedures are so layered, opaque and jargony. However, the math is a distraction.

      Unjustified and/or empirically false assumptions and procedures occur, as mentioned, at every level. I discuss some of the more serious ones below (this is the first of a series of comments on this paper).

      1. Misleading, theoretically and practically untenable, definitions of “3D tilt” (and other variables).

      The terms slant and tilt naturally refer to a geometrical characteristic of a physical plane or volume (relative to a reference plane). The first sentence of Burge et al’s abstract gives the impression that we are talking about tilt of surfaces: “Estimating 3D surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues …should be combined to estimate 3D tilt in natural scenes.” As it turns out, the authors perform a semantic but theoretically pregnant sleight of hand in the switch from the phrase “3D surface orientation (slant and tilt)” to the phrase “3D tilt” (which is also used in the title).

      The obvious inference from the context is that the latter is a mere short-hand for the former. But it is not. In fact, as the authors finally reveal on p. 3 of their introduction, their procedure for estimating what they call “3D tilt” does not allow them to correlate their results to tilt of surfaces: “Our analysis does not distinguish between the tilt of surfaces belonging to individual objects and the tilt (i.e. orientation [which earlier was equated with “slant and tilt”]) of depth discontinuities…We therefore emphasize that our analysis is best thought of as 3D tilt rather than 3D surface tilt estimation.”

      “3D tilt” is, in effect, a conceptually incoherent term made up to coincide with the (unrationalised) procedure used to arrive at certain measures given this label. I find the description of the procedure opaque, but as I am able to understand it, small patches of images are selected, and processed to produce “3D tilt” values based on range values collected by a range finder within that region of space. The readings within the region can be from one, two, three, four, or any number of different surfaces or objects; the method does not discriminate among these cases. In other words, these local “3D tilt values” have no necessary relationship to tilt of surfaces (let alone tilt of objects, which is more relevant (to be discussed) and which the authors don’t address even nominally). We are talking about a paradoxically abstract, disembodied definition of “3D tilt.” As a reader, being asked to “think” of the measurements as representing “3D tilt” rather than “3D surface tilt” doesn’t help me understand how this term relates, in any useful or principled way, either to the actual physical structure of the world or to the visual process that represents this world. The idea that measuring this kind of “tilt” could be useful to forming a representation of the physical environment, and that the visual system might have evolved a way to estimate these intrinsically random and incidental values, is an idea that seems invalid on its face - and the authors make no case for it.

      They then proceed to measure 3 other home-cooked variables, in order to search for possible correlations between these and “3D tilt.” These variables are also chosen arbitrarily, i.e. in the absence of a theoretical rationale, based on: “simplicity, historical precedence, and plausibility given known processing in the early visual system.” (p. 2). Simplicity is not, by itself, a rationale – it has to have a rational basis. At first glance, at least the third of these reasons would seem to constitute a shadow of a theoretical rationale, but it is based on sparse, premature and over-interpreted physiological data primarily of V1 neuron activity. Furthermore, the authors’ definitions of their three putative cues: disparity gradient, luminance gradient, texture gradient, are very particular, assumption-laden, paradoxical, and unrationalised.

      For example, the measure of “texture orientation” involves the assumption that textures are generally composed of “isotropic [i.e. circular] elements” (p. 8). This assumption is unwarranted to begin with. Given, furthermore, that the authors’ measures at no point involve parsing the “locations” measured into figures and grounds, it is difficult to understand what they can mean by the term “texture element.” Like tilt, reference to an “isotropic texture element” implies a bounded, discrete area of space with certain geometric characteristics and relationships. It makes no sense to apply it to an arbitrary set of pixel luminances.

      Also, as in the case of “3D tilt” the definition of “texture gradient” is both arbitrary and superficially complex: “we define [the dominant orientation of the image texture] in the Fourier domain. First, we subtract the mean luminance and multiply by (window with) the Gaussian kernel above centered on (x, y). We then take the Fourier transform of the windowed image and compute the amplitude spectrum. Finally, we use singular value decomposition ….” One, two, three… but WHY did you make these choices? Simplicity, historical precedence, Hubel and Wiesel…?

      If, serendipitously, the authors’ choices of things to measure and compare had led to high correlations, they might have been justified in sharing them. But as it turns out, not surprisingly, the correlations between “cues” and “tilt” are “typically not very accurate.” Certain (unpredicted) particularities of the data to which the authors speculatively attribute theoretical value (incidentally undermining one of their major premises) will be discussed later.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 27, James Yeh commented:

      Editor's Comment: Obesity and Management of Weight Loss — Polling Results

      James Yeh, M.D., M.P.H., and Edward W. Campion, M.D.

      Obesity is increasingly prevalent worldwide, and about 40% of Americans meet the diagnostic criteria for obesity.[1] The goal of weight loss is to reduce the mortality and morbidity risks associated with obesity. Patients with a body-mass index (BMI) in the range that defines obesity (>30) have a risk of death that is more than twice that of persons with a normal BMI.[2] Obesity is also associated with increased risks of cardiovascular disease, diabetes, and several cancers. A recent study suggests that being overweight or obese during adolescence is strongly associated with increased cardiovascular mortality in adulthood.[3] Studies suggest that even a 5% weight loss may reduce the complications associated with obesity.[4]

      In September 2016, we presented the case of Ms. Chatham, a 29-year-old woman with class I obesity (BMI, 32) who leads a fairly sedentary lifestyle, with frequent reliance on takeout foods and with infrequent physical activity.[5] Readers were invited to vote on whether to recommend initiating treatment with one of the FDA-approved drugs for weight loss along with lifestyle modifications or to recommend only nonpharmacologic therapies and maximizing lifestyle changes. The patient has no coexisting medical conditions, but her blood pressure is slightly elevated (144/81 mm Hg). In the past, Ms. Chatham has tried to lose weight using various diets, each time losing 10 to 15 lb (4.5 to 6.8 kg), but she has never been able to successfully maintain weight loss.

      Over 85,000 readers viewed the Clinical Decisions vignette during the polling period, and 905 readers from 91 countries voted in the informal poll. The largest group of respondents (366) was from the United States or Canada, representing nearly 40% of the votes. A large majority of the readers (80%) voted against prescribing one of the FDA-approved medications for weight loss and instead recommended maximizing lifestyle modification and nonpharmacologic therapies first.

      A substantial proportion of the 64 Journal readers who submitted comments expressed concern about the absence of efficacy data on long-term follow-up and about the side effects associated with current FDA-approved medications for weight loss. Some suggested that simply treating obesity with a prescription medication is shortsighted and that it is important to uncover patients’ motivations for existing lifestyle choices and for weight loss. The commenters emphasized the need for a multifaceted approach to obesity management that includes nutritional and psychological support, as well as stress management, with the goal of long-lasting improvement in exercise and eating habits that will lead to weight reduction and maintenance of a healthier weight.

      Some commenters, noting the difficulty of lifestyle changes, felt that pharmacotherapy can be a complementary and reasonable part of a multidisciplinary treatment plan. Some wrote that obesity should be managed as a chronic disease is managed and that an inability to lose weight should not be seen as a disciplinary issue, especially given the importance of genetic and physiological factors. These commenters argued that the use of pharmacotherapy as part of the treatment plan to achieve weight loss should not be stigmatized.

      Overall, the results of this informal Clinical Decisions poll indicate that a majority of the respondents think physicians should not initially recommend the use of an FDA-approved drug as part of a weight-loss strategy, at least not for a patient such as Ms. Chatham, and that many respondents were troubled by the current uncertainties about the long-term efficacy and safety of weight-loss drugs.

      REFERENCES
      1. Flegal KM, Kruszon-Moran D, Carroll MD, Fryar CD, Ogden CL. Trends in obesity among adults in the United States, 2005 to 2014. JAMA 2016;315:2284-91.
      2. Global BMI Mortality Collaboration. Body-mass index and all-cause mortality: individual-participant-data meta-analysis of 239 prospective studies in four continents. Lancet 2016;388:776-86.
      3. Twig G, Yaniv G, Levine H, et al. Body-mass index in 2.3 million adolescents and cardiovascular death in adulthood. N Engl J Med 2016;374:2430-40.
      4. Kushner RF, Ryan DH. Assessment and lifestyle management of patients with obesity: clinical recommendations from systematic reviews. JAMA 2014;312:943-52.
      5. Yeh JS, Kushner RF, Schiff GD. Obesity and management of weight loss. N Engl J Med 2016;375:1187-9.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Oct 13, Giorgio Casari commented:

      Increased or decreased calcium influx?

      In this elegant paper the authors propose that loss of m-AAA (i.e. the depletion of both SPG7 and AFG3L2) facilitates the formation of active MCU complexes through the increased availability of EMRE, thus (i) increasing calcium influx into mitochondria, (ii) triggering MPTP opening and (iii) causing the consequent increase of neuronal cytoplasmic calcium leading to neurodegeneration. We previously reported that loss or reduction of AFG3L2 causes (i) decreased mitochondrial potential and fission, thus (ii) decreased calcium entry and (iii) the consequent augmented neuronal cytoplasmic calcium leading to neurodegeneration. While the functional link of m-AAA with MAIP and MCU-EMRE represents a new milestone in the characterization of the roles of this multifaceted protease complex, we would like to comment on the conclusions pertaining to the calcium dynamics.

      1. In SPG7/AFG3L2 knock-down HeLa cells (Figure S6A), mitochondrial matrix calcium is dramatically reduced (approx. from 100 to 50 microM) following histamine stimulation, which triggers IP3-mediated calcium release from the ER. This reduction is in complete agreement with the one we previously detected in Afg3l2 ko MEFs (Maltecca et al., 2012), and that we also confirmed in Afg3l2 knock-out primary Purkinje neurons (the cells that are primarily affected in SCA28) upon challenge with KCl (Maltecca et al., 2015). The decreased mitochondrial calcium uptake correlates with the 40% reduction of mitochondrial membrane potential in SPG7/AFG3L2 knock-down cells (Figure S6B), as expected, since the mitochondrial potential is the major component of the driving force for calcium uptake by MCU. Accordingly, these data are in line with the decreased mitochondrial membrane potential observed in Afg3l2 knock-out Purkinje neurons (Maltecca et al., 2015). We think that this aspect is central, because the respiratory defect is the primary event associated with m-AAA deficiency and neurodegeneration. So, the data of König et al. agree with our own findings that mitochondrial matrix calcium is reduced after m-AAA depletion.

      2. By a different protocol (SERCA pump inhibition and ER calcium leakage; Figure 6 C-F), the authors detected a small increase of mitochondrial calcium concentration in SPG7/AFG3L2 knock-down HeLa cells (from approx. 3 to 6 microM). The huge difference in calcium concentration detected in the two experiments (100 to 50 microM in Figure S6A and 3 to 6 microM in Figure 6C) possibly reflects the stimulated (histamine) vs. unstimulated (calcium leakage) conditions, the latter being more difficult to relate to the physiological situation in neurons.

      3. The authors show increased sensitivity to MPTP opening in the absence of m-AAA, and they propose the consequent calcium release as the cause of calcium deregulation and neuronal cell death. ROS are strong sensitizers of MPTP to calcium and thus favor its opening. It is well known that m-AAA loss massively increases intramitochondrial ROS production. Thus, higher ROS levels, rather than high calcium concentrations, may be the trigger of MPTP opening.

      Taking all this into consideration, we think that mitochondrial depolarization (as shown in Figure S6B) and decreased mitochondrial calcium entry (Figure S6A), even in the presence of an increased amount of MCU-EMRE complexes, may lead to inefficient mitochondrial calcium buffering and, finally, to cytoplasmic calcium deregulation. ROS-dependent MPTP opening, which may occur irrespective of a low matrix calcium concentration, may additionally contribute to this final event.

      Minor comment: On page 7 we read: “Notably, these experiments likely underestimate the effect on mitochondrial Ca2+ influx observed upon loss of the m-AAA protease, since the loss of the m-AAA protease also decreases ΔΨ (i.e., the main force driving mitochondrial Ca2+ influx), as revealed by the significant impairment of mitochondrial Ca2+ influx triggered by histamine stimulation (Maltecca et al., 2015) (Figures S6A–S6E)”. The reference is not appropriate, since in Maltecca et al., 2015 the reduced mitochondrial calcium uptake has been demonstrated in Afg3l2 knock-out Purkinje neurons upon challenge with KCl and not with histamine. We used histamine stimulation, which triggers IP3-mediated calcium release from the ER, in Afg3l2 ko MEFs in a previous publication (Maltecca F, De Stefani D, Cassina L, Consolato F, Wasilewski M, Scorrano L, Rizzuto R, Casari G. Respiratory dysfunction by AFG3L2 deficiency causes decreased mitochondrial calcium uptake via organellar network fragmentation. Hum Mol Genet. 2012, 21:3858-70. doi: 10.1093/hmg/dds214).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 06, GARRET STUBER commented:

      *This review was completed as part of a graduate-level circuits and behavior course at UNC-Chapel Hill. The critique was written by students in the class and edited by the instructor, Garret Stuber.

      Comments and critique

      Written by Li et al., this paper investigated a class of oxytocin receptor interneurons (OxtrINs) which the same group first characterized in 2014 [1]. OxtrINs are a subset of somatostatin positive interneurons in the medial prefrontal cortex (mPFC) that seem to be important for sociosexual behaviors in females, specifically during estrus and not diestrus. To complement their previous story, here the authors concluded that OxtrINs in males regulate anxiety-related behaviors through the release of corticotropin releasing hormone binding protein (Crhbp). While we agree that these neurons could be mediating sexually dimorphic behaviors, it is unclear how robust these differences really are.

      We had some technical issues with this paper. First, it is unclear exactly how many mice were allotted to each experimental group, and it would have been useful to see individual data in each of the behavioral experiments, so that we can better understand some of the variability in the authors’ graphs. Even among different experiments, there were variable sizes of n (e.g. Fig. 5F-H, “n = 8-14 mice per group”). There was also no mention of how many cells per animal were tested for each brain slice experiment; instead, we received total numbers of cells tested per group. This paper did not include the female data complementary to Fig. 4F-G and Fig. 5A-B, the experiments pairing blue light with Crhr1 antagonist or Crhbp antagonist. We would have appreciated seeing this data adjacent to that for the males. In addition, no control for the optogenetic experiments was mentioned. The authors only compared responses between light on and light off trials. Typically in optogenetic approaches, a set of control mice are also implanted with optic fibers and flashed with blue light in the absence of virus to test whether the light alone influences behavior. Incidentally, there is evidence that blue light influences blood flow, which may affect neuronal activity [2]. It was also unclear during the sociosexual behavioral testing whether the males were exposed to females in estrus or diestrus. In all, the lack of detailed sample sizes and controls made it difficult to assess how prominent these sex differences were.

      These issues aside, knocking out endogenous Oxtr in their targeted interneuron population was a key experiment, as it demonstrated that oxytocin signaling in OxtrINs is important in anxiety-related behaviors in males, but not in females regardless of the estrus stage. They did this using a floxed Oxtr mouse and deleted OxtR using a Cre-inducible virus, allowing for temporal and cell-type-specific control of this deletion, and subsequently measured the resulting phenotype using an elevated plus maze and open field task. The authors also validated that changes in exploration were not due to hyperactivity. We think these experiments are convincing.

      TRAP profiling, which the same research group pioneered in 2014 [3], provided a set of genes enriched in OxtrINs. TRAP targets RNAs while they are translated into proteins, so we think their results here are particularly relevant. Moreover, the authors provided a list of genes enriched in sex-specific OxtrINs, a useful resource for those interested in gene expression differences in males and females. Once they identified Crhbp, an inhibitor of Crh, they hypothesized that OxtrINs were releasing Crhbp to modulate anxiogenic behaviors in males. The authors next measured Crh levels in the paraventricular nucleus of the hypothalamus and found that Crh levels are higher in females than males. They thus concluded Crh levels were driving sex differences associated with OxtrINs. We wonder whether Crh levels are also higher in the female mPFC, but we agree here too.

      To demonstrate that Crhbp expressed by OxtrINs is important in modulating anxiety-like behaviors in males, the authors targeted Crhbp mRNA using Cre-inducible viral delivery of an shRNA construct and subsequently tested anxiety-related behaviors. They found that knocking down Crhbp was anxiogenic in males and not in females. This was a critical experiment, but the shRNA constructs targeting Crhbp were validated solely in a cell line. It would have been more appropriate to perform a western blot on mPFC punches of adult mice, showing whether this lentiviral construct knocked down Crhbp expression in the mouse brain prior to behavioral testing. In fact, it also would have been useful to see a quantification of the shRNA transfection rate, as well as its specificity in vivo. As stated above, we also do not know the distribution of behavioral responses here either. Without these pieces of information, it is difficult to assess how reliable or robust their knockdown was.

      The authors concluded that sexually dimorphic hormones act through the otherwise sexually monomorphic OxtrINs to regulate anxiety-related behaviors in males and sociosexual behaviors in females. We agree that OxtrINs interact with oxytocin and Crh to bring about sex-specific phenotypes, but we also think that using additional paradigms testing anxiety and social behaviors, such as a predator odor, novelty-suppressed feeding or social grooming, could shed more light on the nuances of mPFC circuitry. In addition, the authors suggested that OxtrINs are sexually monomorphic because they are equally abundant in males and females. The authors’ TRAP data, however, suggested that OxtrINs of males and females have different gene expression profiles (Table S2), thus indicating that these interneurons may form different connections in each sex that mediate the electrophysiological and behavioral differences we see in this study.

      It would be interesting to overexpress Crhbp in female mice, preferably in a cell-type-specific manner, to see whether female mice would demonstrate the anxiety-like behavior seen in males. If the Crh:Crhbp balance is in fact mediating this sexually dimorphic behavior through OxtrINs, we would expect that doing these manipulations may “masculinize” the females’ behavior. Regardless, we believe that this study opens opportunities for future work into how oxytocin and Crh release from the hypothalamus may act together to coordinate behavior. It will also be interesting to see if single-cell RNA sequencing could provide insight into whether OxtrINs can be further divided into sexually dimorphic subtypes. As the authors pointed out, understanding the dynamics of Crh and oxytocin in the mPFC will be important for gender-specific therapy and treatment.

      [1] Nakajima, M. et al. Oxytocin modulates female sociosexual behavior through a specific class of prefrontal cortical interneurons. Cell. 159, 295-305 (2014).

      [2] Rungta, R. L. et al. Light controls cerebral blood flow in naïve animals. Nature Communications. 8, 14191 (2017).

      [3] Heiman, M. et al. Cell-type-specific mRNA purification by translating ribosome affinity purification (TRAP). Nature Protocols. 9, 1282-1291 (2014).


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Sep 16, Hilda Bastian commented:

      Thanks, John, for the reply - and for giving us all so much to think about, as usual!

      I agree that there are meta-analyses without systematic reviews, but the tagged meta-analyses are included in the filter you used: they are not additional (NLM, 2016). It also includes meta-analysis in the title, guidelines, validation studies, and multiple other terms that add non-systematic reviews, and even non-reviews, to the results.

      In Ebrahim S, 2016, 191 primary trials, all in high-impact journals, were studied. Whether they are typical of all trials is not clear: it seems unlikely that they are. Either way, hundreds of reports for a single trial is far from common: half the trials in that sample had no secondary publications, only 8 had more than 10, and none had more than 54. Multiple publications from a single trial can sometimes be on quite different questions, which might also need to be addressed in different systematic reviews.

      The number of trials has not been increasing as fast as the number of systematic reviews, but the number has not reached a definite ongoing plateau either. I have posted an October 2015 update to the data using multiple ways to assess these trends in the paper by me, Paul Glasziou, and Iain Chalmers from 2010 (Bastian H, 2010) here. Trials have tended to fluctuate a little from year to year, but the overall trend is growth. As the obligation to report trials grows more stringent, the trend in publication may be materially affected.

      Meanwhile, "systematic reviews" in the filter you used have not risen all that dramatically since February 2014. For the whole of 2014, there were 34,126 and in 2015 there were 36,017 (with 19,538 in the first half of 2016). It is not clear without detailed analysis what part of the collection of types of paper are responsible for that increase. The method used to support the conclusion here about systematic reviews of trials overtaking trials themselves was to restrict the systematic review filter to those mentioning trials or treatment - “trial* OR randomi* OR treatment*”. That does not mean the review is of randomized trials only: no randomized trial need be involved at all, and it doesn't have to be a review.

      Certainly, if you set the bar for what counts as a sizable randomized trial high, there will be fewer of them than of all possible types of systematic review: but then, there might not be all that many very sizable, genuinely systematic reviews either - and not all systematic reviews are influential (or even noticed). And yes, there are reviews that are called systematic that aren't: but there are RCTs called randomized that aren't as well. What's more, an important response to the arrival of a sizable RCT may well be an updated systematic review.

      Double reports of systematic reviews are fairly common in the filter you used too, although far from half - and not more than 10. Still, the filter will be picking up protocols as well as their subsequent reviews, systematic reviews in both the article version and coverage in ACP Journal Club, the full text of systematic reviews via PubMed Health and their journal versions (and the ACP Journal Club coverage too), individual patient data analyses based on other systematic reviews, and splitting a single systematic review into multiple publications. The biggest issue remains, though, that as it is such a broad filter, casting its net so very wide across the evidence field, it's not an appropriate comparator for tagged sets, especially not in recent years.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Sep 16, John Ioannidis commented:

      Dear Hilda,

      thank you for the very nice and insightful commentary on my article. I think that my statement "Currently, probably more systematic reviews of trials than new randomized trials are published annually" is probably correct. The quote of 8,000 systematic reviews in the Page et al. 2016 article uses very conservative criteria for systematic reviews, and there are many more systematic reviews and meta-analyses; e.g. there is a factory of meta-analyses (even meta-analyses of individual-level data) done by the industry, combining data of several trials but with no explicit mention of a systematic literature search. While many papers may fail to satisfy stringent criteria of being systematic in their searches or other methods, they still carry the title of "systematic reviews" and most readers other than a few methodologists trust them as such. Moreover, the 8,000 quote was from February 2014, i.e. over 2.5 years ago, and systematic reviews' and meta-analyses' publication rates rise geometrically. Conversely, there is no such major increase in the annual rate of published randomized controlled trials.

      Furthermore, the quote of 38,000 trials in the Cochrane database is misleading, because it includes both randomized and non-randomized trials, and the latter may be the majority. Moreover, each randomized controlled trial may have anywhere up to hundreds of secondary publications. On average, within less than 5 years of a randomized trial publication, there are 2.5 other secondary publications from the same trial (Ebrahim et al. 2016). Thus the number of published new randomized trials per year is likely to be smaller than the number of published systematic reviews and meta-analyses of randomized trials.

      Actually, if we also consider the fact that the large majority of randomized trials are small/very small and have little or no impact, while most systematic reviews are routinely surrounded by the awe of the "highest level of evidence", one might even say that the number of systematic reviews of trials published in 2016 is likely to be several times larger than the number of sizable randomized trials published in the same time frame.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 25, Clive Bates commented:

      As I think Professor Daube's comment contains inappropriate innuendo about my motives, let me repeat the disclosure statement from my initial posting:

      Competing interests: I am a longstanding advocate for 'harm reduction' approaches to public health. I was director of Action on Smoking and Health UK from 1997-2003. I have no competing interests with respect to any of the relevant industries.

      My hope is that prominent academics and veterans of the struggles of the past will adopt an open mind towards the right strategy for reducing the burden of death and disease caused by smoking as we go forward. While he may not like the idea, Professor Daube can surely see that 'tobacco harm reduction' is a concept supported by many of the top scientists and policy thinkers in the field, including the Tobacco Advisory Group of the Royal College of Physicians. It is not the work of the tobacco industry and cannot be dismissed just by claiming it is in their interests.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 May 24, Jordan Anaya commented:

      I think readers of this article will be interested in a comment I posted at F1000Research, which reads:

      I would like to clarify and/or raise some issues with this article and accompanying comments.

      One: Reviewers Prachee Avasthi and Cynthia Wolberger both emphasized the importance of being able to sort by date, and in response the article was edited to say: "Currently, the search.bioPreprint default search results are ordered by relevance without any option to re-sort by date. The authors are aware of the pressing need for this added feature and if possible will incorporate it into the next version of the search tool."

      However, it has been nearly a year and this feature has not been added.

      Two: The article states: "Until the creation of search.bioPreprint there has been no simple and efficient way to identify biomedical research published in a preprint format..."

      This is simply not true as Google Scholar indexes preprints. This was pointed out by Prachee Avasthi and in response the authors edited the text to include an incorrect method for finding preprints with Google Scholar. In a previous comment I pointed out how to correctly search for preprints with Google Scholar, and it appears the authors read the comment given they utilize the method at this page on their site: http://www.hsls.pitt.edu/gspreprints

      Three: In his comment the author states: "We want to stress that the 'Sort by date' feature offered by Google Scholar (GS) is abysmal. It drastically drops the number of retrieved articles compared to the default search results."

      This feature of Google Scholar is indeed limited, as it restricts the results to articles which were published in the past year. However, if the goal is to find recent preprints then this limitation shouldn't be a problem and I don't know that I would classify the feature as "abysmal".

      Four: The article states: "As new preprint servers are introduced, search.bioPreprint will incorporate them and continue to provide a simple solution for finding preprint articles."

      New preprint servers have been introduced, such as preprints.org and Wellcome Open Research, but search.bioPreprint has not incorporated them.

      Five: Prachee Avasthi pointed out that the search.bioPreprint search engine cannot find this F1000Research article about search.bioPreprint. It only finds the bioRxiv version. In response the author stated: "The Health Sciences Library System’s quality check team has investigated this issue and is working on a solution. We anticipate a quick fix of this problem."

      This problem has not been fixed.

      Competing Interests: I made and operate http://www.prepubmed.org/, which is another tool for searching for preprints.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jun 24, Ole Jakob Storebø commented:

      We think that the clinicians prescribing methylphenidate for ADHD and others have been insufficiently critical of the literature for decades, trusting that the quality of methylphenidate research was reasonable. Accordingly, Shaw, in an editorial accompanying our JAMA article Storebø OJ, 2016, stated that the Epstein et al. review on methylphenidate for adults with ADHD was an example of good assessments of quality Shaw P, 2016. It seems Shaw erred, as the Epstein et al. review has now been withdrawn from The Cochrane Library due to methodological flaws Epstein T, 2016.

      Banaschewski et al. suggest that we included five trials in our analyses that should have been excluded. We think they are wrong. They highlight four trials which they cite as having used “active controls”, whereas these are actually co-interventions, used in both the methylphenidate and the control group. Such trials are includable in accordance with our protocol Storebø OJ, 2015. Moreover, excluding these trials from our review would only have produced negligible changes in our results. Furthermore, the trial including children aged 3 to 6 years ought also to have been included in accordance with our protocol. Excluding all five trials would not have changed our conclusions at all. We concluded that methylphenidate might improve teacher-reported symptoms of ADHD; however, given the very low quality of the evidence, the magnitude of that effect size is uncertain. A change in the effect size of 0.12 points on the standardised mean difference of this outcome would not change anything.

      In a subgroup analysis comparing parallel trials and crossover trials, we did not find a significant difference either. However, we noted considerable heterogeneity between the two groups of trials. It is not recommended to pool cross-over trials which only have “end-of-trial data” with parallel group trials (http://handbook.cochrane.org/), and had we done so we would have risked introducing a “unit-of-analysis error”, as we only had “end-of-trial data” from these cross-over trials.

      We agree that the variability of the minimal relevant difference is important, which is why we reported the 95% confidence interval of the transformed mean value in our review Storebø OJ, 2015. Banaschewski et al. also suggest that we have overlooked information in the Coghill 2007 trial and thereby wrongly assessed this as a trial with “high risk of bias”. We stated in our protocol that we would consider trials with one or more unclear or high risk of bias domains as trials with high risk of bias Storebø OJ, 2015.

      We did not overlook information from the Coghill 2007 trial but twice emailed the authors for additional information. They did not respond. The information required is not available in the published study. We presented the risk of bias assessments for the various domains of all 185 included trials. It is correct that we assessed seven cross-over trials as low risk of bias and not six as reported. Thank you for spotting this error. The seventh trial is reported, however, in our table in which the risk of bias assessments for all the domains is shown. All trials, irrespective of vested interest bias, were regarded as having a high risk of bias due to broken outcome assessor blinding given the easily recognisable, well-known adverse effects of methylphenidate. When adding this seventh cross-over trial to the subgroup analysis on the outcome “teacher-rated ADHD symptoms – cross-over trials”, we now find significant differences between the trials with “high” compared to “low” risk of bias (standardised mean difference (SMD) -0.96 [95% confidence interval -1.09 to -0.82] compared to -0.64 [-0.91 to -0.38]. Test for subgroup difference: Chi² = 4.27, df = 1 (P = 0.04), I² = 76.6%).
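
      (For readers who wish to check figures like these, the test for subgroup differences can be approximated from published effect sizes and confidence intervals alone. The Python sketch below is an illustrative back-calculation - not code from our review - assuming normal 95% intervals, so that each standard error is the interval width divided by 2 × 1.96.)

      ```python
      # Illustrative sketch: approximate a test for subgroup differences
      # from reported SMDs and 95% CIs (assumes SE = interval width / 3.92).
      from scipy.stats import chi2

      def subgroup_test(smd1, ci1, smd2, ci2):
          se1 = (ci1[1] - ci1[0]) / (2 * 1.96)  # back-calculated standard errors
          se2 = (ci2[1] - ci2[0]) / (2 * 1.96)
          q = (smd1 - smd2) ** 2 / (se1 ** 2 + se2 ** 2)  # Q statistic, df = 1
          return q, chi2.sf(q, df=1)

      # High vs low risk-of-bias cross-over trials, teacher-rated ADHD symptoms
      q, p = subgroup_test(-0.96, (-1.09, -0.82), -0.64, (-0.91, -0.38))
      print(f"Q = {q:.2f}, P = {p:.3f}")  # ~4.4 and ~0.04, near the reported 4.27
      ```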

      Banaschewski et al. focus only on our assessment of risk of bias and do not mention the core instrument for assessing the quality of meta-analyses, namely the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) approach Andrews J, 2013. Our assessment of the evidence as “very low quality” is not only based on the assessment of risk of bias, but also on other factors such as heterogeneity, imprecision, and indirectness of the evidence. This is clearly reported in our review.

      We downgraded the quality of the included trials in the meta-analysis for imprecision and for moderate heterogeneity. The durations of included trials were short, with an average of 75 days. Most patients receive methylphenidate treatment for substantially longer periods and the beneficial effects may diminish over time Jensen PS, 2007 Molina BS, 2009. The short trial duration could suggest the need for further downgrading for “indirectness” according to GRADE Andrews J, 2013. We did not downgrade for this, but we could have. This further underlines that the evidence for the benefits and harms for the use of methylphenidate for children and adolescents with ADHD is of very low quality.

      We have assessed 71 trials as having high risk of bias in the “vested interest” domain as they were funded by the industry and/or the authors were affiliated with the industry.

      It is not incorrect for us to state that none of the trials funded by the pharmaceutical industry showed a low risk of bias in all other areas, as we considered all the trials as high risk of bias on the domain of blinding. This is clearly reported in our review.

      We have now conducted the requested subgroup analysis comparing those trials with high compared to low risk of vested interest bias on the teacher-rated ADHD symptoms outcome. The effect of methylphenidate in the 14 trials with high risk of vested interest bias was SMD -0.86 [-0.99 to -0.72] compared to SMD -0.50 [-0.69 to -0.31] in the 5 trials with low risk of vested interest bias. Test for subgroup differences is Chi² = 8.67, df = 1, P = 0.003. So even in this small sample we find a significant difference.
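
      (Applying the same interval-based back-calculation sketched above to these values gives Q ≈ 9.2 with P ≈ 0.002 - consistent, allowing for rounding of the published intervals, with the reported Chi² = 8.67.)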

      We recommend Banaschewski et al. to read the essay by John P Ioannidis about vested interests Ioannidis JP, 2016.

      It is important to stress that the results of our review would have been the same had we disregarded the issue of vested interest.

      Had there been inconsistencies regarding one domain of bias in a few trials, they would not change the fact that these trials are to be considered as trials at high risk of bias. For example, in two trials, Konrad 2004 and Konrad 2005, there is inconsistency in how our author teams assessed the randomisation process. However, both trials have several other domains at “unclear risk of bias” or “high risk of bias”. In the Ullman 2006 trial, three domains are assessed as “unclear risk of bias”. In Wallace 1994, five domains are assessed as being of “unclear risk of bias” and one as “high risk of bias”. In Wallander 1987, five domains are assessed as “unclear risk of bias”. Even if there was inconsistency between one or two items, these trials are high risk of bias trials. There may well be small differences in our judgements, but that does not change the fact that the trials included are, in general, trials at high risks of bias Storebø OJ, 2015. It is important to understand that we followed the Cochrane guidelines in every aspect of our review.

      Conclusions

      We have demonstrated that the trial selection in our review was not flawed and was undertaken with sufficient scientific justification. The effect sizes are not too small. We have followed a sound methodology for assessing risk of bias and our conclusion is not misleading. We are concerned about the state of the academic literature and at the financial and academic waste that has occurred, given that more than 250 reviews and 3000 single works have been published on psychostimulants for ADHD treatment. Despite this, there is still no sound evidence regarding the benefits and harms of methylphenidate.

      Ole Jakob Storebø, Morris Zwi, and Christian Gluud.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Jul 03, P Jesper Sjöström commented:

      Thanks for your interest in our work. I would like to make the following points:

      1. We did not actually say 'no connections in mature cortex' -- that quote is certainly not lifted from Mizusaki et al Nat Neurosci 2016. We said "In fact, it was recently reported that, surprisingly, pyramidal cells in visual cortex of mature animals do not seem to interconnect at all, neither bidirectionally nor unidirectionally12," where 12 refers to Jiang et al. We thus say that Jiang et al report that PCs do not seem to interconnect, we do not say that there are no PC-PC connections in mature cortex. What Jiang et al state and what our opinion about that statement is, those are different things.

      2. The intention of that passage in Mizusaki et al is to point out that Brunel is using my data from Song et al as a gold standard, but this may or may not be appropriate, since my connectivity data was acquired from a developmental snapshot in time (just after eye opening, typically postnatal day 14-16), whereas Brunel is in fact focussing on the functioning of the mature brain, when circuits are wired up. Our intention was thus to acknowledge that my own data need not be the ground truth, and this has important implications for the validity of the Brunel study. The Tolias study provides an alternative view: "the most compelling and consistent difference across experiments is the age of the animals tested, suggesting that mature cortical circuits are not identical to developing circuits." Such a developmental difference would be important in the context of the Brunel study. Again, this is not necessarily my opinion, but as scientists, we have to acknowledge this possibility.

      3. In the Tolias study, they report in Fig S14 that they found precisely zero L5 PC-PC connections even after 150 attempts, which is in stark contrast to my connectivity data presented in Song et al. Indeed, if you do a Chi-squared test for 931/8050 versus 0/150, you will find that this is a highly significant difference. We can debate the accuracy of the Tolias measurement (like they do in Barth et al Science 2016 353:1108, as you point out), but if we do so, we should also debate the accuracy of my measurements in juveniles, as presented in Song et al. While it is true that my data in Song et al is more in line with e.g. Thomson et al Cereb Cortex 2002 than with Jiang et al, the key point in the context of Brunel's theoretical study is that the ground truth is not necessarily well established.
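      As a quick check of that arithmetic, the comparison can be run as a 2x2 contingency test in Python with scipy. This is a minimal sketch using only the two counts quoted above; the connected/tested framing of the table is my own.

      ```python
      from scipy.stats import chi2_contingency

      # Connected vs. not connected pairs: Song et al. (931 of 8050 tested)
      # versus the zero L5 PC-PC connections found in 150 attempts (Tolias study).
      table = [
          [931, 8050 - 931],
          [0, 150],
      ]

      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # p is far below 0.05
      ```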

      In summary, I certainly believe in my own connectivity data set, and I think Brunel's study provides a very compelling theoretical framework for explaining such connectivity patterns, but I feel obliged to point out a few possible caveats associated with my connectivity data set. Jiang et al provide one such key caveat. I hope this clarifies somewhat.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 24, Jeromy Anglim commented:

      Thanks for posting a comment. I just wanted to add a few thoughts on your points.

      The relationship between Type D and outcomes was not moderated by illness group. For us, this was the important conclusion that we wanted researchers to think about. There is a lot of research examining Type D in specific illness groups. To some extent, implicit in such research is the idea that the effect of Type D might vary based on illness type. While this may be true, this study presents some evidence that, at least for the illness groups and variables studied, this is not the case.

      The effect of negative affect on social support and the effect of social inhibition on health behaviours failed to reach statistical significance. I would not say that they “failed.” This is actually an important finding that supports the idea that the subscales of Type D have different correlates (i.e., negative affectivity is related more to affective processes, and social inhibition more to social processes). Presumably, this is what we would expect. Although, if you wanted to be critical, there is the idea that NA is merely neuroticism, and SI is a mix of neuroticism and introversion (see Horwood, Anglim, & Tooley, 2015). Whether that is a problem probably depends on how primary you view the Big Five personality factors.

      Limitations: I think we note the limitations you mention in the limitations section. That said, those limitations only relate to certain points of the paper. Personally, I think the paper has two fairly important implications for researchers working with Type D personality. The first relates to the general lack of variation in effects by group, as discussed above. Second, we ran a comparative regression analysis comparing a range of different scoring systems for Type D personality. The results suggested that binary Type D is a poor predictor, and there was limited evidence for NA by SI interaction effects. Rather, entering NA and SI as two separate predictors generally resulted in the best prediction of outcomes. This goes against the implicit claims of Type D theory that there is an interactive effect and that cut-offs are appropriate. Importantly, all these analyses also speak to the novelty of the Type D construct and the rationale for choosing the two particular subscales for inclusion.

      Thus, for me, the paper provides a nuanced and critical assessment of the predictive validity of Type D personality. In particular, I'd encourage other researchers working with Type D personality (and some are doing this already) to run the comparative regression analyses in their samples.
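      To make concrete the kind of comparative regression analysis being recommended, here is a minimal, hypothetical sketch in Python with statsmodels. The variable names, cut-off, and simulated data are all invented for illustration; this is not the paper's actual code or scoring procedure.

      ```python
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 500
      # Invented subscale scores (NA, SI) and a continuous health outcome.
      na = rng.normal(size=n)
      si = rng.normal(size=n)
      outcome = 0.5 * na + 0.2 * si + rng.normal(size=n)
      # Conventional binary Type D: high on both subscales (cut-off at 0 here).
      type_d = ((na > 0) & (si > 0)).astype(int)
      df = pd.DataFrame({"na": na, "si": si, "type_d": type_d, "outcome": outcome})

      # Three scoring systems compared on explained variance:
      models = {
          "binary Type D":        smf.ols("outcome ~ type_d", data=df).fit(),
          "NA + SI main effects": smf.ols("outcome ~ na + si", data=df).fit(),
          "NA x SI interaction":  smf.ols("outcome ~ na * si", data=df).fit(),
      }
      for name, m in models.items():
          print(f"{name}: adj. R^2 = {m.rsquared_adj:.3f}")
      ```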


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jun 21, Evelina Tutucci commented:

      We have also recently discussed Nelles et al. Nelles DA, 2016. Since we are interested in developing new techniques for studying gene expression and mRNA localization at the single molecule level, a potential tag-less system to detect mRNAs in fixed and live cells would be a further advance. As pointed out by the Duke RNA Biology journal club, we think that Nelles et al. represents an attempt to apply the Cas9 system to detect endogenous mRNA molecules. Unfortunately, no evidence is presented to demonstrate that this system is ready to be used to study gene expression at the single molecule level, as the MS2-MCP system allows.

      The RNA letter by Garcia and Parker Garcia JF, 2015 showed that in S. cerevisiae the binding of the MS2 coat protein to the MS2 loops diminished tagged mRNA degradation by the cytoplasmic exonuclease Xrn1. However, these observations were not extended to higher eukaryotes. Previous work from our lab described the generation of the beta-actin-MS2 mouse, whereby all the endogenous beta-actin mRNAs were tagged with 24 MS2 loops in the 3’UTR (Lionnet T, 2011, Park HY, 2014). This mouse is viable and no phenotypic defects are observed. In addition, control experiments were performed to show that the co-expression of the MS2 coat protein in the beta-actin-MS2 mouse allowed correct mRNA degradation and expression (Supplementary figure 1b, Lionnet T. et al 2011). Furthermore, multi-color FISH (Supplementary figure 6, Lionnet T. et al 2011) showed substantial co-localization between the ORF FISH probes and MS2 FISH probes, demonstrating the validity of this model. We think that the observations by Garcia and Parker are restricted to yeast because of the short half-life of their mRNAs, wherein the degradation of the MS2 becomes rate-limiting. Based on our extensive use of the MS2-MCP system, we think that higher eukaryotes may have more time to degrade the high affinity complexes formed between MS2 and MCP, providing validation for this system to study multiple aspects of gene expression.

      In conclusion, we think that the MS2-MCP system remains to date the best method to follow mRNAs at the single molecule level in living cells. For the use of the MS2-MCP system in S. cerevisiae, we have taken the necessary steps to improve it for the study of rapidly degrading mRNAs and are preparing this work for publication.

      Evelina Tutucci and Maria Vera, Singerlab


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 May 17, Duke RNA Biology Journal Club commented:

      These comments were generated from a journal club discussion:

      We were excited to read and discuss this paper, as many of us have questions pertaining to mRNA localization. This technique theoretically allows for imaging of mRNAs without genetic manipulation, meaning mRNAs at native expression levels can be tracked in live cells. However, as with many cutting-edge papers, more work is needed before this will become commonplace in the lab.

      Most current methods to track mRNAs in live cells involve aptamer-based approaches which require genetic manipulation of the mRNA PMCID: PMC2902723. Additionally, the most commonly used aptamer system, the MS2-MCP system, has become controversial in light of recent findings that the MS2 coat protein stabilizes the aptamer-bearing constructs Garcia JF, 2015. In this paper, Fig 1F and 1G replicated these findings and also reassured us that the RCas9 technique would not have the same shortcoming. While this is certainly a good thing, we were unconvinced this technique was better than FISH (Fig 2), other than having the potential for live cell imaging.

      Unfortunately, we found the live cell imaging, which was limited to Fig 3B, to be disappointing. First, we observed that unless an mRNA is strictly localized, as in stress granules, live imaging shows a diffuse mass within the cytosol. Second, imaging was performed with ACTB mRNA, which is highly abundant. We don’t think live cell imaging would work as well for low abundance mRNAs due to high background signal. Finally, while specialized imaging software can detect the pile-ups of mRNA in localized foci, we are concerned that tracking individual mRNAs may prove a hurdle. The cell models for mRNA localization are large cells such as fibroblasts and neurons; we would be interested to see the performance of this system within these cell types.

      One major flaw with this system is the inability to monitor nuclear-localized RNAs such as lncRNAs or splicing machinery. Since the RCas9 and sgRNA have to first be produced in the nucleus, the majority of the signal in all the figures came from the nucleus. There is a split-GFP-PUM-HD system that has been used to successfully track mRNA in mitochondria Ozawa T, 2007. Perhaps a similar concept could be used with the Cas9 system. This would prove an advantage over the Pumilio system since only the sgRNA needs to be modified instead of the entire protein.

      Overall, this is a great start towards a new, tagless, method of mRNA tracking. We look forward to future developments and improvements of this exciting technique.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 25, Michael R Blatt commented:

      Dear Boris,

      There’s nothing grand in these statements nor are they exempt from a cost-benefit analysis. It just happens that my measures of cost and benefit are (obviously) different from yours. Of course, it may be that we can still find common ground, and I would hope this is the case.

      As to your question “Is it a good thing to alert readers … to possible problems?”, clearly the answer is yes. I have said so repeatedly in my editorials and here on PubMed Commons. However, in my opinion, this needs to be done in a way that does not open the door to abuse, misrepresentation, sock-puppetry, and other antisocial or ethically unsound behaviours. I don’t think this is a particularly difficult concept, even if its solution is more complex in practice.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Mar 22, Michael R Blatt commented:

      Dear Boris,

      I really do not think that we are so far apart in our views. We both are dismayed by some of what we see in scientific publishing and communication today, and we both want the same for the scientific community as a whole. Where we differ is only in some details of the means to this end.

      The point you raise is of measures, ‘averages’, and quantity, rather than of principle. I do not doubt that there are many comments on PubPeer that are thoughtful and constructive. I certainly never suggested that all comments on PubPeer “abuse the system” (nor did I ever suggest coercion, so let us not confuse the issue here). The point on which we differ is whether the quantitative argument for anonymity that you pose outweighs the foundational arguments I have set out against it. I think not.

      You raise the analogy to the utility of cars and whether these should be banned. Of course analogies are poor vehicles (pun intended) for ideas, but let’s follow it for a moment. It would be virtually impossible to ban anonymous commenting from social media, just as it is impossible to ban reckless driving (I recall you had this discussion with Philip Moriarty previously). However, this is not to say that either should be actively encouraged. There are norms for interpersonal interaction that we generally follow and that protect civil society (e.g. accountability), just as there are rules of the road and legal requirements (e.g. the need for a driving license) that are there to protect us when we are on the road.

      I think it is always important to look for other ways to a solution. Answers sometimes come from taking an entirely different perspective rather than looking for the common denominator. So, to follow your analogy one step further, rather than banning cars (and anonymous commenting), would it not be better to make them less attractive as a whole while making the use of public transport (and of open, accountable commenting) more attractive? Are we not both in a position to influence the process of PPPR?

      I alluded, at the end of my March 2016 editorial, to what I hope will be an approach to such a ‘third solution.’ It comes straight out of discussions with Leonid Schneider who, I think you will recall, was originally one of my fiercest critics last October. I am convinced this alternative is worth a try and, at this point, have a number of my opposite numbers from other publishers on board. You may be convinced as well in due course. Again, I hope that I will have much more to say on this matter later this year.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Mar 20, Linda Z Holland commented:

      In my review, I did not intend to criticize the ability of ascidian development to say something about the role of gene subnetworks in developing systems in vivo—it is a fruitful approach worthy of vigorous pursuit. Ascidians are highly tractable for experimental embryology and have scaled-down genomes and morphologies (at least with respect to vertebrates). As a result, noteworthy progress is being made in elucidating the gene networks involved in ascidian notochord development (José-Edwards et al. 2013. Development 140: 2422-2433) and heart development (Kaplan et al. 2015. Curr Opin Genet Dev 32: 119-128). It is currently a useful working hypothesis to make close comparisons between gene subnetworks in ascidians and other animals (Ferrier 2011. BMC Biol 9: 3). At present, however, the genotype-to-phenotype relationship is an unsolved problem in the context of a single species, and to consider the problem across major groups of animals is to venture deep into terra incognita. Much more work on development in the broadest range of major animal taxa will be required to determine how (or even if) genotypes can predict phenotypes in vivo in embryos and later life stages. Studies of this complex subject, which are likely to require a combination of experimental data and computational biology (Karr et al. 2012. Cell 150: 389-401), are still in their infancy.

      That said, when I consider the developmental biology of animals in general, I think it is very likely that the highly determinate embryogenesis and genomic simplifications of ascidians are evolutionarily derived states. It is possible that the common ancestor of tunicates and vertebrates may have been more vertebrate-like than tunicate-like. For example, it might have had definitive neural crest, and the situation in modern ascidian larvae, which apparently have part of the gene network for migratory neural crest, may represent a simplification from a more complex ancestor. In the absence of fossils that could represent the common ancestor of tunicates and vertebrates, we cannot reconstruct a reasonable facsimile of this ancestor. Given that tunicates are probably derived, it is not very likely that any amount of research on modern chordates will solve this problem.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Feb 04, Ivan Shatsky commented:

      It is a fruitful idea to use a high-throughput assay to fish out sequences that regulate translation initiation. I like this idea. It may result in very useful information, provided that the experimental protocol is correctly designed to reach the goal of a study. However, while reading the text of the article I had the impression that the authors did not clearly distinguish between the terms “IRES-driven translation” and “cap-independent translation”. In fact, cap-independent mechanisms may be of two kinds: a mechanism that absolutely requires the free 5’ end of mRNA (see e.g. Terenin et al. 2013. Nucleic Acids Res. 41(3):1807-16 and references therein, and Meyer et al. 2015. Cell 163(4): 999-1010) and one that is based on internal initiation. Only in the latter case does a 5’ UTR enter the mRNA binding channel of ribosomes with an internal segment of the mRNA rather than with a free 5’ end. Consequently, the experimental design should be distinct for these two modes of cap-independent translation. The method of bicistronic constructs used by the authors is suitable exclusively to identify IRES elements. However, this approach is only sufficiently reliable when it is employed in the format of bicistronic RNAs transfected into cultured cells. It has been repeatedly shown that the initial format of bicistronic DNAs is extremely prone to almost unavoidable artifacts (for literature, see ref. 48 in the paper and the review by Jackson, R.J. The current status of vertebrate cellular mRNA IRESs. Cold Spring Harb Perspect Biol, 2013; 5). The control tests to reveal these artifacts which are still used (unfortunately!) by many researchers are not sensitive enough to detect formation of a few percent of monocistronic mRNAs. (To this end, one should perform precise and laborious experiments which are not realistic in the case of high-throughput assays.) The capping of these aberrant mono-mRNAs can produce a dramatic stimulation of their translation activity (20-100 fold, depending on cell line). Therefore, even a few percent of capped mono-mRNAs may result in a high activity of the reporter as compared to an almost zero activity of the empty vector (see Andreev et al. 2009. Nucleic Acids Res. 37(18):6135-47 and references therein). Real-time PCR assessment of mRNA integrity (Fig. S4) is an easy way to miss these few percent of aberrant transcripts. The other concern is the genome-wide cDNA/gDNA estimation. The ratio for e.g. the “c-myc IRES” is 2^(-1.6), which is roughly 1/3 (Fig. S3). Does this mean that 2/3 of c-myc transcripts are monocistronic rather than bicistronic? I had a general impression that the authors were not aware of serious pitfalls inherent to the method of bicistronic DNA constructs and simply adapted this method to their high-throughput assay. At least, I did not find citations of papers that discussed this important point.

      The data in the section Supplementary materials (Figs. S5 and S6) give us expressive and compelling evidence of this kind of artifact: indeed, some 174 nt long fragments from the EMCV IRES possessed an IRES activity. Moreover, one of them with a GNRA motif had activity similar to that of the whole EMCV IRES (!?). This result is in absolute contradiction with our current knowledge of this picornaviral IRES, one of the best studied IRES elements! Parts of the EMCV IRES are known to have no activity at all! Thus, the most plausible explanation is that the EMCV fragments harbor cryptic splice sites. The same is true for other picornavirus IRESs examined in these assays. The HCV IRES tested by the authors in the same experiments worked only as a whole structure (Fig. S6B), in full agreement with the literature. However, this result is not reassuring, as it just means that the data obtained in this study may be a mixture of true regulatory sequences and artifacts.

      We should keep in mind that the existence of viral IRES elements is a firmly established fact. They have a complex and highly specific organization with well defined boundaries and THEY ARE ONLY ACTIVE AS INTEGRAL STRUCTURES. The minimal size of IRESs from RNAs of animal viruses is >300 nts. Their shortening inactivates them, and therefore they cannot be studied with cDNA fragments of 200 nt or less. Thus, I think it was a mistake to mix viral IRESs with cellular mRNA sequences. As to cellular IRESs, none of them has been characterized, and hence we do not know what they are or whether they even exist. For none of them has it been shown that they do not need a free 5’ end of mRNA to locate the initiation codon. Some of them have already been disproved (c-Myc, eIF4G, Apaf-1, etc.). By the way, I do not know of any commercial vector that employs a cellular IRES. Thus, I think that we should first find adequate tools to identify cellular IRESs, characterize several of them, and only afterwards proceed to transcriptome-wide searching for cellular IRESs.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jun 24, Lydia Maniatis commented:

      The authors do two things in this study:

      First, they point out that past studies on “constancy” have been hopelessly confounded due (a) to the condition-sensitivity of, and ambiguity in, what is actually perceived, and (b) to questions that are confusing to observers because they are vague, ambiguous, unintelligible, or unanswerable on the basis of the percept, thus forcing respondents to try to guess at the right answer. As a result, the designers of these studies have generated often incoherent data and proffered vague speculations as to the reasons for the randomness of the results.

      Second, as though teaching (what to avoid) by example, they produce yet another study embodying all of the problems they describe. Using arbitrary sets of stimuli, they ask an arbitrary set of questions of varying clarity/answerability-on-the-basis-of-the-percept, and generate the typically heterogeneous, highly variable set of outcomes, accompanied by the usual vague and non-committal discussion. (The conditions and questions are arbitrary in the sense that we could easily produce a very different set of outcomes by (especially) changing the colors of the stimuli or (less importantly) changing the questions asked.) Thus, the only possible value of these experiments would be to show the condition-dependence of the outcomes. But this was an already established fact, and it is, furthermore, a fact that any experimenter in any field should be aware of. It's the reason that planning an experiment requires careful, theory-guided control of conditions.

      The authors make no attempt to hide the fact that some of the questions they ask participants cannot be answered by referring to the percept. For example, they are asked about some physical characteristic of the stimulus, which, of course, is inaccessible to the human visual system and unavailable in the conscious percept. In these cases, we are not studying perception of the color of surfaces, but a different kind of problem-solving. The authors refer to answers “based on reasoning.” If we're interested in studying color perception, then the simple answer would be not to use this type of question. The authors seem to agree: “Although we believe that the question of how subjects can reason from their percepts is interesting in its own right, we think it is a different question from how objects appear. Our view is that instructional effects are telling us about the former, and that to get at the latter neutral instructions are the most likely to succeed...In summary, our results suggest that certain types of instructions cause subjects to employ strategies based on explicit reasoning— which are grounded in their perceptions and developed using the information provided in the instructions and training—to achieve the response they believe is requested by the experimenter.” This was all clearly known on the basis of prior experience, as described in the introduction.

      So, at any rate, the investigators express an interest in what is actually perceived by observers. But what is the question they're interested in answering? This is the real problem. The question, or goal, seems to be, “How do we measure color constancy?” But we don't measure things for measurement's sake. The natural follow-up is “Why do we want to measure color constancy?” What is the theoretical goal, or question we want to answer? This question matters because we can never, ever, arrive at some kind of universal, general, number for this phenomenon, which is totally condition-dependent. But I'm not able to discern, in these authors' work, any indication of their purpose in making these highly unreliable measurements.

      Color constancy refers to the fact that sometimes, a surface “x” will continue to appear the same color even as the kind and intensity of the light it is projecting to the eye changes. On the other hand, it is equally possible for that same surface to appear to change color, even as the kind and intensity of the light it is reflecting to the eye remains the same. In both cases – constancy and inconstancy – the outcome depends on the total light projecting to the eye, and the way the visual system organizes it. In both cases – constancy and inconstancy – the visual principles mediating the outcome are the same.

      The authors, in this and in previous studies, “measure constancy.” Sometimes it's higher, sometimes it's lower. It's condition-dependent. Even if they were actually measuring “constancy” in the sense of testing how an actually stable surface behaves under varying conditions, what would be the value of this data? We already know that constancy is condition-dependent, that it is often good or good enough, and that it can fail under certain well-understood conditions. (That these conditions are fairly well-understood is the reason the authors possess a graphics program for simulating “constancy” effects). How does simply measuring this rise and fall under random conditions (random because not guided by theory, meaning that the results won't help clarify any particular theoretical question) provide any useful information? What is, in short, the point?

      Yet another twist in the plot is that in their experiments, the authors aren't actually measuring constancy. Because we are talking about simulations, in order to exhibit “constancy,” observers need (often) to actually judge two surfaces with different spectral characteristics as being the same. This criterion is based on assumptions made by the investigators as to what surfaces should look the same under different conditions/spectral properties. But this doesn't make sense. What does it mean, for example, if an observer returns “low constancy” results? It means that the conditions required for two actually spectrally different surfaces to appear the same simply didn't hold; in other words, the investigators' assumptions as to the conditions that should produce this “constancy” result didn't hold. If the different stimuli were designed to actually test original assumptions about the specific conditions that do or do not produce constancy, fine. But this is not the case. The stimuli are simply and crudely labelled “simplified” and “more realistic.” This means nothing with respect to constancy-inducing conditions. Both of these kinds of stimuli can produce any degree of “constancy” or “inconstancy” you want.

      In short, we know that color perception is condition-sensitive, and that some questions may fail to tap percepts; illustrating this yet again is the most that this experiment can be said to have accomplished.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 27, Lydia Maniatis commented:

      A little way into their introduction, the authors of this article make the following clear and unequivocal assertion:

      “These findings underscore the idea that the encoding of objects and shapes is accomplished by a hierarchical feedforward process along the ventral pathway (Cadieu et al., 2007; Serre, Kouh, Cadieu, Knoblich, & Kreiman, 2005; Serre, Oliva, & Poggio, 2007; Van Essen, Anderson, & Felleman, 1992). The question that arises is how local information detected in the early visual areas is integrated to encode more complex stimuli at subsequent stages.”

      As all vision scientists are aware, the processes involved at every level of vision are both hierarchical and parallel, feedforward and feedback. These processes do not consist of summing “local information” to produce more complex percepts; a small local stimulus change can reconfigure the entire percept, even if the remaining “local information” is unchanged. This has been solidly established, on a perceptual and neural level, on the basis of experiment and logical argument, for many decades. (The authors' use of the term “complex stimuli,” rather than “complex percepts” is also misjudged, as all stimuli are simple in the sense that they stimulate individual, retinal photoreceptors in the same, simple way. Complexity arises as a result of processing - it is not a feature of the retinal (i.e. proximal) stimulus).

      The inaccurate description of the visual process aligns with the authors' attempt to frame the problem of vision as a “summation” problem (using assumptions of signal detection theory), which, again, it decidedly is not. If the theoretical relevance of this study hinges on this inaccurate description, then it has no relevance. Even on its own terms, methodological problems render it without merit.

      In order to apply their paradigm, the authors have constructed an unnatural task, highly challenging because of unnatural conditions - very brief exposures resulting in high levels of uncertainty by design, resulting in many errors, and employing unnaturally ambiguous stimuli. The task demands cut across detection, form perception, attention, and cognition (at the limit, where the subjects are instructed to guess, it is purely cognitive). (Such procedures may be common and old (“popular” according to the authors), but this on its own doesn't lend them theoretical merit).

      On this basis, the investigators generate a dataset reflecting declining performance in the evermore difficult task. The prediction of their particular model seems to be generic: In terms of the type of models the authors are comparing, the probability of success appears to be 50/50; either a particular exponent (“beta”) in their psychometric function will decline, or it will be flat. (In a personal communication, one of the authors notes that no alternative model would predict a rising beta). The fitting is highly motivated and the criteria for success permissive. Half of the conditions produced non-significant results. Muscular and theory-neutral attempts to fit the data couldn't discover a value of “Q” to fit the model, so the authors “have chosen different values for each experiment,” ranging from 75 to 1,500. The data of one of five subjects were “extreme.” In addition, the results were “approximately half as strong” as some previous reports, but “It ... remains somewhat of a mystery as to why the threshold versus signal area slopes found here are shallower than in previous studies, and why there is no difference in our study between the thresholds for Glass patterns and Gabor textures.” In other words, it is not known whether such results are replicable, and what mysterious forces are responsible for this lack of replicability.

      It is not clear (to me) how a rough fit to a particular dataset, generated from an unnaturally challenging task implicating multiple, complex, methodologically/theoretically undifferentiated visual processes, of a model that makes such general, low-risk predictions (such as can be virtually assured by a-theoretical methodological choices) can elucidate questions of physiology or principle of the visual, or any, system.

      Finally, although the authors state as their goal to decide whether their model “could be rejected as a model of signal integration in Glass pattern and Glass-pattern-like textures” (does this mean they think there are special mechanisms for such patterns?), they do not claim to reject the only alternative that they compare (“linear summation”), only that “probability and not linear summation is the most likely basis for the detection of circular, orientation-defined textures.”

      It is not clear what the “most likely” term means here. Most likely that their hypothesis about the visual system is true (what is the hypothesis)? Most likely to have fit their data better than the alternative? (If we take their analysis at face value, then this is 100% true). Is there a critical experiment that could allow us to reject one or the other? If no alternatives can be rejected, then what is the point of such exercises? If some can be, what would be the theoretical implications? Is there a value in simply knowing that a particular method can produce datasets that can be fit (more or less) to a particular algorithm?

      The "summation" approach seen here is typical of an active and productive (in a manner of speaking) subdiscipline (e.g. Kingdom, F. A. A., Baldwin, A. S., & Schmidtmann, G. (2015). Modeling probability and additive summation for detection across multiple mecha- nisms under the assumptions of signal detection theory. Journal of Vision, 15(5):1, 1–16; Meese, T. S., & Summers, R. J. (2012). Theory and data for area summation of contrast with and without uncertainty: Evidence for a noisy energy model. Journal of Vision, 12(11):9, 1–28; Tyler, C. W., & Chen, C.-C. (2000). Signal detection theory in the 2AFC paradigm: Attention, channel uncertainty and probability summation. Vision Research, 40, 3121–3144.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 10, Lydia Maniatis commented:

      This seems rather an odd choice of theme for a "Research Topic" in that it inspired a collection of papers with no conceptual coherence, out of many others that could have been selected on the basis of a common technique. It is somewhat like saying we'll put together an issue of physics papers that all used spectrometers. It's not really a "topic" unless you're specifically focussing on, e.g., pros and cons of the method.

      The editors' summary expresses the situation well: "In sum, this Research Topic issue shows several ways to use diverse kinds of noise to probe visual processing." As discussed in their exposition, noise has been historically used in multifarious ways for multifarious purposes.

      I think the emphasis on technique rather than on theoretical problems is symptomatic of the conceptual impoverishment of the field. The use of the term "probe" has become common in this field, at least, indicating that a study is an exercise in a-theoretical data collection, rather than a methodic attempt to answer a question.

      I would also add that noise as a technique to probe normal perception in normal conditions should be employed with caution, since it does not characterise normal scenes, but rather places unusual stress on the system which may respond in unusual ways.

      Predictably, the results of the articles described seem undigested and of unclear value: E.g. "Hall et al. (2014) find that adding white noise increased the center spatial frequency of the letter-identification channel for large but not small letters;" (so...? how large is large...?) "Gold (2014) use pixel noise to investigate the visual information used by the observer during a size-contrast illusion. By correlating the observers' classification decision with each pixel of the noise stimuli, they find that the spatial region used to estimate the size of the target is influenced by the size of surrounding irrelevant elements" (or your theoretical definition of "irrelevant" needs adjustment).

      If the goal of this issue was to show that you can make noise and get published, then it's a big success.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 20, David Keller commented:

      Why the blinding of experimental subjects should be tracked during a study, from start to finish

      I wish to address the points raised by Folmer and Theodoroff in their reply [1] to my letter to the editor of JAMA Otolaryngology [2] concerning issues they encountered with unblinding of subjects in their trial of repetitive transcranial magnetic stimulation for tinnitus. These points are important to discuss, in order to help investigators optimize the design of future studies of therapies for tinnitus, which are highly subject to the placebo, nocebo, Pygmalion, and other expectation effects.

      First, Folmer and Theodoroff object to my suggestion of asking the experimental subjects after each and every therapy session whether they think they have received active therapy or sham (placebo) therapy in the trial so far (the "blinding question"). They quote an editorial by Park et al [3] which states that such frequent repetition of the blinding question might increase "non-compliance and dropout" by subjects. Park's statement is made without any supportive data, and appears to be based on pure conjecture, as is his recommendation that subjects be asked the blinding question only at the end of a clinical trial. I offer the following equally plausible conjecture: if you ask a subject the blinding question after each session, it will soon become a familiar part of the experimental routine, and will have no more effect on the subject's behavior than did his informed consent to be randomized to active treatment or placebo in the first place. Moreover, the experimenters will obtain valuable information about the evolution of the subjects' state of mind as the study progresses. We have no such data for the present study, which impairs our ability to interpret the subjects' answers to the blinding question, when it is asked only once at the end of the study.

      Second, Folmer and Theodoroff state that I "misinterpreted" their explanation of why so many of their subjects guessed they had received placebo, even if they had experienced "significant improvement" in their tinnitus score. They object to my characterization of this phenomenon as due to the "smallness of the therapeutic benefit" of their intervention, but my wording summarizes their lengthier explanation, that their subjects had a prior expectation of much greater benefit, so subjects incorrectly guessed they had been randomized to sham therapy even if they exhibited a small but significant benefit from the active treatment. In other words, the "benefit" these subjects experienced was imperceptible to them, truly a distinction without a difference.

      A therapeutic trial hopes for the opposite form of unblinding of subjects, which is when the treatment is so dramatically effective that the subjects who were randomized to active therapy are able to answer the blinding question with 100% accuracy.

      Folmer and Theodoroff state that, in their experience, even if subjects with tinnitus "improve in several ways" due to treatment, some will still be disappointed if their tinnitus is not cured. Do these subjects then answer the blinding question by guessing they received placebo because their benefit was disappointing to them, imperceptible to them, as revenge against the trial itself, or for some other reason? Regardless, if you want to know how well they were blinded, independent of treatment effects and of treatment expectation effects, then you must ask them early in the trial, before treatment expectations have time to take hold. Ask the blinding question early and often. Clinical trials should not be afraid to collect data. Data are good; more data are better.

      References:

      1: Folmer RL, Theodoroff SM. Assessment of Blinding in a Tinnitus Treatment Trial-Reply. JAMA Otolaryngol Head Neck Surg. 2015 Nov 1;141(11):1031-1032. doi: 10.1001/jamaoto.2015.2422. PubMed PMID: 26583514.

      2: Keller DL. Assessment of Blinding in a Tinnitus Treatment Trial. JAMA Otolaryngol Head Neck Surg. 2015 Nov 1;141(11):1031. doi: 10.1001/jamaoto.2015.2425. PubMed PMID: 26583513.

      3: Park J, Bang H, Cañette I. Blinding in clinical trials, time to do it better. Complement Ther Med. 2008 Jun;16(3):121-3. doi: 10.1016/j.ctim.2008.05.001. Epub 2008 May 29. PubMed PMID: 18534323.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Apr 07, Lydia Maniatis commented:

      Readers should read the last paragraphs of this article first. It indicates that the current results contradict the authors' previous results, and that they have no idea why that is. Nevertheless, they assume that one of the two must be right, and use a crude rule of thumb (the supposedly "simpler" explanation) to make their choice. I would take the third option.

      "We see two possible explanations for the inconsistency between our previous work and that here:

      The correct conclusions about the extent of contrast integration are drawn in our current work, with previous work being compromised by the loss of sensitivity with retinal eccentricity. For example, Baker and Meese (2011) built witch's hat compensation into their modeling, but not their stimuli (in which they manipulated carrier and modulator spatial frequencies, not diameter). A loss of experimental effect in the results (such as that in Figures 2a and 3a here) limits what the analysis can be expected to reveal. Indeed, Baker and Meese (2011) found it difficult to put a precise figure on the range of contrast integration, and aspects of their analysis hinted at a range of >20 cycles for two of their three observers. Baker and Meese (2014) made no allowance for eccentricity effects in their reverse correlation study. The contrast jitter applied to their target elements ensured they were above threshold, and so the effects of contrast constancy should come into play (Georgeson, 1991); however, we cannot rule out the possibilities that either (a) the contrast constancy process was incomplete or (b) internal noise effects not evident at detection threshold (e.g., signal dependent noise) compromised the conclusions.

      The correct conclusions about the extent of contrast integration come from our previous work. Our current work points to lawful fourth-root summation, but not necessarily signal integration across the full range. On this account, signal integration takes place up to a diameter of about 12 cycles and a different fourth-root summation processes take place beyond that point. For example, from our results here we cannot rule out the following possibility: Beyond an eccentricity of ∼1.5° the transducer becomes linear and overall sensitivity improves by probability summation (Tyler & Chen, 2000), but uncertainty (Pelli, 1985; Meese & Summers, 2012) for more peripheral targets causes the slope of the psychometric function to remain steeper than β = 1.3 (May & Solomon, 2013).

      We think Occam's razor would favor the first account over the second."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 25, Lydia Maniatis commented:

      When Todd et al say that understanding their demonstrations requires “a broader theoretical analysis of shape from shading that is more firmly grounded in ecological optics,” do they mean that there are things about the physics of how light interacts with surfaces that we don't understand? What kind of empirical investigations are they suggesting need to be performed? What kind of information do they think is missing, optically-speaking?

      The fundamental issues are formal (having to do with form), not the details of optics and probabilities of illumination structure - as these authors have shown.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 17, Johan van Schalkwyk commented:

      SPRINT strikes me as a work of pure genius. Conceive the following scenario: Take a carefully selected mixture of high-risk patients with a variety of blood pressures and risk factors, making sure that the low threshold for selection into the study is below the current systolic blood pressure 'standard' of 140 mmHg. Apply two protocols, one that will aggressively reduce blood pressure in the intermediate term, another that's far more conservative. It's not beyond the bounds of possibility that a group of smart statisticians using current simulation methods and access to large hypertension databases might even predict the intermediate-term outcomes with a fair degree of confidence.

      The fact that this trial is exactly what every manufacturer of anti-hypertensives needs at this point, that it was stopped very early, and that it seems to contradict the prior evidence that has informed treatment guidelines to date should make us pause and think. Particularly as the results from a highly selected group, treated for a few years, may well be extrapolated to lifetime treatment of many or even most people with a systolic blood pressure of over 130 mmHg. Let's see how this is marketed.

      You may well choose to ignore the fact that one of the principal authors has received "personal fees" and/or grants from Bayer, Boehringer Ingelheim, GSK, Merck, AstraZeneca, Novartis, Arbor, Amgen, Medtronic and Forest. Your choice. Everyone has to make a living.

      Of greater concern might be the near tripling of the rate of acute kidney injury or acute renal failure in the intensive-treatment group, as these conditions are not cheap to manage. We might also be a bit puzzled that almost half of the "extra deaths" in the standard therapy group were NOT from cardiovascular causes. How on earth does this work?

      But anyone who understands a bit about MCMC methods should be in awe of the way this study was put together.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Aug 27, Robert Goulden commented:

      Hi David

      Many thanks for your comments. Your key objection, in the comment here, your letter to Am J Med, and your PubMed commons comment on my response to that letter, is around the use of occasional drinkers as the reference category. You are completely right that this use of a reference category is the principal (but not sole) reason that I don’t find evidence of a benefit to moderate alcohol consumption. I had hoped that the use of this reference category was adequately explained in the paper, but it’s an important point so I’m happy to discuss it further.

      Studies of the association between alcohol consumption and health are plagued with the problem of confounding. Non-drinkers and moderate drinkers differ in myriad important ways which conventional regression analyses cannot adequately adjust for (1). This leaves us with the difficult question of how to isolate the effects of alcohol on health, as opposed to the effect of all the other health-related variables which differ between non-drinkers and drinkers. One proposed solution is to use occasional drinkers as the reference category – not a novel “statistical maneuver” developed by me – but an approach used in the largest ever study of this question (2) (they were called light drinkers in that study, but the volume of alcohol consumed [0-2 g/day] meant they were occasional drinkers, as the accompanying editorial noted (3)). As I say in my paper, occasional drinkers “drink at levels for which a physiologic effect of alcohol is not plausible, but are likely to be more similar in other characteristics to moderate drinkers than long-term abstainers, thus reducing confounding”.
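      For readers unfamiliar with how the reference category enters such an analysis, here is a minimal, hypothetical sketch in Python with statsmodels. The data, category labels, and mortality probabilities are invented, and this simplified logistic regression is a stand-in for illustration only, not the paper's actual analysis.

      ```python
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 1000
      cats = rng.choice(["never", "occasional", "moderate", "heavy"], size=n)
      # Invented mortality probabilities, purely for illustration.
      p = {"never": 0.12, "occasional": 0.10, "moderate": 0.10, "heavy": 0.18}
      died = rng.binomial(1, [p[c] for c in cats])
      df = pd.DataFrame({"drink_cat": cats, "died": died})

      # Occasional drinkers as the reference level: each coefficient is then a
      # log-odds contrast against occasional drinkers, not against abstainers.
      model = smf.logit(
          "died ~ C(drink_cat, Treatment(reference='occasional'))", data=df
      ).fit()
      print(model.summary())
      ```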

      When this approach to addressing confounding is taken, my results are actually consistent with the wider literature, as Stockwell and Naimi note in their commentary on my paper (4). Their systematic review (5) reported “similar findings” and had my paper been included in their meta-analysis (it was published after their search window) it would have been coded as “high quality”. Other approaches which try to isolate the causal effect of alcohol and minimize confounding, such as mendelian randomization, also find no evidence of a benefit of moderate alcohol consumption (6).

      Making sense of all the evidence is tricky, but my gut instinct (for what it's worth!), based on my paper and the wider literature, is that moderate alcohol consumption (up to 21 drinks per week) likely has very little effect (positive or negative) on health. My paper only finds unambiguous evidence of harm for those drinking over 21 drinks per week; of course, whether that association is driven by residual confounding is hard to say, but it’s certainly consistent with well-established links between heavy alcohol use and adverse outcomes such as liver disease, certain cancers, and trauma.

      I think the claim that my abstract is ‘misleading’ isn’t warranted; I explicitly state in the abstract that occasional drinkers are the reference category, but by necessity this choice can only be fully explained in the main text.

      Finally, you make some explicit causal claims about alcohol’s effects on health which I think go beyond what the observational literature can tell us. You state “the non-drinker can lower his Hazard Ratio for all-cause mortality…by starting the light consumption of alcohol” and “an average non-drinker can significantly lower their risk of all-cause mortality by adding one standard 14 gram serving of ethanol per day”. Until we have an RCT to demonstrate this, I think claims about what alcohol can and cannot do should be made more cautiously. If such a study were performed, and indeed showed benefit, I’d be the first to raise a glass to the results!

      Regards,

      Rob

      References

      1: Naimi TS, Brown DW, Brewer RD, Giles WH, Mensah G, Serdula MK, et al. Cardiovascular risk factors and confounders among nondrinking and moderate-drinking U.S. adults. Am J Prev Med. 2005 May;28(4):369–73.

      2: Bergmann MM, Rehm J, Klipstein-Grobusch K, Boeing H, Schütze M, Drogan D, et al. The association of pattern of lifetime alcohol use and cause of death in the European Prospective Investigation into Cancer and Nutrition (EPIC) study. Int J Epidemiol. 2013 Dec 1;42(6):1772–90.

      3: Stockwell T, Chikritzhs T. Commentary: Another serious challenge to the hypothesis that moderate drinking is good for health? Int J Epidemiol. 2013 Dec 1;42(6):1792–4.

      4: Stockwell T, Naimi T. Study raises new doubts regarding the hypothesised health benefits of “moderate” alcohol use. Evid Based Med. 2016 Jul 7;ebmed-2016-110407.

      5: Stockwell T, Zhao J, Panwar S, Roemer A, Naimi T, Chikritzhs T. Do “Moderate” Drinkers Have Reduced Mortality Risk? A Systematic Review and Meta-Analysis of Alcohol Consumption and All-Cause Mortality. J Stud Alcohol Drugs. 2016 Mar;77(2):185–98.

      6: Holmes MV, Dale CE, Zuccolo L, Silverwood RJ, Guo Y, Ye Z, et al. Association between alcohol and cardiovascular disease: Mendelian randomisation analysis based on individual participant data. BMJ. 2014;349:g4164.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Oct 09, Lydia Maniatis commented:

      Dear Mike,

      Take your time. I think our conversation has run its course. You believe that Taylor and Francis may have acted properly, that it is sometimes OK for a publisher to ban an author. I believe the opposite, to the point that I find it difficult to believe they were within their legal rights. If and when you find the time to inform yourself to your satisfaction about this particular case, we can continue the conversation.

      If I had been particularly concerned with guarding my anonymity on PubPeer, you would have needed to be far more clever to unmask me. Your attempt to make this conversation about me (to label me as "highly emotional" - another evasive tactic on your part) and your triumph in "unmasking" me as Peer 14, as though this mattered (did I make any comments there I should be ashamed of?) is one of the reasons I support anonymity. Fortunately, my participation in future PubPeer discussions, when I choose to remain anonymous, won't be subject to such unworthy, ad hominem tactics.

      Regards, Lydia


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Oct 08, Michael R Blatt commented:

      Dear Lydia:-

      I can see that this is a highly emotionally-charged issue for you, whereas it appears less so for Jaime (though I can fully imagine why it might be otherwise). So, I do think it important to step back for a moment.

      Let me relate another matter to you. This pertains to a case that goes back more than a quarter of a century and took place in a university in the mid-West of the United States. I was peripherally associated with the case, primarily because of my knowledge of the professor involved (a good friend, as it happens, and we remain so still). The professor, let’s call him Fred for now, became embroiled in an argument with his head of institute. The details of the argument are less important than the consequences. Fred was so aggrieved by the way he felt he had been treated, that he became disruptive, aggressive and threatening to other academic staff and students alike. In the end, following disciplinary proceedings, Fred was barred from the institute and took ‘early retirement.’ From my own perspective, I could understand why Fred was aggrieved – I, too, felt that the initial handling of the argument was problematic – but I could also see that his reaction was inappropriate and disproportionate, and that the institute had no choice but to bar him. In effect, Fred was within his rights, but was unwilling to accept responsibility for his actions and their consequences.

      I am not suggesting that there is a parallel here, but I recall the story to point out that there are always two sides to an argument. In Fred’s case, I was close enough to the events that it was easy to see what was going on, from both sides. In Jaime’s case, I have gathered what information I can from RetractionWatch, but I note that the information is presented almost entirely from his perspective. In your insistence that I choose a side, do you really mean to deny me the right to hear both sides of the argument?

      Bests,

      Mike

      p.s. I’ll need to attend to my own day job now, so it may be a few days before I respond to any more comments.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 19, Lydia Maniatis commented:

      Below is my reply to the author. My polemic tone notwithstanding, I appreciate his taking the time to respond. I've interspersed my responses with his text.

      Author: A researcher I highly respect once told me that a good review paper is one that engages and stimulates the reader to think critically and broadly about a particular phenomenon. In this sense I appreciate the commentary by Prof. Maniatis, which suggests the review at least succeeded in stimulating critical thought in at least one distinguished reader. And I will add that, though my initial reaction was that Prof. Maniatis' commentary is a polemic, it is clear that my critic takes the issues very seriously and raises some important research questions suggesting future experimental work.

      Me: My commentary is a polemic, if by that you mean it raises serious objections. I'm not recommending future work along the same lines; I'm saying the rationale for such work is vague to non-existent, not least because it conflicts with known facts. (My (ongoing) comments on Ariely (2001), which this article and many others treat as “seminal,” as well as the other comments I've cited here, may make this clearer.) Disagreement with the facts is supposed to be disqualifying in science, unless and until theoretical alignment can be achieved. Avoiding the (easy) possibility of falsification by choosing the route that Runeson describes (quoted in my second comment on Dube and Sekuler) is not the same thing as subjecting a hypothesis to serious tests. Inconclusive tests and an avoidance of critical discussion to point out logical inconsistencies/inconsistencies with the phenomena ensure that more work is always needed.

      Author: Nonetheless, the response, roughly a third of which seems to revolve around a passing reference to work by Koffka that has little to no bearing on the main points and conclusions of the review (and which misses the point of the reference to Koffka)...

      Me: If I've missed the point, then please let me know what I've missed. I consider the mistake that I flagged serious because it implies that the work of the Gestaltists supports the work being discussed here, when in fact the opposite is true.

      Author:...contains a number of misinterpretations of the points made in the review. I take responsibility for any lack of clarity that may have produced this. I will detail a couple of examples that seem most directly related to the review (discussion of modeling methods, which don't fit algorithms as Prof. Maniatis stated but use algorithms to fit models, has to do with standard practice in the field itself and not the review).

      Me: Standard practice isn't necessarily good practice.

      Author: Encoding and retrieval of statistical information about stimuli, such as the average diameter of circles in a set of circles with different diameters, may or may not involve direct "perception" of the average in the sense used by Prof. Maniatis. The relevant experiments, I suspect, have yet to be conducted.

      Me: Encoding and retrieval of statistical information about stimuli, such as the average diameter of circles, may or may not actually happen. Scientific hypotheses are indeed guesses, but to be worthy of testing there needs to be a rationale and a clear articulation of associated assumptions. Relevant experiments pre-suppose that the idea has been developed enough to specify, for the purpose of testing, what these assumptions are. If investigators, after decades, haven't even decided whether direct perception is involved (which it clearly isn't – it's the nature of direct perception to be self-evident), then what have they been doing?

      Author: For this reason, "perceptual" may not be the best term, and several different terms for the effects we have described are in use (ensemble representation, statistical summary representation, etc). In my prior work (Dubé et al., 2014) I have discussed conceptual difficulties related to this term, and in my current work I favor "statistical summary representation" for this reason. However, the findings detailed in the review are indisputable.

      Me: I dispute them, partly along the lines of Runeson. I think when we look on a case-by-case basis, we find serious problems of method and/or misrepresentations in the interpretation.

      Author: There is a clear consensus in the literature that participants can accurately recall the average.

      Me: It's interesting that experimenters jump to the recall stage but skip the (presumably less challenging) perception stage. Why are observers being forced to recall what they are supposed to be perceiving?

      Author: If they can accurately recall it, they must have encoded and stored it. There is no question as to whether such memories exist. I just returned from VSS at which there were around 50 presentations on the topic of summary statistical representation, according to one talk, and the special issue of JoV in which our review appeared was devoted entirely to summary statistical representation. Clearly a decent number of scientists remains convinced that the effects exist!

      Me: The number of proponents is not an argument. I've criticized some of these authors' work, and when there's a response, it's not very convincing.

      Author: The final comment in the review, which Prof. Maniatis takes as our own admission that the existence of statistical representation is questionable, was meant to be somewhat tongue-in-cheek. How can the effects that have been attributed to remembered averages be due to memory for fine details of individual items when several studies, including the seminal one by Ariely (2001), demonstrate memory for the average despite chance performance on memory tests of the individual items from which the average was computed?

      Me: I'm in the process of commenting on Ariely (2001). His methods and interpretations are questionable and his arguments are full of inaccuracies and inconsistencies. It is an extremely casual, not a seminal, study.

      Author: It is in no way a statement that the effects don't exist (or even that we suspect they don't), even if taken at face value, and as I have detailed there is quite a large amount of empirical evidence to contradict the philosophical position of Prof. Maniatis. I will not detail all of these studies here, since a review detailing them already exists: Dubé and Sekuler (2015).

      Me: There are often other ways to interpret performances that have been attributed to some kind of mental calculation. The brain can use rules of thumb, as Gigerenzer has discussed. One example is how baseball players can catch a ball without subconsciously doing the complex math that some thought was required. When you a. ignore falsifying facts and b. don't consider alternative interpretations, then you have no doubts.

      Author: In my view, the conceptual nuances involved in discussion of summary statistical representation are suggestive of a need for more concrete, computational modeling, less verbal theorizing, and more neural data in this area.

      Me: If by verbal theorizing you mean critical discussion and exchange of ideas, I would say that more is desperately needed. The conceptual problems aren't “nuances,” they're huge. Useful data collection presupposes clear theories; otherwise it's a waste of time, money, and people. (As Darwin said, if you don't have a hypothesis, you might as well count the stones on Brighton Beach). The normalized view (espoused by journal editors) that critical discussion is the enemy of progress is convenient, but unscientific and wasteful.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 May 18, Chad Dube commented:

      A researcher I highly respect once told me that a good review paper is one that engages and stimulates the reader to think critically and broadly about a particular phenomenon. In this sense I appreciate the commentary by Prof. Maniatis, which suggests the review at least succeeded in stimulating critical thought in at least one distinguished reader. And I will add that, though my initial reaction was that Prof. Maniatis' commentary is a polemic, it is clear that my critic takes the issues very seriously and raises some important research questions suggesting future experimental work. Nonetheless, the response, roughly a third of which seems to revolve around a passing reference to work by Koffka that has little to no bearing on the main points and conclusions of the review (and which misses the point of the reference to Koffka), contains a number of misinterpretations of the points made in the review. I take responsibility for any lack of clarity that may have produced this.

      I will detail a couple of examples that seem most directly related to the review (discussion of modeling methods, which don't fit algorithms as Prof. Maniatis stated but use algorithms to fit models, has to do with standard practice in the field itself and not the review).

      Encoding and retrieval of statistical information about stimuli, such as the average diameter of circles in a set of circles with different diameters, may or may not involve direct "perception" of the average in the sense used by Prof. Maniatis. The relevant experiments, I suspect, have yet to be conducted. For this reason, "perceptual" may not be the best term, and several different terms for the effects we have described are in use (ensemble representation, statistical summary representation, etc). In my prior work (Dubé et al., 2014) I have discussed conceptual difficulties related to this term, and in my current work I favor "statistical summary representation" for this reason. However, the findings detailed in the review are indisputable. There is a clear consensus in the literature that participants can accurately recall the average. If they can accurately recall it, they must have encoded and stored it. There is no question as to whether such memories exist. I just returned from VSS at which there were around 50 presentations on the topic of summary statistical representation, according to one talk, and the special issue of JoV in which our review appeared was devoted entirely to summary statistical representation. Clearly a decent number of scientists remains convinced that the effects exist!

      The final comment in the review, which Prof. Maniatis takes as our own admission that the existence of statistical representation is questionable, was meant to be somewhat tongue-in-cheek. How can the effects that have been attributed to remembered averages be due to memory for fine details of individual items when several studies, including the seminal one by Ariely (2001), demonstrate memory for the average despite chance performance on memory tests of the individual items from which the average was computed? It is in no way a statement that the effects don't exist (or even that we suspect they don't), even if taken at face value, and as I have detailed there is quite a large amount of empirical evidence to contradict the philosophical position of Prof. Maniatis. I will not detail all of these studies here, since a review detailing them already exists: Dubé and Sekuler (2015).

      In my view, the conceptual nuances involved in discussion of summary statistical representation are suggestive of a need for more concrete, computational modeling, less verbal theorizing, and more neural data in this area.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 16, Morgan E Levine commented:

      Daniel Himmelstein, thank you for your comments. I will try to address them to the best of my ability.

      We acknowledge that the sample size is very small, which we mention in our limitations section of the paper. Because we are studying such a rare phenotype, there is not much that can be done about this. “Long-lived smokers” is a phenotype that has been the subject of a number of our papers, and that we think has strong genetic underpinnings. Despite the small sample size, we decided to go ahead and see if we could detect a signal, since there is evidence to suggest that the genetic influence may be larger for this phenotype compared to many others—something we discuss at length in our introduction section.

      To the best of our knowledge, the finding that highly connected genes contain more SNPs has not been published in a peer-reviewed journal. Therefore, we had no way of knowing or evaluating the importance of this for our study. Similarly, we used commonly used networks and do acknowledge the limitations of these networks in our discussion section. The network you link to was not available at the time this manuscript was accepted.

      We acknowledge the likelihood of over-fitting in our PRS, which is probably due to our sample size. This score did validate in two independent samples. Therefore, while it is likely not perfect, we feel that it may still capture some of the true underlying signal. We followed standard protocol for calculating our score (which we reference). In the literature there are many examples of scores that have been generated by linearly combining information from SNPs that are below a given p-value threshold in a GWAS. While not all of these replicate, many do. Our study used very similar methods, but just introduced one additional SNP selection criterion—SNPs had to also be in genes that were part of an FI network. I don't think this last criterion would introduce additional bias that would cause a type I error in the replication analysis. However, we still recognize and mention some of the limitations of our PRS. We make no claim that the score is free from error/noise or that it should be used in a clinical setting. In fact, in the paper we suggest future methods that can be used to generate better scores.
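
      For concreteness, here is a minimal sketch of this type of score construction - a linear combination of risk-allele counts weighted by GWAS effect sizes, keeping only SNPs below a p-value threshold that also map to genes in a functional-interaction (FI) network. The column names and data layout are illustrative assumptions, not the exact pipeline used in the paper:

        import pandas as pd

        def polygenic_risk_score(genotypes, gwas, p_threshold, network_genes):
            # genotypes: rows = subjects, columns = SNP IDs, values = risk-allele counts (0/1/2)
            # gwas: one row per SNP with columns 'snp', 'gene', 'beta', 'p' (assumed layout)
            keep = gwas[(gwas["p"] < p_threshold) & (gwas["gene"].isin(network_genes))]
            snps = [s for s in keep["snp"] if s in genotypes.columns]
            weights = keep.set_index("snp").loc[snps, "beta"]
            # Weighted sum of risk-allele counts gives each subject's score
            return genotypes[snps].mul(weights, axis=1).sum(axis=1)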

      We feel we have provided sufficient information for replication of our study. The minor alleles we used are consistent with those reported for CEU populations, which is information that is readily available. Thus, the only information we provide in Table S2 pertains to things specific to our study that can't be found elsewhere. Lastly, the binning of ages is not 'bizarre' from a biogerontology and longevity research perspective. A number of leaders in the field have hypothesized that the association between genes and lifespan is not linear (variants that influence survival to age 100+ are not the same as variants that influence survival to age 80+). Thus, using a linear model would not be appropriate in this case, and instead we chose to look at survival by age group.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 18, Clive Bates commented:

      Erroneous interpretations have been placed on these results and overly confident conclusions drawn from very small numbers of imperfectly characterised teenagers. The headline recommendations were based on the behaviour of six out of 16 baseline e-cigarette users in a sample of 694 adolescents deemed not to be susceptible to smoking. Large conclusions drawn from small numbers should always be a cause for caution, as discussed in this FiveThirtyEight article about the study:

      Ignore The Headlines: We Don’t Know If E-Cigs Lead Kids To Real Cigs by Christie Aschwanden, 11 September 2015

      One should expect the inclination to use e-cigarettes to be caused by the same things that cause an inclination to smoke - they are similar habits (the former much less risky), and it is quite likely that those who used e-cigarettes first would have become smokers first in the absence of e-cigarettes - a concept known as shared liability. A range of independent factors that create a common propensity to smoke or vape, such as parental smoking, a rebellious nature, delinquency etc., can explain the association between vaping and smoking incidence without the relationship being causal.

      The authors try to address this by characterising teenagers as non-susceptible to smoking if they answer “definitely no” when asked the following: “If one of your friends offered you a cigarette, would you try it?” and “Do you think you will smoke a cigarette sometime in the next year?”. The study concentrates on this group.

      This is not a foolproof way of characterising susceptibility to smoking, which in any case is not a binary construct but a probability distribution. Nor is susceptibility a permanent condition for any young person - for example, if a teenage girl starts seeing a new boyfriend who smokes, that will materially change her susceptibility to smoking. The fact that some were deemed unsusceptible to smoking but were already e-cigarette users is grounds for further unease - these would be more likely to be the teens for whom the crude characterisation failed.

      It is a near-universal feature of tobacco control research that the study presented is a wholly inadequate basis for any policy recommendation drawn in the conclusion, and this study is no exception:

      These findings support regulations to limit sales and decrease the appeal of e-cigarettes to adolescents and young adults.

      The findings do not support this recommendation, not least because the paper is concerned exclusively with the behaviour of young people deemed not susceptible to smoking and, within that group, a tiny fraction who progressed from vaping to smoking. Even for this group (6 of 16) the authors cannot be sure this isn't a result of mischaracterisation and that they would not have smoked in the absence of e-cigarettes. The approach to characterising non-susceptibility is far too crude and the numbers involved far too small to draw any policy-relevant conclusions.

      But this isn't the main limitation. Much more troubling is that the authors made this policy recommendation without considering the transitions among young people who are susceptible to smoking - i.e. those more likely to smoke, and also those much more likely to use e-cigarettes as well as or instead of smoking. This group is much more likely to benefit from using e-cigarettes as an alternative to smoking initiation, to quit smoking or cut down or as a later transition as they approach adult life.

      There are already findings (Friedman AS, 2015, Pesko MF, 2016, Pesko MF, Currie JM, 2016) that regulation of the type proposed by the authors designed to reduce access to e-cigarettes by young people has had unintended consequences in the form of increased smoking - something that should not be a surprise given these products are substitutes. While one may debate these findings, the current study makes no assessment of such effects and does not even cover the population that would be harmed by them. With these limitations, it cannot underpin its own over-confident and sweeping policy recommendation.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 17, Annika Hoyer commented:

      With great interest we noticed this paper by Nikoloulopoulos. The author proposes an approach for the meta-analysis of diagnostic accuracy studies modelling random effects while using copulas. In his work, he compares his model to the copula approach presented by Kuss et al. [1], referred to henceforth as the KHS model. We very much appreciate Nikoloulopoulos referring to our work, but we feel there are some open questions.

      The author shows in the appendix that the association parameter from the copula is estimated with large biases by the KHS model, and this is what we also saw in our simulation study. However, the association parameter is not the parameter of main interest; the parameters of main interest are the overall sensitivities and specificities. These were estimated well by the KHS model, and we considered the copula parameter more as a nuisance parameter. This was also pointed out by Nikoloulopoulos in his paper. As a consequence, we are surprised that the bad performance in terms of the association parameter led the author to the verdict that the KHS method is 'inefficient' and 'flawed' and should no longer be used. We do not agree here, because our simulation as well as his theoretical results clearly show that the KHS model estimates the parameters of actual interest very well. As an aside, we saw compromised results for the association parameter also for the GLMM model in our simulation.

      Nikoloulopoulos also wrote that the KHS approximation can only be used if the 'number of observations in the respective study group of healthy and diseased probands is the same for each study'. This claim is made at least three times in the article. But, unfortunately, there is no proof or reference, or even an example, to support this statement. Without a mathematical proof, we think there could be a misunderstanding of the model. In our model, we assume beta-binomial distributions for the true positives and the true negatives of the i-th study. These are linked using a copula. This happens on the individual study level because we wanted to account for different study sizes. For estimating the meta-analytic parameters of interest, we assume that the shape and scale parameters of the beta-binomial distributions as well as the copula parameter are the same across studies, so that the expectation values of the marginal distributions can be treated as the meta-analytic sensitivities and specificities. Of course, it is true that we used equal sample sizes in our simulation [1]; however, we see no theoretical reason why different sample sizes should not work. In a recently accepted follow-up paper on trivariate copulas [2] we used differing sample sizes in the simulation, and we again saw a superior performance of the KHS model as compared to the GLMM. In a follow-up paper [3], Nikoloulopoulos repeats this claim about equal group sizes but, unfortunately, does not answer our question [4,5] on that point.
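
      To make the data-generating idea concrete, here is a minimal simulation sketch of a single study under beta-binomial margins linked by a copula (a Gaussian copula and arbitrary parameter values are used purely for illustration; this is not the KHS estimation code). Because sensitivity and specificity are drawn at the study level and the counts are binomial given those draws, nothing in this construction requires equal group sizes:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def simulate_study(n_diseased, n_healthy, a1, b1, a0, b0, rho):
            # Correlated uniforms from a Gaussian copula with correlation rho
            z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
            u = stats.norm.cdf(z)
            # Beta-distributed study-level sensitivity and specificity
            sens = stats.beta.ppf(u[0], a1, b1)
            spec = stats.beta.ppf(u[1], a0, b0)
            # Binomial counts given the study-level accuracies (beta-binomial marginally)
            return rng.binomial(n_diseased, sens), rng.binomial(n_healthy, spec)

        # Deliberately unequal group sizes:
        print(simulate_study(n_diseased=40, n_healthy=120, a1=8, b1=2, a0=9, b0=1, rho=0.4))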

      As the main advantage of the KHS over the GLMM model we see its robustness. Our SAS NLMIXED code for the copula models converged better than PQL estimation (SAS PROC GLIMMIX) and much better than Gauss-Hermite quadrature estimation for the GLMM model (SAS PROC NLMIXED). This was true for the original bivariate KHS model, but also for the recent trivariate update. This is certainly to be expected, because fitting the KHS model reduces essentially to the fit of a bivariate distribution, but without the complicated computations or approximations for the random effects that are required for the GLMM and the model of Nikoloulopoulos given here. Numerical problems are also frequently observed if one uses the already existing methods for copula models with non-normal random effects from Liu and Yu [6]. It would thus be very interesting to learn how the author's model performs in terms of robustness.

      Annika Hoyer, Oliver Kuss

      References

      [1] Kuss O, Hoyer A, Solms A. Meta-analysis for diagnostic accuracy studies: A new statistical model using beta-binomial distributions and bivariate copulas. Statistics in Medicine 2014; 33(1):17-30. DOI: 10.1002/sim.5909

      [2] Hoyer A, Kuss O. Statistical methods for meta-analysis of diagnostic tests accounting for prevalence - A new model using trivariate copulas. Statistics in Medicine 2015; 34(11):1912-24. DOI: 10.1002/sim.6463

      [3] Nikoloulopoulos AK. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence. Statistical Methods in Medical Research 2015 11 Aug; Epub ahead of print

      [4] Hoyer A, Kuss O. Comment on 'A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence' by Aristidis K Nikoloulopoulos. Statistical Methods in Medical Research 2016; 25(2):985-7. DOI: 10.1177/0962280216640628

      [5] Nikoloulopoulos AK. Comment on 'A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence'. Statistical Methods in Medical Research 2016; 25(2):988-91. DOI: 10.1177/0962280216630190

      [6] Liu L, Yu Z. A likelihood reformulation method in non-normal random effects models. Statistics in Medicine 2008; 27(16):3105-3124.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Aug 12, Liam McKeever commented:

      Dear Ms. Gluck,

      Thank you for your comments. Training in the use of multiple databases was not the point of this paper. This paper was written in response to a recognition that many of the methods researchers use to perform their systematic reviews are not in fact systematic. A systematic review is meant to bring scientific methods into the process of writing a review. This means the methods of the review must be reproducible. Currently, many reviews that attempt to be truly systematic employ only the MEDLINE database because of its organized system of medical subject headings. If they do this correctly, they can perform an exhaustive search of the MEDLINE database. Our paper provided a technique for taking this systematic approach into an exhaustive search of both the MEDLINE and the PubMed databases, leading to a master formula, which could then be picked apart and improved upon by the scientific community. The techniques translate well to other databases and have recently been translated to EMBASE.
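
      As a purely illustrative sketch of that idea (the topic terms and query below are invented and are not the paper's master formula), a combined strategy can pair an exhaustive MeSH-based MEDLINE segment with a free-text segment restricted to PubMed records not yet indexed for MEDLINE, using PubMed's medline[sb] subset tag; here submitted via Biopython's Entrez interface:

        from Bio import Entrez  # Biopython

        Entrez.email = "you@example.org"  # NCBI requires a contact address

        # MeSH-based MEDLINE segment, plus free-text terms limited to non-MEDLINE PubMed records
        query = ('"Neoplasms"[Mesh] OR '
                 '((cancer[tiab] OR tumor[tiab] OR tumour[tiab]) NOT medline[sb])')
        handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
        print(Entrez.read(handle)["Count"])  # number of records matched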

      The argument that a complete systematic review should include an attempt to collect all relevant articles from multiple databases is well taken and commonly accepted. Preventing publication bias, however, is a much bigger issue than including multiple databases in a search strategy and was beyond the scope of this paper. To get all the null findings necessary to overcome publication bias would also mean including studies that either never entered or did not survive the peer review process. While such attempts should be made, a more achievable goal would be the thorough analysis of the publication bias present in a review where the search methodology is both explicit and reproducible.

      The selection of appropriate databases for a systematic review, as you implied, varies greatly by profession. It was therefore also outside the scope of this paper. I do think there is some value in considering what it actually means that not all databases contain all journals. I find it highly unlikely that a bio-medically relevant journal would not be indexed in MEDLINE simply because it neglected to apply. It is much more likely that it applied and was rejected. Just as a systematic review has inclusion criteria at the level of the articles selected, databases have inclusion criteria at the level of the journals selected for cataloging. The degree of research quality and scope of topic areas are considered and determined to either meet or not meet the standards of the database. When we select a database for a systematic review, we are defining our inclusion criteria at the level of the journal. With this in mind, assuming adequate search methods were used and provided a thorough analysis of publication bias has been performed, one could make the argument that a properly selected major database pairing, like MEDLINE and PubMed, may be acceptable for a systematic review. From a scientific methods perspective, we feel what is most important is that the inclusion criteria at all levels are explicit, and that is what this paper attempts to facilitate.

      Sincerely, Liam McKeever, MS, RDN


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 May 23, Stanton A Glantz commented:

      In June 2015 we published our paper “The smoking population in the USA and EU is softening not hardening” in the journal Tobacco Control. We showed that as smoking prevalence has declined over time, quit attempts increased in the USA and remained stable in Europe, US quit ratios increased (no data for EU), and consumption dropped in the USA and Europe. These results contradict the hardening hypothesis which is often used as part of the tobacco industry’s strategy to avoid meaningful regulation and protect its political agenda and markets, claiming that there is a need for harm reduction among those smokers who “cannot or will not quit.” Indeed, rather than “hardening” the remaining smoking population is “softening.”

      In February 2016 we received an email from Robert West, editor of the journal Addiction, informing us that Addiction was about to publish an article by Plurphanswat and Rodu entitled “A Critique of Kulik and Glantz: Is the smoking population in the US really softening?” whose sole purpose was to critique our Tobacco Control paper, and offered to let us respond to the criticism.

      The fact that Plurphanswat and Rodu sent their paper to Addiction was unusual because normal scientific procedure would have had them sending a letter to the editor of the journal that originally published the work (Tobacco Control).

      As detailed below, we did respond, noting that Plurphanswat and Rodu’s paper followed the well-established pattern of tobacco industry-funded researchers trying to create controversy about research inconsistent with industry interests, noting that Rodu had understated his financial ties to the industry, and, of course, showing that their criticism rested on a statistical error of their own making.

      Addiction rejected our response because we would not delete the first two points and limit our response only to the statistical issue.

      This blog post includes the response that Addiction rejected so that readers of Plurphanswat and Rodu’s critique do not think we did not have a response. We also include a summary of our interactions with the journal and the related email correspondence.

      THE REJECTED RESPONSE

      Consider the Source

      “Harm reduction” is a key part of the tobacco industry’s strategy to avoid meaningful regulation and protect its political agenda and markets.[1] This agenda is premised on the existence of “hard core” smokers who “cannot or will not” quit.[2-4] Our paper, “The smoking population in the USA and EU is softening not hardening”,[5] undermined this agenda because it showed that, contrary to the hardening hypothesis, as smoking prevalence has declined over time, quit attempts increased in the USA and remained stable in Europe, US quit ratios increased (no data for EU), and consumption dropped in the USA and Europe.

      There is a longstanding pattern of tobacco industry-funded experts writing letters criticizing work that threatens the industry’s position, first described in 1993 by then-JAMA Deputy Editor Drummond Rennie.[6] Rodu and various co-authors have written several such letters.[7-10] Another similarity to past efforts is industry-linked experts submitting critiques of a paper published in one journal to another,[11-15] which is also the case here, with this critique of our paper published in Tobacco Control being published in Addiction. One would have expected any criticism to have been published as a letter in Tobacco Control.

      Addiction requires “full disclosure of potential conflicts of interest, including any fees, expenses, funding or other benefits received from any interested party or organisation connected with that party, whether or not connected with the letter or the article that is the subject of discussion.” As with another investigator supported by the tobacco industry,[16] the conflict of interest statement Plurphanswat and Rodu provide may not truly reflect the extent of Rodu’s involvement with the tobacco industry. For example:

      • Rodu’s Endowed Chair in Tobacco Harm Reduction Research at the University of Louisville is funded by the U.S. Smokeless Tobacco Company (US Tobacco) and Swedish Match North America, Inc.[17]

      • Rodu is a Senior Fellow at the Heartland Institute, which has received tobacco industry funding.[18-20]

      • Rodu is a Member and Contributor to the R Street Institute, which has received tobacco industry funding.[19,21]

      • Before moving to Louisville, Dr. Rodu was supported in part by an unrestricted gift from the United States Smokeless Tobacco Company to the Tobacco Research Fund of the University of Alabama at Birmingham.[8]

      • Rodu was a keynote speaker at the 2013 Tobacco Plus Expo International, a tobacco industry trade fair to discuss “How has the tobacco retail business evolved; where was it fifteen years ago, where is it today and where is it going”.[22]

      • Rodu has worked with RJ Reynolds executives between at least 2000 and 2009 to help promote industry positions on harm reduction, including specific products.[23-26]

      The substance of Plurphanswat and Rodu’s criticism is that the statistically significant negative association between smoking prevalence and quit attempts and the positive association between prevalence and cigarettes smoked per day both become non-significant when more tobacco control variables are included in the model (state fixed effects, cigarette excise taxes, workplace smoking bans and home smoking bans). The problem with including all these variables is that it results in a seriously overspecified model, which splits any actual effects between so many variables that all the results become nonsignificant. The regression diagnostic for this multicollinearity is the Variance Inflation Factor (VIF); values of the VIF above 4 indicate serious multicollinearity. For the United States, adding all the other variables increases the VIF for the effect of changes in smoking prevalence from 1.8 to 16.7 in the quit-attempts model and from 1.8 to 17.9 in the cigarettes-per-day model. Plurphanswat and Rodu’s model is a textbook case of why one has to be careful not to put too many variables in a multiple regression.
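
      To illustrate the diagnostic (with simulated data; the variables below are hypothetical, not the actual survey measures), two nearly collinear predictors are enough to push the VIF far past the warning level of 4:

        import numpy as np
        import pandas as pd
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        def vif_table(X):
            # Prepend an intercept column, then compute each predictor's VIF
            Xc = np.column_stack([np.ones(len(X)), X.values])
            return pd.Series([variance_inflation_factor(Xc, i + 1) for i in range(X.shape[1])],
                             index=X.columns)

        rng = np.random.default_rng(0)
        x1 = rng.normal(size=200)
        x2 = x1 + 0.05 * rng.normal(size=200)  # nearly a duplicate of x1
        print(vif_table(pd.DataFrame({"x1": x1, "x2": x2})))  # both VIFs far above 4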

      The Plurphanswat and Rodu criticism misrepresents our conclusions. We did not argue that drops in prevalence caused increased quit attempts and reduced consumption; we simply present the observation that, as prevalence falls, quit attempts increase and consumption falls or remains constant, which is the exact opposite of what the hardening hypothesis predicts.

      The references and the full email correspondence with Addiction are available at http://tobacco.ucsf.edu/addiction-refuses-allow-discussion-industry-ties-criticism-our-“softening-paper”


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 10, Toni Schneider commented:

      The comment of Marco Weiergräber is full of speculation. But scientific progress depends on careful testing of novel hypotheses, especially when the results of a similar research project are opposite. Scientific reports must mention opposite results when new data are published. Siwek et al (Sleep 2014 May 1;37(5):881-92) did not refer to our results published a year earlier (Somnologie, September 2013, Volume 17, Issue 3, pp 185-192), but they have since speculated in an unscientific manner about our data, which were presented in our publication in an absolutely transparent way (single data points, in parallel to the resulting mean values). Our data are as reliable as the data from Siwek et al (2014). Taking this premise seriously, one has to think about the reasons for the differences in results in an objective way. Two different mouse models were used in the two sleep studies mentioned. Logically, one must look for differences in these two mouse models, which we have discussed in an objective and fair way, without questioning the careful investigation done by Siwek et al (2014). Thinking about the different remnants left in the two different Cav2.3-knockout mouse lines should generate new hypotheses instead of condemning the results of a competing laboratory. We estimate the risk that an aminoterminal Cav2.3 peptide (resulting from the expression of exon 1) contributes to calcium current disturbances to be lower than the risk posed by a "hemichannel".

      The last chapter of Marco Weiergräber's comments also remains speculative, as long as he has not tested the transfer capacity of the transmitter device under discussion (an F20EET radiotransmitter from DSI). We tested the frequency bands under standardized conditions and can confirm that the bandwidth is broader than he states. Repeating the same criticism again and again without mentioning, e.g., a correction published by us (Schneider T. and Dibué M., 2015 in Somnologie 17, 307-308) and without presenting new supporting data seems to be a fight for something other than progress in science.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 06, Lydia Maniatis commented:

      The claims of Blakeslee and McCourt are flawed on logical, methodological, theoretical, and empirical grounds.

      Perhaps the core error is the denial of perceptual facts, as described below (and as noted also by Gilchrist in his commentary on this article).

      In the Adelson checker-shadow illusion (to take one example), a check in apparent shadow looks white, while an equiluminant check, apparently in plain view, looks black. Aside from the additional presence of the apparent shadow, the experience of the two surfaces is similar to looking at a white check and at a black check under homogeneous illumination. Kohler (1935) describes a “real-world” version wherein a white and a black paper appear white and black, respectively, even if the illumination is adjusted so that the two surfaces are actually equiluminant.

      Imagine, now, what would happen if we asked a naive observer to report on “the intensity of the light coming from the surface of each of the two checks.” First, he or she would likely assume the question referred to the apparent illumination of the surfaces. In this case, the white, “shadowed” check should receive a lower rating than the black “plain-view” one. If we then tried to clarify that we want the observer to make matches based on “the amount of light each surface is sending to the eye,” (its luminance) I think the observer would have trouble a. understanding what we are asking and b. performing the required task. And I'm not sure anyone would be able to judge, with confidence, whether the two checks in the Adelson checkerboard are, or are not, in fact equiluminant. That's what makes the demo so impressive. Even if observers could estimate this value, the task would be difficult and the results unreliable. To achieve it, they would have to focus narrowly on each square, isolating it from the surround.

      Yet, Blakeslee and McCourt maintain that the latter task, which requires viewers to override their spontaneous, salient perceptual experience, is “strictly based on appearance,” while the former, effortless experience is “based on an inferential judgment.” If, however, we define “appearance” as “what something looks like,” then the authors' arguments are obviously false, as can be confirmed by any observer.

      The authors argue that the experience which is quite literally based on appearance is actually a product of learning. This is a major claim (implying that learning can actually alter a percept from black to white), but the authors offer no evidence for it. All of the arguments and available evidence are against it. As Gilchrist (2015) points out, even fish seem to naturally achieve this kind of learning. A child can see the Adelson checker-shadow effect as effortlessly as an adult. Our perceptions aren't affected by what we learn about the nature of light and the properties of surfaces; we don't have to learn how to judge when a surface is in shadow or merely darker than its neighbor, or when it is covered by various types of transparency; we don't even have to learn that at night things don't actually change color. Given the difficulty scientists have in analyzing and modelling perceived lightness, and given that massive, early exposure to artificial images mimicking illumination variation has no discernible effect on our perception of the “real” world, the claim that people go through a process of learning to make the inferences necessary to achieve veridical percepts in natural conditions does not seem credible. B and M have certainly not tested it.

      The quality that the authors argue is “strictly based on appearance” is a quality that they term “brightness,” defined as the perceptual correlate of luminance. The view that there is a perceptual correlate of luminance seems to be uncontroversial among lightness researchers (e.g. Kingdom, 2011; Gilchrist, 1999), although Anderson (2014) seems to define brightness (more properly, in my opinion) as apparent illumination. The claim seems easy to refute.

      Suppose we observe a set of surfaces lacking cues to differential illumination, and that some appear white, some gray, some black. Suppose, then, that we observe the same set of surfaces, at a different time, under a different degree of illumination. Assume that we have completely forgotten the previous experience with the surfaces, and are again asked to judge their white/gray/black character. Our responses will typically be similar to those we gave in the first (now forgotten) instance. In other words, even though the illumination (and consequently the luminance) of the surfaces will have changed, our responses will remain the same. If the range of luminances is complete enough, they will, in both cases, be correlated with reflectance and not with luminance. Thus, even under homogeneous illumination, we cannot say that we are perceiving luminance, or that perception is more direct than in other situations. As always, the percept is the product of a complex visual process based on luminance values and structural assumptions.

      When it comes to the case of non-homogenous (apparent) illumination, the authors seem to be treating illumination boundaries as though they were directly-perceived facts serving to support “inferentially” perceived lightness judgments. They say, for example, that “when the illumination component is clearly visible” the observer can use “brightness contrast” at the boundary to infer the magnitude of the illumination. There are a number of problems with this description.

      First, if a shadow is perceived – is “clearly visible” - as the cause of the luminance boundary, then viewers are perceiving a double-layer – a surface with lightness x and an apparent shadow of darkness y lying on top of it. They are not perceiving a single “brightness” value. The authors are using the term “brightness” when they actually mean luminance.

      Relatedly, the illumination boundary only becomes “clearly visible” after the visual process has inferred its presence based on the luminance structure – including the relative luminances at luminance boundaries - of the image. Whether the darker side of an edge will be perceived as being similar in reflectance, but lower in illumination than its neighbor, or as darker than its neighbor due to a lower reflectance, or any other combination of possibilities, depends on the global structure of the image. Given certain conditions, even a non-existent luminance edge may produce an apparent lightness difference, as in the case of illusory surfaces. So the argument that perceiving a surface as continuing beneath a “shadow” boundary is more inferential than perceiving a particular surface as gray due to its luminance relative to other surfaces in an image is naive. If “appearance-based” means “based on luminances”, then all perception is appearance-based. If it means that surfaces are perceived based on local luminance conditions, then it never is. Local conditions do not even determine photoreceptor activity in the horseshoe crab.

      As corroborating evidence, the authors point to a few references, including Blakeslee and McCourt (2008), which is supposed to prove the existence of “brightness” judgments. Their stimuli consist of the classic simultaneous contrast demo plus variations that create weak impressions of differential illumination. The “brightness” judgments are defined as those that arise when observers are instructed to focus narrowly on the targets. This is similar to applying a mask. Effectively, we are talking about the same visual process acting on a different stimulus, not about a different type of judgment. Due to the weakness of the structural cues to differential illumination in B and M's (2008) stimuli, it is very easy for observers to isolate the target in this way. The task would be much more difficult, and the results surely very different, if the stimulus had been the Adelson checkerboard.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jan 29, Martin Pusic commented:

      Thank you for this insightful review - we're glad that the article created such a rich discussion. Here are a couple of other thoughts:

      . "different tracks for different learners" - what a learning curve makes manifest is the time component of an assessment. As a medical educators, we have the privilege of teaching highly motivated learners who almost always get over whatever bar we set for them. If we grade ourselves as teachers by counting how many learners get over the bar, it is easy to perceive ourselves as successful; however, if instead we grade ourselves on the SLOPE of the learning curve, now we have a metric that challenges us to grade our efforts in terms of learning efficiency, which is amount of learning per unit of learning effort expended. This does three good things: 1) it orients us towards maximizing the most precious student commodity - time; 2) it prompts educators to consider more closely the PROCESS of learning as that's how you improve the slope and 3) it allows us to use the variability in paths/slopes to learn the best ways of teaching and learning. So it may be that we do not need customized learner development charts, as well as those work for pediatrics, but rather to learn from those outliers who fall away from the average curve so as to feed that back into the system to improve the learning for everyone.

      .in the "life-cycle of clinical education" we would also encourage you consider the asymptote. The asymptote defines the "potential" of a learning system. "How good can we possibly be, if we used this system an infinite number of times?" Improving the slope means we get people up to competence more efficiently. Improving the asymptote means we get even better competence. In some cases we only need "x" amount of minimum competence and we're fine (think hand washing); but in most areas of medicine, we can always do better. The path to competence is all-important, but our learning systems would do well to also map out the path from competence to excellence, defined as being the very best any of us can be. The asymptote, along with the very shallow slope of the learning curve as it approaches it, gives us an idea of what excellence takes.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Jan 22, Yuriy Pankratov commented:

      This discussion could be more enlightening and even reach a consensus of some sort if our respected opponents, instead of making ungrounded accusations and avoiding inconvenient facts, tried to address the most serious issues raised in our comments at PubMed Commons and on the JNS website. These issues include: the stark contradiction between the EGFP and LacZ expression phenotypes shown in Fujita et al. and the data shown in previous publications (all of which went through rigorous peer review, by the way); the lack of direct evidence of notable impairment of synaptic transmission in dnSNARE mice and the existence of clear evidence of the opposite; and the large pool of evidence supporting a physiological role of astroglial exocytosis which does not rely on the dnSNARE mice at all. Neither the paper itself nor the authors' responses to comments (which basically repeat what was said in the article) address these issues.

      Still, we think that some consensus might be found. Before going to that point, we would like to clarify points raised by our opponents in their last post. 1) For the sake of unbiased discussion, when citing one paper showing a lack of VAMP2 expression in astrocytes (Schubert et al. 2011), one might also mention at least one paper from the large pool showing the opposite (Martineau et al. 2013).

      More importantly, one should not swap between quantitative and "all-or-none" kinds of reasoning at one's convenience. If we assume that an astrocytic VAMP2 expression level of a tenth of that in neurons is low enough to make VAMP2 unimportant for astrocyte function, then we have to assume the existence of a certain expression level below which the dnSNARE transgene will not significantly affect neuronal function as compared to astrocytes. It may be not a tenth but a hundredth; the exact fraction does not matter.

      We are thankful to our opponents for bringing up the example of tetanus and botulinum toxins. Even these deadliest of toxins act in a dose-dependent manner. At the levels of both the whole organism and single presynaptic terminals, smaller doses of these toxins (as compared to LD50 and IC50) have milder effects. So it is very likely that the effects of dnSNARE expression are dose-dependent (if one does not believe in homeopathy, of course).

      The same is applicable to the action of doxycycline, which is also dose-dependent. So one could not expect 100% inhibition of transgenes, especially with oral administration of Dox. To answer the first part of our opponents' comment 2), Figure 1O-R from Halassa et al. shows efficient but incomplete suppression by Dox, rather than “leaky” EGFP expression. To what extent the same is applicable to Fig. 2C of Fujita et al., let the reader decide. Of course, incomplete suppression by Dox is a downside of the tetO/tetA system, but this can be easily remedied by comparing On-Dox and Off-Dox data.

      2) Theoretically speaking, the concern that “neurons express the dnSNARE transgene at all” may be applicable to any glia-specific transgenic mice. One could not a priori expect absolute specificity of expression of neuronal and glial genes; the data of Cahoy et al. 2008 are a good illustration. This rather philosophical question goes far beyond the current discussion. There is no molecular genetic tool to ensure 100% glial specificity. In practice, one could only expect to obtain a negligible (again, in the relative sense) level of neuronal transgene expression and verify the lack of significant impact on neuronal function.

      3) Regarding the putative “dramatic and unpredictable” effects of neuronal dnSNARE expression, TeNTx and BoNT give a good indication of what to expect. However, dnSNARE mice do not show any notable deficit of motor or respiratory function. At the level of synapses, there was no evidence of any significant decrease (let alone complete inhibition) of vesicular release of the main neurotransmitters (Pascual et al. 2005; Lalo et al. 2014). On the contrary, our data show an impairment of signals triggered by activation of Ca2+ signalling selectively in astrocytes (Lalo et al. 2014; Rasooli-Nejad et al. 2014). We leave it to the reader to decide to what extent the available functional data support our opponents' notion that “synaptic transmission may directly be suppressed by dnSNARE expression in neurons” and that “Even very low levels of expression of dnSNARE in neurons invalidate any conclusion based on this transgenic mouse”.

      One might argue that the dnSNARE transgene could be expressed only in a certain subset of neurons or in some specific brain region, thus strongly affecting some specific function rather than causing a general, milder functional deficit. However, this is unlikely given the supposed basal leakiness of the tet-off system, and further experiments would be required to identify such regions/neuronal subsets.

      4) Addressing the second half of point 2): one can only wonder why, in 2012, already knowing that their results contradicted the data presented by that time in several studies, our respected opponents did not contact the authors of those publications to request mice from them. Again, one might only wonder why PCR data generated from 2 batches of mice have a sample size of n = 3-4 (meaning 1-2 tissues per batch).

      5) Regarding the intrinsic limitations of dnSNARE mice, anyone working with them is aware of the fact that the EGFP, LacZ, and dnSNARE genes were inserted independently. However, their expression is controlled by the same factors, so their expression probabilities depend on the same set of parameters and are therefore not truly independent, from a mathematical point of view. The correlation in expression of these transgenes is supported by their co-inheritance. Furthermore, the data of Halassa et al. show that 97% of cells expressing dnSNARE also express EGFP. We would like to emphasize that the opposite - the presence of a true mosaic expression pattern in dnSNARE mice, i.e. the existence of a number of individual cells expressing dnSNARE but not EGFP and a number of EGFP-only cells - has not been shown so far; Fig. 3 from Fujita et al. 2014 does not show this either.

      Thus, even assuming leakiness of the “tet-off” system, one might expect the probability of EGFP expression to be of the same order of magnitude as that of dnSNARE; this also agrees with the data of Fujita et al. So, in the absence of EGFP expression in a large population of neurons, the presence of even a small fraction of neurons expressing dnSNARE is very unlikely. From a mathematical point of view, the probability that a certain population of neurons expresses only dnSNARE falls exponentially with the size of that population.
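
      A toy calculation illustrates the exponential falloff (the per-cell probability is an assumption loosely based on the ~97% co-expression reported by Halassa et al., not a measured value):

        # If a dnSNARE-expressing cell fails to co-express EGFP with probability ~0.03,
        # the chance that k such cells are ALL EGFP-negative shrinks geometrically with k:
        p_no_egfp = 0.03
        for k in (1, 5, 10, 20):
            print(k, p_no_egfp ** k)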

      Finally, one could hardly deny the large difference in phenotype between the cohort of dnSNARE mice described by Fujita et al. and the cohorts of mice used by other groups. A point of some consensus could be that in some, still unidentified conditions, the tetA/tetO system may suddenly become leaky, causing some level of neuronal expression of the GFAP-driven dnSNARE, EGFP and lacZ transgenes. So, in experiments with dnSNARE mice, extra care should be taken to verify the lack of neuronal dnSNARE expression. This can be done by showing the absence of the surrogate reporters EGFP or lacZ in the neuronal populations of interest, combined with electrophysiological data showing the lack of a deficit in synaptic neurotransmitter release. This would be good practice for any glia-specific inducible transgene.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Jan 24, Todd Lee commented:

      As stated in our paper, we need the data from IMPROVE-IT to better inform us on the use of ezetimibe. However, rather than relying on a conference presentation and press releases, we will also need to await the peer-reviewed and sponsor-independent analysis of the IMPROVE-IT trial to judge the quality of the results and evaluate their impact on general practice. Given that an interim analysis was performed (and the study was subsequently re-sized), any multiple comparisons performed will require expert statistical review.

      Furthermore, given no other positive studies exist, it may be prudent to perform an independent individual patient data analysis of all similar studies to better refine or confirm the estimate of effect prior to making any final conclusions from one trial.

      The net conclusion of this study, if the presented data is taken at face value, is a number needed to treat (NNT) of 50 over 7 years for the composite outcome. Overall mortality was not reduced. At generic Canadian prices it would cost approximately $58,765 to prevent 1 event over 7 years (Ontario formulary price as of January 2015). However, at US brand name prices of approximately $8.50 per day (Lexicomp 2015) the cost of preventing one composite outcome would be more than $1,000,000.
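
      The quoted figures follow from simple arithmetic; a back-of-envelope check using the prices cited above:

        nnt, years = 50, 7
        us_daily_cost = 8.50  # US brand-name price per day, as cited
        # Treat 50 patients for 7 years to prevent one composite event:
        print(nnt * us_daily_cost * 365 * years)  # ~1,085,875 USD, i.e. "more than $1,000,000"
        cdn_total = 58_765  # quoted Canadian cost per event prevented over 7 years
        print(cdn_total / (nnt * 365 * years))  # implies roughly $0.46/day at generic prices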

      It is also important to note that IMPROVE-IT was a secondary prevention study (acute event within 10 days) and not primary prevention. In primary prevention, the NNT is likely much higher and the corresponding costs per event prevented would increase proportionately and likely be substantial even at generic prices. In our cohort 6/17 (35%) were receiving ezetimibe for primary prevention.

      Whether the drug lowers event rates in the absence of a statin remains unproven and cannot be inferred from this study. Nonetheless, it will be interesting to see the effects on monotherapy uptake given the publicity around this study and also when IMPROVE-IT is ultimately published.

      The impact of this study on the uptake of other drugs approved on the basis of LDL as a surrogate marker is also not to be underestimated. The issue of treating to specific LDL targets is currently being debated amongst experts after recent changes to the guidelines. It would be somewhat naive to think that there isn't a substantial market pressure behind bringing back targets to be measured and obtained through additional medications.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Nov 30, John P A Ioannidis commented:

      Dear Joshua,

      thank you again for your comments. I am worried that you continue to cut and paste to distort my sentences.

      1. The headline over my text was written by the Nature editors as their introduction to the paper, so perhaps you should blame them and ask them to replace it with "Here follows a horrible paper by Ioannidis". Yet, I think you would still be unfair to blame them, because their headline says "most innovative and influential", not just "most innovative". The terms "influential", "influence", "major influence" pervade my paper multiple times, but you pick one sentence with "innovative" instead, and interpret it entirely out of its context.
      2. The phrases "the most important" and "very important" are not identical. Very important papers may not necessarily be THE most important. But they are very important - and influential. [As an aside, honestly, this repeated cross-examining quotation-comment style makes me feel as if I am answering the Spanish Inquisition. Am I going to be burnt at the stake now (please!) or is there one more round of torture?]
      3. We agree we need evidence, more evidence - evidence is good, on everything, including the current NIH funding system, which has practically no evidence that it is better than other options, but still distributes tens of billions of dollars per year. Wisely, I am sure.
      4. "your list contains...". This is not my list. This is the Scopus list. Right or wrong, I preferred not to manipulate it. Your colleagues did manipulate it and did not even share the data on how exactly they manipulated it.
      5. You continue to use the term "innovative thinker" out of its context. I carefully scanned my paper again and I can't find the word "excellent". In my mind, a student who has authored as first author a paper that got over 1000 citations (and the paper is not wrong/refuted) is already worthy of being given a shot as a principal investigator. If you disagree, what can I say, feel free not to fund him/her. And please don't worry, most of these guys are not funded anyhow currently; many of them have even quit science. Hundreds of principal investigators who publish absolutely nothing, or publish nothing with any substantial impact, get funded again and again. Hurray!

      I am afraid it is unlikely there will be more convergence in our views at this point. A million thanks once again, I have learnt a lot from your comments.

      John


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Jul 15, Raphael Levy commented:

      Thanks Harald for commenting on my blog post about SmartFlares / Nanoflares with reference to this paper.

      I reproduce the conversation below. I hope it continues and others join in.

      Raphael


      "Hi everybody, as the correspending author of a Stem Cell paper in which we have used the SmartFlares on different pluripotent cells of human, murine and porcine origin I want to reply to two of the above mentioned questions.

      Why do we see a signal at all in the scramble control? I think one cannot expect a negative control that produces no fluorescent signal at all. The fluorophore may not be quenched by 100% and may be subject to degradation, especially when applied for a longer time (two days or more). Nevertheless, within 16 to 24 hours after the application of the nanoparticles we see a clear-cut difference of fluorescence intensity when comparing scramble control and gene-specific SmartFlares. http://www.ncbi.nlm.nih.gov/pubmed/25335772

      We believe that this difference is reliable and specific. We have selected freshly reprogrammed murine iPS cells based on their Nanog-specific fluorescence intensity in situ. In downstream experiments we could confirm that only colonies with a high fluorescence intensity expressed higher amounts of endogenous pluripotency factors and showed a superior capacity to differentiate. Therefore, we believe that these functional data strongly support the idea that the fluorescence intensity was indeed correlated to a specific interaction with the Nanog mRNA in these clones.

      Why do different cells take up varying amounts of SmartFlares? I think this difference is not surprising as the nanoparticles are engulfed by endocytosis. This process is influenced by the cell type, the differentiation status and the cellular ability to perform phago- and macropinocytosis. Therefore, we think that a uniform uptake rate cannot be expected."

      I replied "Hi Harald

      Thanks again for commenting here and sorry for the delay in replying. It is interesting that you see some differences but the big question that remains is how could the technology possibly work?

      It can only work if the particles escape endosomes, but: 1) there is no reason why they should, 2) this problem is not discussed in the original publication introducing the technology, 3) there is no direct evidence in the literature that it happens, and, 4) all the data we are accumulating indicates that the particles are indeed in vesicular compartments (more on this soon on the open notebook as we have just had our cell electron microscopy results this week).

      The images shown in your articles are low-magnification overviews of many cells, so the resolution does not allow any discussion of cellular localization. Do you have any higher resolution images that you could share? Do you have any (direct) evidence and/or proposed mechanism for endosomal escape?

      The unequal distribution of uptake (cell to cell variability) is also a big concern. I don't believe that it relates to differences between rates of uptake of different cells. Such differences would average out over an 18-hour period and they should also be seen in the dextran uptake. A possible interpretation would be some degree of nanoparticle association/aggregation before interaction with the cells (this is to be tested experimentally).

      Raphael"


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Sep 17, Ghassan El-Baalbaki commented:

      Authors of this comment are: Loose, T.<sup>1,3</sup>, Ashrah, L.<sup>1</sup>, Romero, M.<sup>1</sup>, Bégin, J.<sup>1</sup> and El-Baalbaki, G.<sup>1,2</sup>. Affiliations: Université du Québec à Montréal<sup>1</sup>, McGill University<sup>2</sup>, Université de Nantes<sup>3</sup>.

      In this study, the authors examine the efficacy of Acceptance and Commitment Therapy (ACT) in treating partner aggression. After we contacted the first author, she confirmed that this study corresponds to her doctoral thesis (Zarling, 2013). We read her thesis in order to gain further knowledge about the published study, and we found two important aspects of the work that need to be brought to light: 1) we find it difficult to conclude that partner aggression was specifically reduced, because of threats to internal validity; 2) there seem to be inaccuracies in their statistical calculations.

      The primary objective was to demonstrate that ACT reduces intimate partner aggression better than the placebo control group. The authors strongly suggest that ACT was indeed able to do so. However, several points call this statement into question.

      First of all, it is worth noting that partner aggression is never specifically assessed. One may think that the Conflict Tactics Scales-2-Physical Assault Scale (CTS-2) did assess this construct (Straus, Hamby, Boney-McCoy & Sugarman, 1996), but when taking into account the exact content of the administered questionnaire as reproduced in her thesis, it can be seen that the instructions were modified to include aggression towards “people we care about” (Zarling, 2013, p. 132) instead of specifically towards a partner. This modification is not mentioned in the published article. Hence, participants could hypothetically have committed aggressive acts exclusively towards a non-partner, such as a child or a friend. It is thus hard to conclude that aggressive behavior, specifically among couples, decreased as a result of therapy.

      Furthermore, after modifying the instructions, one of the items even became incoherent. More specifically, when taking into account the modified instructions of the questionnaire, the item stated “When upset or in a disagreement with someone you care about … have you became angry enough to frighten your partner?” (italics added) (Zarling, 2013, p. 132). It seems that this question was intended to ask whether the participant became angry enough to frighten someone that they “care about” and not their “partner”. For example, if the participants were (or became) single, they could have interpreted this question as asking whether they were angry enough to frighten someone who is not necessarily their partner. It is unclear how the authors wanted the participants to interpret this item. The way that the construct of partner aggression was assessed calls into question the internal validity of the study.

      It also remains questionable why the authors neglected to assess the attrition of couples during the study. Over the course of the study, it is possible that aggressive behavior decreased more in the ACT group as a result of more couples breaking up in this treatment condition than in the placebo control group. Even though the authors carried out both completer and intent-to-treat analyses and the results did not differ on measured variables, the attrition of couples was not measured. Hence, it is impossible to know whether the groups differed on partner attrition. It remains possible that participants continued to behave aggressively towards former partners, but again this cannot be asserted simply because it was not measured. Assessing partner attrition would have helped neutralize a pertinent threat to internal validity.

      Lastly, it is worth bringing up that the first author and two other investigators co-led all administered interventions (p. 201). Even though the authors state this in their article, after looking at the detailed explanation of the recruitment process of investigators, this choice does not seem justified. It seems that the first author was able to recruit investigators at no extra cost, and it is questionable why she did not recruit three instead of two, thus avoiding a potential threat to internal validity. The explanation provided in her thesis, but not in the article, also states that all the manipulation checks were carried out by the first author and supervised by the second and third authors (Zarling, 2013, p. 61). Because of the involvement of the authors in the experimental procedure, the results are potentially attributable to the experimenter expectancy effect: the tendency for researchers to unintentionally bias the results of their investigations in accord with their hypotheses. Taking the previously stated points into account, it seems difficult to conclude that this study shows that ACT specifically reduces partner aggression better than a placebo control group.

      On another note, the statistical validity of the study can be questioned:

      Throughout the results section of the paper (pp. 205–207), the authors report many beta values (β) for intercepts and slopes. If these are indeed beta values, they should vary between approximately 1 and -1 (Cohen, Cohen, West & Aiken, 2002). However, in many cases the reported values are outside of this range. For example the authors state “The ACT participants also reported significantly less psychological aggression, β = 2.05, SE = 0.71, t(97) = 8.33, p < .001, and physical aggression, β = 2.21, SE = 0.65, t(97) = 8.19, p < .001, at the 6-month follow-up assessment” (p. 8). This leads us to think that the authors are actually reporting non-standardized coefficients (B), because B values can vary outside of this range. However, if this is the case, then the t values reported become incoherent. t values should equal B/(2SE) (we retained the value of 2 after rounding up from 1.96, which corresponds to 95% of a normal distribution (Christensen, 1986)), but the reported t values do not correspond to this calculation. For example, let’s take another look at the previous citation and assume that it actually refers to B values. According to our calculations, t = B/(2SE) = 2.05/(2*0.71) = 1.44, but the authors report that t = 8.33, p < .001. More simply stated, if the authors are indeed reporting β values, then many of the β values seem to be impossible (outside of the ± 1 range), but if they are reporting B values, the t values seem to be impossible according to our calculations. We were wondering how these seemingly paradoxical results could be explained.
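
      To make the arithmetic reproducible, here is a minimal sketch using the coefficient and standard error quoted above. Note that the conventional Wald t statistic is simply B/SE (the factor of 1.96 belongs to the construction of a 95% confidence interval, not to the t value itself); neither formula recovers the reported t = 8.33.

      ```python
      # Coefficient and standard error quoted from the paper (assumed to be
      # unstandardized B values); the reported statistic is t(97) = 8.33.
      B, SE = 2.05, 0.71

      t_conventional = B / SE   # Wald t statistic = estimate / standard error
      t_comment = B / (2 * SE)  # the rougher 95%-interval check used above

      print(round(t_conventional, 2))  # 2.89
      print(round(t_comment, 2))       # 1.44 -- neither matches the reported 8.33
      ```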

      In conclusion, even though this research topic is of great interest, two main points should be taken into account. First, it is hard to state that ACT is an effective treatment for reducing partner aggression because 1) partner aggression was never specifically assessed, 2) partner attrition was not measured, and 3) there is potential bias due to the experimenter expectancy effect. Second, it is hard to draw conclusions from the statistics presented in the study because 1) if β values were indeed being reported, then they lie outside of the ± 1 range, and 2) if B values were actually reported, then the t values become incoherent.

      References

      Christensen, H. B. (1986). La statistique: démarche pédagogique programmée. Boucherville: Gaëtan Morin.

      Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2002). Applied multiple regression/correlation analysis for the behavioral sciences. New York: Routledge.

      Straus, M. A., Hamby, S. L., Boney-McCoy, S., & Sugarman, D. B. (1996). The revised conflict tactics scales (CTS2): Development and preliminary psychometric data. Journal of Family Issues, 17, 283–316.

      Zarling, A. (2013). A preliminary trial of ACT skills training for aggressive behavior. University of Iowa.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 Oct 18, David W Brandes, MS, MD, FAAN commented:

      I am very happy to see more published information about this topic. I have been speaking professionally about this topic for 10 years, and its importance in the management of MS patients is becoming increasingly recognized. This article confirms the often unrecognized frequency of this issue. When MS patients say they "are tired all the time," we need to think about the details. Physical fatigue, cognitive fatigue, psychological fatigue (depression) and sleepiness may all be part of this complaint.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 Aug 04, Paul Brookes commented:

      This paper addresses a very important point... namely, one of the proposed mechanisms by which ischemic preconditioning (IPC) is thought to bring about cardioprotection. It has been hypothesized that increased leakiness of the mitochondrial membrane to protons may lead to a lowering of ROS generation. We provided some evidence to support increased H+ leak in IPC in 2006 (Nadtochiy SM, 2006). Unfortunately, it's not immediately clear to me exactly how the authors measured H+ leak in this study, nor whether their ROS measurements were performed correctly.

      First, proton leak: the normal way this is determined is to make simultaneous measurements of membrane potential and respiration in a single chamber equipped with an oxygen electrode and another electrode sensitive to a lipophilic cation such as TPP+. The authors are to be commended on their choice to use a TPP+ electrode, which instantly makes this paper more quantitative than others using qualitative fluorescent probes such as TMRE. They measured membrane potential (as reported in Figure 3) using succinate as a substrate for complex II, with rotenone to inhibit complex I. All good so far. However, normally oligomycin is added to inhibit the ATP synthase as a potential source of leak. In addition, nigericin is added to equilibrate K+/H+ and thus ensure the potential is a measure of protonmotive force. As such, the potential measured here was not in a true state 4 condition, which is typically required for quantitation of proton leak.

      After describing membrane potential measurements, the methods section describes respiration and H+ leak methodology. It appears oligomycin was added this time (good!), but then something very odd happens... it is stated that "H+ leak was measured as the state 2 respiration rate required to maintain membrane potential at -150mV". Nowhere is it stated how that value of 150mV was arrived at. Normally, when doing these measurements, you set the mitochondria in state 4 (succinate, rotenone, oligomycin, nigericin), and then titrate the activity of the respiratory chain with an inhibitor such as malonate. Step-wise titration then gives you a series of membrane potential and respiration traces, a curve, from which the leak rate at any given potential can be read off. The question is, if such curves were made, how were they made (no mention of malonate anywhere), and why aren't they shown in the paper? The scary alternative explanation may be that they "measured" leak by imposing the 150mV membrane potential by titrating in uncoupler. This is incorrect - you can't add something that changes the leak as a way of measuring the leak. The other odd thing is that the average baseline membrane potential in the IR group was barely above 150mV, so some replicates in that group must have had a potential value below 150mV to begin with. How did the authors get those mito's UP to 150mV for the leak measurement? Overall, it would be a whole lot better if they just showed the full leak curves.
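
      For readers unfamiliar with the titration approach described above, here is a minimal sketch of how a leak rate is read off a malonate titration curve at a common comparison potential (all data points are hypothetical). The essential point is that the rate is interpolated from the measured curve, never imposed by adding uncoupler.

      ```python
      import numpy as np

      # Hypothetical step-wise malonate titration in state 4: paired readings of
      # membrane potential and respiration as the respiratory chain is inhibited.
      potential_mV = np.array([170, 163, 157, 150, 142])  # descending during titration
      respiration = np.array([85, 60, 44, 33, 25])        # nmol O2/min/mg, hypothetical

      # np.interp needs ascending x values, so reverse both arrays, then read off
      # the leak-driven respiration at the common comparison potential of 150 mV.
      leak_at_150 = np.interp(150, potential_mV[::-1], respiration[::-1])
      print(leak_at_150)  # 33.0 -- the proton leak rate at 150 mV for this curve
      ```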

      What about ROS? The problems here are two-fold. First it appears that SOD was not added to the incubations to scavenge any stray superoxide and turn it into H2O2 for the assay to pick up. Second, the calibration of the assay was performed incorrectly. It is stated in the methods that the signal was calibrated by "adding known concentrations of H2O2 to buffer solution containing horseradish peroxidase and Amplex-Red" The problem is, it is necessary to calibrate in the presence of mitochondria, so the signal you measure with the calibrating H2O2 is under the same condition as the measurement itself. The way this is commonly done is to just add a bolus of H2O2 at the end of each run. This has the advantage of making every run internally calibrated, which cuts down on noise (this is a very noisy assay). The outcome here is a bit odd... the values of H2O2 generation are in the range of 10-40 nM/min/mg protein. Ignoring the odd units (rates of things should be expressed in moles not Molar), let's assume they meant to write nmols/min/mg - that would put their rates about 1500-fold greater than typical for these conditions (e.g.Chen Q, 2003).
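
      A sketch of the internal calibration described above: a known H2O2 bolus added at the end of each run, with mitochondria still present, converts the fluorescence slope into a molar rate (all numbers hypothetical).

      ```python
      # End-of-run calibration bolus, added with mitochondria still in the chamber:
      bolus_nmol = 0.1         # known amount of H2O2 added (nmol), hypothetical
      bolus_delta_F = 2000.0   # fluorescence jump it produced (arbitrary units)

      # Measurement phase:
      slope_F_per_min = 150.0  # fluorescence slope during H2O2 generation (a.u./min)
      protein_mg = 0.25        # mitochondrial protein in the chamber (mg)

      rate = slope_F_per_min * (bolus_nmol / bolus_delta_F) / protein_mg
      print(rate)              # 0.03 -- in nmol H2O2/min/mg protein, not "nM"
      ```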

      So, TL/DR - good ideas, but more info' is needed on the proton leak method, and the ROS numbers are just wild. Also, lest anyone think I'm attacking this work because I happen to be one of the people who originally proposed proton leak --> lower ROS --> protection in IR injury, that's not the case. In-fact, my lab' is now pursuing a completely different downstream signaling mechanism, as a mediator of the protective effects of mitochondrial uncoupling, so if the results of this paper are true, my life just got a whole lot easier! I just want to be sure before I start citing it.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 Aug 06, Xudong Fu commented:

      Thanks for your comments - really interesting. I hope I can explain why I don't think alpha-ketoglutarate works as an antioxidant to extend lifespan.

      It has been shown that α-ketoglutarate is an antioxidant. However, antioxidants are not necessarily able to extend lifespan. Actually, the relationship between ROS and lifespan has not been established. An increase in ROS may potentially contribute to lifespan extension through activation of downstream defenses, and overexpression of antioxidant enzymes is not able to extend lifespan in mice (http://www.sciencedirect.com/science/article/pii/S0092867413006454, http://onlinelibrary.wiley.com/doi/10.1111/j.1474-9726.2008.00449.x/full).

      Our paper discovers that α-ketoglutarate can bind and inhibit ATP synthase and extend lifespan in C. elegans. The inhibition of ATP synthase is actually likely to generate more ROS. It has been shown that oligomycin, a known ATP synthase inhibitor, can induce ROS in vivo. In our hands, we also find that α-ketoglutarate can induce ROS (by DCFDA dye measurement) both in C. elegans and in cells. That suggests α-ketoglutarate is unlikely to increase lifespan simply through an antioxidant function; it is more likely that α-KG regulates lifespan through induction of ROS instead. We have actually explained why we don't think ROS plays a major role in the life-extension effect of α-ketoglutarate; see the Supplementary Information (http://www.nature.com/nature/journal/v510/n7505/extref/nature13264-s1.pdf). Briefly, although α-ketoglutarate can induce ROS, additional antioxidants do not decrease the lifespan-extension effect of α-KG in our hands. In addition, ROS has been suggested to activate TOR, but we observed that α-ketoglutarate decreases TOR.

      In terms of the dosage we used, it is known that α-KG is not highly membrane permeable. To solve this issue, we made octyl α-KG, an ester form of α-KG that is more membrane permeable. After octyl α-KG crosses the membrane, it is hydrolyzed and releases free α-KG. Most of the assays in Fig. 2 were done with octyl α-ketoglutarate instead of the original α-KG; that is the main reason we did not need to use the 8 mM dose for the Fig. 2 assays. We also tried octyl α-KG in lifespan assays (see Extended Data Table 2): only micromolar concentrations of octyl α-KG, instead of millimolar, were required for us to observe the lifespan extension. In addition, after treatment of cells with 400 uM octyl α-KG, or treatment of C. elegans with 8 mM α-KG, the in vivo concentration of α-KG increased comparably, by around 50-100%. This suggests that a 50-100% increase of endogenous α-KG can inhibit ATP synthase and extend lifespan, regardless of the form or concentration of α-KG used, consolidating our hypothesis that α-KG extends lifespan through direct ATP synthase inhibition. Also, because the endogenous increase of α-KG is only 50-100%, this concentration does not seem to be in line with the concentrations required for non-enzymatic reactions.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Apr 24, BSH Cancer Screening, Help-Seeking and Prevention Journal Club commented:

      The HBRC journal club read Scherer et al’s paper with interest. While flu vaccination is not the focus of our work, using metaphors as a manipulation to increase the likelihood of a behaviour, and the methods used to test the efficacy of doing so, resonated with our research team. Most of the group were unfamiliar with the metaphor literature and found the introduction to provide a useful summary of the field. The authors present a discussion of the role of risk perceptions and affect, but a more detailed discussion of how the two interact and their complexities might have provided a truer representation of the field.

      The group liked the authors’ attempt to measure whether participants were likely to move beyond intention (measuring participants’ desire to receive a reminder email to get a flu vaccination), without having to measure behaviour (which is difficult to do objectively). However, contrary to the authors, the finding that individuals who occasionally get a flu vaccine were more likely to request an email reminder was unsurprising to us, because individuals who always get a flu vaccination seemingly do not need reminding. A further strength of the paper is that non-emotive metaphors were considered (the flu as a weed), as this helped dispel the suggestion that the effect was due to vividness or violence (as might be the case with metaphors such as ‘beast’ and ‘riot’). The group wondered if a virus could also be considered a metaphor, given its use in computing. Additionally, it may have been of interest if the vaccination itself had also been part of the metaphor, for example the flu virus being described as a ‘weed’ and the flu vaccine as ‘weed killer’. In Hauser DJ, 2015, using congruent metaphors to describe an illness and the measure to prevent the illness increased behavioural intentions compared to just using a metaphor to describe the illness.

      While metaphors may increase the vividness of the flu and encourage individuals to engage, the group were concerned that using metaphors to manipulate behaviour may not be congruent with informed decision making, and could instead be considered coercive. We disagreed with the authors’ suggestion that metaphors might have a use in decision aids, which we believe have a role in helping individuals make informed decisions, not in swaying their opinion. We suggested that metaphors might be useful when the aim is to increase individuals’ understanding, rather than increasing intentions to engage in a behaviour. It may also be important to consider the unintended consequences of using metaphors; for example in the cancer field (the focus of our work), describing cancer as a battle may lead to suggestions that people who do not survive the disease did not fight hard enough. However, we acknowledge that metaphors are used ubiquitously in the media, which is difficult to control.

      The group felt that flu vaccination was a complex example to choose for these studies. Flu is a fairly common illness, which might result in participants having an accurate estimate of their risk of contracting the illness or of how serious it is. This might explain why the manipulation did not affect the mediators in the main analysis. Individuals are likely to have existing beliefs about vaccination (for example, beliefs about side effects and effectiveness), and the benefits of vaccination might not always be obvious (the individual does not contract the flu - a non-event - and herd immunity benefits the population). It would have been helpful to have been informed about the flu vaccination recommendations in the USA, where the study was conducted. Cultural differences in how the flu is appraised, treated and prevented between countries might alter the effect of metaphors. For example, in the group’s opinion, the metaphor ‘beast’ was an exaggerated conceptualisation of the flu and felt removed from the actual consequences of the virus.

      Study 3 was perceived by the group to be the most useful study of the paper as it used a validated measure of affect and a larger sample than Study 1. The ecological validity of studies 2 and 3 could have been increased if Study 1 had been a ‘think aloud’ study, whereby the authors could have gained a justification for the mediators of flu vaccination that were tested. The group thought that a strength of the studies is that the authors did consider the possible mediators of any effect of the metaphor. Other mediators that could have been considered include attitudes and a measurement of arousal (engagement). A ‘think aloud’ study might also help to ensure that metaphors are used appropriately, for example in Hauser DJ, 2015, use of an enemy metaphor reduced intentions for self-limiting cancer prevention behaviours (such as stopping smoking or limiting alcohol intake). The group also wondered how novel the ‘novel’ metaphors used in the studies actually were, something that pilot work could have investigated.

      The group thought it was interesting that the findings were not wholly consistent across the studies reported, and considered that this could have been a product of differences in the samples. The authors do not describe whether participants were randomised to each condition, and no baseline measurements were reported; both of these factors leave readers unclear about whether the sample was similar across each of the studies. It would also have been helpful to have a justification for the sample size chosen for each of the studies.

      Scherer et al. present a novel paper, which has provided readers with examples of how to measure the impact of using metaphors to increase intentions to receive a flu vaccination. Future work in this field should consider conducting pilot work to ensure the topic, manipulation and measures used are relevant to the population of interest. We caution readers to consider the unintended consequences of using metaphors to manipulate behaviour, including concern about the ethical implications this might have.

      Conflicts of interest: We report no conflicts of interest and note that the comments produced by the group are collective and not the opinion of any one individual.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 Mar 24, Karen Woolley commented:

      FINALLY...more attention is being paid to PRACTICAL issues about data sharing!

      Well done to Wilhelm, Oster, and Shoulson for reminding us that it takes resources (financial and nonfinancial) to share data. We should also remember that it takes resources (financial and nonfinancial) to publish data. This issue seems to have been lost in the hand-wringing taking place over low and slow publication rates.

      A robust systematic review, presented at the 2013 Peer Review Congress (organised by JAMA and the BMJ), identified “lack of time” as the main reason why researchers don’t write up manuscripts.<sup>1</sup> Clearly, many researchers need writing support (ie, from legitimate, ethical, highly trained and qualified professional medical writers; NOT ghostwriters). Similar to Wilhelm et al., we outlined the need to consider the cost issue if we want to enhance publication speed and quality.<sup>2</sup> We think our paper struck a chord - it was among the top 5 most downloaded papers from Current Medical Research & Opinion that year.

      Professional medical writers are trained to help make complex data understandable to different target audiences (eg, researchers, regulators, patients) and could, therefore, help address another critical point made by Wilhelm et al., “Standardization costs for data-sharing models include the additional effort required to share, beyond what is required of any high-quality clinical research, because it takes considerably more effort to organize and make data understandable to others.”

      Wilhelm et al. conclude that “Understanding and planning for the costs [for data sharing] at the outset of research can help realize the full potential of data sharing.” The same sentence could apply to publications ie, understanding and planning for the costs of manuscript writing at the outset of research can help realise the full potential of peer-reviewed publications.

      Authors and affiliations: Karen L. Woolley PhD CMPP,<sup>a</sup> Art Gertel MS,<sup>b</sup> Cindy Hamilton PharmD,<sup>c</sup> Adam Jacobs PhD,<sup>d</sup> Jackie Marchington PhD CMPP<sup>e</sup> (Global Alliance of Publication Professionals; www.gappteam.org). a. Division Lead, ProScribe – Envision Pharma Group; Adjunct Professor, University of the Sunshine Coast, Australia. b. VP, Regulatory and Medical Affairs, TFS, Inc., USA; Senior Research Fellow, CIRS. c. Assistant Clinical Professor, Virginia Commonwealth University School of Pharmacy; Principal, Hamilton House, USA. d. Director, Dianthus Medical Limited, UK. e. Director of Scientific Operations, Caudex Medical, UK.

      Disclosures All authors declare that: (1) all authors have or do provide ethical medical writing services to academic, biotechnology, or pharmaceutical clients; (2) KW’s husband is also an employee of ProScribe – Envision Pharma Group; all other authors’ spouses, partners, or children have no financial relationships that may be relevant to the submitted work; and (3) all authors are active in national and international not-for-profit associations that encourage ethical medical writing practices. No external sponsors were involved in this study and no external funding was used.

      References: 1. Scherer RW, Ugarte-Gil C. Authors’ reasons for unpublished research presented at biomedical conferences: A systematic review. http://www.peerreviewcongress.org/abstracts_2013.html#1 Accessed 27 February 2014. 2. Woolley KL, Gertel A, Hamilton C, Jacobs A, Snyder G (GAPP). Poor compliance with reporting research results – we know it’s a problem… how do we fix it? Curr Med Res Opin 2012;28:1857-1860.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 Oct 18, Pavel Baranov commented:

      What is the difference between Open Reading Frame (ORF) and Coding Sequence (CDS)?

      Thank you for the reply. I think the disagreement lies in our understanding of what an Open Reading Frame is.

      A simple and effective definition of an ORF is a sequence of codons not interrupted by stop codons: the nucleotide sequence is open for reading in one of the three (for RNA) or six (for dsRNA) frames. An ORF is an abstract notion; it can be found in any sequence. Both protein-coding and non-coding sequences have ORFs.

      The Coding Sequence (CDS) is the part of the RNA that encodes protein. The CDS often, but not always (exceptions are ribosomal frameshifting, stop codon readthrough, etc.), is located within a single ORF. A single ORF may contain more than one coding region if, for example, translation begins at different start codons within the same ORF.
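
      To make the distinction concrete, here is a minimal sketch (a hypothetical helper, not taken from any cited work) that enumerates ORFs under the simple definition above, i.e. maximal runs of codons uninterrupted by a stop codon, with no start codon required. Annotating a CDS would need the additional, and contested, choice of a start.

      ```python
      STOP_CODONS = {"TAA", "TAG", "TGA"}

      def orfs(seq, frame=0):
          """Yield (start, end) of maximal stop-free codon stretches in one frame."""
          codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
          run_start = None
          for i, codon in enumerate(codons):
              if codon in STOP_CODONS:
                  if run_start is not None:
                      yield (frame + 3 * run_start, frame + 3 * i)
                  run_start = None
              elif run_start is None:
                  run_start = i
          if run_start is not None:  # open all the way to the end of the sequence
              yield (frame + 3 * run_start, frame + 3 * len(codons))

      # Any sequence, coding or not, has ORFs under this definition:
      print(list(orfs("ATGAAATAAGGGCCCTGACCC")))  # [(0, 6), (9, 15), (18, 21)]
      ```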

      From your reply I presume that you suggest defining an ORF as a sequence of codons from start to stop (like a CDS). But the problem with this definition is that it is unclear what we should consider the start of an ORF. AUG? But not all AUGs are starts and not all starts are AUGs. Also, there are no starts in non-coding RNAs, but there are ORFs in non-coding sequences.

      Besides, if we use this definition, all eukaryotic mRNAs coding for multiple protein isoforms would need to be described as bi- or even polycistronic.

      I hope that this discussion brings some clarity to the terminology used, or at least draws attention to a potential confusion when terms such as ORF, CDS and cistron are not explicitly defined.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 Jun 18, Gauthier Bouche commented:

      We would like to thank the authors for carrying out this very important update and for emphasizing that patients with locally advanced rectal cancer should be informed of the lack of consensus about adjuvant chemotherapy after preoperative chemoradiotherapy. As the authors point out in their discussion, ‘the control of occult distant disease is becoming a major objective in treatment strategies’. They indicate upfront chemotherapy as one relevant option being tested in a large randomised trial (PRODIGE 23). Our international collaborative group interested in financial orphan therapies - i.e. therapies with high clinical but no or low commercial value - has identified over the years several therapeutic options that we think could prevent distant metastasis in patients with locally advanced rectal cancer. We would like to list three promising strategies supported by clinical and biological evidence. These strategies carry a low toxicity risk, are low cost and are easy to administer.

      Cimetidine is an H2-receptor antagonist which has been used for decades as an oral drug in the prevention and treatment of gastro-duodenal ulcers. Cimetidine has demonstrated a strong anticancer effect (Kubecova M, 2011) that is unlikely to be explained by one single mechanism. Amongst other things, cimetidine inhibits cancer cell adhesion via E-selectin inhibition (Matsumoto S, 2002), increases the antigen-presenting capacity of dendritic cells (Kubota T, 2002), increases lymphocyte infiltration in tumours (Lin CY, 2004) and reduces accumulation of myeloid-derived suppressive cells (Zheng Y, 2013), which may all contribute to the clinical findings in colorectal cancer patients. A 2012 Cochrane Meta-Analysis reviewed the five randomised trials performed in colorectal cancer patients and concluded that ‘cimetidine appears to confer a survival benefit when given as an adjunct to curative surgical resection of colorectal cancers’ (Deva S, 2012). The main unanswered question is duration of treatment. The randomized trial currently ongoing in 120 colorectal cancer patients in New Zealand and Australia (ACTRN12609000769280), which aims to look at the effect of perioperative cimetidine, should answer this question. Recruitment in this trial has been completed and 45% of all patients have a primary rectal cancer (http://meetinglibrary.asco.org/content/122801-143).

      Polysaccharide-K (PSK) is isolated and purified from the cultured mycelium of the Basidiomycete Coriolus versicolor. It was first approved as an oral anticancer drug in Japan in 1976 under the name of Krestin® and its use has been restricted to colorectal, gastric and small-cell lung cancer after re-evaluation by the Japanese authority in 1989 (Maehara Y, 2012, Fujiwara Y, 1999). Several randomized trials of various quality have been performed over the past four decades in Japan which overall indicate a benefit in colorectal cancer patients (Maehara Y, 2012). The drug is now available as a generic in Japan (Fujiwara Y, 1999). The most critical question is whether similar outcomes would be found in a non-Japanese population. Since this therapy has shown excellent safety and is already commercially available in Japan, this question could be easily tested in randomized trials. Mechanistically and according to the wealth of literature available from Japanese researchers, PSK seems to have multiple mechanisms of action, from direct anticancer activity to immune-mediated effects (Maehara Y, 2012). Obviously, any project or trial outside of Japan should involve Japanese clinicians and researchers experienced with PSK.

      Ketorolac is a Non-Steroidal Anti-Inflammatory Drug (NSAID) approved in the EU and in the USA and available for injection in the management of postoperative pain. In 2010, Forget et al. found in a retrospective study that breast cancer patients undergoing mastectomy who had received one single injection of ketorolac during surgery had a 63% reduction in the risk of recurrence compared to patients who did not, even after adjustment for known confounders (Forget P, 2010). This association was confirmed in patients receiving either ketorolac or diclofenac and undergoing breast-conserving surgery (Forget P, 2014), as well as in patients undergoing lung cancer surgery (Forget P, 2013). The work from Ben Eliyahu’s group provides a biological rationale for this, even though Ben Eliyahu used etodolac in combination with propranolol and the design of the mice experiments is different from the human observations (Benish M, 2008). Retrospective data in colorectal cancer patients should be available soon from the group of Patrice Forget (personal communication) and will hopefully further support this hypothesis. The potential benefit of perioperative administration of ketorolac will have to be balanced against a potential risk of anastomotic leakage. A meta-analysis of randomized trials indicates a non-significant doubling of the risk (2.4% in patients not receiving NSAIDs vs. 5.1% in patients receiving NSAIDs within 48 hours of surgery; Burton TP, 2013), but such an increase has not been observed when only a single dose of NSAID was administered. This will however need to be taken into consideration when designing a trial of perioperative ketorolac in colorectal cancer patients.

      We can deduce from the Bosset et al. article that there is an early peak of recurrence in the first two years, as more than 50% of the DFS events occurred during this period. This is in line with data from DeMicheli for breast (Demicheli R, 2010) and lung (Demicheli R, 2012) cancer and is another argument to target the perioperative period with drugs such as NSAID or cimetidine to prevent recurrence (Demicheli R, 2008).

      There are other options which could be considered. However, the strategies we propose have a common theme. Our goal is to address the fact that the perioperative period is one that is characterized by a systemic immune and pro-angiogenic milieu that is found in a typical wound healing response. Such a response is characterized by a post-operative surge of cytokines and growth factors into the systemic circulation that may awaken dormant micrometastases, thus accounting for the early recurrence pattern seen following surgery for many types of cancer (Retsky M, 2013). Thus short-term perioperative treatments aimed at countering such a response could have long-term beneficial consequences. Finally, these treatment options represent low-hanging fruits which, if their efficacy were to be proven in trials, could be rapidly implemented in practice in most places in the world with minimal toxicity and at a reasonable cost. We sincerely hope to see others supporting this type of research.

      Gauthier Bouche, Anticancer Fund, Strombeek-Bever, Belgium

      Pan Pantziarka, Anticancer Fund, Strombeek-Bever, Belgium & The George Pantziarka TP53 Trust, London, UK

      Lydie Meheus, Anticancer Fund, Strombeek-Bever, Belgium

      Patrice Forget, Department of Anesthesiology, Cliniques universitaires Saint-Luc, Université catholique de Louvain, Brussels, Belgium

      Vidula Sukhatme, GlobalCures, Inc; Newton MA 02459, USA

      Vikas P. Sukhatme, GlobalCures, Inc; Newton MA 02459, USA & Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA 02215, USA

      The authors declare that they have no competing interests.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 Jul 09, Mathias Wellmann commented:

      Comment regarding Kukkonen et al. Treatment of non-traumatic rotator cuff tears. A RANDOMISED CONTROLLED TRIAL WITH ONE-YEAR CLINICAL RESULTS, Bone Joint J 2014;96-B:75–81. PMID: 24395315

      Dear Authors,

      we read the study with great interest and discussed the clinical implications. We think the authors did a great job respecting formal aspects (prospectively randomized design, equivalent cohort size, homogeneous patient distribution). However, there are a few aspects of the study which are potentially misleading:

      1. The title of the study is misleading, or too general at best. A more precise title would be: Treatment of small, well compensated non-traumatic supraspinatus tendon tears. We think a study title should be as clear as possible and should especially answer the question: what issue was studied? The study of Kukkonen et al. exclusively investigated patients with small (<10mm) supraspinatus tendon tears that were well compensated regarding range of motion (full range of motion, an inclusion criterion). In its present form the title of the study may lead to a transfer of the results to patients with decompensated full-thickness tears of the supraspinatus tendon. Such a transfer is not valid and should be prevented by using a more precise study title.

      2. All patients with a passive external rotation <30° and an elevation <120° were excluded from the study, and this limitation of elevation and external rotation was defined as stiffness. However, a loss of elevation <120° is not a sufficient criterion to define shoulder stiffness. An adequate examination would have quantified passive glenohumeral abduction and external rotation (external rotation in comparison to the contralateral unaffected side). Further, the percentage of patients excluded because of shoulder stiffness should be indicated, since this is not a very common combination in patients with atraumatic rotator cuff tears. In the given form there is a risk that the study design systematically excludes patients with restricted range of motion caused by loss of strength and pain. This is a basic issue, since these are the typical patients in whom we consider rotator cuff refixation.

      3. The type of supraspinatus tears included in the study should be clearly defined (full-thickness versus partial-thickness tears). The authors use the term "supraspinatus tendon tear comprising <75% of the tendon insertion and documented with MRI". It is therefore unclear whether partial articular- and bursal-sided tears involving <75% of the tendon substance were also included in the study. In the results section the authors indicate the sagittal diameter of the tears treated by surgery. Does that mean that all tears were full-thickness tears?

      4. It is unclear whether any of the patients had been treated with physiotherapy prior to inclusion, or whether this was an exclusion criterion as well. How did the authors deal with patients who were randomized to surgery but did not agree to a surgical intervention?

      5. The authors did not perform a follow-up MRI or even sonography to determine the re-rupture rate of the rotator cuff repairs. This would have provided substantial information for estimating the clinical success of rotator cuff repair. If further follow-up investigations are planned in the study design, we strongly recommend performing MRI scans.

      6. The Constant Score may not be the most helpful score with regard to outcome discrimination for a patient population with "full range of motion", since it heavily weights range of motion (40 points) relative to pain (15 points) and the level of daily activities (20 points). For such patients, more detailed patient-reported outcomes (PROs) would be beneficial.

      We recommend revising the points raised above so that this publication reaches the highest possible scientific impact.

      Mathias Wellmann on behalf of the Shoulder committee of the AGA - Society for Arthroscopy and Joint Surgery


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 Jan 24, Tom Kindlon commented:

      The validity of using subjective outcome measures as primary outcomes is questionable in such a trial

      Crawley and colleagues suggest dropping school attendance as a primary outcome from the full study, and replacing it with self-report outcomes "such as the SF-36 or the Chalder Fatigue Scale" and using "school attendance as a secondary outcome" (1). And indeed, this is what they have done with the full study, which has two primary outcome measures: "Chalder Fatigue Scale at 6 months" and "SF 36 physical function short form at 6 months" (2). I question the wisdom of using self-report measures as primary outcomes for such a trial.

      The authors do not give much information about the details of the Lightning Process (they do mention it is "developed from osteopathy, life coaching and Neuro-linguistic programming (NLP)"), but here are some descriptions from other sources (individuals who have undertaken a course) (note these are not descriptions from participants in the trial itself):

      (a) "It felt very naughty but I whispered to one of the woman (sic) sitting next to me 'how are you, is this working for you?'. She was reluctant to answer, to say anything but that she was doing well would be to go against the process because that is a negative thought. It was pointless asking really. Still I wanted it to work, but I was starting to worry about the fact that I was not only not feeling any better the effort of doing the course, not getting my normal rest was making me feel worse. But these were negative thoughts. I started to ruthlessly suppress them like I had been shown." (3)

      (b) “They tell you that you're not allowed to say that you're ill anymore or that you have any symptoms. They force you to ignore the symptoms because they say that the symptoms don't really exist. They force you to do activities even though it's making you really ill, but you're not allowed to say so.” (4)

      Another, more thorough, description can be found on the Skeptic's Dictionary website (5).

      I have seen many similar descriptions to these in discussions in many private fora. Here's one example: "So the first step in the process is to recognise when you’re in THE PIT. Maybe sometimes or all the time. It’s important to recognise what you say to yourself as you go into the pit. For example “I’m feeling really ill this morning”, “if I do this, then I’ll get exhausted”, “last time I did this I got really ill for days”, “I can never eat this” etc. This takes some practise but we were assured that you always say something in your head as you go into the pit. As soon as you spot one of your “pit” phrases you want to STOP yourself right away. So imagine you’re on the mat and you start to say “I feel really ill today”. Before you get to the end of this phrase you will interrupt with a very firm, loud “STOP” (yes, talk out loud to yourself!) and jump into the STOP position as described above. So you jump outside the mat, to your left. Now you’re here you have interrupted your bad thought patterns."

      In essence, when one is doing the Lightning Process, both during the three days of training and after, you are to declare yourself well. You are not to say (or think) you have symptoms and you are not to say (or think) you have limitations. In other words, patients are 'trained' to dismiss and deny their symptoms and illness. Patients are instructed not to “do” ME or CFS anymore. In such a scenario, subjective reports from the patient are no longer reliable. Hence the need for objective measures in such trials.

      To a lesser extent, it can also be argued that subjective measures are not ideal for all participants in this trial, including the control group. With graded activity-oriented therapies, which all the participants undertake (1), there may be response bias (6). This was seen in a review of three Dutch trials of graded activity-oriented cognitive behaviour therapy (CBT) for CFS. While the CBT participants reported improvements in fatigue (and also SF-36 physical functioning, although that was not measured in all of the trials), no improvements in objectively measured activity were found over the control group (7). Similarly, a mediation analysis showed changes in physical activity were not related to changes in fatigue. A similar effect with CBT can be seen in the PACE Trial, the largest such trial in the CFS field (8).

      Although participants who had undergone CBT reported improvements in scores on both the Chalder Fatigue Scale and the SF-36 physical function subscale, no improvements were reported in the six-minute walking distance compared to the group who had undergone no additional individual therapy (all participants had specialist medical care). Similarly, CBT did not significantly reduce employment losses, overall service costs, welfare benefits or other financial payments (9).

      Objective measures of physical activity are one type of objective measurement that could be employed. These could be used not just to measure the total quantity of activity but also to check the intensity of activity, as people with ME/CFS may perform lower levels of more intensive activity (10). Tests of neuropsychological performance could also be used, given the impairments that have previously been reported (11). If necessary, a webpage could be created that could be accessed remotely.

      One particular problem with the 0-33 scoring method for the Chalder fatigue scale is that patients can give unusually low, or artificially good, scores.

      The scale consists of 11 questions. For each question (e.g. "Do you have less strength in your muscles?"), patients have to say whether they have the symptom: "Less than usual" (score of 0); "No more than usual" (score of 1); "More than usual" (score of 2) or “Much more than usual” (score of 3). So healthy people should score around 11 and indeed in the only population study I can recall that gave data for a healthy (no disease/current health problem) group, the mean was 11.2 (12).

      However, some unusual scoring is possible. This was seen in a multiple sclerosis trial of a CBT intervention similar to the graded activity-oriented CBT used in CFS (13). Participants in the CBT leg of the trial entered with a Chalder fatigue scale score of mean (sd): 20.94 (4.25). Two months after therapy, the mean (sd) was 7.90 (4.34), i.e. this group of patients with multiple sclerosis had a much better score than one sees in healthy people!

      Such averages could also hide non-responders or, possibly more seriously, people who deteriorated, depending on how the data were analysed. For example, if two participants deteriorated and ended up with Chalder fatigue scale scores of 32, but 8 others had an average score of 2, this would give an average score of 8, again giving the impression that this group had less fatigue than healthy people. This issue of supranormal scoring with likert scoring (0-3) can be dealt with by utilising the commonly used bimodal scoring method for the Chalder fatigue scale. However, both it and likert scoring suffer from ceiling effects in CFS populations, so other scales are likely preferable (14,15). If the Chalder fatigue scale is used, the original developers of the scale suggested, based on their analyses, that "it would probably be more useful to have two scores, one for physical fatigue and one for mental fatigue" (16).
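
      A minimal sketch of the scoring arithmetic (hypothetical scores): under 0-33 likert scoring, a group containing two badly deteriorated participants can still average below the healthy-population mean of about 11, whereas bimodal scoring is bounded at 0-11 per questionnaire and cannot drift "supranormally" low in the same way.

      ```python
      # Hypothetical 0-33 likert totals: two participants deteriorate to 32,
      # eight improve to 2 (well below the healthy-population mean of ~11).
      likert_totals = [32, 32] + [2] * 8
      print(sum(likert_totals) / len(likert_totals))  # 8.0 -> "better than healthy"

      # Bimodal scoring of the same 11 items: each item is 0 ("less/no more than
      # usual") or 1 ("more/much more than usual"), so totals are bounded by 0-11
      # and cannot drop below the floor a healthy responder would give.
      def bimodal_total(item_responses):  # items scored 0..3, as in likert scoring
          return sum(1 for r in item_responses if r >= 2)

      print(bimodal_total([3] * 11))  # 11: a deteriorated participant
      print(bimodal_total([0] * 11))  # 0: the floor; no supranormal scores possible
      ```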

      (Message has reached word limit - references are reply)

      Competing interests: I am the Assistant Chairperson and Information Officer for the Irish ME/CFS Association. All my work for the Association is voluntary.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 May 22, L. Norton commented:

      Nice work. Background is certainly an issue in ChIP-Seq studies, and I think one way to deal with this is to examine gene expression of the proposed targets following knock-down of the transcription factor. Although this is not a perfect solution, after doing these experiments it becomes apparent that very few ChIP-Seq binding events are 'functionally significant' in the way that we expect (i.e. affect transcription). As a side note, I wonder if you or anyone else has examined the impact of sonication vs. MNase digestion on this phenomenon?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 Mar 21, Jacob Puliyel commented:

      Williams and colleagues have described assessment of AEFI employing the algorithm described by Halsey (PMID: 22507656).

      I have posted two very detailed comments on an article by Tozzi (Tozzi AE, 2013), which discusses the same subject of the revised WHO classification of AEFI. I will not repeat the points I have made there, but they may be viewed with that article.

      As this is a matter of patient safety, I think it is important that the experts who understand the new scheme explain why the revision was needed and show that it will not miss opportunities to pick up new signals. The question is whether the new scheme would have picked up and flagged the signal of adverse effects like the RotaShield reactions, had the scheme been in use in 1999. The purpose of this posting is to invite the learned authors of this article on causality assessment to respond to the issues raised in the postings to the Tozzi article, and I propose to flesh out those concerns a little further in the context of the article in Pediatrics by Williams and colleagues.

      1) Williams and colleagues Williams SE, 2013 suggest that the first step in the general approach to evaluating serious AEFI is to establish a clear diagnosis using Brighton Collaboration case definitions.

      The second step is to consider known biological mechanisms.

      Neither of these would have been evident when the intussusception signal was picked up by the old scheme (and the vaccine was withdrawn expeditiously, preventing unnecessary distress to thousands of babies). Even today, although a case definition has been developed for intussusception, the biological mechanism is not clearly defined, and so the second step described by Williams et al cannot be completed.

      It was reported recently that Pentavalent vaccine (DPT co-administered with measles vaccine (MV) and yellow fever (YF) vaccine) is associated with increased mortality compared to MV + YF alone (Fisker AB, 2014). It is pertinent to mention that the biological mechanisms involved are not understood.

      Neither is the biological mechanism for increased female mortality in recipients of the high-titer Edmonston-Zagreb vaccine known, although this was first noticed two decades ago (PMID: 8237989; Aaby P, 1993).

      2) It will be instructive to look at how the new algorithm has failed to flag the deaths following the Pentavalent vaccine used in Asia (DPT + Hib + Hepatitis B); as a result, numerous children continue to be exposed to the risks of this vaccine.

      The glossary of the User Manual for the [Revised WHO classification](http://who.int/vaccinesafety/publications/aevimanual.pdf) suggests ways and means to rule out a causal association. It defines a causal association as a cause-and-effect relationship between the causative factor and a disease, with no other factor intervening in the process.

      There have been many deaths following use of this Pentavalent vaccine in Sri Lanka. The WHO vaccine safety committee examined 19 deaths in Sri Lanka, 14 of them between 2010 and 2012. In six of the 19, a congenital heart disease was reported.

      Does preexisting congenital heart disease rule out a causal association between the vaccine and the deaths? Under this definition the 6 deaths in children with heart disease were not causally related to the vaccination.

      The older Advisory Committee on Causality Assessment (Collet JP, 2000) looked at the problem more logically and holistically. For example, it noted that elderly persons with concomitant or preceding chronic cardiac failure can develop cardiac decompensation after influenza vaccination due to a vaccine-caused elevation in temperature or from the stress of a local reaction at the vaccination site. The vaccine is considered to have contributed to cardiac failure in this specific situation. It is obvious that with the older method of assessment of AEFI, caution would have been exercised when administering influenza vaccine to persons with preceding chronic cardiac failure, to avoid decompensation.

      The deaths in children with heart disease following administration of Pentavalent vaccine could well be due to decompensation. The Pentavalent vaccine must be used with caution in the presence of an underlying heart condition, even if asymptomatic. However, detection of asymptomatic heart disease prior to vaccination is impractical in developing countries, where the vaccine is administered by health workers who are barely literate. Is it prudent to use the vaccine under these circumstances, given the findings of the Sri Lanka investigation? The new system disregards this real danger.

      3) Step 2 Checklist 4 of the revised [WHO classification for causality assessment](http://who.int/vaccinesafety/publications/aevimanual.pdf) asks one to check whether the event can occur independently of vaccination (background rate). Thus it seems that until deaths from vaccine AEFI are frequent enough to increase the age-specific mortality rate in a statistically significant manner, they are to be ignored.

      The question of what background rate to use is not addressed specifically and this can further confound objective assessment of the AEFI. The Pentavalent vaccine in Asia is administered after 6 weeks of age. Would the local post-neonatal infant mortality rate (PN IMR) in the community before introduction of the vaccine be the comparator?

      Most of this post-neonatal IMR is made up of babies who are very sick with pneumonia, diarrhea, sepsis, meningitis, etc. The fact that the AEFI babies were brought by their mothers for routine immunization suggests that the children were not sick and that the mothers did not consider them likely to die in the next day or two. The comparator must really be the SIDS rate in the locality for babies of a comparable age.
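      To make the choice of comparator concrete, here is a minimal sketch (Python; every number is hypothetical and purely illustrative, not taken from any investigation) of how deaths observed within a short post-vaccination window could be compared against an assumed background SIDS rate:

      ```python
      from scipy.stats import poisson

      # All figures below are hypothetical, for illustration only.
      doses_given = 100_000        # first doses administered in the period
      sids_rate = 0.5 / 1000       # assumed local SIDS rate per infant-year
      window_days = 2              # post-vaccination window counted as AEFI
      observed_deaths = 5          # deaths reported within that window

      # Expected deaths in the window if all deaths were coincidental SIDS.
      expected = doses_given * sids_rate * (window_days / 365)

      # One-sided Poisson probability of observing at least this many deaths.
      p_value = poisson.sf(observed_deaths - 1, expected)
      print(f"expected = {expected:.2f}, P(>= {observed_deaths} deaths) = {p_value:.2g}")
      ```

      The sketch only illustrates the point above: substituting the much larger post-neonatal IMR for the SIDS rate inflates the expected count and can make a genuine signal disappear.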

      Deaths in Bhutan were investigated and local newspapers reported on the various official explanations. It was argued that the deaths could have been due to encephalitis, although there was little evidence for it. Officials explained that the encephalitis death rate in the years after the vaccine was introduced (even after adding AEFI deaths) had not increased significantly. This was considered sufficient grounds to accept the ‘coincidental encephalitis’ theory. One cheeky health official, however, pointed out that there were no cases of meningo-encephalitis reported among children below one year in the eight months when Pentavalent vaccine was suspended in Bhutan.

      4) Another factor related to the deaths following Pentavalent vaccine is that the vast majority have occurred after the first dose and fewer after the second dose. A random event or coincidental SIDS cannot explain this pattern. However, the new algorithm does not take this important factor into consideration.
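      The dose-pattern argument can also be framed statistically. A minimal sketch (Python; the counts are hypothetical, and it assumes roughly equal numbers of first and second doses were administered) of a binomial test of whether deaths cluster after dose 1:

      ```python
      from scipy.stats import binomtest

      # Hypothetical counts, for illustration only.
      deaths_after_dose1 = 17
      deaths_after_dose2 = 3

      # Null hypothesis: a coincidental death is equally likely to follow
      # dose 1 or dose 2 (assumes similar numbers of each dose were given).
      result = binomtest(deaths_after_dose1,
                         deaths_after_dose1 + deaths_after_dose2,
                         p=0.5, alternative='greater')
      print(f"p = {result.pvalue:.4g}")  # small p => clustering after dose 1
      ```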

      For all these reasons it would appear that the new algorithm is not a comprehensive means to assess serious adverse events. Its use will delay withdrawal of vaccines that result in serious AEFI and in the end it will erode confidence in the entire immunization programme and those who administer it.

      May I suggest that we go back to using the older scheme, namely the Brighton Classification of AEFI, until we find a better method to assess AEFI.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 Aug 11, Dr. Priyanka M. Jadhav commented:

      Ayurveda: science beyond scepticism and snake oil

      Priyanka M. Jadhav, Research Scientist

      Maharashtra University of Health Sciences

      Comment on: Ayurvedic medicine offers a good alternative to glucosamine and celecoxib in the treatment of symptomatic knee osteoarthritis: a randomized, double-blind, controlled equivalence drug trial

      Sir, this is the age when the concept of remote surgery does not surprise us, yet we are not prepared to accept a concept that has existed and survived for centuries: the science of Ayurveda. I read with great interest the study published by Chopra et al. and am excited that the New Millennium Indian Technology Leadership Initiative (NMITLI) project, set up with an interdisciplinary approach, is evaluating traditional and Ayurvedic drugs in clinical trials for the treatment of osteoarthritis [1]. It is interesting to see how a modern-medicine approach is being used to study the effect of an Ayurvedic medicine, assessed by a series of experiments [2–4]. The acceptance of Ayurveda, or any other traditional system of medicine, in western medicine is low; what causes frustration, however, are the allegations and misinformation being promoted.

      Recently, a letter was published in which the author termed Ayurveda quackery [5]. Patients, and the next generation who aspire to find a cure and a career in this age-old science, find such information very misleading and confusing. On the other hand, many other journals are publishing reports on studies of these drugs.

      The National Center for Complementary and Alternative Medicine clearly states the lack of randomized controlled clinical trials for complementary and alternative medicines and the need to build an evidence-based database to assess the plausibility of claims [6].

      As an Ayurvedic practitioner, I welcome such suggestions and efforts, which amount to more than mere dismissal of an age-old science. However, because current research and published papers concentrate on plant extracts, the point of holistic treatment is lost. As many scientists use extracts, the pressure to use purified extracts in studies is very high. Any researcher who proposes to work on a whole medicinal plant faces not only technical difficulties but also pressure from the scientific community, which again steers them toward purified extracts. Negative results of many experiments are not even published.

      The NMITLI arthritis project is one such attempt to translate the language of traditional medicine in a way that the western world can understand. In doing so, the scientists involved, experts in their fields, have done their best to generate data from in vitro and human studies. Moreover, one can also argue that Ayurveda treats a patient not only with medicine but also with other methods such as application of oil, hot fermentation, purgation, etc., which vary for each patient depending on their prakruti or constitution [7]. If one aims to study these medicines, a novel and innovative approach, or a fresh perspective, is needed when designing a suitable trial [8].

      More studies such as the one by Chopra et al. should be done. Such studies should focus on chronic and infectious diseases, which are of grave concern not only for developing or resource-poor countries but also for the developed world. These medicines may form a good alternative, if not a substitute, for modern medicines that have mild to severe adverse effects, particularly in chronic disease. In a way, this study may serve as a basis for proving to funding agencies and other organizations that natural products can be just as effective as modern ones.

      A very important issue to be addressed is whether we will remain riddled with prejudice against this concept, or embrace one that billions have used for their good for centuries.

      References

      1 Chopra A, Saluja M, Tillu G et al. Ayurvedic medicine offers a good alternative to glucosamine and celecoxib in the treatment of symptomatic knee osteoarthritis: a randomized, double-blind, controlled equivalence drug trial. Rheumatology 2013. doi: 10.1093/rheumatology/kes414. First published online: January 30, 2013.

      2 Chopra A, Saluja M, Tillu G et al. Evaluating higher doses of Shunthi - Guduchi formulations for safety in treatment of osteoarthritis knees: a Government of India NMITLI arthritis project. J Ayurveda Integr Med 2012;3:38-44.

      3 Chopra A, Saluja M, Tillu G et al. A randomized controlled exploratory evaluation of standardized ayurvedic formulations in symptomatic osteoarthritis knees: a government of India NMITLI project. eCAM 2011;2011:724291.

      4 Sumantran VN, Kulkarni A, Chandwaskar R et al. Chondroprotective potential of fruit extracts of Phyllanthus emblica in osteoarthritis. Evid Based Complement Alternat Med (eCAM) 2008;5:329-35.

      5 O'Cathail S, Stebbing J. Ayurveda: alternative or complementary? Lancet Oncol 2012;13:865.

      6 Briggs JP. NIH. Turning discovery into health? Available from: http://nccam.nih.gov/about/offices/od/2011-06.htm (17 April 2013, date last accessed).

      7 Dieppe P, Marsden D. Managing arthritis: the need to think about whole systems. Rheumatology 2013. doi: 10.1093/rheumatology/ket127. First published online: March 7, 2013.

      8 Patwardhan B. Ayurveda GCP guidelines: need for freedom from RCT ascendancy in favor of whole system approach. J Ayurveda Integr Med 2011;2:1-4.

      Conflict of Interest: None declared

      Published July 9, 2013 http://rheumatology.oxfordjournals.org/content/52/8/1408.long/reply#rheumatology_el_105


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 May 06, Madhusudana Girija Sanal commented:

      Dr. Norman pictures the great progress in education resulting from the information-distribution revolution as if it were an ‘unavoidable evil’! Probably many of his generation express high inertia and still consider 'physical' universities, libraries, books, lecture halls, etc. very much essential! I understand their nostalgia! But do you know what this means for humanity? Rich countries such as the USA hold only a small fraction of the world’s population. Through virtual universities, more and more people across the world, rich and poor alike, would be able to attend the best schools and courses. They would be able to take the same exams and be ranked alongside the most privileged, rich or intelligent. This would be great! This is 'new' justice! (Although I believe justice is ‘man-made’, or artificial.) All we want is better tools to evaluate human intellectual qualities online.

      I do not think face-to-face lectures will be better than online recorded, interactive lectures, perhaps multidimensional (3D-4D-5D) lectures by several professors of the learner’s choice. Lectures will be ranked, and lecturers paid, based on their quality by student communities, and not by bureaucrats, administrators or politicians. This system is great, especially when I consider the advantages! On-demand, personalized lectures would be "ready-made" for commonly observed (student) personality traits: say there are 100 personality subtypes and intellectual levels! Custom lectures would be available for them all, because there are many more people to teach online; lectures need not be real time. You could learn from a "personality" who matches your rare personality, perhaps one who lived 10 years back. His lectures had to wait ten years for a student like you! Is this not a very exciting possibility?

      However, I do agree that face-to-face lectures can be more individualized and beneficial (for the rich, because only they have the money to ‘buy’ good teachers!). Nevertheless, I do not think there is a huge benefit to extreme personalization, except for exceptional children who are extremely out-of-the-box in a positive or negative way. It may, however, be noted that, overall, poor but brilliant students have a better opportunity to come out and stand before the world. This global free learning system will especially benefit brilliant minds in less privileged countries.

      Does Dr. Norman have any evidence to support his statements in the editorial? I do not know what Dr. Norman will write if tomorrow we find a new technology that allows direct transmission of knowledge to brains! Then what will happen? Think! Everyone will have an equal opportunity to accumulate the same amount of information if he or she wants. Now who will win this game? Those who have money? Probably not! Those fractions of society who are blessed with the right genome, epigenome and the best neural connections!


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Apr 02, Marcus Munafò commented:

      Flegal and colleagues suggest that being overweight may decrease risk of all-cause mortality, and being slightly obese confers no increased risk compared to being of normal weight [1]. We think these conclusions are misleading and may give a detrimental public health message, particularly given the publicity they attracted.

      Despite the authors’ suggestion that there is “little support for the suggestion that smoking and pre-existing illness are important causes of bias”, these factors are not consistently controlled for within the contributing studies [1]. It has been demonstrated in two large prospective studies that the increased risk of death from being overweight or obese can be masked when smoking is not adequately controlled for, and when sufficient initial years of follow up are not excluded [2]. Smoking is associated with both lower body mass index (BMI) and increased risk of death, while underlying illness can lower BMI prior to death. Furthermore, it is widely appreciated that conventional statistical approaches to “control” for such factors are inadequate. Support for a positive causal effect of BMI on mortality comes from studies less subject to the biases associated with BMI measured in later life because they used BMI in adolescence or offspring BMI as the exposure [3, 4]. These measures are suitable proxies since they are strongly associated with an individual’s BMI in middle age but are not substantially affected by reverse causality and are less confounded by smoking. Higher BMI in adolescence was associated with greater risk of all-cause mortality in middle age in a study of over 200,000 individuals [3]. Stronger inferred associations were observed with offspring BMI and all-cause and cardiovascular mortality than with own BMI, suggesting that positive associations of BMI and mortality may in fact be underestimated in conventional observational studies [4]. In addition, this study demonstrated that the commonly observed inverse association between BMI and death from respiratory and other diseases may also be due to such biases.

      Given the importance of the public health messages regarding “healthy” weight, we feel that further research is needed in this area using innovative research methods to overcome potential biases; further conventional observational studies will simply recapitulate the biases inherent in all such investigations. One such approach would be Mendelian randomisation, using obesity-related genetic variants, such as those in FTO. Mendelian randomisation methods, when applied correctly, are free from confounding by environmental factors and are not affected by reverse causality [5].
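      For readers unfamiliar with the method, the simplest Mendelian randomisation estimator is the Wald ratio: the variant-outcome association divided by the variant-exposure association. A minimal sketch in Python with hypothetical summary statistics (the numbers are invented for illustration, not taken from any cited study):

      ```python
      # Wald ratio Mendelian randomisation estimate from summary statistics.
      # All numbers are hypothetical, for illustration only.
      beta_gx, se_gx = 0.35, 0.02    # FTO variant -> BMI (kg/m^2 per allele)
      beta_gy, se_gy = 0.020, 0.008  # same variant -> log hazard of death

      beta_iv = beta_gy / beta_gx    # implied causal effect of BMI on mortality
      se_iv = se_gy / abs(beta_gx)   # first-order (delta method) standard error
      print(f"IV estimate = {beta_iv:.3f} (SE {se_iv:.3f})")
      ```

      Because genotype is fixed at conception, such an estimate cannot be driven by reverse causality from pre-existing illness, which is the bias at issue above.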

      Marcus Munafò, Amy Taylor and George Davey Smith

      1. Flegal, K.M., et al., Association of all-cause mortality with overweight and obesity using standard body mass index categories: a systematic review and meta-analysis. JAMA, 2013. 309(1): p. 71-82.
      2. Lawlor, D.A., et al., Reverse causality and confounding and the associations of overweight and obesity with mortality. Obesity (Silver Spring), 2006. 14(12): p. 2294-304.
      3. Bjorge, T., et al., Body mass index in adolescence in relation to cause-specific mortality: a follow-up of 230,000 Norwegian adolescents. Am J Epidemiol, 2008. 168(1): p. 30-7.
      4. Davey Smith, G., et al., The association between BMI and mortality using offspring BMI as an indicator of own BMI: large intergenerational mortality study. BMJ, 2009. 339: p. b5043.
      5. Davey Smith, G. and S. Ebrahim, 'Mendelian randomization': can genetic epidemiology contribute to understanding environmental determinants of disease? Int J Epidemiol, 2003. 32(1): p. 1-22.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 May 05, Madhusudana Girija Sanal commented:

      Creativity in biology-reply to Bruce Alberts

      Bruce Alberts describes biology as 'not at all an easy science' (1). This depends entirely on where one draws the boundaries! Biology is more factual and visual (and hence ‘easy’ for the majority) than physics and mathematics. However, biology attains the same degree of difficulty at its interfaces with these subjects, where mathematics or physics is applied to answer biological questions. This relative ease may be one of the reasons we find more students graduating in biology than in the other basic sciences (2). Creativity in the generation of new ideas and concepts is the essence of science. The current 'reward-recognition' system, which just counts papers, does not recognize this fact. The current trend is to evaluate scientists on the basis of the number of their publications and their impact factors, rather than their impact on the growth of science and the betterment of society. This has resulted in a ‘publish or perish’ situation, leading to an increase in junk and fraudulent publications (2). The cracking of the central dogma in biology and the invention of PCR brought new concepts to biology. Personalized medicine, designer molecules and proteins, and regenerative medicine have huge potential. However, currently society is wasting its resources on premature translational research and personalized medicine, which are growth-arrested in infancy, awaiting major developments in technology to overcome the bottlenecks (3). Any significant leap in biology needs a major leap in chemistry, physics and mathematics. These subjects provide not only technology and tools but also concepts for the growth of biology. So funding and research in other fields need to be encouraged for the development of biology (2).

      Life Sciences: where ‘workers’ take the lead over thinkers!

      The number of publications in biology is proportional to the amount of ‘work done’ rather than to the “adventure of ideas”. This causes many problems. For example, it results in the dissolution of the boundaries between a scientist, a technician and a robot (2). The number of papers, and the journals in which they are published, often becomes a matter of chance, available workforce, availability of funds and collaborations, which in turn depend on politics, influence and umpteen other factors, decreasing the overall importance of intellect in scientific publication. It is difficult to assess from a research article the individual contributions of the authors: who contributed more to the concept, and who did most of the (technician's) work. Needless to say, whoever contributed most to the development of the concept should be given more credit. But this is rarely practiced (2).

      The future of Biology

      We still eat the same cereals, pulses, milk and meat that existed thousands of years ago! We have not yet created any new plant or animal! Currently in biology we move in a top-down fashion: we explore existing biological systems, reveal the science behind them, copy it and re-engineer it according to our needs. For example, we find a gene, then work out its importance, its function, the protein it encodes, its structure, its interactions, etc. But I feel biology is mature enough to move the other way: design a gene according to our need, planning its structure, function and interactions to suit our purpose. Extending this view, we should be able to engineer and produce totally new proteins or organisms, rather than building on or modifying an existing (natural) platform (2).

      Other major advancements will be in creating brain-computer interfaces and computer-assisted thinking, electronic immortalization of personalities (individuals), planned generation of citizens of varying functions and capabilities for the better service of society, etc. For all this to happen, the limiting step is the development of the basic sciences, which could then be translated into appropriate technologies.

      1) Alberts B. Creativity at the interface. Science. 2012 Apr 13;336(6078):131.

      2) Sanal MG. Where are we going in science? Publish and perish! Current Science 2006.

      3) Maher B. Tissue engineering: How to build a heart. Nature. 2013 Jul 4;499(7456):20-2.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 Jan 12, Brett Snodgrass commented:

      Dear Authors,

      Thank you very much for the excellent publication and for acknowledging that the connections between the coronary arteries and left ventricle were probably not veins.

      These connections may represent the vessels of Wearn, which appear unusually prominent.

      An applicable ICD9 code might be 746.85, coronary artery anomaly, as it may meet the clinical criteria for the definition of fistulae.

      Referring to these arterial connections as Thebesian veins caused me much confusion when I was trying to understand the simple relationship between pulmonary atresia with intact ventricular septum (PAIVS) and the coronary arteriopathy seen in PAIVS. With the growth of social media and electronic data storage, I think that we may now be able to address the diffuse distribution of ambiguous or misleading nomenclature that fills at least half of the related publications.

      Dr. Paul Lurie's plea for collaboration in this regard has motivated me to take several steps in an effort to help produce accurate, simple, precise anatomic nomenclature. These include:

      1. I collaborated with Elsevier to help get the article by Wearn et al. changed to open-access. http://bit.ly/JTWearn

      2. I obtained the original article (through ArtRieve http://www.artrieve.com/) written by Thebesius and uploaded the first digital copy. http://bit.ly/Thebesius

      3. I published or wrote in the American Journal of Cardiology, Cardiovascular Pathology, Twitter, Facebook, Google Plus, and directly E-mailed several authors that were publishing related content.

      4. I have posted numerous PubMed Commons comments. I regret that they may not always be helpful to every user, but I echo Dr. Lurie’s plea for collaboration in this regard. The PubMed Commons commentary is a welcome means with significant potential to vastly improve the peer review process.

      My aim is that researchers in the future will not need to read more than 30 articles before they realize that myocardial sinusoids indeed exist, that they were defined by Wearn, and that Thebesian veins are not arteries.

      Please see

      1. http://bit.ly/JTWearn

      2. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1933738/

      3. http://bit.ly/vasaThebesii

      4. http://bit.ly/ThebesianByPratt

      My opinion is that accurate anatomic terminology is a basic principle underlying good medical science, and I ask others to consider whether the aforementioned definitions are appropriate. If this comment is not helpful, please let me know how it might be improved.

      Comments and suggestions are welcome.

      Thank you very much.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 06, Mark Bolland commented:

      This comment provides background information to an earlier comment.

      We were interested in why different meta-analyses of vitamin D supplements came to different conclusions, and noticed that different meta-analyses used different data from this trial. We identified several errors/inconsistencies in the text and emailed the lead author requesting clarification about the data in March 2014. He responded that the data in the Tables were correct. We replied that there were inconsistencies within the Tables as well and asked for clarification, but he did not respond.

      Therefore in April 2014, we contacted the editor of Osteoporosis International advising the editor of the inconsistencies and requested that they be corrected. We felt this is particularly important as it is a highly cited, influential trial reporting benefits of vitamin D supplements on falls, and the inconsistencies occur in the treatment group numbers and the primary and secondary outcomes, so have a major bearing on the interpretation of the trial results. Our primary concern was/is to use the correct data for the trial in meta-analyses. In May 2014, the editor passed on an extract of the lead author’s response and indicated that an erratum would be published. However, we felt the lead author’s response was inadequate because it left a number of uncorrected inconsistencies/errors in the text. We pointed this out to the editor. We are not sure what action the editor took, but no erratum was published. We followed up with 2 further emails to the editor over the next year, and in April 2015, the editor advised that it seemed unlikely that an erratum would be forthcoming.

      Therefore, we requested permission to summarize the issues in a very brief letter. The editor agreed and a short letter summarizing the six errors/inconsistencies in the article was published (Osteoporos Int 2015;26:2713 Bolland MJ, 2015) with a response from the author (Osteoporos Int. 2015;26:2715-6 Pfeifer M, 2015). Unfortunately, in his response the author chose to correct only 2 of the errors identified and introduced a further inconsistency.

      In a further letter to the editor, we therefore highlighted the remaining errors/inconsistencies and the consequence of using different possible data combinations from this trial for meta-analyses. The editor indicated that the issues we raised were important but probably irresolvable and offered to publish the letter in Archives of Osteoporosis. We indicated that our preference was that the data were corrected in the original publication rather than our letter being published. We think it quite straightforward to make the simple necessary corrections to two tables and a couple of sentences of text. We feel it is essential that the corrections occur as the errors are in the most important data from the trial: the treatment group numbers and the primary outcome data for falls. The editor considered the issue further and then published our letter (Arch Osteoporos. 2015;10:43 Bolland MJ, 2015) along with a response from the author (Arch Osteoporos. 2015;10:42 Pfeifer M, 2015).

      In this second response, the author has still not corrected the identified errors, so, because of the inconsistencies/errors in the data, it is not possible to be sure how many women were in each randomized treatment group, how many women had a fall during the trial, how many total falls occurred in each treatment group, or what is the breakdown of participants by numbers of falls (no falls, 1 fall, 2 falls, etc).

      Therefore, we think that the study should be excluded from meta-analyses and systematic reviews and its results viewed with caution until consistent data are provided by the authors with some explanation as to how the errors/inconsistencies occurred.

      Mark Bolland, Andrew Grey. University of Auckland


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2014 Jan 04, Tom Kindlon commented:

      The dropping of actometers as an outcome measure and other points relating to the outcome measures being used

      (I'm posting this e-letter/comment from 2008 here for the same reason as the first e-letter below)

      In their reply to my comments, Peter White and colleagues say they are using "several objective outcome measures" [1]. If they think these tests are useful as objective outcome measures, why is at least one of them not being used as a primary outcome measure, rather than the current situation in which the only two primary outcome measures are subjective?

      I have already made some points on the outcome measures, but another is that the bimodal Chalder Fatigue Scale hardly seems a good outcome measure for a "CFS/ME" trial, where many participants are likely to record maximum or near-maximum scores initially [2].
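      To spell out the ceiling problem: each of the 11 items is answered on four levels, which bimodal scoring collapses to 0/1. A minimal sketch in Python (scoring convention as standardly described for the scale) of how this hides change near the maximum:

      ```python
      # Ceiling effect under bimodal scoring of the 11-item Chalder scale.
      # Items are answered 0-3 (Likert); bimodal scoring maps 0,1 -> 0 and 2,3 -> 1.
      def bimodal_score(items):
          return sum(1 if item >= 2 else 0 for item in items)

      severely_fatigued = [3] * 11   # worst possible answer on every item
      somewhat_better = [2] * 11     # a real, across-the-board improvement

      print(bimodal_score(severely_fatigued))  # 11 (ceiling)
      print(bimodal_score(somewhat_better))    # 11 (still the ceiling)
      # The improvement is invisible, and further deterioration cannot register.
      ```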

      Also, there are so many (14) secondary outcome measures in this study, along with so many (18) predictor variables, that it seems unlikely all the different ways of looking at the secondary outcome measures can be explored in the final published paper, given that authors are encouraged not to make papers too long (especially in journals with paper editions). The protocol itself is 20 pages long when all the different aspects of it are listed! At least some of the information will need to be reiterated in the final paper.

      It is of course important to take the burden on participants into account when deciding what outcome measures to use. However, I find the following point very strange: "Although we originally planned to use actigraphy as an outcome measure, as well as a baseline measure, we decided that a test that required participants to wear an actometer around their ankle for a week was too great a burden at the end of the trial." Firstly, they clearly do not find it so great a burden that it must be dropped altogether, as it is still being used on patients before the start. If they felt it was that great a burden, it should probably have been dropped entirely.

      Of course, other studies in the area have used measuring over a similar or longer period. For example, Bazelmans [3] used an actometer over 14 days, Black [4] used actigraphy over 14 days, Sisto[5] used actigraphy over 7 days, Vercoulen[6] used an actometer over 12 days and Van der Werf [7] used an actometer for 12 days.

      Also, if one wants to reduce the burden on patients, why not take out one or both of the exercise tests instead? As the clinicians in the study would know, post-exertional symptoms are part of the condition.

      For example, Nijs [8] performed a gentle walking exercise on patients where they walked on average 558m (+/-340) (range: 120-1620) at a speed of 0.9m/s (+/-0.2) (range: 0.6-1.1). This resulted in a statistically significant (p<0.05) worsening of scores in the following areas when comparing pre-exercise, post-exercise and 24-hour post-exercise scores using ANOVA: VAS fatigue, VAS musculoskeletal pain, VAS sore throat, SF-36 bodily pain and SF-36 general health perception. 14 out of 24 subjects experienced a clinically meaningful change (worsening) in bodily pain (i.e. a change of at least 10 on the SF-36 bodily pain subscale score).

      Those results are similar to another study [9], which involved the acute effects of 10 discontinuous 3-minute exercise bouts on a treadmill in 10 CFS patients, with a 3-minute recovery period between bouts. The participants walked at a comfortable, self-selected pace; on average, the subjects walked at a speed of 0.71+/-0.20 m/s. Some patients reported experiencing headaches, leg pain, fatigue or sore throats.

      In another study, Lapp [10] (not to be confused with Clapp [9]) reported on 31 patients from his practice who were asked to monitor their symptoms from three weeks before to 12 days after a maximal exercise test. 74% of the patients experienced worsening fatigue and 26% stayed the same; none improved. The average relapse lasted 8.82 days, although 22% were still in relapse when the study ended at 12 days. There were similar changes with exercise in lymph pain, depression, abdominal pain, sleep quality, joint and muscle pain, and sore throat.

      These are just a small selection of the studies showing that patients experience an exacerbation of their symptoms following exercise testing. So these are the sorts of symptoms the patients may expect following the exercise. This reminds me that there seems to be a lot of concentration on measuring fatigue in this study; there are many other symptoms that are part of "CFS/ME". If they had used actometers instead of, say, one of the exercise tests, the response to the exercise could have been followed, to see how long-lasting and how severe an effect the exercise had on the patient. Or they could have dropped both exercise tests altogether.

      As well as "subjective" findings following exercise testing, there have also been objective findings. Arnold et al. [11] found excessive intracellular acidosis of skeletal muscles with exercise. Jammes [12] found an increase in damaging oxidative stress following exercise testing. So patients could suffer not just temporary symptoms but possibly also longer-term harm from exercise testing. There are numerous other exercise abnormalities. As the clinicians involved in the study probably hear from patients, one of the frustrating things about ME or CFS is that people do not realise the payback patients can have from doing things. This would have been an opportunity to investigate this as part of the study. But now the effort patients will put in, and the payback they will feel, is in some ways being wasted, as the effects will not be measured.

      Anyway, to repeat: given the authors' familiarity with the literature, I find it strange that they would decide that using an actometer would be a worse burden than putting patients through two exercise tests.

      I also find it surprising that in a study part-funded by the Department for Work and Pensions (DWP), the objective outcome measures (not involving questionnaires) are all one-off exercise tests. It has been established that patients need to be able to do things on several days during a week before they can be passed fit for work. I have mentioned above using actometers to follow up exercise tests; of course, actometers would not have to be used only at that time but also during a "normal" week.

      Proponents of pacing methods, including APT, would say that there is a "ceiling of activity" above which patients cannot go without experiencing a worsening of symptoms; Black [13] has found evidence of this. Proponents of CBT or GET for "CFS/ME" would suggest that patients can simply increase, gradually, how much activity they can do. Actometers would have tested these hypotheses. As it stands, the study will not give us this information: patients answering questionnaires saying they are improved (which could simply be because they think they are better), or improving their exercise results (which might simply be because they are willing to push themselves more), does not prove that they lack an activity ceiling above which they experience disabling symptoms (especially when, as in this study, there is no follow-up period after the exercise testing). This is the real "heart" of the issue, but given the current design, the question will not be answered.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2015 Dec 28, Lydia Maniatis commented:

      Part 1. I have criticized Purves and colleagues' various iterations of the “wholly empirical” account of visual perception in various comments on PubMed Commons and on PubPeer. Here, I want to present a more comprehensive and organized critique, focussing on a study that has served as a centerpiece of these accounts. I think it is important to clarify the fact that the story is profoundly inadequate at both the conceptual and the methodological levels, and fails to meet fundamental criteria of empirical science (logical consistency and testability). Such criticism (which is not welcome in the published literature) is important, because if “anything goes” in published research at even the most elite journals, then this literature cannot help to construct the conceptual infrastructure necessary for progress in the visual sciences.

      Conceptual problems

      LACK OF THEORETICAL MOTIVATION

      The hypothesis that the authors claim to have tested is effectively conjured out of thin air, subsequent to an exposition that is inaccurate, inadequate, and incoherent.

      The phenomenon to be explained is inadequately described. We are told, first, that the luminance of a “visual target” elicits a “brightness” percept that can vary according to “context.”

      The term brightness, here, is being used to refer to the impression of white-gray-black that a surface may elicit (and which is currently referred to by researchers as its “lightness.”) However, surfaces do not necessarily, or even usually, elicit a unitary percept; they often produce the impression of double layers, e.g. a shadow overlying a solid surface. Both the shadow and the surface have a perceptual valence, a “brightness.” Indeed, it is under conditions where one “target” appears to lie within the confines of a double layer, and the other does not, that the most extreme apparent differences between equiluminant targets arise.

      The logic is simple: If a surface appears to lie under a shadow, then it will appear lighter than an equiluminant target that appears to lie outside of the shadow, because a target that emits the same amount of light under a lower illumination as one under stronger illumination must have a higher light-reflecting tendency (and this is what the visual system labels using the white-gray-black code.). The phenomenon and logic of double layers is not even intimated at by these Yang and Purves (2004); rather, the impression created is that the relationship between “brightnesses” under different “contexts” is not in the least understood or even amenable to rational analysis. (In addition, the achromatic conditions being considered are rare or even non-existent in natural, daytime conditions; but the more complex phenomenon of chromatic contrast is not mentioned.)

      After this casual attempt to convince readers that vision science is completely in the dark on the subject of “brightness,” the authors assert that “A growing body of evidence has shown that the visual system uses the statistics of stimulus features in natural environments to generate visual percepts of the physical world.” No empirical studies are cited or described to clarify or support this rather opaque assertion. The only citation is of a book containing, not evidence, but “models” of brain function. The authors go on to state that “if so, the visual system must incorporate these statistics as a central feature of processing relevant to brightness and other visual qualia.” (It should be noted that the terms “statistics” and “features” here contain no concrete information. The authors could be counting the stones on Brighton beach.)

      Even if we gave the authors the latitude to define “statistics” and “features” any way they wish, the following sentence would still come out of the blue: “Accordingly, we propose that the perceived brightness elicited by the luminance of a target in any given context is based on the value of the target luminance in the probability distribution function of the possible values that co-occur with that contextual luminance experienced during evolution. In particular, whenever the target luminance in a given context corresponds to a higher value in the probability distribution function of the possible luminance values in that context, the brightness of the target will be greater than the brightness elicited by the same luminance in contexts in which that luminance has a lower value in the probability distribution function.”

      This frequency-percept correlation, in combination with the claim that it has come about on the basis of evolution by natural selection, is the working hypothesis. It is a pure guess; it does not follow naturally from anything that has come before, nor, for that matter, from the principles of natural selection (see below). It is, frankly, bizarre. If it could be corroborated, it would be a result in need of an explanation. As it is, it has not been corroborated, or even tested, nor is it amenable to testing (see below).

      THE HYPOTHESIS

      The hypothesis offered is said to “explain” a narrow set of lightness demonstrations. Each consists of two displays, both containing a surface of luminance x; despite being equiluminant, these surfaces differ in their appearance, one appearing lighter than the other.

      The claim is as follows: The visual system has evolved to represent as lighter those surfaces that appear more frequently in one “context” than in another. Thus, in each display, the surface that appears lighter must have been encountered more often in that particular “context” during the course of evolution. It is because seeing the higher-frequency target/context combination as lighter was “optimal” that this situation arose. More specifically, it is because the response “lighter” to a target in one “context” and “darker” to a target in another context, and so on for contexts in-between, has had, over evolutionary time, positive adaptive effects, that these percepts have become instantiated in the visual process.
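      To make the quantity at issue explicit, here is a minimal sketch in Python (synthetic data; the “evolutionary sample” is invented, since no such dataset exists) of the conditional rank the proposal appeals to, i.e. where a target luminance falls in the distribution of luminances co-occurring with a given contextual luminance:

      ```python
      import numpy as np

      # Synthetic stand-in for "luminances experienced during evolution".
      rng = np.random.default_rng(0)
      context = rng.uniform(0.0, 1.0, 100_000)
      target = np.clip(context + rng.normal(0.0, 0.1, context.size), 0.0, 1.0)

      def conditional_percentile(t, c, tol=0.01):
          """Rank of target luminance t among targets whose context is near c."""
          co_occurring = target[np.abs(context - c) < tol]
          return (co_occurring <= t).mean()

      # The claim: the same luminance (0.5) should look lighter in the context
      # where its conditional percentile is higher.
      print(conditional_percentile(0.5, c=0.3))  # high rank -> "lighter"
      print(conditional_percentile(0.5, c=0.7))  # low rank  -> "darker"
      ```

      Note that computing this at all presupposes the species-wide record of absolute target/context luminances that is questioned below.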

      DO ORGANISMS TRACK ABSOLUTE AND RELATIVE LIGHT INTENSITY OVER MOMENTS, HOURS, DAYS, EONS, AT ALL POINTS IN THE RETINAL IMAGE, AND WHEN DID THEY STOP DOING THIS?

      The hypothesis would appear to presuppose that organisms can discriminate between, and keep track of, the absolute luminances of various “targets” in various “contexts,” as they have been encountered with every glance, of every individual (or at least the ancestors of every individual), at every point in the visual image, every moment, hour, day, across the evolutionary trajectory of the species. There is no suggestion as to how these absolute luminances might be tracked, even “in effect.” Not only is it difficult to imagine what mechanism (other than a miracle) might allow such (species-wide) data collection (or its equivalent) to arise, it runs contrary to the physiology of the visual system, in which inhibitory mechanisms ensure that relative, not absolute, luminance information, is coded even at the lowest levels of the nervous system. Furthermore, given that lightness perception does not change across an individual's lifetime, nor depend on their particular visual experience, it would seem that this process of statistical accumulation must have stopped at some point during our evolution. So we have to ask, when do the authors suppose this (impossible) process to have stopped, why did it stop, and what were the relevant environments at that time and previously? (The logical assumption that the authors suppose the process has, in fact, stopped is reinforced when they refer to “instantiated” rather than developing “statistical structures.”)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2016 Jul 23, Lydia Maniatis commented:

      This article is illustrative of the degradation of theory and practice in vision science. The article is not new (but compare with, e.g. Graham, 2011), but many of its assumptions and the style of argument - tolerance for layers upon layers of uncorroborated assumptions and inadequate, ad hoc models - are still very current and underpin a broad swath of contemporary (arguably pseudoscientific) activity. Below are quotes from the article and my comments.

      The abstract: "Light adaptation has been studied using both aperiodic and periodic stimuli. Two well-documented phenomena are described: the background-onset effect (from an aperiodic-stimulus tradition) and high-temporal-frequency linearity (from the periodic-stimulus tradition). These phenomena have been explained within two different theoretical frameworks. Here we briefly review those frameworks. We then show that the models developed to predict the phenomenon from one tradition cannot predict the phenomenon from the other tradition, but that the models from the two traditions can be merged into a class of models that predicts both phenomena."

      Comment: One wonders whether the merger was ultimately successful, and whether falsifying phenomena from yet other traditions could be merged with these two, to expand the ever-expanding ad hoc circle of tradition.

      Note that the piecemeal "merger" philosophy expressed by Graham and Hood seems to preclude falsification. Falsifying phenomena from another "tradition" simply lead to the summing of individual ad hoc models into a compound ad hoc model, and so on. The complexity but not the information content of the models will thus increase. (This will also likely produce internal inconsistencies that will not be noticed, because they will not be looked for.) The question of whether these models correspond with the facts that they are supposedly trying to explain never really arises.

      All this can be garnered from the abstract: If anything, the text is even worse.

      "None of the parts of the merged models, however, is necessarily correct in detail. Many modifications or substitutes would clearly work just as well at this qualitative level. …Also, one could certainly expand the model to include more parts. …Similarly, the initial gain-controlling process might be composed of several processes having somewhat different properties"

      Comment: The models are arbitrary, ad hoc, incomplete on their own terms, and there is no rational criterion for choosing among an infinite number of alternatives.

      "No attempt was made to fine-tune either model to predict all the details in any one set of psychophysical data…much less in all the other psychophysical results that such a model might bear on…To attempt such a project in the future might be worthwhile...particularly if the experimental results were all collected on the same subjects..."

      Comment: In other words, because the models are ad hoc they may only explain the results to which they were tailored. It may (or may not??) be worth checking to see whether this is the case. I.e. testing models for correspondence with the phenomena is optional. The effects in question are too fragile and too little understood to be tested across subjects with varying methods.

      "The more vaguely-stated possibility suggested by this observation, however, is that any decision rule of the kind considered here may be in principle inadequate for the suprathreshold-discrimination case. It may be impossible to explain suprathreshold discriminations without more explicit modeling of higher-level of higher-level visual processes than is necessary to explain detection (where detection is a discrimination between a stimulus and a neutral stimulus.)... Thus a simple decision rule may be suitable in the detection case simply because all the higher-level processes are reduced to such simple action THAT THEY BECOME TRANSPARENT." [caps mine].

      Comment: The idea that there are certain experimental conditions in which the conscious percept consists in the reading off of the activity of "low-level" neurons (as though there was a direct qualitative correspondence between neural firing and seeing, e.g., a blank screen!) is patently absurd and has been criticized in depth and from different angles by Teller (1984).

      Pretending that we take it seriously, we could refute it by noting that the presumably simplest of all stimuli - a white surface free of any imperfections - produces the perception of a three-dimensional fog. Other experiments have also shown that organizational processes are engaged even at threshold conditions, and that, indeed, they influence thresholds.

      The "transparency theory" is repeated by Graham (2011) in an article in Vision Research: "It is as if the near-threshold experiments made all higher levels of visual processing transparent, therefore allowing the properties of the low-level analyzers to be seen."

      This casually proposed "transparency" theory of near-threshold stimulation is apparently the basis of the widespread belief in spatial frequency channels and the extraordinary elaboration of associated assumptions. Without the transparency theory, it is not clear that even cherry-picked evidence can support this popular field of research. (If you've ever wondered at the widespread use of "Gabor patches" in vision research, this is the root cause - they are considered elemental due to the transparency theory-supported low-level neuron spatial frequency sensitivity hypothesis. I suspect many of the people using them don't even know why.)

      Graham and Hood (1992) also go into a little finer detail about the putative basis of transparency: " In the suprathreshold discrimination case, the observer is trying to discriminate between two sets of neural responses…both of which sets contain many non-baseline responses. In the detection case…the observer is simply trying to detect some non-baseline responses in either set (since that is the set most likely to be the non-blank stimulus). Thus a simple decision rule may be suitable in the detection case simply because all the higher-level processes are reduced to such simple action that they become transparent."

      Comment: The suggestion that investigators are managing to set their low-level visual neurons at baseline at the start of experiments also seems implausible. I certainly don't think this has been explicitly discussed from the point of view of method.

      "However, at this moment in time, we seem to be able to explain background onset with a subtractive process…Thus, we can just leave as a marker for the future – should troubles in fully explaining the data arise – the possibility that the background-onset effect in particular (and perhaps all the results) will not be explained in detail without including more about higher-level visual processing."

      Comment: Translation: We'll accept our ad hoc hypothesis for now, but it's probably wrong. But we'll worry about that later, if and when anyone makes trouble by bothering with actual tests. Because there's always a (zero) chance that it will be corroborated.

      Finally, the article also provides a good example of misleading use of references: "Quantal noise exists in the visual stimulus, as is well known (see Pelli for a current discussion.)"

      Comment: From Pelli, 1990: "It will be important to test the model…For gratings in dynamic white noise, this [model] prediction has been confirmed by Pelli (1981), disconfirmed by Kersten (1984) and reconfirmed by Thomas (1985). More work is needed." But we don't need to wait for evidence to firmly believe in "quantal noise."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    1. On 2017 Feb 10, David Attwell commented:

      The answers to Dr Brette’s new points are as follows.

      (1a) Ohmic vs Goldman dependence of current on voltage

      For a cell as assumed in Attwell & Laughlin (2001) (i.e. 200 Megohm membrane measured with a 10mV hyperpolarizing step, VNa=+50mV, VK=-100mV, Vrp=-70mV, [Na]o=[K]i=140mM, T=37C), assuming a Goldman voltage dependence for the Na<sup>+</sup> and K<sup>+</sup> fluxes through ion channels leads to PNa/PK=0.0746 and a Na<sup>+</sup> influx at the resting potential of 130pA, corresponding to an ATP consumption of 2.7x10<sup>8</sup> molecules/sec. This is 21% less than the 3.42x10<sup>8</sup> molecules/sec we calculated using an ohmic dependence of the currents on voltage. This difference is negligible given the variation of measured input resistances and the range of other assumptions that we needed to make. Furthermore, there are no data establishing whether the voltage-dependence of Na+ influx is better described by an ohmic or a Goldman equation.

      (In a later post Dr Brette claims that the error arising is 40%. We suspect that his value arises from forgetting the contribution of the Na/K pump current to setting the resting potential, which leads erroneously to PNa/PK=0.05, and a 41.3% lower value than the ohmic dependence predicts.)
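      For reference, the two current–voltage relations at issue in (1a) are, in standard textbook notation (these are the generic ohmic form and the Goldman–Hodgkin–Katz constant-field equation, not a reconstruction of the original working):

      ```latex
      % Ohmic form: current proportional to driving force
      I_S = g_S \, (V - V_S)

      % Goldman-Hodgkin-Katz (constant-field) form
      I_S = P_S \, z_S^2 \, \frac{F^2 V}{RT} \,
            \frac{[S]_i - [S]_o \, e^{-z_S F V / RT}}{1 - e^{-z_S F V / RT}}
      ```

      Here g_S is a chord conductance, P_S a permeability, z_S the ion's valence, and V the membrane potential; the 21% figure above comes from evaluating the Na<sup>+</sup> influx at the resting potential under each form.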

      (1b) Cl<sup>-</sup> permeability

      Our point was that the Cl<sup>-</sup> permeability was negligible. Which equation was used to derive that fact is therefore irrelevant.

      (2) Pumps

      We still disagree with the notion that, for a membrane with just Na<sup>+</sup> and K<sup>+</sup> fluxes, the system is unstable if it only has a Na/K pump. The Na pump rate is adjusted to match activity via its dependence on [Na<sup>+</sup> ]i and by the insertion of more pumps when needed.

      (3) Cost of Na<sup>+</sup> extrusion at the mean potential

      Tonic synaptic activity may depolarize cells by a mean value of ~4-8mV (Paré et al., 1998, J Neurophysiol 79, 1450). This will affect the calculation of “resting” Na+ influx negligibly (e.g. by ~6mV/120mV = 5% for a 6mV depolarization with Vrp=-70mV and VNa=+50mV). As stated in our earlier comment, this depolarization does not affect the ATP used per Na<sup>+</sup> pumped by the Na/K pump. Finally the ATP used on extruding synaptic ion entry is considered separately in the calculations.

      (4) What input resistance tells us

      For cortical L2/3 pyramidal cells the majority of the membrane area is in the basal dendrites, which have an electrotonic length of ~0.24 space constants, while the apical dendrites have an electrotonic length of ~0.69 space constants (mean data at body temperature from Trevelyan & Jack, 2002, J Physiol 539, 623). Larkman et al. (1992, J Comp Neurol 323, 137) similarly concluded that most of the dendrites of L2/3 and L5 pyramidal cells were within 0.5 space constants of the soma.

      Elementary cable theory shows that, for a cable (dendrite or local axon) with a sealed end, with current injection at one end, the ratio of the apparent conductance to the real conductance, and thus the ratio of our calculated ATP usage (on Na<sup>+</sup> pumping to maintain the cable’s resting potential) to the real ATP usage, is given by (1/L).(exp(2L) - 1)/(exp(2L) + 1) where L is the electrotonic length (cable length/space constant). For L=0.24, 0.5 and 0.69, respectively, this predicts errors in the calculated ATP use of 1.9%, 7.6% and 13.3%, which are all completely negligible in the context of the other assumptions that we had to make.
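      The quoted expression is tanh(L)/L; a quick check of the stated percentages (Python):

      ```python
      import numpy as np

      # Ratio of apparent to real conductance for a sealed-end cable of
      # electrotonic length L, from the expression quoted above: tanh(L)/L.
      for L in (0.24, 0.5, 0.69):
          ratio = np.tanh(L) / L
          print(f"L = {L:.2f}: error = {(1 - ratio) * 100:.1f}%")
      # Prints 1.9%, 7.6%, 13.3%, matching the values in the text.
      ```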

      For the axon collaterals near the soma, there is less information on electrotonic length, but the few measurements of axon space constant that exist (Alle & Geiger, 2006, Science 311, 1290; Shu et al., 2006, Nature 441, 761) suggest that the axon collaterals near the soma will similarly be electrically compact and thus that their conductance will be largely reflected in measurements of input resistance at the soma. We excluded the part of the axon in the white matter from our analysis, but did include the terminal axon segments in the grey matter (where the white matter axon rises back into a different cortical area). Re-reading after 16 years the source (Braitenberg & Schüz, 1991, Anatomy of the Cortex, Chapter 17) of the dimensions of these axons, it is clear that those authors were uncertain about the contribution of the terminal axon segments to the total axon length, but assumed that they contributed a similar length to that found near the soma in order to account for the total axon length they observed in cortex. It is unlikely that these distant axon segments will contribute much to the conductance of the cell measured at the soma but, partly compensating for this, part of the axon in the white matter will. This, along with the electrical compactness of the dendrites and proximal axons discussed above, implies that our calculated ATP use on the resting potential is likely to be correct to within a factor of 1/f = 1.57 (where f=0.64 is the fraction of the cell area that is electrically compact [ignoring the minor voltage non-uniformity quantified above], i.e. the soma, dendrites and proximal axons, calculated from the capacitances in Attwell & Laughlin and assuming that the proximal axons provide half of the total axon capacitance in the grey matter).

      In fact the situation is likely to be better than this, because this estimate is based on membrane area, but ATP use is proportional to membrane conductance. Estimated values of the conductance of axons (Alle & Geiger, 2006, Science 311, 1290) suggest that the specific membrane conductance per unit area in axons is significantly lower than that in the soma and dendrites (see the Supplementary Information section on Granule Cells in Howarth et al. (2010) JCBFM 30, 403), which reduces the ATP used on maintaining the resting potential of axons.

      General reflections on what people expect from the Attwell & Laughlin paper

      Our paper tried to introduce a new way of thinking about the brain, based on energetics. Given the large number of assumptions involved it would be a mistake to expect individual values of ATP consumption to be highly accurate. Remarkably, the total energy use that we predicted for the grey matter turned out to be pretty well exactly what is measured experimentally. Nevertheless, constant updating of the assumptions and values is, of course, essential. It is interesting that the value we derived for the ATP used per cell on the action potential (3.84x10<sup>8</sup> ATP) was initially revised downwards nearly 4-fold in the light of papers showing less temporal overlap of the voltage-gated Na<sup>+</sup> and K<sup>+</sup> currents than occurs in squid axon (Alle et al., 2009, Science 325, 1405), but has increased with more recent estimates back to be close to our original estimate (3.77-8.00x10<sup>8</sup> ATP, Hallermann et al., 2012, Nature Neuroscience 15, 1007).

      The most important assumption that we made was that all cells were identical, which immediately implies that this can only be an approximate analysis. We were very happy that the total energy use that we predicted from measured ionic currents, cell anatomy and cell densities was so close to the correct value.

      General reflections on post-publication peer comments

We believe that if someone has questions about a paper then the most productive way to get them answered is: (i) to think about the issues; if that fails (ii) to write to the authors and ask them about the questions, rather than posting some vague and erroneous comments that will forever be linked to the paper, regardless of their validity; and if that fails (iii) to write a paper or review which goes through peer review, pointing out the problems. Peer review is crucial for determining whether the points are valid or not: it potentially saves many readers the time needed to read possibly erroneous comments.

It takes a long time to reply to such comments, and we feel that Dr Brette could himself have done the calculations that we have provided in our two sets of responses. We will therefore not be posting further responses.

      David Attwell & Simon Laughlin, 09-02-17


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

  7. Jun 2018
    1. develop and validate measures of writing quality

      I think you can see this growth in writing over time. By having students engage in multiple pieces of authentic writing we may be able to parse growth.

Think about annotations, and about developing a code book to annotate over time. That is clearly demonstrable growth that could easily be analyzed.

      Tools that make teachers more reliable are just as important.

1. After reading “Gilgamesh and the Bible,” I am not surprised that “The Epic of Gilgamesh” and the Hebrew Bible have similar stories. To be honest, the Bible has been translated so many times that I am not surprised the original may have been copied and somewhat changed. After all, there is so much good information that it may have been plagiarized by those who wrote the Bible. Unfortunately, a lot of people take something and translate it their own way, or simply to have a better understanding of how mankind started. The article explains that start, the problems, the consequences, and the death of human life. Even back then, temptations would come along, humans would fall into a trap, and they would have to deal with the consequences of their actions. Today people still face the same problems they did back then, like being tempted to drink beer, separating themselves from nature, dating the wrong people, and looking up to god-like figures. I believe that people should unite without having to depend on religion or the opinions of others; we as humans can be strong individuals. If we believe in ourselves, do not go against nature, and heal our inner souls, we will have easier, less stressful lives. What stood out to me the most in this article was the poem at the end. It simply says that our lives will end eventually, so we must enjoy them while we can: eat well, bathe, take care of our children, and let women be free. I think this surely is the way of life, and we can get over every hurdle that life throws at us.

    1. Why does boredom seem painful? Shouldn’t it just be boring?

      While self-shattering is nonviolent, there are many other ways that thanatos, the destructive instinct, is twinned with boredom

      Renata Salecl acknowledges the twinning of boredom and aggression when she writes that the society “which allegedly gives priority to the individual’s freedoms over submission to group causes” (2006) and filters choice through the prism of “opportunity cost” is one that “causes aggression towards [the self] and apathy in relation to contemporary social problems which are completely ignored by the emporium of individualist choices” (2013a).

      Sometimes the aggression turns outward, as well. The Internet troll as bored, isolated malcontent is well established as a cultural trope and borne out by empirical data (Sanghani 2013). Liam Mitchell (2013) even ups the ante on this notion by proposing that the troll tackles the “desire for desires” problem by erecting “a conscious barrier to unconscious desire” by eliding investment in its principal object, which is amusement at another’s expense, or “lulz.” In Mattathias Schwartz’s (2013) formulation, lulz is “a quasi-thermodynamic exchange between the sensitive and the cruel”; humour derived from “disrupting another’s emotional equilibrium.” In pursuing lulz, the troll establishes “a distance from other trolls (with whom he may or may not feel a bond) and from the people who are governed by normal formations of desire” (Mitchell 2013). Insofar as the troll’s pursuits “bypass or forestall normal formations of desire, they may be characterized as non-subjective.” This is significant because, as Mitchell says, our choices only “have lasting meaning, for others and for ourselves … when we can be held accountable to our promises,” and this is impossible in a condition of both online anonymity and refusal of subjectivity.

      The study “Just think: The challenges of the disengaged mind” (Wilson et al. 2014) asked participants to spend some time alone thinking in an empty room. There were three study conditions, in all of which participants generally gave high ratings of boredom. In one condition the experimenters gave people the option of giving themselves a mild electric shock. 67% of men and 25% of women shocked themselves. So goes the saying “the devil makes work for idle hands.”

      Or more broadly: there are many ways, as Baudelaire said in Les Fleurs du Mal (1857, xxv), that “ennui makes your soul cruel.”

1. There may be frustration that more progress was not made, or that some misunderstanding still remains, but we can’t expect everything in real life to be totally successful. The consultation will have been a failure if, at its end, the client feels that the consultant was not listening, or was not interested in trying to understand the problem, or did not know enough Statistics to solve the problem.

      I think this is a great thing to focus on and keep as a reminder for any kind of consulting project. If we can maintain a solid relationship with the client, solutions do happen.

    1. including the argument to burn it all down

It would be truly amazing to see how the world would be without the internet, now that we are surrounded by and completely submerged in it as a society. Some may say we lived without it all the time before the late ’80s, so it shouldn’t be that hard. That is true; however, back then the internet wasn’t even fully evolved like it is today, nor was it yet used by the majority. I think it would definitely be a different story today because we rely on it for everyday purposes. I mean, hey, I’m taking an entire course online as we speak!

1. After more than 40 hours of research and over a month of testing 13 devices, we think the GreatCall Lively Mobile is the best medical alert system for most people. Unlike most devices, it can reach either 911 or a call center from anywhere in your home or out in the world. That means the GreatCall Lively Mobile can help in all manner of situations, from getting EMTs and loved ones on the scene after a fall to contacting a friend for you if you can’t find your phone (yes, really; we tried it). It’s less expensive per month than any similar device, and relatively stylish, too. But choosing a medical alert is a personal decision, so there are different factors to consider: If you won’t wear a medical alert that looks vaguely like a medical device, won’t remember to charge a medical alert, or will have trouble pushing a button during an emergency, we have picks for you, too.

Our pick: GreatCall Lively Mobile, the best medical alert system. Our favorite medical alert system is comfortable to wear around your neck or on a belt clip. We found that the call center picks up faster than the competition, typically 15 seconds after you push the button.

The water-resistant GreatCall Lively Mobile can dial a call center or 911 directly (the ability to do both is a rare feature) from anywhere, and it’s easier to wear than the competition: It can go on a lanyard (with a magnetic clasp) that’s long enough to slide over your head, or on a belt clip. The silver box and plain white indicator light are more understated than the competition, and GreatCall offers the lowest-cost month-to-month plan of anything we looked at. The battery lasts 24 hours, according to GreatCall, though we found it could go up to 50 hours with minimal use. The advertised battery life is on the low end of the models we tested, but our experts recommended getting in the habit of charging your medical alert every day anyway. The GreatCall Lively Mobile works anywhere there’s Verizon cell service, which we’ve found to be the most reliable network.

Also great: Lifestation At Home, an at-home medical alert system. Call for help from a room or two in your house with this less-expensive and easier-to-wear system, available in versions that connect to a landline or cell service.

If you are with someone whenever you leave your house and have a small living space, or just want protection in one place (like the shower), you might not need a mobile medical alert like our pick. An at-home medical alert is less expensive with a less bulky button to carry, but the range is very limited. Most at-home systems are similar, but we found the Lifestation At Home to be a little easier to use and less expensive than the competition (month-to-month plans are $30 per month for a device that connects to a landline, and $37 per month for one with cellular service). The major downside of the Lifestation At Home is that you cannot speak directly into the device. If you fall, you can push the button from a few hundred feet away from the base station to dial the call center, but you’ll need to be within shouting distance of the base station to communicate whether you’re in need of 911 help or just want an emergency contact to come help you get up.
(If you are unable to speak, or the call center cannot hear you, they will follow a course of action that you specify when you sign up: call a family member, call EMTs, or some combination thereof.) The battery in the wearable button lasts three years, and the base unit plugs into the wall. The button connects to the base via radio signal.

Also great: Apple Watch Series 3 (aluminum). No call center and no contract; bare-bones emergency features, but the most stylish.

If looks matter to you and you have an iPhone, or if you may have trouble pushing a button in an emergency, consider the Apple Watch Series 3. Though it’s bare-bones in emergency functions, it’s much more discreet to wear than other devices we tested. Because compliance matters more than anything when it comes to these devices, wearability is important. At a one-time cost of a few hundred dollars, the Apple Watch works out to be less expensive than buying a separate medical alert with a monthly bill after about a year of use (not including the cost of iPhone service, which is required for the Apple Watch to place phone calls). A button on the side of the watch allows you to place a call to 911 and can alert your emergency contacts that you placed a call when you are in range of your phone or connected to the same Wi-Fi network, and you can speak directly into the watch. You can also place an ordinary nonemergency call through the watch, either by scrolling through your contacts list or saying, “Hey, Siri, call [contact].” The Apple Watch battery lasts 18 hours with some use (less if you’re using it to make frequent phone calls). In September 2017, Apple announced a version of the watch with LTE, but we recommend the one without cellular connectivity for now.

Budget pick: Ask My Buddy, a bare-bones option for home. It’s a voice-controlled app that can give you added peace of mind, but can’t call 911. It’s relatively easy to set up Ask My Buddy on the Amazon Echo, our favorite voice-controlled device, which can also play music, tell you the weather, and control smart devices.

If you want an extra layer of security at home and are considering getting a voice-controlled smart-home device anyway, Ask My Buddy is a free service available on the Amazon Echo (here’s our full guide on Amazon’s Alexa devices). If you need help and are in the same room, you can say “Alexa, ask my buddy to send help” and your emergency contacts will get a notification via email, text, or robocall. You can also place a call through the Echo to anyone with an Alexa device, or the app. Of all the medical-alert-capable devices, the Echo plus Ask My Buddy is one of the least expensive and least intrusive options. However, it offers very minimal protection: It can’t travel with you, and it can’t actually call 911 or reach anyone who is constantly available to dial 911 for you if you ask. We wouldn’t rely on any medical alert alone to save us in an emergency where every second counts, anyway, but this one ranks the lowest in terms of how much it can help in a variety of situations.
Why you should trust us

Medical alert systems (sometimes referred to as personal emergency response systems) have been around for decades. Perhaps the most recognizable name brand, Life Alert, with its ear-worm of a slogan “I’ve fallen and I can’t get up,” was founded in the 1980s. To understand how people use medical alerts, I spoke to George Demiris, PhD, a professor in the department of biomedical informatics and medical education at the University of Washington; Marita Kloseck, PhD, director of the Sam Katz Community Health and Aging Research Unit at Western University in Ontario, Canada; and Majd Alwan, senior vice president of technology and executive director of the LeadingAge Center for Aging Services Technologies. I also spoke to experts who help people select medical alerts: Mindy Renfro, who has worked as a physical therapist and is currently a research assistant professor at the Rural Institute On Disabilities, part of the University of Montana; Richard Caro, who writes about medical alerts at Tech-enhanced Life; Tony Rovere, chair of the Long Island chapter of the National Aging in Place Council and blogger at StuffSeniorsNeed.com; and Melissa Kantor, the executive director of Long Island at Home, which sells medical alerts and aging-in-place services to local seniors. I fall well below the typical age at which people purchase a medical alert, so I approached the research as though I were selecting one for a loved one to use. According to experts, I’m not far off from a typical customer: Many medical alerts are purchased by adult children looking for ways to better support a parent who is aging in place. I spent weeks trying out the devices for myself. I also consulted a family member who already uses one, my great-aunt Kay.

Who should get this

My great-aunt Kay lives alone in Erie, Pennsylvania, near the farm where she and my grandma grew up. She’s had a couple nasty falls in the past five years, but she doesn’t want to live in a nursing home. She prefers her own house and her daily routines. But she wants to know that if she needs to, she’ll be able to call for help quickly. Her medical alert device helps her maintain an independent lifestyle, as it does for many. “I dread just being an ill person who can’t cope with daily looking after yourself,” said one participant in the focus groups researcher Marita Kloseck conducted on what it’s like to live with a medical alert. “The last thing I want to do is lose my independence and be an invalid, it’s my biggest fear.” If you’re living independently and at risk for falling or another medical emergency, a medical alert is one safety measure to consider. Out of the 30 medical alert users in Ontario, Canada, that Kloseck and her colleagues spoke to, 90 percent agreed that the devices helped them maintain their independence.
Though many people, like Aunt Kay, turn to medical alert systems after they’ve had a scary incident, the best time to get one is before you need it. If you are having trouble standing up to get out of a chair, said Renfro, it’s a good time to consider one—especially if you live somewhere where neighbors are few and far between, as is the case in Montana, where Renfro works. It’s not just for falls. One interviewee in Kloseck’s focus group reported successfully contacting EMTs via a medical alert after indigestion-like pain for over a day. “I think they must have flew here!” Another said she liked having a medical alert in case someone suspicious showed up at the door. A medical alert might simply relieve anxiety about emergencies. “You hear about people who fall and then can’t get help and they lay there for sometimes hours, but it just scares you when you think that could happen,” noted one participant in Kloseck’s survey on what motivated her to get a medical alert. “Subscribers reported feeling a sense of security or peace of mind,” Kloseck writes. As Aunt Kay puts it: “I feel protected.”

A medical alert should be just one line of defense against any medical emergency, along with working with a physician or physical therapist to monitor or improve your health and eliminating any hazards around the house, said Alwan. No devices we tested worked perfectly, and no medical alert will undo the damage of a fall (or anything else). Though all experts I spoke to agreed that medical alert systems made you safer, it’s hard to tell by how much. Studies suggest that these systems can reduce the amount of time spent on the floor after a fall, but there’s nothing conclusive in the way of peer-reviewed work showing how many lives they save per year. (In fact, experts I spoke to mostly said that their own parents didn’t have medical alerts, preferring to rely instead on check-ins with a friend or neighbor.) You should get a medical alert only if you’re committed to wearing and using it. “I don’t even move without it,” Aunt Kay said. It doesn’t do any good sitting atop your dresser, or if you don’t feel comfortable pushing the button in an emergency. They make poor surprise gifts, says Renfro. If you’d like to foot the bill for a device for a loved one, make sure it’s something they want, and involve them in the process of picking it out.

Can I just use a cell phone, a smart watch, or Alexa/Google Voice?

Today, many of us already carry around an emergency help button: our cell phones. In fact, for some, the ability to dial 911 is the main appeal of having a wireless device at all, according to the Federal Communications Commission. If you’re in the habit of carrying around a phone constantly, it might be a good alternative to a medical alert. There are a few downsides: Most important, your phone might not be water-resistant, or at least might be awkward to take in the shower and hard to reach if you slip. It doesn’t have an option for automatic fall detection, which some medical alerts do. In a nonemergency it can’t reach a call center, so you’ll have to dial family until someone picks up. And it’s not set up by default to automatically share your location, as many medical alerts will with the call center agent. A phone’s battery also doesn’t last as long as a medical alert system’s, if you’re using it to do other things.
However, if you know you’ll have trouble remembering to wear a medical alert, or can’t afford $20+ per month, committing to keeping your phone on you at all times is better than nothing. Some companies have apps that provide access to a call center just like medical alerts do, at about a third of the price of a monthly medical alert subscription. However, some can be confusing to use, slow to load, and even sometimes freeze, according to Caro, who tested out three of the apps. They’re also stuck behind a lock screen. There are a few devices that offer medical-alert-like features that are not technically medical alerts, including the Apple Watch and other smart watches, Amazon Alexa devices, and Google Home. Though these devices can offer emergency features, none will be as reliable as a device that has the sole purpose of reaching help. If you know you won’t wear a traditional medical alert, these are better than nothing.

How we picked

There’s a medical alert for every lifestyle. The most important feature of a medical alert system is that it’s something you are willing to use. Not even the most reliable device will be of use if it’s stashed in a drawer. Comfort and stylishness tended to decrease with range, I found. (A notable exception to this was the Apple Watch, which was the most comfortable and stylish of the bunch, and can go anywhere as long as your iPhone goes too.) We ended up testing a wide range of devices that covered all the possible configurations, but here’s how to figure out which kind works best for you. (Finding a device that you like might involve some trial and error: it took Aunt Kay two tries to find a medical alert that works well for her needs, and some time after that to figure out how to wear the device in a way that’s comfortable.) If you have a living space that’s bigger than a couple rooms, or if you leave your house alone, a mobile medical alert, which can go anywhere there’s a cell signal, will work best to keep you safe. They consist of a unit a little smaller than a deck of cards that you wear around your neck or on a belt clip that houses a GPS system, a speaker, and a microphone. The button calls someone directly from the device, and you speak to them through the unit. The button on most mobile medical alerts dials a call center (though our favorite can also reach 911). Agents are available 24/7, and pick up anywhere between 15 seconds and two minutes after you press the button. They can send 911 to your house, call a friend or family member on your behalf, or simply keep you company while you troubleshoot the situation yourself. They typically have a $20 to $70 monthly fee that includes the cost of the service and the device. (There’s often an activation or equipment fee, too.) Mobile medical alerts work off mobile carriers (e.g., AT&T, Verizon), so you’ll need to check the coverage in your area before making a purchase. They also need to be charged daily, or every few days, depending on the model.

If you don’t want to call 911 directly in a minor emergency or if you slip in the shower while you’re naked, mobile medical alerts offer a way to get a variety of help, via a call center. The call center employees are there 24/7, unlike family members who will inevitably be sleeping, in work meetings, or on vacation sometimes. Further, mobile systems that connect to a call center almost always come with an option for automatic fall detection for about $10 extra per month (if you don’t like it, you can turn it off).
When the device senses a change in vertical acceleration, it calls for help. If you are totally knocked out, the operator will attempt to figure out your location via the GPS signal from the device. Fall detection is a great idea, in theory, said experts. In practice, it’s prone to registering false positives, or failing to detect actual falls. “It can be embarrassing, it can [disrupt] activities, it can be costly,” said Demiris. (Part of the problem: Stunt actors falling accidentally on purpose are often used to calibrate fall detection.) Even if the device does successfully make a call after you’ve slipped, if you’ve been knocked unconscious, the operator at the call center could still have trouble figuring out where to send help. Most mobile systems have built-in GPS, but the little dot that shows operators where you are is subject to drift around. (Have you ever opened Google Maps on your phone and had the blue dot appear somewhere you aren’t? That’s it.) There are technical improvements that can be made on bare-bones GPS—like a device that checks in with Wi-Fi signals, when possible—but no device will always pinpoint your location accurately. At-home medical alerts are devices that are for use just at home, with a base station that can be connected to cell service or a landline. With just a few exceptions, these consist of a small, light button that can be worn on your wrist or around your neck. Push the button within about 600 feet of the base station (they’re connected via a radio signal), and you can speak to an operator through the base, which looks kind of like an answering machine. These medical alert systems tend to have lower monthly costs, and a device that’s far less bulky and annoying to wear (it’s about the weight and diameter of a quarter). There’s nothing to remember to charge, either. But the limited range can be frustrating, according to participants in a survey conducted by researchers at Jönköping University in Sweden, not just because it limits movement. “In particular, they felt that the lack of new technical innovations in the alarm system, such as the inclusion of a global positioning system (GPS), was a clear indication that their needs were not considered priorities in society,” the researchers write. A homebound system can make you feel homebound, which isn’t useful for people who want to be active outside of the house. Some companies offer affordable devices that can be used to call a loved one or even 911 directly. They do not reach call centers or have their own cell service (the two features are typically paired). These are less expensive because they lack a monthly service fee; rather, they rely on Wi-Fi, a smartphone, or a landline. They range from specialized medical alerts to the Apple Watch. No matter what style of medical alert you want—mobile or traditional, with a call center or not—you have a few options for how you’ll wear the device. Medical alerts can hang around your neck or wrist, or clip to your belt; for any particular device, there are often at least two options. What works best is largely personal preference, though Demiris notes that a device worn around your neck can be easier to make a habit of wearing if you’re used to putting on jewelry in the morning (it’s also necessary if you are using automatic fall detection). Battery life varies broadly for medical alerts, anywhere from just a day to a week. Experts advised getting in the habit of charging the device every night, so we didn’t prioritize long battery life. 
There’s typically no volume control on medical alert systems, which are about as loud as a cell phone at top volume and on speaker. The advantage is that there’s no way to accidentally turn the volume down. However, if you’re hard of hearing, volume could potentially be an issue. We looked only at devices that came with the option to make monthly payments or required no payments at all, and we discarded any that require you to have an annual contract for the service on the advice of Tony Rovere, who blogs at StuffSeniorsNeed.com. You should be able to send the device back without breaking a contract if you try a particular one and realize it—or the whole concept—just isn’t for you. There are various certifications that medical alert equipment and call center equipment can have to make sure they’re up to certain safety standards. For example, companies can pay Underwriters Laboratories to verify their device has certain features. Experts we spoke to disagreed on the level of importance of these certifications, but no one thought that it was a dealbreaker to not have one. And because we did our own testing, we were able to learn firsthand if a system was reliable. Some companies will advertise that their call center is based in the US. Caro, who writes about how to pick a medical alert on Tech-enhanced Life and logged many hours himself testing medical alerts, pointed out to me that all the call centers he’s encountered sounded like they were based in the States.

How we tested

It’s hard to tell a lot about how easy, effective, and comfortable a medical alert is from descriptions online or from people who may have only used one on their own with no point of comparison, so we decided to try the devices ourselves. I spent several weeks integrating the devices into my life, and then pushing their limits as much as possible. I went through the setup process for each device, which ranged from placing the device in a charging cradle (which all mobile medical alerts use) and following a few verbal instructions, to leafing through a fine-print manual. One device required a traditional landline; I trekked to a coworker’s parents’ apartment on the Upper West Side to use one after it wasn’t compatible with our VoIP system at work. I used each medical alert for at least a day, wearing the mobile medical alerts to work and out with friends, making test calls in all manner of locations. For a while in February, my outfit was consistently punctuated by a low-hanging, blinking device, my kitchen counter and bedside table littered with in-home devices and charging cradles. I made several test calls with each device and compared both response time and the quality of customer service. We prioritized devices that could be worn a variety of ways and made accommodations for people without fine motor skills, like a lanyard with a magnetic clasp that doesn’t need to be looped over a head. Some devices require dexterity not everyone has, like pushing a lanyard through a small hole, or attaching them to a belt clip.

What medical alerts are like to use and wear

When I wore the Medical Guardian Premium Guardian, a former runner-up pick, out one night with my coworkers, the diamond indicator light blinked red as I ate my food. In the course of chatting about work, I mentioned that I was trying out medical alerts. “Oh that’s what that thing is,” one coworker said.
“I thought maybe you had an allergy.” While I was getting used to having a medical alert on me, they still read as a medical device and a little bit strange to the outside world. I was surprised and delighted to learn during this process that, despite the fact that advertising for these devices seems to prey on our fear of mortality and disaster, I didn’t have to be in a life-or-death situation in order to buzz the call centers. The operators are just as happy to help talk through a situation and provide support from afar, and never seemed to be itching to push me into declaring an emergency. The buttons for in-home medical alerts are all tiny and barely noticeable. Mobile medical alerts were the biggest nuisance to wear, in part because of their size, and in part because they tag along for all manner of social situations. They are heavier and can draw considerable attention. I got in the habit of tucking the medical alerts into my shirt, per Aunt Kay’s advice. Some made their presence known even when they were out of sight, chiming to indicate their charge status when I was in a crowded elevator at work, or even speaking up at inopportune times. One day at work, the Premium Guardian verbally announced to me and everyone in a two-cubicle range that its battery was low. I spent a lot of time doing the dreaded thing—pushing the button to ask for help—just to see what would happen. Some devices made chiming sounds, some vibrated, and some noted that they were dialing the call center. The best medical alerts continuously did something as I waited for someone to pick up, as long spaces of silence would leave me wondering if I had accidentally hung up or lost signal. For medical alerts with call centers, someone typically picked up within 30 seconds. Longer than that felt like an eternity, even from the safety of my desk or bed; I wouldn’t want a loved one waiting that long during an emergency. All call centers say more or less the same line when they pick up: “Hello, do you need help?” I usually said no, I was just placing a test call. In one instance, curious if the call center would be willing to help out in a truly minor situation, I asked the operator to call my boyfriend to tell him I was running late to meet him. The operator was happy to oblige. Voice-controlled units like the Amazon Echo don’t require you to wear anything, but work reliably only when you’re in the same room. I set up the Echo in my kitchen, and when the dishwasher was running, even when I was screaming “Hey, Alexa!”—the signal that you’re about to give the device a command—over and over from a room away, it could not hear me. (This is also a pitfall of relying on a device to play music and be able to hear you in an emergency.) I had similar experiences with traditional in-home medical alerts. The range on these devices is technically several hundred feet from the base station, and though the call center operators could hear me yelling from a room away, they had trouble understanding me. (I just moved closer to the base station, but in the event that you fall and they can’t hear you, they’ll follow a preplanned course of action that you decide when you sign up, like calling a family member and then EMTs.) Services that try to use both the button and a base station to communicate were suboptimal. In the case of one hybrid mobile and home device, an operator first tries to talk to you via a stationary home box, and then switches to the wearable if you don’t respond there. 
After pushing the button at work, I sat in an empty conference room for a full two minutes while, presumably, someone first tried asking my empty apartment if anyone needed help before switching over to the speaker around my neck. During test calls, I asked operators to identify my location. No GPS was consistently accurate, though they were often correct within a couple blocks. This makes backup measures attractive, like the GreatCall Lively Mobile’s Web interface where you can log your typical schedule. Sometimes the GPS was way off. Once, while testing the Bay Alarm Medical GPS Alert System, an operator said that I was at the New York Times building in midtown where the device had lost power, when in fact I’d gone home to Brooklyn. The device lost power downtown, and had only just been recharged when I placed the test call; I suspect that it hadn’t been on long enough to update its location. On another occasion, the GPS on a device wasn’t working at all, and took two phone calls to customer service to fix. I found that operators were rarely able to troubleshoot problems with the device or answer questions about service. Though call center employees were willing and able to help with even minor incidents, they weren’t inclined to make small talk. Once, after noting my location an operator did exclaim that she used to live on my street, and we had a short conversation about the rising rents in Brooklyn. But with few exceptions, the call center people hung up quickly after addressing my requests. Despite being vaguely worried when I started this project about accidentally having EMTs show up at my house, I never once pushed the button on a medical alert unintentionally during testing, including a few occasions when I just threw them in my purse. If you do accidentally hit the button, chances are you will be connected to a call center, and you can just clarify what happened with an operator. (Medical alerts make noise when they are placing a call, so a butt-dial will not go unnoticed.) Most medical alerts do not call 911 directly, and those that do require a more deliberate, prolonged push to reach emergency services. At first I skipped providing my emergency contacts, in part because I didn’t expect to be in immediate danger, but also because it was such an easy step to overlook. In all but one case, it was possible to get through the activation process without providing them, which you typically have to do over email, fax, or via snail mail to ensure that the contact information is entered correctly. Only one model, the GreatCall Lively Mobile, allowed you to enter them in an online interface.

Our pick: GreatCall Lively Mobile

The GreatCall Lively Mobile is intuitive to use, and has a plain design that won’t draw too much attention. The GreatCall Lively Mobile was one of the easiest mobile medical alerts to wear and use, and costs less than any other medical alert of its kind that we considered, with service starting at $20 per month, with one-time fees totaling $80. The rectangular silver and black (or gold and black) design draws minimal attention, and the call center consistently picks up quickly—up to eight times as fast as others.
The battery life is 24 hours, according to the company, among the shortest we considered, though I found it lasted nearly twice that long with minimal use. The device is a little smaller and lighter than a deck of cards. One big button in the middle dials the call center or—if you hold it down—911. A small button on the back turns it on and off. A small battery-indicator light changes colors when the Lively is low on charge, but it doesn’t draw a ton of attention to itself. When the Lively shuts off from low battery, it announces that it’s doing so. (It was loud enough to wake me up at 4 a.m. one day, a good feature if you’ve forgotten to charge it and have missed the battery light.) The Lively Mobile can go anywhere there’s Verizon cell service, including your shower, as it’s waterproof. In separate tests, we’ve found Verizon to be the most reliable network, though it doesn’t cover every part of the country. Check here to see if your area is covered. The Lively Mobile is one of the only medical alerts we looked at that has the option to call either a call center, or—by holding down the button—911 directly. The speaker and microphone in the device provide sound quality that’s better than that of many other devices we considered. If you dial an agent from the Lively, they’ll typically pick up about 15 seconds after you push the button; other devices left us hanging for what felt like forever. If you are lost, or unable to speak, the agent can look at a GPS signal and a list of places you frequent to help identify your location. The Lively Mobile is the only device that has an easy-to-use online interface where you can store emergency contact information. With all other devices, you have to email or snail mail your emergency contact information (this ensures accuracy compared with speaking the information over the phone). GreatCall offers the most affordable basic service packages of all the mobile medical alerts we tested, at $20. Fall detection costs an extra $15. GreatCall also has a middle tier, for $25, with access to doctors via the device (though they emphasize that this feature should not be used in an emergency), and allows family and friends to tell when you leave home or return via the GreatCall Link app. The first tier of service should work well for most people, though if the idea of being able to loosely track a loved one’s movements appeals to you, or if you want the extra security of (somewhat unreliable) fall detection, consider upgrading. The Lively Mobile has a separate on-off button, which means it’s impossible to accidentally turn it off when you’re calling for help. The lanyard is soft and black, shorter than those of much of the competition, and has a magnetic clasp so you don’t need to be able to lift your arms above your head to put it on or mess with a complicated closure. (There’s also an option to wear the device on a belt clip.) The instruction booklet for the Lively Mobile is easy to read. This is a small point, but it was much better than the thick, tiny-print instruction books that some of the competition had. GreatCall has been around since 2006—the company is best known for making Jitterbug flip phones—and debuted the Lively Mobile in mid-2016. The device is an upgraded version of GreatCall’s previous mobile medical alert, the Splash, which garnered positive reviews. Medical alert reviewer Caro praised the Splash for the call center’s fast response time, ability to call 911 directly, and easy online interface, all qualities that the Lively shares.
Flaws but not dealbreakers

No medical alert is actively enjoyable to wear, and the Lively Mobile is no exception. It will likely take some time to get used to having the device around your neck. On the Lively, a white light flashes consistently to indicate that it’s in an area with service. Though this was less intrusive than the more colorful lights on some other devices, it could still be annoying; there were no mobile medical alerts without lights. The length of the Lively Mobile’s lanyard is not adjustable. Though I found the relatively short lanyard to be easier to wear than the competition’s, this might not be the case for everyone. Even though the lanyard can be easily swapped out, most traditional lanyards (which have a clasp that attaches to a badge) will be a little awkward. If you want a different lanyard with a specific length, you’ll need a little DIY savvy. If your area is not covered by Verizon, the Lively Mobile won’t work for you. Check your coverage here. Another flaw that all medical alerts share: the GPS signal can be unreliable. However, the Lively helps skirt this by prompting you to enter information into an online database (from a computer or a smartphone app) about your schedule and where you go during your days so the call center staff have something to fall back on. It’s the only medical alert that has this feature. Of all the medical alerts we tested, the Lively Mobile has one of the shortest advertised battery lives: 24 hours, as opposed to 36 hours or even several days. I found the battery lasted over 50 hours with minimal use, though I wouldn’t want a loved one counting on it working for that long on a single charge. Experts recommend getting in the habit of charging your medical alert nightly, so that you don’t have to think about it. If this will be hard for you, consider an in-home medical alert, which doesn’t need to be charged.

Also great: LifeStation At Home

The LifeStation At Home system has a small button and a base station. If you just need a medical alert to cover you in a couple rooms of your house, consider the LifeStation At Home system, which is about $30 per month (there’s no activation fee). Like all in-home medical alert systems, it consists of a small button that you can wear around your neck or on your wrist that wirelessly connects to an answering-machine-like base station that lets you speak to a call center agent (there’s no option to dial 911 directly). Though it can’t leave your house, and you can’t speak through the button, it’s easier to wear than our top picks. There’s no charging required; the button’s battery lasts about three years. Home medical alert systems are all very similar, but LifeStation’s is a little less expensive than other options we looked at, and didn’t give us any trouble during testing. The main perk of an at-home system is that the device is much easier to wear than those in mobile systems: The LifeStation button is about the weight and diameter of a quarter, and just a little thicker. In comparison, our main pick and runner-up are just a little smaller and lighter than a deck of cards.
If you don’t need a medical alert that you can leave the house with, are mostly concerned about slipping in one room—the bathroom, for example—or know that you just won’t wear anything but the least-intrusive device, the LifeStation At Home might be a good option. The major downside of this or any at-home system is that its range is incredibly limited, even if you’re just using it in your home. The range of this device is several hundred feet—that is, the button can still communicate with the base station if you are on the other side of a small house. Though it’s difficult to communicate through the base station if you’re even one room away, you can choose at the time of setup what course of action the call center should take if you push the button and they don’t hear anything.

Also great: Apple Watch

The Emergency SOS feature dials 911 and texts your emergency contacts: press the button on the right once to get this screen, or hold it down to activate the SOS feature immediately. A medical ID screen has a few details for paramedics. Apple Watch Series 3 has basic emergency functions compared with most medical alerts we looked at, and requires a little tech savvy to use. Out of everything we tested, it’s the only wearable device that’s stylish and doesn’t look at all like a medical device. (We tested the Series 2 but it is no longer available.) You will need to have an iPhone for the watch to work, but if you’re already paying for that service and you are comfortable with navigating Apple services, the watch may be relatively affordable—it currently costs $330, which will buy you less than 10 months of service with a typical mobile medical alert. (We recommend the version without cellular service; more on that in a minute.) The SOS feature (which was introduced on the Series 2 model) allows users to dial 911 by pushing and holding down the button on the side of the watch, and can automatically text up to three emergency contacts and give them your location when you do so. Apple Watch hasn’t had emergency features long enough for our experts to evaluate its usefulness as a medical alert, though they agreed it could be useful. Apple Watch’s battery lasts 18 hours with some use. You can speak to a 911 responder directly through the watch, or if it’s a nonemergency, you can dial a friend or family member through the watch verbally, by saying (for example), “Siri, call [name].” The sound quality of Apple Watch is better than any medical alert we considered. There are a variety of bands to choose from (some costing hundreds of dollars themselves, like a Hermes band), making Apple Watch the most customizable of all the devices we looked at. Aside from the limited functionality, the major downside of Apple Watch is that you have to be within Bluetooth range or connected to the same Wi-Fi network as your phone for it to place a call. This means that you can’t necessarily just set your phone down in your house, wander away from it, and know that your Apple Watch is going to keep you safe in the event of an emergency (one of the key advantages of a true mobile medical alert system).
There is an LTE version of the Series 3 that allows you to place calls without being in range of your phone, but we can’t recommend it. Preliminary reviews have noted connectivity and battery issues with the LTE version, plus you’ll need to pay about $10 a month for the Watch to have its own service. We plan to test the service for ourselves, and we’ll keep an eye out for improvements. I found that navigating the tiny screen on the watch could be challenging, though this was mostly an issue for using functions other than the SOS feature. (However, if you buy Apple Watch, you’ll likely want it to work for other things, too.) If you find yourself fairly comfortable with most Apple devices, and okay with the size of newspaper print (the font can be enlarged on some apps, like text messaging, but not all), scrolling through apps on your wrist shouldn’t be too much of an adjustment.

Also great: Amazon Echo with Ask My Buddy

At your verbal request, the Amazon Echo can send an alert to loved ones. If you don’t carry your phone around in your home, won’t remember or want to wear even a small button, would have trouble using a button in an emergency but can vocalize and enunciate pretty clearly, or just want another layer of security, consider using Ask My Buddy paired with the Amazon Echo (you’ll need to have a smartphone or tablet to use it). When you say, “Alexa, ask my buddy for help” the service will send a text, email, and phone call to a list of contacts to let them know they should check in. You can also place a phone call through the Echo to anyone who has the free Amazon Alexa app on their phone. Though the Echo may be the easiest and least expensive device to fit into your lifestyle, this setup would not be helpful at all in emergencies, and only marginally helpful in nonemergencies. Still, it would be better than nothing. There’s nothing to remember to wear or charge, and the device doesn’t look anything like a medical device because it’s not. You’ll get all the other capabilities of the Echo (here’s our full guide), and at $180, it costs less than four months’ worth of service with a traditional medical alert company. Unlike other services, this one can’t connect you to 911 directly or via a call center or confirm that someone received your request. The range is small; the feature works reliably only if you’re in the same room as the Echo (that said, multiple Echos or Echo Dots can all be linked together to cover many rooms or a larger home). As such, Ask My Buddy should be treated only as an additional tool for a little added peace of mind in addition to thoughtful design of a home around the person using it. I wouldn’t want a loved one of mine counting on any medical alert alone to keep them safe, but especially not the Echo. We think that the Echo will be the best device to pair with Ask My Buddy for most people, though it also works with other Amazon Alexa devices and Google Home (our full guide).
The Echo indicates that it heard you with a ring of light at the top of the device, and it’s taller than Google Home (which has a slanted face with indicator lights) and other Alexa options, making it easier to see from across the room. Ask My Buddy was a little easier to set up and use on the Echo than on Google Home (you’ll still need a little app savvy, or have someone around who does, as you’ll need to connect the device to a smartphone and Wi-Fi). The range on any voice-controlled device is smaller than that of a home medical alert system with a base station. When I tried screaming “Hey, Alexa” over and over from a room away while the dishwasher was running, it didn’t pick up my voice. The sound quality on either end of the phone call placed through the Alexa wasn’t as good as it is on a traditional medical alert device. Though you may run into range problems with a home medical alert with a base station, it’s easier to get someone on the line (you just push a button), at which point, they’d at least know that you needed help even if you were unable to communicate; the same can’t be said of a call that’s not picked up or a text that goes unseen. Another concern we have about Ask My Buddy: It’s a free app run by volunteers. There’s no guarantee that it’s sticking around, and you can’t contact people otherwise through an Echo. There’s an email to send issues and questions to, but you can’t get in touch with a representative right away if you run into a problem with its service. Amazon does have a customer service line to help with Echo setup. Amazon’s Echo Connect, launched in September 2017, is the first Echo device with phone calling capabilities (rather than just Echo-to-Echo intercom communication). You can use it to call any landline, including 911 calls. It also has built-in speakerphone and caller ID features. We are looking into the possibility of using the Echo Connect as a medical alert system, and if we think it’s a better option than an Echo paired with Ask My Buddy we’ll update this section with our thoughts.

What to ask in a test call

For medical alerts that come with a monitoring service, experts recommend pushing the button on your medical alert at least once a month to confirm that it’s working well. This step is especially important when you first start using your medical alert. Pushing the button should feel like second nature during a true emergency. One participant in the University of Western Ontario focus group recalled forgetting about the device during a heart attack: “The button didn’t even come into my mind. All I knew I was in trouble.” Plus, you need to make sure that the device works the way you think it does. Aunt Kay fell outside her house, and, thinking she had a mobile system, pushed the button to call for help to no avail. Though she was able to crawl to her car to retrieve her cell phone, the incident left her and our family shaken. It’s scary to find out you don’t have help at the ready when you think you do. Through expert advice, and some trial and error, I learned that there are a few things that I’d want my loved ones doing during a test call to ensure that their device is working properly. Confirm that the company has your correct home address on file. In particular, if the device was shipped to another location, this could be wrong—and cause problems if you fall and say you’re at home. Ask the operator if they can tell you where you are right now. If they are off by more than a block, call customer service.
In one case during my tests, the GPS wasn’t working at all, a problem that might not have come to light if I hadn’t asked about it. Again, in another case, when I called from my home in Brooklyn, the operator informed me that I was at the New York Times Building, in midtown Manhattan. If you think you have automatic fall detection, confirm that this is the case with the operator. Many devices will announce that they are activating automatic fall detection when you first plug them in, even if this feature isn’t something you are paying for and therefore won’t be able to use. If you do have automatic fall detection, do a test fall by dropping the device on the ground. I also learned that the agents at the call center are typically not able to help you troubleshoot any issues that the device itself is having. If you learn during a test call that anything is amiss, you’ll need to hang up and call customer service.

Why we don’t recommend Life Alert

We quickly eliminated Life Alert—the brand so ubiquitous its name is often used to describe medical alerts in general—from the running. The company requires users to sign a 36-month contract that can be broken only if you go into 24/7 care or die. That’s a dealbreaker because it’s hard to know if a particular medical alert (or any medical alert at all) is something you’ll use until you try it out. The ability to cancel your service with minimal penalties is key to a good medical alert. Beyond that, Life Alert’s marketing is aggressive, making perusing its products annoying at best. There are outsized claims about its products’ lifesaving abilities on the website, but minimal information on the devices themselves. When I called the customer service line for more information, a representative immediately asked for my address. As I asked questions about the service, a rep encouraged me to give my mom “the gift of life”—meaning its product—for Mother’s Day.

The competition

Our former runner-up pick, the Medical Guardian Premium Guardian, is no longer available. Medical Guardian now offers a different device, the Active Guardian, instead. We tried this device, and don’t like it enough to recommend it as a pick, though if you don’t care about looks and are better covered in your area by AT&T service (as opposed to Verizon) it’s a fine choice.


    1. Most people think of loyalty programs as an airline that gives miles to frequent fliers, a hotel that gives points toward a stay or a restaurant that offers a punch card incentive. While these may be called loyalty programs, I’ll argue that they are actually marketing programs disguised as loyalty programs. And while I don’t have a problem with this concept, we need to have a clear understanding of the differences between loyalty and marketing.
    1. Barry proposes a view of inventiveness as “an index of the degree to which an object or practice is associated with opening up questions and possibilities ... what is inventive is not the novelty of artefacts in themselves, but the novelty of the arrangements with other activities and entities within which artefacts are situated. And might be situated in the future” (p. 4). He suggests further that there might actually be an inverse relation between the speed of change and the expansion of inventiveness: that “moving things rapidly may increase a general state of inertia; fixing things in place before alternatives have the chance of developing” (p. 6). I have made a similar argument (Suchman 1999) with respect to technology innovation under the banner of ‘artful integration,’ attempting to shift the frame of design practice and its objects from the figure of the heroic designer and associated next new thing, to ongoing, collective practices of sociomaterial configuration and reconfiguration in use.

      I think this is an important point that relates to a statement from the Accelerate manifesto:

      1. Given the enslavement of technoscience to capitalist objectives (especially since the late 1970s) we surely do not yet know what a modern technosocial body can do. Who amongst us fully recognizes what untapped potentials await in the technology which has already been developed? Our wager is that the true transformative potentials of much of our technological and scientific research remain unexploited, filled with presently redundant features (or pre-adaptations) that, following a shift beyond the short-sighted capitalist socius, can become decisive.

      http://criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/

    1. We want to find out what it is that each child can be passionate about and then how they can contribute hopefully later on in life. We want to guide them into being good people

      This ending quote, as well as the discussion in the paper’s introduction, makes me think of frameworks for the goals of public education, which often identify primary goals of “good people,” “good citizens,” and “good workers.” There’s a strong urging in this manuscript to shift emphasis away from “good workers” and toward the personal growth and community health goals of learning. There’s also a historical perspective that may be worth considering here, as the way society views the primary goals of education has shifted back and forth over time. As Cohen and Malin (2010) note, “People have been worrying about the purposes of education for at least 2,500 years, from the times of the Buddha, Confucius, and Plato until today.”

  8. May 2018
    1. What questions am I left with? What would I like to know more about?

      There are two questions that immediately spring to mind upon engaging with this article: First, I wonder, do engineers understand quite how deep the problem goes? Second, while I believe the solution to the problem is really quite simple, I am curious whether most engineers would be willing to embrace it. I will now address each of these questions in turn.

      Do Engineers understand quite how deep the problem goes?

      There is now an overwhelming body of evidence which indicates that evolution engineered the brain to put analytic and empathic thinking into tension. Scientific and mechanical reasoning, along with calculation (math) and logic, are prototypical types of analytic thinking. When we engage in these types of thinking, we shut down the brain areas we need to understand the perspective of others (Anticevic et al., 2012; Jack et al., 2012) – an absolutely crucial component of ethical awareness and deliberation (Friedman & Jack, 2017; Jack, Dawson, & Norr, 2013).

      My main critique of this discussion article relates to the fact that the authors only touch on the surface of the large body of evidence which not only supports this view of the brain, but which also indicates that it impacts human behavior. First, the authors appear to have overlooked other work from my laboratory which is supportive of this view (e.g., a follow-up study of the neural basis of dehumanization; Jack et al., 2013). My laboratory has focused on this issue and its behavioral consequences for many years now. However, it would be very concerning if such claims about the function of the brain were wholly dependent on work from a single laboratory! Happily, this is not the case. The fact that attention-demanding analytic tasks, such as mechanical reasoning, tend to deactivate brain regions essential for social, emotional, and moral cognition might be fairly characterized as one of the most clearly established findings in cognitive neuroscience (Anticevic et al., 2012; Buckner, Andrews-Hanna, & Schacter, 2008; Shulman et al., 1997). This work is broken down in greater detail in many of my more recent articles. In addition, there are behavioral studies by researchers with no interest in the brain which clearly show that analytic thinking negatively impacts ethical thinking and behavior (Small, Loewenstein, & Slovic, 2007; Wang, Zhong, & Murnighan, 2014; Zhong, 2011).

      These studies provide powerful convergent evidence, since they were in no way motivated or informed by neuroscience – they are observations about patterns of human behavior made by behavioral economists and psychologists without reference to the underlying neural mechanism which explains why they occur. Since the authors do not appear to be aware of much of this literature, large parts of the article are quite tentative in tone. Nonetheless, in the section titled “Reflections on the implications for learning engineering”, the authors provide an excellent and incisive summary of exactly why our understanding of the brain should be a serious cause for concern for those who are concerned about educating ethical engineers.

      In this regard, it is worth mentioning how an emphasis on analytic thinking might increase one’s ability to ‘rationalize away’ the ethical consequences of one’s decisions. The sorts of problem-solving skills that engineers learn might then be applied to social and ethical dilemmas, which in turn strips those dilemmas of their social and ethical significance.

      The authors do an excellent job of discussing why an engineering education is likely to reinforce students’ tendency to rely predominantly upon analytic thinking in preference to empathic thinking. They do this using the model shown in Figure 4, which illustrates how doing/behavior (i.e., practice at particular types of task) enables changes in structure (neural network function), and also vice versa: changes in structure enable changes in doing/behavior. This creates a cycle of reinforcement. As the authors nicely put it: “the ‘doing’ that activates the Physical Stance [i.e. an engineering education] gives rise to the Physical stance as a preferred, low activation cognitive pathway….condition[ing] people to unconsciously apply mechanical reasoning to situations where social or moral reasoning would be a better fit for the purpose.” I believe the authors are exactly on point with this model. In my laboratory we have extended our neuroscience work by also measuring individual tendencies to think analytically and empathically with simple behavioral tests and self-report questionnaires. One way in which we measure analytic thinking is using the Intuitive Physics Test developed by Simon Baron-Cohen and colleagues, which can easily be found online for those who are interested and which is a good measure of everyday aptitude for mechanical thinking (Baron-Cohen, Wheelwright, Spong, Scahill, & Lawson, 2001).

      In these purely behavioral tests, we consistently find evidence for a small trade-off between these two ways of thinking (Friedman & Jack, 2017). That is, individuals who score higher on analytic thinking tend to score very slightly lower on empathic thinking. Some of these findings are reported incidentally in Jack, Friedman, Boyatzis, and Taylor (2016). We are currently preparing a manuscript which collates a much larger set of studies and focuses solely on this issue. The small trade-off between empathic and analytic thinking becomes accentuated in cases where it is less clear which way of thinking is best suited to the matter at hand. For instance, we have now performed a number of studies of religious belief. Supernatural claims, such as that God or a universal spirit exists, seem clearly incorrect from a purely mechanical way of thinking. However, we posited that they would seem correct from an empathic way of thinking. Correspondingly, we found that analytic thinking ability is associated with decreased religious belief, whereas empathic thinking is associated with increased religious belief (Jack et al., 2016). Other unpublished work in progress shows an even more robust association between these two ways of thinking and how much individuals care about dehumanized outgroups. Unsurprisingly, people higher in empathy care more about dehumanized outgroups than people lower in empathy. More remarkably, people who score higher on the Intuitive Physics Test care less about dehumanized outgroups than people who score lower on that measure of analytic thinking. I am comfortable informally reporting this finding since we have now replicated it. We aim to extend it with a further study before publication.

      The important point which emerges from this work is that individuals who have strongly reinforced one way of thinking, without seeking to balance that out with another way of thinking, show a clear tendency to default to (or prefer) one cognitive strategy over another when faced with any situation which is even remotely ambiguous. In other words, those whose training has focused predominantly on analytic thinking, such as engineers, will tend to overlook the more human perspective. This observation was made anecdotally long before cognitive neuroscience emerged as a discipline, or psychology developed the instruments to assess it quantitatively. The famous Oxford philosopher P. F. Strawson wrote in his groundbreaking article on the philosophy of free will, “But what is above all interesting is the tension there is, in us, …between our humanity and our intelligence” (Strawson 1962, p. 10).

      In the next part of their section titled “Reflections on the implications for learning engineering,” the authors make another insightful and important point. They observe that as we overdevelop one way of thinking, we may distort our whole picture of the world to fit with that way of thinking. They make this point by reference to Lakoff’s notion of frames, which is certainly a useful tool for thinking about how we think. However, I would suggest the basic point is perhaps more simply and more powerfully made by the well-known aphorism they aptly mention: “If one only has a hammer, everything looks like a nail.”

      The distortion of the world to fit with a purely scientific or analytic way of thinking is something I find I am obliged to wrestle with frequently. One label for this phenomenon is ‘scientism’. Since I teach at a university dominated by the hard sciences and engineering, such scientistic attitudes are quite common. They are also quite common in neuroscience and in some branches of psychology. An amusing example of this is a recent book by the very analytically smart psychologist Paul Bloom, who has written many warmly received popular science books. However, he recently published a book titled “Against Empathy,” to resounding condemnation from virtually all intellectually informed readers, in particular other psychologists, philosophers, sociologists, and anthropologists. In essence, Paul Bloom’s lack of connection to empathy led him to form a completely distorted picture of it, and to write a book which attacks empathy in ridiculous and contrived ways, such that it is barely coherent from an analytic (either scientific or logical) point of view. Such poor judgment is remarkable in such a smart and accomplished individual, both as a scientist and as a popular science author.

      Since the discussion article highlights the tension between mechanistic reasoning and ethics, it is important to note that ethics itself can be split into perspectives that are predominantly analytic (utilitarianism) versus more empathic perspectives that focus more on humanity. This is in itself a topic of some study in cognitive neuroscience. My own published view is that competent ethical thinking requires us to balance analytic and empathic perspectives on a situation (Friedman, Jack, Rochford, & Boyatzis, 2015; Friedman & Jack, 2017; Rochford, Jack, Boyatzis, & French, 2016). In general, the error most often made in current society is to emphasize an analytic approach and exclude an empathic approach. That a purely analytic way of thinking ethically is problematic is illustrated by the fact that it motivates actions without regard for human rights, including acts which would be categorized under current law as unjustified killing. Many of the recent ethical scandals that have plagued major corporations appear to have stemmed from an overemphasis on an analytic way of seeing issues. An analytic, performance-oriented way of thinking, especially in organizations, can have more subtle ethical consequences, such that those individuals are less likely to engage in helpful behaviors that bolster the social atmosphere at that organization (Bergeron, 2007; Bergeron, Shipp, Rosen, & Furst, 2011).

      While a solution may be straightforward, would most engineers be willing to embrace such a solution?

      There is one way in which the problem may not be quite so bad as the authors fear. They suggest that a focus on mechanical reasoning may actively weaken our social and ethical thinking skills. This is one way of reading what the neuroscience tells us; however, I would suggest another way of understanding it. The fact that social, emotional, and moral cognition areas are suppressed during mechanical reasoning need not mean they are being weakened. Another way of looking at this suppression is that it is protective: it gets those brain areas out of the way. In any case, there is no reason to suppose the suppression actively weakens the function of those brain areas. It merely fails to develop them. This is a problem which, I suggest, can be overcome by two strategies: (1) training to ensure the empathic brain network, as opposed to the analytic one, is deployed during social and ethical decision making, and (2) training to develop the empathic brain network. Neither of these strategies need undermine the intention to predominantly train engineering students in mechanical reasoning. They merely require engineering schools to embrace and positively require a modest number of courses which can keep one side of their students’ brains from withering away. Training of the empathic network may be accomplished by classes on empathic listening, social skills for leadership, art appreciation, history, and literature, as well as philosophy classes on happiness and ethics. Such classes are likely to greatly help students to possess more balanced brains and live more balanced lives. As a result they are likely to improve student performance, rather than detract from their engineering skills.

      My concern, however, is that many professional engineers are likely to be so unfamiliar with these forms of understanding that they will find it hard to endorse including them in the curriculum. Indeed, some engineering faculty may be so unfamiliar with the value of these types of thinking that they find them threatening, and so react by being dismissive and actively fighting their introduction into the curriculum. In other words, they might be ‘blind’ to the value of such social, ethical, and moral insights because they lack the cognitive ability to perceive them, let alone appreciate them. Hence, while there may be many enlightened engineering faculty, such as those who have written this discussion piece, I suspect a different voice will triumph in faculty meetings. If psychology has taught us anything, it is that human decision making is rarely guided by passion for the truth so much as by other emotions, especially emotions associated with a sense of threat to self. Hence, while it may be that no reasonable person could deny the brain has been engineered by evolution such that it needs to be educated in different modalities, I rather doubt this truth will guide engineering faculty.

      I'd like to acknowledge Jared Friedman for thoughtful input to the reflections that I offer here.

      Anticevic, A., Cole, M. W., Murray, J. D., Corlett, P. R., Wang, X. J., & Krystal, J. H. (2012). The role of default network deactivation in cognition and disease. Trends in Cognitive Sciences, 16(12), 584-592. doi:10.1016/j.tics.2012.10.008

      Baron-Cohen, S., Wheelwright, S., Spong, A., Scahill, V., & Lawson, J. (2001). Are intuitive physics and intuitive psychology independent? A test with children with Asperger Syndrome. Journal of Developmental and Learning Disorders, 5(1), 47-78.

      Bergeron, D. M. (2007). The potential paradox of organizational citizenship behavior: Good citizens at what cost? Academy of Management Review, 32(4), 1078-1095.

      Bergeron, D. M., Shipp, A. J., Rosen, B., & Furst, S. A. (2011). Organizational citizenship behavior and career outcomes: The cost of being a good citizen. Journal of Management, 0149206311407508.

      Buckner, R. L., Andrews-Hanna, J. R., & Schacter, D. L. (2008). The brain's default network: Anatomy, function, and relevance to disease. Annals of the New York Academy of Sciences, 1124(1), 1-38. doi:10.1196/annals.1440.011

      Cech, E. A. (2014). Culture of disengagement in engineering education? Science, Technology, & Human Values, 39(1), 42-72.

      Friedman, J., Jack, A. I., Rochford, K., & Boyatzis, R. (2015). Antagonistic Neural Networks Underlying Organizational Behavior. Organizational Neuroscience (Monographs in Leadership and Management, Volume 7) Emerald Group Publishing Limited, 7, 115-141.

      Friedman, J. P., & Jack, A. I. (2017). Mapping cognitive structure onto the landscape of philosophical debate: An empirical framework with relevance to problems of consciousness, free will and ethics. Review of Philosophy and Psychology.

      Jack, A. I., Dawson, A. J., Begany, K. L., Leckie, R. L., Barry, K. P., Ciccia, A. H., & Snyder, A. Z. (2012). fMRI reveals reciprocal inhibition between social and physical cognitive domains. NeuroImage, 66C, 385-401. doi:10.1016/j.neuroimage.2012.10.061

      Jack, A. I., Dawson, A. J., & Norr, M. E. (2013). Seeing human: Distinct and overlapping neural signatures associated with two forms of dehumanization. NeuroImage, 79, 313-328.

      Jack, A. I., Friedman, J. P., Boyatzis, R. E., & Taylor, S. N. (2016). Why do you believe in God? Relationships between religious belief, analytic thinking, mentalizing and moral concern. PLoS ONE, 11(3), e0149989.

      Rochford, K. C., Jack, A. I., Boyatzis, R. E., & French, S. E. (2016). Ethical leadership as a balance between opposing neural networks. Journal of Business Ethics, 1-16.

      Shulman, G. L., Fiez, J. A., Corbetta, M., Buckner, R. L., Miezin, F. M., Raichle, M. E., & Petersen, S. E. (1997). Common Blood Flow Changes across Visual Tasks: II. Decreases in Cerebral Cortex. J Cogn Neurosci, 9(5), 648-663. doi:10.1162/jocn.1997.9.5.648

      Small, D. A., Loewenstein, G., & Slovic, P. (2007). Sympathy and callousness: The impact of deliberative thought on donations to identifiable and statistical victims. Organizational Behavior and Human Decision Processes, 102(2), 143-153. doi:10.1016/j.obhdp.2006.01.005

      Wang, L., Zhong, C.-B., & Murnighan, J. K. (2014). The social and ethical consequences of a calculative mindset. Organizational Behavior and Human Decision Processes, 125(1), 39-49.

      Zhong, C.-B. (2011). The ethical dangers of deliberative decision making. Administrative Science Quarterly, 56(1), 1-25.

  9. lorraine-chuen-e9f3.squarespace.com
    1. We provide strategic support on how they can most effectively make their data openly available and useful.

      i think we may have never done copy on this...

      it sounds a bit generic and maybe not that useful, nor actually tied to our mission - maybe something on building infrastructure (like policies, practices, and culture) around open government...for gov't to better publish and mobilise their data and information assets

      also maybe we should be more general than data - we have been doing a lot around information as well.

    1. We rarely heard a teacher praise a student; he or she is more likely to make a comment that reflects a factual observation and recognizes the student’s effort, such as “you took a lot of time with that drawing.” This is the kind of comment that is now recognized as supporting a growth mindset and contributing to student success. Even

      how much background should be included? I notice my own discomfort about wanting the article to fit within a more traditional paper. I found myself wanting a reference on mindset, and felt the same about the market-based education model. I think it may be worth citing major sources so that people who would like to read more know where to start.

    1. He kills her in her own humour.

      Spoken by Peter, one of Petruchio’s servants, this line serves as a turning point in the attitude of the reader as well as the servants present during Act IV, Scene 1. Following the wedding, Petruchio brings Kate home and begins berating his servants, and the reader is left confused and wondering where this shift in demeanor is coming from. Prior to meeting Kate, Petruchio comes off as intelligent, outgoing, and strong-willed, but he is now widely regarded as mean and unforgiving. As Petruchio acts out, this line provides a brief explanation of the intention behind his actions. He was aware of Kate’s reputation and shrewish behavior before marrying her, and thus concocted a plan for how to “tame” her, yet this plan is unbeknownst to the reader. As Peter concludes, Petruchio essentially hoped to give her a taste of her own medicine all along. Shakespeare’s choice of the word “humour” is indicative of the irony in the situation. Though one of the denotations of humour is temperament, we can almost take it as comical. She may think her behavior is amusing, but in reality it is her biggest flaw. Petruchio lashing out at his servants and creating a fuss about everything is reflective of Kate’s own conduct. As we read through the scene, we are almost worried about the change in dynamic, but all of this worry is gone once we understand that it is an act; Petruchio simply means to show her that the torture he is putting her and his servants through is not dissimilar to her own actions.

    1. Our review of the literature indicates that parental participation in schools is strongly shaped by perceptions of parents’ background and of the roles expected of them by school administrators and teachers and by the organizations (whether local or federal) that fund family literacy and parent involvement programs

      This reminds me of the work we did with unintentional bias in our social foundations course. We have so many biases that may come through in our everyday practice as educators without our knowledge. In the case of family involvement in the school, I think that as educators who understand and value the importance of family involvement, it is our job to encourage participation and help families come in, whether or not we hold a bias about whether they can or want to come into the classroom.

    2. We think that an examination of these tropes could be useful for teachers and other professionals to critically assess the goals of programs and initiatives and the effects that they might have in creating inclusive or dismissive roles for parents. Although “antideficit rhetoric” is commonplace in contemporary parent involvement program models (e.g., the ubiquitous use of a discourse of “strengths”), E. Auerbach (1995) warns that this shift may operate as a neodeficit ideology in which even “strength-based” program models could continue to function within a deficit framework.

      Yes, this is crucial. We must be self-reflective and think critically about the ideologies we are asked to engage in as educators. So many of the boxed curricula, or boxed diversity/engagement models, use language that would appear to indicate an understanding of these tropes, and thereby to reject a deficit view/model, but are actually perpetuating deficit thinking. I appreciate the authors' term "neodeficit ideology."

    3. Lareau and Horvat (1999) report on a study of school–home relations across class statuses showing how particular forms of social capital used by low-income African American parents were rejected by school personnel who dismissed critique and only accepted praise. In contrast, White parents, who began their relationships with the school from a more trusting stance (given also their less-problematic framings in the history of U.S. education) were welcomed to classrooms. Middle-class African American parents were able to negotiate their relationships with teachers by hiding concerns about racial discrimination while staying actively involved and alert. Howard and Reynolds (2008) urge us to consider the variability within middle-class African American parents; in spite of their economic position, some parents still experience racist attitudes as they advocate for their children and other parents may be reluctant to engage in the already set structures of predominantly White middle-class school settings

      I chose this section of the article because it stood out to me. As I reflected on what it meant for parents to have to hide their concerns because of the way the school would treat them, I started to think about our conversations on biases. This is almost "a dream within a dream": we come into school as educators already holding some biases, we work at a school where the other educators have their own biases, the administration has its own biases, and so it continues down the line. We are conditioned to view parents as difficult or easygoing based on our experiences with other parents. Since we enjoy putting people in categories, we find ourselves meeting new parents and maybe thinking, "Wow, they remind me of Sara's mom; she seems so nice." It shocks me that schools would begin the school year already dismissing some parents' opinions based on their view of those parents' race, class, or both.

  10. doc-14-6k-docs.googleusercontent.com
    1. He writes, “Faith in people is an a priori requirement for dialogue; the ‘dialogical person’ believes in others even before he meets them face to face” (pp. 90-91). In Pedagogy of Freedom, Freire (1998) reiterates the importance of leaders’ faith in the people: “On no account may I make little of or ignore in my contact with such groups the knowledge they acquire from direct experience and out of which they live” (p. 76).

      I completely agree with this statement and how it relates to Freire's concept of "dialogical love." I think it is of utmost importance to have faith in people, and to give them opportunities just as you would anyone else. The only thing that creates some dissonance for me is that this seems so easy to do theoretically, but very hard to put into practice. When I was reading this, I thought back to our discussions in Social Foundations surrounding unconscious biases and how they are frequently present in our daily interactions with others without our even realizing it. Our society perpetuates biases, stereotypes, and generalizations that cause us not to have faith in people or believe in them before we even meet them, as Freire states. Both our society and the way our brains are wired push us toward forming generalizations and categorizing people or groups. This is why, although I completely agree with Freire's statement, I think it is more theoretical than practical.

    2. Therefore, with such humility and faith, dialogue centers the contextual expertise of the people as active advocates for social transformation. This is done for both moral and pragmatic reasons as, from Freire’s perspective, the oppressed deserve such humane engagement, but—more importantly—they are uniquely experienced and strategically positioned to instigate authentic change directed at widespread humanization.

      After reading a lot of Freire in our Social Foundations class, I noticed him mentioning the importance of dialogue and how we have to engage in it often. When we choose to sit back and not talk about issues or things we notice that make us uncomfortable, we are allowing these situations to keep happening. I appreciate the author saying that the oppressed have experiences that most other people will not be able to speak to, and that they should have the floor to speak freely and allow connections to happen. I recently picked up a book that is a large set of letters written by trans women to other women transitioning. I have had many friends transition, and while some of my friends may identify with my gender, I don't have the perspective of transitioning. I thought the book was such a gift to women transitioning, letting them hear from other women who went through the same thing. I think when we start to speak up, or understand what we can add and what we can support, then we become active advocates.

    3. Why do we put so much of our attention and resources into trying to fix what goes on inside low-performing schools when the causes of low performance may reside outside of the school? Is it possible that we might be better off devoting more of our attention and resources than we now do toward helping the families in the communities that are served by those schools (p. 963)

      Yes. I feel like this all the time. I did AmeriCorps, and one of my projects was teaching in an elementary school in Brownsville, Brooklyn, and then setting up and facilitating an after-school program for the children who attended the school I taught at and lived in the same project complex as me and my team members. It became very apparent to me that year that the community had so much to offer that was not being utilized, and all the conversation about education inside the school walls was about deficits and lack, as staff were constantly struggling to follow state and federal guidelines that did not make room for the unique offerings and challenges of the neighborhood. Home and school were very much kept separate, and the families were rarely asked to engage except on the occasions when problems arose. The after-school program was meant to address these family-school life gaps. Additionally, I think the demand on teachers is far too high; as we talked about in class with Maslow's hierarchy, food, clothing, and a safe home environment need to be met before learning can take place. This burden tends to fall on teachers when students are "underperforming," but we forget that our society has failed the children first by not ensuring basic needs have been met. In terms of dialogic love, Freire is clear that each of us is unfinished, meaning there is still more to learn. This learning occurs in our encounters with others. If we are open to the "funds of knowledge" of others, even when they conflict, we can actually create community in a way that deeply supports children's development and growth. This requires not only active listening, but also a desire to seek connections in places that are usually not connected, making children feel whole in their lives rather than compartmentalizing school and home.

    1. If students are living their lives in preparation for life, when will they start living? When do rules and regulations pay off? The answer is never. If students aren’t free to be curious, engaged, and invested in what they’re learning, then they may never be curious, engaged, or invested in their lives. Education is about more than passing a test or being accepted to the “right” school, it’s about self-discovery and personal growth as an individual.

      Woah. That's really interesting to think about how we continually just prep for the future.

    1. just.

      And finally, I wish somebody who had read this book had been in the room:

      http://a.co/j3j13fb

      for myself: I am reading 'the highest poverty'

      But Agamben's thesis is that the true novelty of monasticism lies not in the confusion between life and norm, but in the discovery of a new dimension, in which "life" as such, perhaps for the first time, is affirmed in its autonomy, and in which the claim of the "highest poverty" and "use" challenges the law in ways that we must still grapple with today.

      How can we think a form-of-life, that is, a human life released from the grip of law, and a use of bodies and of the world that never becomes an appropriation? How can we think life as something not subject to ownership but only for common use?

      http://amzn.eu/9fmL3sr

      Maybe if we learn to tackle our consumption habits we will stop being seduced by the WALL-E future on offer from AI.

  11. Apr 2018
    1. The jury trial, especially politically considered, is by far the most important feature in the judicial department in a free country, and the right in question is far the most valuable part, and the last that ought to be yielded, of this trial. Juries are constantly and frequently drawn from the body of the people, and freemen of the country; and by holding the jury’s right to return a general verdict in all cases sacred, we secure to the people at large, their just and rightful controul in the judicial department. If the conduct of judges shall be severe and arbitrary, and tend to subvert the laws, and change the forms of government, the jury may check them, by deciding against their opinions and determinations, in similar cases. It is true, the freemen of a country are not always minutely skilled in the laws, but they have common sense in its purity, which seldom or never errs in making and applying laws to the condition of the people, or in determining judicial causes, when stated to them by the parties. The body of the people, principally, bear the burdens of the community; they of right ought to have a controul in its important concerns, both in making and executing the laws, otherwise they may, in a short time, be ruined. Nor is it merely this controul alone we are to attend to; the jury trial brings with it an open and public discussion of all causes, and excludes secret and arbitrary proceedings. This, and the democratic branch in the legislature, as was formerly observed, are the means by which the people are let into the knowledge of public affairs — are enabled to stand as the guardians of each others rights, and to restrain, by regular and legal measures, those who otherwise might infringe upon them.

      I like this highlighted passage (I would leave out everything else). In regular sophomore US History, I don't teach them the difference between a civil case and a criminal case (I save that for Law & AP Gov't), but we do talk about the role of juries in our justice system. So, I think this passage is properly placed under the 7th Amendment, but I would use it as a justification for why the jury system exists not just for civil cases but in our justice system as a whole. This passage would be really good at explaining the reason why we have it. It is realistic about how jurors are not experts in the law, but the system still works.

    1. I love humor. I think humor is one of the most effective ways to defuse tension, especially with patients, as they have found themselves in a place they may not want to be. I found humor to be an excellent way to defuse tension throughout all 3 of my fieldworks. I feel confident I have mastered this skill. I found myself using humor with children just as much as I did with older adults. One intervention in particular that I found to create much tension was working on handwriting with pediatrics. Every single one of my kids hated to work on handwriting. I got on Pinterest and researched many different ways to attempt to make it more exciting. One thing I found to defuse the tension and decrease the anger in the patients was to create humor with sentences. By coming up with funny sentences and stories, the kids would laugh, keep better attention, and be anxious to see what the next sentence would be that they were going to write. We would also take turns creating funny sentences. Humor is always a great remedy!

      Approved. You used a very good example of how you incorporated humor and it was very appropriate for the setting. Encourage the use of client vs. patient.

    1. And as to the faculties of the mind, setting aside the arts grounded upon words, and especially that skill of proceeding upon general and infallible rules, called science, which very few have and but in few things, as being not a native faculty born with us, nor attained, as prudence, while we look after some what else, I find yet a greater equality amongst men than that of strength. For prudence is but experience, which equal time equally bestows on all men in those things they equally apply themselves unto. That which may perhaps make such equality incredible is but a vain conceit of one’s own wisdom, which almost all men think they have in a greater degree than the vulgar; that is, than all men but themselves, and a few others, whom by fame, or for concurring with themselves, they approve. For such is the nature of men that how so ever they may acknowledge many others to be more witty, or more eloquent or more learned, yet they will hardly believe there be many so wise as themselves; for they see their own wit at hand, and other men’s at a distance. But this proveth rather that men are in that point equal, than unequal. For there is not ordinarily a greater sign of the equal distribution of any thing than that every man is contented with his share.

      I might remove this paragraph entirely..... I think students would spend a LOT of time on it, but I am not sure that would be time well spent. I think it can go from the big idea of equality in the state of nature to how that plays out.

    1. Forty or fifty years ago, the workforce was overwhelmingly a man’s world. In the design field, many women may have been assistants or “office girls” and so few held the top titles, such as art director or creative director. In a basic sense, women’s careers have rarely followed the same path of men’s, since there has historically been immense pressure placed on women to be solely homemakers and nurture families (see: Beyond The Glass Ceiling: an open discussion, Astrid Stavro, Elephant #6) with more sinister pressures of socially-accepted sexism and segregation discouraging, or even disqualifying, the career ambitions of capable women

      Unfortunately there is discrimination, and I think it still exists, though with some differences; for example, women's salaries are still lower than men's salaries in many countries. In my opinion this is not fair, as women have talent in many areas and may outperform men.

    1. Fear of stigma and being discriminated against has continued to be a companion of the HIV pandemic since its outbreak more than 30 years ago, despite significant efforts at individual, community and institutional levels (Holzemer & Uys, 2004; Pulerwitz et al., 2010; Aggleton et al., 2011)


      Analytic Note (Source 1): Stigma is still perceived as a major limiting factor in HIV/AIDS prevention and care. HIV/AIDS stigma and discrimination continue to affect people living with and affected by HIV disease, as well as their health care providers, wherever they live. This review article discusses HIV/AIDS stigma, its impact, and how healthcare providers can help to manage it in South Africa. In defining stigma, the authors look into different definitions of stigma by various authors and make particular allusion to Goffman’s definition of stigma as ‘an attribute that is deeply discrediting within a particular social interaction’, as a ‘spoiled social identity’, and as ‘a deviation from the attributes considered normal and acceptable by society’. On the impact of AIDS stigma, the article gives insight into how the fear of, and the stigma associated with, HIV were significantly related to HIV-affected persons’ decisions not to seek timely healthcare services. This article provided a base for our study.

      Analytic Note (Source 2): The late Jonathan Mann, the former head of the WHO Global Program on AIDS, recognized stigma, discrimination, blame, and denial as potentially the most difficult aspects of HIV/AIDS to address. This article summarizes the key contributions of the Horizons stigma research portfolio to describing the drivers of stigma, identifying effective interventions and approaches for reducing stigma in different settings, and improving methods for measuring stigma. Horizons developed and implemented a wide range of activities in collaboration with numerous local and international partners in Africa, Asia, and Latin America. This article is cited to corroborate the existence of stigma and discrimination in different settings, including healthcare.

      Analytic Note (Source 3): This article reflects on the progress over the last 30 years with respect to older and more emergent forms of education concerning HIV and AIDS. This education includes treatment education, HIV prevention education, and education to encourage positive and supportive community responses. The article emphasizes the careful analysis of different forms of HIV-related education, their consequences and effects, and the specific effectiveness of each form, in order to achieve positive outcomes. The article also discusses new ways of seeing and understanding the need to support processes that “think” ahead of the epidemic. It does not discuss HIV educational strategies that may reduce stigma and discrimination in healthcare settings in the globalized world. However, this article is relevant, though not the basis of our research, because it highlights the difficulty of providing HIV education to stigmatized and discriminated groups more than 30 years after the start of the epidemic. Given the negative social responses to HIV and AIDS, education is vital in contributing to a positive response. Individual and community empowerment are the necessary mechanisms by which social inclusion can be promoted.

      Source Excerpt (Source 1): Other researchers have reported that health care personnel knew very little about the potential for HIV contamination in the workplace. In Kuwait, some family doctors knew less than they should have about HIV and looked upon AIDS patients negatively, even in the third decade of the AIDS pandemic. Similarly, AIDS stigma and discrimination continue to influence people living with and affected by HIV disease as well as their health care providers, particularly in southern Africa where the burden of AIDS is so significant. Stigma is perceived as a major limiting factor in primary and secondary HIV/AIDS prevention and care. Similar to the findings of Adebajo, Bamgbala and Oyediran (2003) in Nigeria, the results in Kuwait showed that nurses and laboratory technicians also had negative attitudes toward AIDS patients. Health care workers’ poor attitudes and inadequate care in these anecdotal reports could be related to stigma. This argument was reaffirmed in our study.

      Source Excerpt (Source 2): Horizons and partners recognized from the outset that to improve the environment for people with HIV in health-care settings, it was important for management and health providers to acknowledge that stigma exists in their facilities. They found that a participatory approach that included ongoing sharing of data about levels and types of stigma in institutions helped build staff and management support for stigma-reduction activities

      Source Excerpt (Source 3): “As we approach the 30th anniversary of the date on which AIDS was first identified, it is salutary to reflect on progress made in tackling the epidemic. This is especially so in relation to education and HIV, a field that is laden with possibilities but one that has in many ways eluded its potential. Despite these difficulties, education in a variety of forms holds the potential to arrest and turn back the three epidemics (Mann, 1987) to which HIV has given rise: (a) the epidemic of AIDS, which has caused untold harm through serious illness and death; (b) the epidemic of HIV, in many ways silent, unseen, and often unknown; and (c) the epidemic of fear, shame and denial that has led people living with HIV to be stigmatized, ostracized, discriminated against, and denied their human rights”.

      Full Citation (Source 1): Holzemer, W. L. & Uys, L. R. (2004) Managing AIDS stigma. SAHARA Journal 1(3), 165–174. doi:10.1080/17290376.2004.9724839

      Full Citation (Source 2): Pulerwitz, J., Michaelis, A., Weiss, E., Brown, L. & Mahendra, V. (2010) Reducing HIV-related stigma: lessons learned from Horizons research and programs. Public Health Reports 125(2), 272–281. doi: 10.1177/003335491012500218

      Full Citation (Source 3): Aggleton, P., Yankah, E. & Crewe, M. (2011) Education and HIV/AIDS-30 years on. AIDS Education and Prevention 23(6), 495–507. doi:10.1521/aeap.2011.23.6.495

    1. In order to understand why so many college students do poorly in the first semester at college, we may need to break the question up into parts and look at study time, social life, living away from home, economic issues, mental health, oppression, and more. Studying these parts will help us get a sense of the overall reasons for the problem.

      This makes me think of my interdisciplinary major, Holistic Health and Wellness. When an individual is being evaluated for a health issue, many factors may contribute to it. Often, when an unwell individual goes to the doctor, only the physical parts of them are observed. The roots of the problem could arise from the mind and the spirit as well as the body. To ensure overall health, all of these parts should be evaluated and cared for.

    1. Building capacity between the private emergency food system and the local food movement: Working toward food justice and sovereignty in the global NorthMcEntee, Jesse C; Naumova, Elena N. Journal of Agriculture, Food Systems, and Community Development; Ithaca Vol. 3, Iss. 1,  (Fall 2012): 235-253. Full textFull text - PDFDetailsReferences 74Hide highlighting $j(function($) { //$('.psycArticlesImage, .textPlusGraphicsImageClass').addClass('imageViewerZoom'); jQuery('.psycArticlesImage, .textPlusGraphicsImageClass') .filter(function() { return jQuery(this).closest(".research_topic").length == 0; }) .addClass('imageViewerZoom'); var viewerOptions = { url: 'data-original', hidden: function(e) { //console.log("Closed Image Viewer"); pqga.pushEventToGA('closeImageViewerFullTextView'); } }; var fieldDivId = fulltext_field_MSTAR_1285242266.id; if (jQuery(".research_topic").length == 0) { $('#' + fieldDivId).viewer(viewerOptions); } else { var decodeHtml = function(html) { var txt = document.createElement("textarea"); txt.innerHTML = html; return txt.value; } jQuery(".research_topic .intraDocLink") .each(function(idx, el) { var imgEl = jQuery("img", jQuery(el).parent()); imgEl.wrap(el.outerHTML); imgEl.attr('title', decodeHtml(imgEl.attr('title'))); imgEl.attr('alt', decodeHtml(imgEl.attr('alt'))); }); } }); Translate Full textUndo Translation TranslateUndo Translation Press the Escape key to close FromArabicChinese (Simplified)Chinese (Traditional)EnglishFrenchGerman ItalianJapaneseKoreanPolishPortugueseRussianSpanishTurkishToArabicChinese (Simplified)Chinese (Traditional)FrenchGermanItalianJapaneseKoreanPolishPortugueseRussianSpanishTurkishTranslateTips.images = '/assets/r20181.3.0.390.2236/pqc/javascript/prototip/images/prototip/';Translation in progress... [[missing key: loadingAnimation]]The full text may take 40-60 seconds to translate; larger documents may take longer.Cancel OverlayEndTurn on search term navigationTurn on search term navigationJump to first hitListen   Headnote Abstract One area of food system research that remains overlooked in terms of making urban-rural distinctions explicit is the private emergency food system of food banks, food pantries, soup kitchens, and emergency shelters that exists throughout the United States. This system is an important one for millions of food-insecure individuals and today serves nearly as many individuals as public food assistance. In this article, we present an exploratory case that presents findings from research looking at the private emergency food system of a rural county in northern New England, U.S. Specifically, we examine the history of this national network to contextualize our findings and then discuss possibilities for collaboration between this private system and the local food movement (on behalf of both the public and the state). These collaborations present an opportunity in the short term to improve access to high quality local foods for insecure populations, and in the long term to challenge the systemic income and race-based inequalities that increasingly define the modern food system and are the result of prioritizing market-based reforms that re-create inequality at the local and regional levels. 
We propose alternatives to these approaches that emphasize the ability to ensure adequate food access for vulnerable populations, as well as the right to define, structure, and control how food is produced beyond food consumerism (i.e., voting with our dollars), but through efforts increasingly aligned with a food sovereignty agenda. Keywords emergency food, food justice, food sovereignty, rural and urban Introduction The rural private emergency food system is an overlooked area of research. The popularity of local food has increased in urban and rural areas alike, yet despite the social and economic capital driving this innovative food movement, foodinsecure populations remain ignored to a large degree. We know that the rural food environment is substantively different than the urban food environment (Sharkey, 2009). People in rural areas generally have less money to spend on food and they live further from markets where local food producers sell their products (Morton & Blanchard, 2007). Producers are predominantly located in rural areas where land and water resources are abundant, yet the most profitable markets for their products more often than not are located in urban centers where they can more easily access a concentrated population center with greater financial capital. These urban-rural distinctions can be made about multiple aspects of food systems research. For instance, early applications of the food desert concept (and the corresponding efforts to identify them) were overwhelmingly situated in urban places. Today, there is recognition that there is not a single food desert definition that can be universally applied. Researchers as well as government authorities have recognized this; for instance, the United States Department of Agriculture (USDA) has adopted different criteria for urban and rural food deserts. In examinations of local food, some have identified key urban-rural distinctions. For example, McEntee's (2010) contemporary and traditional conceptualization has been used to distinguish between a broad base of activities that are local in terms of geographical scale, but potentially exclusive in terms of their social identity and obstacles to adequate access. Access in this sense is not represented by a Cartesian notion of physical proximity, however; it is also indicative of access barriers in terms of financial ability as well as structural and historical (e.g., institutional racism) processes that privilege some, but harm others (McEntee 2011a).1 These concerns are increasingly recognized as part of growing food justice and food sovereignty agendas. The private emergency food system (PEFS) is a national network of food banks, food pantries, soup kitchens, and shelters that operate largely to redistribute food donated by individuals, businesses, and the state. This is a tremendously important system that serves both urban and rural food-insecure populations. Based on a review of this system's functionality, urban-based critiques of this system, and findings from an exploratory qualitative study, we propose that there are key distinctions between the urban and rural PEFSs that have been overlooked (in the same manner that urban and rural local food systems are conflated). The PEFS serves as a safety net for many, yet it struggles financially and lacks access to the high-quality foods (e.g., fresh produce and meat) that clients of this system often prefer. 
In this article we present emergent opportunities to develop the collaborative capacity between the PEFS and the rural local food system in ways that address the needs of the PEFS and utilize the assets of the burgeoning local food movement. Furthermore, we explain how these synergies potentially contribute to food justice by providing high-quality food to low-income populations. We begin the article with a review of pertinent literatures. This is followed by a depiction of the PEFS, summary of existent critiques, and presentation of our data. We propose that livelihood strategies related to traditional localism (McEntee, 2010) contribute to food justice and food sovereignty agendas by focusing on the natural and social assets of rural communities. We conclude with a discussion of the possibilities for not only remediating the PEFS, but challenging the corporate food regime that currently institutionalizes it. Local Foods, Food Justice, and Food Sovereignty Consumer confidence in the conventional food sector has decreased as a result of food scares (Morgan, Marsden, & Murdoch, 2006), with consumers feeling alienated from modern-day food production (Sims, 2009). From these consumerbased concerns over food safety and a general alienation from modern-day food production, alternative food initiatives and movements have surfaced (including local food initiatives). Feenstra (1997) made the case for local foods as an economically viable alternative to the global industrial system by providing specific steps to be taken by citizens to facilitate the transition between the local and the global; it is these forces that have become the focus of food provisioning studies (Winter, 2003). These efforts include more sustainable farming methods, fair trade, and food and farming education, among others; these have been reviewed extensively elsewhere, such as by Kloppenburg, Lezberg, De Master, Stevenson, and Hendrickson (2000) and Allen, FitzSimmons, Goodman, and Warner (2003). Essentially, all are categorized by a desire to create socially just, economically viable, and environmentally sustainable food systems (Allen et al., 2003) and the majority are now collectively referred to as the dominant food movement narrative (Alkon & Agyeman, 2011). It is from this narrative that the local food movement emerges. Food justice efforts have successfully utilized food localization efforts to improve food access opportunities for low-income and minority communities. These efforts typically occur in urban areas and target low-income minority populations (Alkon & Norgaard, 2009; Gottlieb and Joshi 2010; Wekerle, 2004; Welsh & MacRae, 1998). The concept of food justice supports the notion that people should not be viewed as consumers, but as citizens (Levkoe, 2006); by linking low-income and minority populations with alternative modes of food production and consumption, advocates prioritize human well-being above profit and alongside democratic and social justice values (Welsh & MacRae, 1998). This represents "more than a name change" departure from conventional food security concerns; it is rather a systemic transformation that alters people's involvement in food production and consumption (Wekerle, 2004, p. 379). Increasingly substantiated by racial and income-based exclusion, food justice operates to prioritize just production, distribution, and access to food within the communities being impacted. 
This is the focus of the food justice movement, though environmental and economic benefits often result from these efforts as well. A recently published volume edited by Alkon and Agyeman (2011) unpacks various forms of food justice, ranging from issues of production (e.g., farmworker rights) to distribution, consumption, and access. In this article we are concerned with the consumption element of the food chain. Food justice efforts in this realm often take the form of alternative food initiatives that create new market-based or charity-based solutions to inadequate food access (e.g., farm-to-school programming that links schools and local farmers, sliding-scale payment plans for low-income consumers at farmers' markets that are subsidized by wealthier patrons, or agricultural gleaning programs) and that stress social equity and solutions implemented by and for the people impacted by inadequate access to food. This latter element is a definitive characteristic of food justice initiatives.

Most recently, Alkon and Mares (2012) situated food justice in relation to food sovereignty, finding that although food justice and community food security frameworks often challenge conventional agricultural and food marketing systems, the food sovereignty framework is the only one to explicitly underscore "direct opposition to the corporate food regime" (p. 348). This is because both contemporary food justice and (community) food security frameworks often operate within traditional markets that are agents of the industrial agricultural system representative of a neoliberal political economy. This marks a point of departure between food justice and food sovereignty; La Via Campesina, a major proponent of food sovereignty, defines the concept as:

the right of peoples to healthy and culturally appropriate food produced through sustainable methods and their right to define their own food and agriculture systems. It develops a model of small scale sustainable production benefiting communities and their environment. It puts the aspirations, needs and livelihoods of those who produce, distribute and consume food at the heart of food systems and policies rather than the demands of markets and corporations. (La Via Campesina, 2011, para. 2)

Whereas food justice often works to create solutions in sync with market structures by filling the gaps in government services, food sovereignty focuses on dismantling the corporate food regime.

History and Structure of the PEFS

An area of the food system where food justice advocates have increasingly engaged in an urban setting is the PEFS. Operating on a charity basis, emergency food assistance provides food to individuals whose earnings, assets, and social insurance options have not met their needs (Wu & Eamon, 2007). Public, government-run assistance programs include welfare, the Supplemental Nutrition Assistance Program (SNAP), Medicaid, and subsidized housing. Private emergency food assistance is provided by nonprofit organizations and includes soup kitchens, food pantries, food banks, food rescue operations (Poppendieck, 1998), and "emergency shelters serving short-term residents" (emphasis added) (Feeding America, 2010a, p. 1).

Largely in reaction to dissatisfaction with the federal food stamp program, Congress passed the Omnibus Budget Reconciliation Act in 1982. This act allowed federally owned surplus commodity food to be distributed by the government for free to needy populations. Prior to its passage, the vast majority of food assistance in the U.S.
was provided by the government through the food stamp program (now SNAP), and the majority of food that food pantries received came from individuals and businesses. The act's success was followed by the Temporary Emergency Food Assistance Act (TEFAP) in 1983, which began the process of routinely distributing excess commodities through private emergency food programs, such as food banks and food pantries (Daponte & Bade, 2006). Food pantries flourished as a result of commodity sourcing, since they now began receiving a reliable stream of food. Businesses that previously did not want to be involved in emergency food provisioning could now dispose of unwanted inventory at a much lower cost by giving it away (Daponte & Bade, 2006) (see figure 1). In fiscal year 2009, Congress appropriated USD299.5 million for the program, made up of USD250 million for food purchases and USD49.5 million for administrative support (USDA FNS, 2010).

In the U.S., companies defined as C corporations by the tax code (the majority of U.S. companies) can collect an enhanced tax deduction for donating surplus property, including food. Thus when food businesses donate food to a charity, including food banks and pantries, the businesses can take a deduction equal to 50 percent of the donated food's appreciated value. In addition, the Bill Emerson Good Samaritan Food Donation Act of 1996 provides safeguards for entities donating food and groceries to charitable organizations by minimizing the risk of legal action against donors. Companies are not required to publicly disclose deductions for food donations, though in 2001 corporations wrote off USD10.7 billion in deductions (Alexander, 2003). Feeding America received USD663,603,071 in charitable donations in 2006. In a 2003 Chicago Tribune article, Delroy Alexander described how America's Second Harvest received USD450 million in donated provisions in 2001, USD210 million of which came from just 10 major food companies, such as Kraft, Coca-Cola, General Mills, ConAgra Foods, Pfizer, and Tropicana (Alexander, 2003). The top five donors each gave more than USD20 million in food, with the top contributor at USD38 million. Current figures are unavailable, though many companies proudly display pounds of food donated on their websites. For instance, Walmart's website states:

From November 2008 to November 2009, the Walmart stores and Sam's Club locations have already donated more than 90 million pounds [41,000,000 kg] of food....By giving nutritious produce, meat, and other groceries, we've become Feeding America's largest food donor. (Walmart, 2010)

This arrangement allows unwanted food (food that would otherwise be considered waste) to be utilized; it acts as a vent for unwanted food, allowing large corporate entities to dump surplus product of questionable nutritional quality upon the PEFS. Simultaneously, these corporations receive tax breaks and benefit from policies that minimize their legal risk. Approximately 80 percent of food banks belong to Feeding America, a membership organization that acts as an advocate and mediator in soliciting food from major food companies and bulk emergency food providers. This network has 205 food bank members that distribute food and grocery products to charitable organizations. Nationwide, more than 37 million people accessed Feeding America's private food assistance network in 2009 (up 46 percent from 2005), while 127,200 accessed it in New Hampshire (Feeding America, 2010b).
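To make the scale of this incentive concrete, the following is a minimal worked illustration in the terms described above; the dollar amounts and the 35 percent marginal tax rate are hypothetical, and "appreciated value" is read here simply as the fair market value of the donated goods:

\[
\text{deduction} = 0.5 \times \text{appreciated value} = 0.5 \times \text{USD}\,1{,}000{,}000 = \text{USD}\,500{,}000
\]
\[
\text{tax saving} \approx \text{marginal rate} \times \text{deduction} = 0.35 \times \text{USD}\,500{,}000 = \text{USD}\,175{,}000
\]

Under these assumptions, donating USD1,000,000 worth of surplus food trims roughly USD175,000 from the donor's tax bill, on top of avoided disposal costs and the liability protection noted above; nothing in the calculation depends on the nutritional quality of what is donated.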
Critiques of the PEFS

Critical assessments of the PEFS range from those focused on political-economic relations to those focused on the on-the-ground implementation of this redistributive system. In the following section we have grouped these appraisals into four main points. First, the PEFS is largely "emergency" in name only. Second, distribution of food in the PEFS is largely unregulated. Third, the nutritional content of donated items is frequently overlooked for the sake of quantity. Fourth, because of their limited budgets and food-storage capacity, PEFS agencies request nonperishable and, as a result, low-nutrition donations. Related to this point, perpetuation of the PEFS as it currently operates sustains a short-term food strategy that meets immediate caloric need while sacrificing long-term health (and ignoring its associated costs).

A prominent critique of the PEFS is that it is "emergency" in name only; examples highlight the emergency programmatic emphasis of programs even though their services appear to be operating in a nonemergency manner. The U.S. government describes TEFAP as a program that "helps supplement the diets of low-income needy persons...by providing them with emergency food" (USDA FNS, 2010). Feeding America, "the nation's largest organization of emergency food providers," describes food pantries as "distributing food on a short-term or emergency basis" (the New Hampshire Food Bank [NHFB] shares this definition) (Feeding America, 2010a, p. 13). According to Feeding America's Hunger in America 2010 report, approximately 79.2 percent of clients interviewed reported that they had used a pantry in the past year, indicating that they were not new clients. Multiple researchers have observed that many food pantries are being used on a regular, long-term basis (Beggs, 2006; Bhattarai, Duffy, & Raymond, 2005; Daponte, Lewis, Sanders, & Taylor, 1998; Hilton, 1993; Molnar, Duffy, Claxton, & Conner, 2001; Mosley & Tiehen, 2004; Tarasuk & Eakin, 2005; Warshawsky, 2010).

Along these lines, others have cited how the PEFS is unregulated to its detriment; for instance, many private donations are not subject to any federal or state laws regulating their distribution (Bhattarai et al., 2005). The unregulated nature of any charity brings both benefits and burdens, and one benefit to the PEFS has been the ability to utilize the efforts of a large volunteer base. However, it has been proposed that pantries that operate with a largely volunteer workforce employ subjective eligibility criteria and a "they should be satisfied with whatever they get" mindset on the part of workers (volunteers as well as paid staff) (Tarasuk & Eakin, 2005, p. 182). Food pantry clients may have limited rights and entitlement to the food being distributed, "further reinforcing that people are unable to provide for themselves" (Molnar et al., 2001, p. 189) in this redistributive system. In fact, it has been shown that workers "routinely eschew the aesthetic values that dominate our retail system," where "distribution of visibly substandard or otherwise undesirable products is achieved because clients have few if any rights" and "are in desperate need of food" (Tarasuk & Eakin, 2005, p. 184).

The belief of some workers that clients should be satisfied with whatever items they receive underlies the non-nutritional focus threaded throughout the private emergency food system. This is especially evident from the supply side. Government commodities serve as a major source of food for the PEFS.
Commodity foods are provided to food banks, directly to independent agencies, and to Feeding America (Feeding America, 2012c). The original intent of this commodity program was to distribute surplus agricultural commodities and reduce federal food inventories and storage costs, while simultaneously helping food-insecure populations. By 1988, however, much of the federal government's surplus had been exhausted, and as a result the Hunger Prevention Act of 1988 appropriated funds for the purchase of commodities for TEFAP (USDA FNS, 2010).

The PEFS's other major contributors, private corporations, do not explicitly concentrate on the nutritional content of their donations. Corporations benefit from considerable tax incentives along with liability protection; they can donate food that would otherwise be wasted, forgoing dumping costs while engaging in what many of these entities now call "corporate social responsibility." For instance, pounds of donated food are showcased and used as progress markers to show how successfully hunger is being combated. Feeding America states that it distributes 3 billion pounds (1.4 billion kg) of food every year (Feeding America, 2012a). Clicking on a few of Feeding America's "Leadership Partners" on its website (Feeding America, 2012b) yields similar language. For instance, ConAgra states that "in the last dozen years, ConAgra Foods has provided more than 166 million pounds of food to families in need" (ConAgra, 2009, para. 5), Food Lion (part of the Delhaize Group) has "donated more than 21 million pounds of food" (Food Lion, 2010), and "just last year, Procter & Gamble contributed nearly 30 million pounds of product" (Procter & Gamble, 2010). These figures provide no indication of nutritional content, although one pound of naturally flavored drink boxes has a very different nutritional composition than one pound of fresh produce. If success is measured in terms of quantity, then quantity will be the criterion that drives emergency food provisioning.

Charities are easy targets for critique; they often operate on a shoestring, use labor with varying levels of knowledge and experience, and much of the time are put in a financially and socially powerless position, at the whims of donors. One result is that nonperishable or low-perishability items are preferred (Tiehen, 2002; Verpy, Smith, & Reicks, 2003); these last longer and do not require refrigeration. Their long shelf life means handling and transport are not time-sensitive. These products cost less and are more likely to be donated. Nutrient-poor foods are less healthy overall (Monsivais & Drewnowski, 2007); previous food pantry investigations documented the poor nutrient composition of donated items, especially with regard to adequate levels of calcium, vitamin A, and vitamin C (Akobundu, Cohen, Laus, Schulte, & Soussloff, 2004; Irwin, Ng, Rush, Nguyen, & He, 2007). Donating large amounts is important, since donation quantity is what agency recipients prioritize. Rock, McIntyre, and Rondeau (2009) found a misalignment between donor intent and client preference indicative of the "ignorance among food-secure people of what it is like to be food-insecure" (p. 167). Food banks and food pantries are pressured to accept foods on unfair grounds, just as clients are pressured to accept whatever food is handed to them. In at least one other case, food pantry donors "did not consciously consider nutrition when deciding which foods to donate" (Verpy et al., 2003, p. 12).
A demand-side perspective of private emergency food provisioning reveals somewhat complementary conditions that support the acquisition and distribution of low-quality foods. The long-term health consequences associated with the consumption of low-quality foods can be overlooked to satisfy immediate food needs, thereby reinforcing the value placed on the low-quality supply being donated. While expenses like shelter, heat, and medical care are relatively inelastic, food spending is flexible and can be adjusted around these demands. On a limited budget, it is often the case that whatever money is left over is used for food (Furst, Connors, & Bisogni, 1996; McEntee, 2010). As reported by McEntee, a homeless shelter resident commented:

It's likes this, your oil's almost out, your electricity's high and they're going to shut it off, what are going to do? Well, we're going to have to cut down on our food budget. Do what you gotta do. . . you can buy your family packs and suck it up and eat ramen noodles. (McEntee, 2010, p. 795)

Sometimes these types of food are chosen out of necessity (that is the only type of food offered) and other times out of habit (they are used to eating it).2 With the recent recession in the U.S. economy, purchases of cheap, ready-to-eat processed foods have increased. An Associated Press article entitled "ConAgra Foods 3Q profit rises, maintains outlook" (Associated Press, 2010, para. 1) states:

Strong sales of low-priced meals such as Banquet and Chef Boyardee and lower costs pushed ConAgra Foods Inc.'s third-quarter profit up 19 percent. Cheap prepared foods like those that ConAgra offers have appealed to customers during the recession as they look for ways to save money and eat at home more.

Methods and Research Setting

Approximately 7.7 percent of New Hampshire's population is food-insecure (Nord, Andrews, & Carlson, 2008); 8 percent of the state's population lives in poverty, while 9.4 percent of Grafton County's population lives in poverty (U.S. Census Bureau, 2008). Grafton County was selected as the research site based on proximity to the researchers as well as the existence of food insecurity. Grafton County (figure 2) has a population of 81,743 and a population density of 47.7 people per square mile (18.4 people per square kilometer) (U.S. Census Bureau, 2008). Unlike the other two primarily rural northern counties of New Hampshire (Carroll and Coos counties), Grafton County contains two institutions of higher education that serve as educational and cultural centers (Dartmouth College in Hanover and Plymouth State University in Plymouth). Accordingly, these areas attract residents with above-average educational attainment and income, thus offering a variegated set of social and economic conditions that are differentiated from the rest of the county. There are 14 registered food pantries in Grafton County (of a total of 165 in New Hampshire) (New Hampshire Food Bank, 2010). In 2012 there were 92 SNAP-authorized stores within the county, marking a 13 percent increase from 2008 (USDA FNS, 2012a). Approximately 16 percent of students were eligible for free lunch in 2008 (USDA FNS, 2012b). In terms of local food potential, there were 10 farmers' markets in 2010 (USDA AMS, 2012), with 3.3 percent of farm sales attributable to direct-to-consumer sales (U.S. Census of Agriculture, 2012). A purposive sampling method (Light, Singer, & Willett, 1990) was used to identify respondents (N = 16) who work regularly in Grafton County's PEFS.
This included state employees, although the majority were workers and volunteers at food banks, soup kitchens, food pantries, and homeless shelters. These respondents were selected based on their above-average knowledge about hunger, food insecurity, and private emergency food provisioning in Grafton County (beyond their personal experience). Although some questions were specific to the respondent's area of expertise, the same general open-ended question template was used to facilitate informative discussion on topics related to food access, such as affordability, nutrition, and food provisioning (see table 1). The one-on-one semistructured interviews (Morgan & Krueger, 1998) with this group of respondents lasted between 60 and 90 minutes and took place in an office setting, a community center, or over the phone (when in-person meetings were difficult to arrange). All interviews were recorded, transcribed, and coded. Participant observation (Flowerdew & Martin, 1997) was conducted at a Plymouth-area soup kitchen that served free weekly hot meals to attendees.

Data from interviews as well as field notes were coded and analyzed using NVivo qualitative analysis software (QSR International, 2010). After the data were cleaned, they were examined as a whole to gain a general sense of overall meaning and depth. Open coding was undertaken, in which material was organized into groups or segments of related information (Rossman & Rallis, 1998). We developed a qualitative codebook for efficient and consistent code assignment. Codes were examined, as was the overall corpus of information. We identified underlying themes based primarily on respondent narratives. Over time, themes and trends emerged. Overlaps and differences between themes were identified, allowing their properties to be refined and ultimately resulting in progressively clearer theme categories. Following theme assessment, interconnections and relations between themes were identified through concept mapping and triangulation (Fielding & Fielding, 1986). The authors conducted all interviews and observation, processed all data, and conducted all analysis. Institutional Review Board approval was obtained and all standard research protocols were followed.

Findings from Grafton County

Some of the data emerging out of the Grafton County case echo previous observations about the PEFS. The preliminary data we present in this article are the product of field work, policy evaluation, and literature review. We do not claim that these findings are externally generalizable, although we do see similarities between our observations and those of other researchers, indicating that our data may be indicative of trends elsewhere, especially in rural areas of the northeastern United States where similar demographic and cultural traits exist. In this way, we also see potential in terms of research trajectories and policy reforms for those looking to build capacity between the PEFS and the local food system.

Reliance upon Volunteers

In relation to the existing criticism that the PEFS actually serves a long-term, sustained need and not a short-term or emergency one, many food pantry workers indicated that long-term usage by clients was common.
For instance, one pantry worker explained that "most of the people that come in here are...I don't know if I would say chronic, but regulars" (0607).3 In these pantries, representatives talked about getting to know clients over the course of months and years of use; some clients stay and talk with pantry workers for emotional support during food pickups. This long-term usage has been critiqued and connected to the fact that the PEFS is so heavily reliant upon volunteer labor that, as a result, there are opportunities for inconsistencies to develop (Lipsky, 1985; Molnar et al., 2001). Ad hoc administration of private emergency food distribution has consequences, such as inconsistent eligibility requirements and quality control (Daponte & Bade, 2006).

In Grafton County pantries, eligibility was determined through a combination of criteria, such as the pantry worker's personal judgment and preset income criteria. In one large pantry, more refined conditions were followed by staff and volunteers. In this pantry, if it was a client's first visit, then they were allowed to get food no matter what. However, in order to get food on subsequent visits they would need to bring proof of income (their income had to be below a certain amount based on the number of household members). The director of this pantry explained, "the only time I turn them away is if they're using the other food pantries....Most of the time they trip themselves up" (0505). When asked about the consequences of using more than one pantry, the same respondent said, "I turn them off for a whole year....To me, that's stealing food because that's government food involved in both places" (0505). This was not a set rule or policy of the pantry, but a guideline created by the director. Another worker explained that clients needed to fill out a TEFAP form (which determines eligibility under the rubrics of "Program" (already receiving a form of public assistance) and "Income" (one-person weekly income at or below USD370)), but that "it [the form] doesn't turn anybody away" (1215).

The downside of a more subjective, informal system is that pantries can be run in a potentially inequitable manner (Daponte & Bade, 2006). In addition, a client who offended a staff member or volunteer in the past will not be safeguarded as they would be in a government-run system. A pantry director from a small church-run pantry was asked about assistance eligibility and replied:

We don't ask a lot of questions...We don't take any financial information and you don't need to qualify. I just tell people, "if you need it, you can use it."...You can tell by looking at them, you know? The car they drive, their clothes, you could tell they're not living high off the hog, so to speak. (0607, emphasis added)

In New Hampshire, 92 percent of food pantries and 100 percent of soup kitchens use volunteer labor, while 64 percent of pantries and 46 percent of soup kitchens rely completely on volunteer labor (Feeding America, 2010b). Volunteers partnered with pantry staff to perform tasks. Food has to be inspected, sorted, organized, and in some cases cleaned before it is handed out; how these tasks are carried out varies by pantry. In all pantries visited as part of this research, clients waited in line with other recipients (visible to each other) where nonpantry visitors to the agency could openly see them.
In one venue, while pantry clients picked their food from a closet in a church, people setting up a church dinner worked in the same room; these individuals and the pantry clients were openly visible to each other. These patterns show that by engaging in this private form of food assistance, clients give up any right to confidentiality they may be afforded through other forms of assistance, such as federal or state food assistance programs.

Another consequence of reliance on volunteer labor is that food standards are frequently disregarded. A pair of pantry workers explained how they went to great lengths to utilize some squash donated from a nearby farm:

We discovered a couple years ago that he can't keep it here [the pantry] because it will spoil...and then I said I'll take it, I got a place....So now I've got squashes and I keep an eye on them to make sure they aren't spoiling....So I have a room downstairs [in her house] that has no windows and it's about 55 [degrees]. And I put them down in the basement and then I bring them up into the garage and they're stored in the garage where it doesn't freeze. (0506)

Pantry and food bank workers often clean and repackage food that is inconveniently packaged (e.g., in bulk) or has been broken open.4 These findings not only underscore the role of volunteer subjectivity, but more broadly illustrate the negative externalities that can emerge in this unregulated system.

Food Preferences: "Change Your Taste Buds"

Depending on the agency, the food preferences of clients may have minimal influence over the foods they receive. Nutritional, cultural, or taste preferences can be disregarded, while pantry staff beliefs dictate allotments. A volunteer who worked at a pantry and soup kitchen and also served on the board of the pantry said, "the younger ones [clients] are very, very fussy, they are turning their nose up at different things....Whereas if you're hungry, you accept and you learn to do it and change your taste buds" (0506, emphasis added). In the same interview, this respondent reflected that "we're a spoiled society" and that "there's a lot of honest need, but I think there's also those that are needy who don't help themselves" (0506). This respondent seems to believe that clients should be thankful for whatever they get, no matter what, since it is better than nothing. This is similar in a sense to how pantries are pressured into being thankful for all donations out of fear that refusing items would jeopardize future giving (for an example, see Winne (2005)).

Believing that clients should "change their taste buds" to accommodate the food available at the pantry represents a misalignment between clients' nutritional well-being and the pantry objective of efficiently distributing all donated food. This respondent held a position of power within the pantry and was able to make managerial-level decisions. Following through on her sentiments means that clients should adjust their personal taste preferences to whatever donors decide to donate. Client preferences are interpreted by pantry staff in a number of ways; consider the experience of this employee, who worked at a smaller pantry in a northern part of the county:

I had a guy call me today and wanted me to take his name off the list here and I said "OK." I said "did you get a job?" I know he was looking for a job, "no, but I can't eat that crap." He said, "I like to eat organic now, natural food." He said, "I can't eat this stuff, processed kind of food."
He said, "not that I don't appreciate what you're doing for me, but I just can't eat that kind of food." I said, "well, get a job" or that's what I felt like saying....Do you know how much that stuffcosts? We're not the end all, we're just supplemental here, we can't provide food for you for the week. I mean its just not going to happen. (0607) This employee appeared offended by this man's decision to stop accessing the pantry. By participating in the PEFS, these individuals relinquish rights and standards they may have in the public retail sphere (i.e., where federally and state enforced food safety regulations are upheld) and as a result are forced to gamble on the whims of the largely unregulated PEFS . This removal of food rights places food-insecure individuals in an even more food-precarious state, disempowering them beyond that which is accomplished through retail markets. One pantry worker explained that when individuals donate food, "lots of times it's ramen noodles because you can donate a lot at a low price" (0709). Food-pantry representatives working with a food-insecure population indicated that this group prefers quick and easy meals in the form of processed products, and also lacks adequate knowledge about nutrition and cooking to make informed food selections. Simultaneously, those accessing pantries revealed that food was a flexible budget item that could be adjusted according to the demands of other expenses. This often leads to trading down of items purchased - from more expensive, healthy items to cheaper, less healthy items. Food pantry representatives commented on how clients, especially young ones, prefer quick and easy products because "it's so much easier to open a can...things that are quick" (0506). Another pantry worker commented that "it's great when they say they cook....It just makes it so much easier to give them bags of nutritional food, but sometimes they'll just want the canned spaghetti, macaroni and cheese, hot dogs...foods that are easy to prepare for families," which she acknowledged as "a problem" (0709). Efforts to reform these eating habits were evident; one pantry worker reflected on how they had tried to switch from white to wheat bread, but found that "the wheat bread was not a hit" (1215). A nutrition professional working at a nonprofit described an attempt to change her clients' eating habits. She explained that her efforts were aimed at making people more nutritionally informed by showing them that eating healthier can be more affordable: We will do a comparison and we will make a meal with Hamburger Helper and we'll make basically homemade Hamburger Helper....I'll do a comparison of what Hamburger Helper costs and what it costs to make it from scratch. It's always of course cheaper to make it from scratch and then we do a taste test. And unfortunately many of the people have grown up with Hamburger Helper so that's what they like....They don't see the difference; how salty and awful it tastes....We'll do a whole cost analysis and they'll see it's about 59 cents a serving if you make it from scratch compared to about 79 cents a serving for Hamburger Helper. (1013) Another pantry worker explained: I think it's pricing, but then we have people, you know I believe it comes from how you grew up. You know, a lot of people shop the way their moms or dads shopped. And some people were just brought up on frozen boxed food and not cooked homemade meals and so that's all they know how to purchase. 
(0303)

This may explain why pantries experience a demand for these easy-to-cook processed foods. While some pantries might push more nutritional options, others send contradictory nutritional messages. Not far from where the abovementioned nutrition professional worked, another pantry worker at the same agency remarked that "the stuff that's easy for us to get is pasta, canned stuff, pasta mixes, and it's not highly nutritional....Tuna or some kind of a tinned meat, you know, with a Tuna Helper, that's the kind of stuff we get here because we don't have any way to give them fresh meat" (0607). The food being donated is free for the pantry and free for the clients, made possible through private, often corporate donors. This represents a seemingly collaborative alignment between the need of corporate donors to dispose of unwanted food and the need of food-insecure clients to consume food. Yet this arrangement is rooted in a short-term outlook and a power imbalance in which corporate food entities are able to dump unwanted food for free upon a food-insecure population, realizing short-term profit gains (for the business) at the cost of the long-term health of food-insecure individuals and the resulting burden on governments.

Assessing Collaborative Potential

The rural PEFS appears to be similar to the urban PEFS in a number of ways. It is heavily reliant upon volunteer labor and it serves a significant proportion of the population, often on a regular basis. In the rural context, however, the population is dispersed. While centralized population centers like cities provide efficient and short-distance transportation networks, rural networks are decentralized, with people living in remote areas and often requiring automobile access. This has a few practical consequences. A dispersed population means that community food-growing opportunities like neighborhood gardens are more difficult to organize and implement than in a city, where a group of neighbors can have a small vegetable plot within walking distance. In many rural places, the transportation cost of getting to a community space where a garden may be located represents another financial and logistical barrier. Cities are also places where people can more easily congregate to meet and organize reactive and proactive responses to inadequate food access (for example, growing a neighborhood garden in response to being located in a "food desert"). In urban areas, for instance, these have manifested in food justice efforts.

In rural areas, the PEFS is the chief response to hunger and food insecurity (in addition to federal and state mandated programs). However, the rural PEFS operates on a smaller scale, with fewer people accessing it and a high degree of malleability. As described earlier in this article, this informality has been criticized; however, this ability to adapt means that individuals who operate PEFS entities (like food pantries) can take advantage of opportunities without having to obtain approval from higher levels of bureaucracy. In addition, the rural PEFS is often located where the land, soil, water, and air resource base for growing food is abundant. In contrast to the literature supporting the claim that low-income populations prefer processed foods (Drewnowski & Specter, 2004), data from the Grafton County case show that in the pantries that were able to obtain small amounts of fresh, perishable foods (meats and fresh fruits and vegetables), these quickly became the most popular items.
As one pantry worker explained:

Most people know that an apple is healthier than a hot dog, but those [hot dogs] are way cheaper, you know, not that they're the same in any way....Here [at the food pantry] they would go for the things that they don't normally get their hands on, which is why those dairy products go fast and those veggies go fast. But I think in general when they are shopping they go for the cheapest, easiest thing to get through to the next week. (1215)

In another study of Grafton County, a food pantry employee described how a local hunter donated moose meat:

Interviewer: What are the most popular items that you have here in the pantry?
Respondent 1: Meat. It's the most expensive...
Respondent 2: Oh, was it last year we got the moose meat? We got 500 pounds [230 kg]. And we're thinking, what are we gonna do with all this moose meat? And it flew out of here. I mean, people were calling us and asking us for some. (McEntee, 2011b, p. 251)

A key question emerging from this research is: how do we harness the assets of both the PEFS and the local food system to better serve the needs of food-insecure populations? There is a demand for locally produced produce and meat on the part of food-insecure individuals (as others have shown; see Hinrichs and Kremer (2002)). The desires of low-income consumers to eat fresh meat and produce (which often are locally produced) as well as to participate in some local food production activities (whether hunting or growing vegetables) have been overlooked by researchers. People accessing the PEFS in rural areas are visiting pantries, but they are also growing their own food because it is an affordable way to obtain high-quality food they may otherwise not be able to afford (McEntee, 2011b).

Based on the information provided in this article, potential synergies between the PEFS and the local food system in the rural context exist. Specifically, a traditional localism engages "participants through non-capitalist, decommodified means that are affordable and accessible," where "food is grown/raised/hunted, not with the intention to gain profit, but to obtain fresh and affordable food" (McEntee, 2011b, pp. 254-255). Traditional localism allows local food to become an asset for many food-insecure and poor communities that are focusing on the need to address inadequate food access. How could the rural PEFS source more food locally, thereby strengthening the local economy? How could private emergency food entities like food pantries and local food advocates promote food-growing, food-raising, and hunting activities as a means to increase grassroots, local, and affordable access to food?

Like many places throughout the U.S., Grafton County is home to small-scale local agriculture operations supported by an enthusiastic public and a sympathetic state. Simultaneously, there is the presence of food insecurity and a PEFS seeking to remediate this persistent problem. The actual structure of the PEFS could be thoroughly assessed (beyond the borders of Grafton County). If warranted, this system could be redesigned to prioritize privacy and formalize procedures for ensuring that client food choices are respected. A crucial next step in reforming this system to benefit low-income and minority clients is to emphasize the ability to grow, raise, and hunt food for their own needs5 through the traditional local concept.
This would represent a transformation in which these activities could not only be supported by the PEFS, but also draw upon the social capital of communities in the form of the memories and practices of rural people from the near past, all while reducing reliance upon corporate waste. If traditional local efforts were organized on a cooperative model, based on community need and not only the needs of individuals, they would benefit all those participating, drawing on collective community resources, such as food-growing knowledge and skills, access to land, and tools, thereby enhancing the range of rural livelihood strategies. In this sense, these activities are receptive to racial and economic diversity as well as alliance-formation across social groups and movements, all of which are characteristic of the food sovereignty movement (Holt-Giménez & Wang, 2011).

In moving forward, additional research is needed. While our findings highlight potential shortcomings, there is a lack of data exploring the rural PEFS experience. Specifically, from the demand side, we need more data about the users of this system, particularly in regard to their satisfaction with the food being given to them. Are they happy with it? Do they want something different that is not available? Do they lack the ability to cook certain foods being handed out by the pantry? Feeding America's Hunger in America survey asks about client satisfaction; in its 2010 report, only 62.7 percent of surveyed clients were "very satisfied" with the overall quality of the food provided.6 Additionally, the fact that this survey is administered by the same personnel who distribute food donations raises questions of methodological bias. More needs to be discovered about why such a large proportion of users is not "very satisfied." From the supply side, we need to know more about the food being distributed and its nutritional value. Currently, the food being donated and distributed is unregulated to a large degree, especially in rural pantries. Also on the supply side, the source of food provided to Feeding America as well as to individual state food banks and food pantries needs to be inventoried with more information beyond just its weight. Knowing the quantity of specific donated products as well as the financial benefit (in terms of tax write-offs) afforded to donors would add transparency.

Conclusion: Neoliberal Considerations and Future Directions

The findings we have presented in this article are intended to reveal important policy questions about the PEFS and the local food movement; we acknowledge, however, that they also raise some important questions of their own. In summary, we see opportunities to move forward in enacting a food sovereignty agenda with both local and global scales in mind. First, value-added, market-based local solutions used to address the inadequacies of the current food system are immediately beneficial. However, these should not be accepted as the end-all solution. Looking beyond them to determine what else can be accomplished to change the structure of the food system, shifting power away from the oligarchic structures of the corporate food regime to food citizens, not only food consumers, would result in systemic change. A key consideration in realizing any reform of the PEFS, and simultaneously challenging and transforming the unsustainable global food regime, is recognizing the neoliberal paradigm in which government and economic structures exist.
Neoliberalism can be defined as a political philosophy that promotes market-based rather than state-based solutions to social problems, while masking social problems as personal deficiencies. The PEFS essentially acts as a vent for unwanted food in this system, one that also provides a financial benefit to the governing food entities (i.e., food businesses). Too often, alternatives are hailed as opposing the profit-driven industrial food system simply because they are geographically localized; in reality, they may re-create the classist and racist structures that permeate the larger global system.7 The PEFS is an embedded neoliberal response to food insecurity; while public-assistance enrollment is on the rise, so is participation in the PEFS. This is a shift in responsibility for providing assistance to food-insecure populations from the government to the private sector. In this sense it is a market-based approach to addressing food insecurity (i.e., by dumping food on the private charity sector, market retailers cut their own waste disposal costs), and the result is continual scarcity and the establishment of a system that reinforces the idea that healthy food is a privilege, accessible only to those with adequate financial and social capital.

Along these same lines, a form of food localism exists that is arguably detrimental to those without financial and social capital; these efforts have framed, and continue to frame, food access solely as an issue of personal responsibility related to economic status and nutritional knowledge (a narrative thoroughly discussed by Guthman (2007, 2008)). This prioritizes market-based solutions to developing local food systems as well as universal forms of food education that emphasize individual health. As Alkon and Mares (2012) explain:

Neoliberalism creates subjectivities privileging not only the primacy of the market, but individual responsibility for our own wellbeing. Within U.S. food movements, this refers to an emphasis on citizen empowerment, which, while of course beneficial in many ways, reinforces the notion that individuals and community groups are responsible for addressing problems that were not of their own making. Many U.S. community food security and food justice organizations focus on developing support for local food entrepreneurs, positing such enterprises as key to the creation of a more sustainable and just food system. The belief that the market can address social problems is a key aspect of neoliberal subjectivities. (p. 349)

Though elements of both the PEFS and the local food system have arguably been folded into neoliberalization processes through market-based mechanisms, incremental steps to change these dynamics are possible. Reframing issues of food accessibility (including food insecurity, hunger, food deserts, etc.) as issues of food justice moves us beyond an absolute spatial understanding of food issues. For instance, when we only look at physical access to food, we often disregard the more important considerations of class, race, gender (see Alkon and Agyeman, 2011), and sexual orientation that define a person's present position (over which they often have no control) and that dictate how they engage with the food system. These considerations are present in current food-justice efforts, which seek to ensure that communities have control over the food grown, sold, and consumed there.
Rural food justice has been defined using the traditional localism concept:

Traditional localism in rural areas engages participants through non-capitalist, decommodified means that are affordable and accessible. Food is grown/raised/hunted, not with the intention to gain profit, but to obtain fresh and affordable food. A traditional localism disengages from the profit-driven food system and illustrates grassroots food production where people have direct control over the quality of the food they consume - a principal goal of food justice. (McEntee, 2011b, pp. 254-255)

Utilizing this rural form of food justice involves more than promoting individual food-acquiring techniques; it involves developing organizational and institutional strategies that improve the quality of food available to PEFS entities. This is currently accomplished by some, such as when pantries obtain fresh produce through farmer donations or when a food bank develops food-growing capacity.8 But these types of entities are in the minority.

The next stage of realizing food justice, we posit, is to determine how a food sovereignty approach can be utilized in a global North context. Food justice predominantly operates to find solutions within a capitalist framework (and it has been criticized as such), while food sovereignty is explicitly geared toward the dismantling of this system in order to achieve food justice. Regime change and transformation require more than recognition of and control over food-growing resources; they require alliance and partnership-building between groups "to address ownership and redistribution over the means of production and reproduction" (Holt-Giménez & Wang, 2011, p. 98). Adopted by organizations predominantly located in the global South, food sovereignty is focused on the causes of food system failures and subsequently looks toward "local and international engagement that proposes dismantling the monopoly power of corporations in the food system and redistributing land and the rights to water, seed, and food producing sources" (Holt-Giménez, 2011, p. 324). There is an opportunity for people in the global North not only to learn from global South food sovereignty movements, but to form connections and alliances between North and South iterations of these movements.9

As discussed above, the dominant food movement narrative is in sync with the economic and development goals of government (e.g., state-sanctioned buy-local campaigns) as well as the marketing prerogatives of global food corporations (e.g., "local" being used as a marketing label). Building a social movement powerful enough to place meaningful political pressure upon government to support a food system that prioritizes human well-being, not profit, is an immediate challenge. Incremental solutions are necessary in order to improve people's lives now. However, these local solutions, such as the innovative farm-to-school programming and other viable models linking the local food environment and the PEFS that we have discussed in this article, would be more effective at effecting long-term systemic change if they were coupled with collective approaches that acknowledge and limit the power of the corporate food regime to prevent injustice, while also holding the state accountable for its responsibility to citizens, which it has successfully "relegated to voluntary and/or market-based mechanisms" (Alkon and Mares, 2012, p. 348). Food sovereignty offers more than an oppositional view of neoliberalism, however.
The food sovereignty movement advances a model of food citizenship that asserts food as a nutritional and cultural right and the importance of democratic, on-the-ground control over one's food. These qualities resonate with food-insecure and disenfranchised communities, urban and rural, in both the global North and South.

Citation: McEntee, J. C., & Naumova, E. N. (2012). Building capacity between the private emergency food system and the local food movement: Working toward food justice and sovereignty in the global North. Journal of Agriculture, Food Systems, and Community Development, 3(1), 235-253. http://dx.doi.org/10.5304/jafscd.2012.031.012

Copyright © 2012 by New Leaf Associates, Inc.

Footnotes

1 Cartesian understandings of space utilize a grid-based measurement of physical proximity. These types of proximity-based understandings of food access (i.e., food access is primarily a matter of bringing people physically closer to food retailers, as is promoted by the USDA Food Desert Locator) tend to overlook other nuanced forms of food access based on knowledge, culture, race, and class.

2 The amount of processed food, especially in the form of prepared meals and meals eaten outside the home, is steadily increasing in the United States (Stewart, Blisard, & Jolliffe, 2006).

3 The four-digit number indicates interview location and respondent IDs.

4 A leading antihunger effort in New Hampshire is the New Hampshire Food Bank (NHFB), the state's only food bank and a member of Feeding America. In 2008 the NHFB "distributed over 5 million pounds of donated, surplus food to 386 food pantries, soup kitchens, shelters, day care centers and senior citizen homes" (N.H. Food Bank, 2010). In total, N.H. has 441 agencies registered with the NHFB that provide food to 71,417 people annually. Grafton County has 18 food pantries, which "distribute non-prepared foods and other grocery products to needy clients, who then prepare and use these items where they live" and where "[F]ood is distributed on a short-term or emergency basis until clients are able to meet their food needs" (N.H. Food Bank, 2010).

5 A noteworthy example of an organization that has begun to accomplish these objectives is The Stop Community Food Centre in Toronto, recently described by Levkoe and Wakefield (2012).

6 The remaining categories are: "Somewhat satisfied" (31.3 percent), "Somewhat dissatisfied" (4.8 percent), and "Very dissatisfied" (1.3 percent).

7 For additional discussion of the political-economic transition from government to governance, such as the transfer of state functions to nonstate and quasistate entities, see Purcell (2002).

8 An example of this type of effort is that of the Vermont Food Bank, which purchased a farm in 2008 in order to supply the food bank with fresh, high-quality produce as well as to sell the produce.

9 The U.S. Food Sovereignty Alliance has recognized the importance of building these coalitions: "As a US-based alliance of food justice, anti-hunger, labor, environmental, faith-based, and food producer groups, we uphold the right to food as a basic human right and work to connect our local and national struggles to the international movement for food sovereignty" (US Food Sovereignty Alliance, n.d., para. 1).

References

Akobundu, U. O., Cohen, N. L., Laus, M. J., Schulte, M. J., & Soussloff, M. N. (2004). Vitamins A and C, calcium, fruit, and dairy products are limited in food pantries. Journal of the American Dietetic Association, 104(5), 811-813.
http://dx.doi.org/10.1016/j.jada.2004.03.009

Alexander, D. (2003, 25 May). Bigger portions for food banks. Chicago Tribune. Retrieved from http://www.chicagotribune.com

Alkon, A. H., & Agyeman, J. (Eds.). (2011). Cultivating food justice. Cambridge, Massachusetts: MIT Press.

Alkon, A. H., & Mares, T. M. (2012). Food sovereignty in US food movements: Radical visions and neoliberal constraints. Agriculture and Human Values, 29(3), 347-359. http://dx.doi.org/10.1007/s10460-012-9356-z

Alkon, A. H., & Norgaard, K. M. (2009). Breaking the food chains: An investigation of food justice activism. Sociological Inquiry, 79(3), 289-305. http://dx.doi.org/10.1111/j.1475-682X.2009.00291.x

Allen, P., FitzSimmons, M., Goodman, M., & Warner, K. (2003). Shifting plates in the agrifood landscape: The tectonics of alternative agrifood initiatives in California. Journal of Rural Studies, 19(1), 61-75. http://dx.doi.org/10.1016/S0743-0167(02)00047-5

Associated Press. (2010, 25 March). ConAgra Foods 3Q profit rises, maintains outlook. Associated Press. New York. Retrieved from http://www.boston.com/business/articles/2010/03/25/conagra_foods_3q_profit_rises_maintains_outlook/

Beggs, J. J. (2006). Coping with food vulnerability: The role of social networks in the lives of Missouri food pantry clients (Unpublished graduate thesis). University of Missouri, Columbia, Missouri.

Bhattarai, G. R., Duffy, P. A., & Raymond, J. (2005). Use of food pantries and food stamps in low-income households in the United States. The Journal of Consumer Affairs, 39(2), 276-298. http://dx.doi.org/10.1111/j.1745-6606.2005.00015.x

ConAgra. (2009). ConAgra Foods' first corporate responsibility report now available. Retrieved 1 January 2011 from http://media.conagrafoods.com/phoenix.zhtml?c=202310&p=irol-newsArticle&ID=1269902&highlight=

Daponte, B. O., & Bade, S. (2006). How the private food assistance network evolved: Interactions between public and private responses to hunger. Nonprofit and Voluntary Sector Quarterly, 35(4), 668-690.

Daponte, B. O., Lewis, G. H., Sanders, S., & Taylor, L. (1998). Food pantry use among low-income households in Allegheny County, Pennsylvania. Journal of Nutrition Education, 30(1), 50-57. http://dx.doi.org/10.1016/S0022-3182(98)70275-4

Drewnowski, A., & Specter, S. E. (2004). Poverty and obesity: The role of energy density and energy costs. The American Journal of Clinical Nutrition, 79(1), 6-16.

Feeding America. (2010a). Hunger in America 2010 national report. Chicago: Feeding America and Mathematica Policy Research, Inc. Retrieved from http://feedingamerica.issuelab.org/resource/hunger_in_america_2010_national_report

Feeding America. (2010b). Hunger in America 2010: Local report prepared for the New Hampshire Food Bank. Chicago: Feeding America. Retrieved from http://www.nhfoodbank.org/about-hunger/hunger-study.html

Feeding America. (2012a). Food, grocery donations and food drives. Retrieved from http://feedingamerica.org/ways-to-give/foodgrocery-food-drives.aspx

Feeding America. (2012b). Leadership partners. Retrieved from http://feedingamerica.org/how-we-fight-hunger/our-partners/leadershippartners.aspx

Feeding America. (2012c). Programs & services. Retrieved from http://feedingamerica.org/how-we-fight-hunger/programs-and-services.aspx

Feenstra, G. W. (1997). Local food systems and sustainable communities. American Journal of Alternative Agriculture, 12(1), 28-36. http://dx.doi.org/10.1017/S0889189300007165

Fielding, N., & Fielding, J. (1986). Linking data. Beverly Hills, California: Sage.
Flowerdew, R., & Martin, D. (1997). Methods in human geography: A guide for students doing a research project. London: Sage. Food Lion. (2010). Food Lion community connections. Retrieved 1 January 2011 from http://www.foodlion.com/Charities/Feeding- America Furst, T., Connors, M., & Bisogni, C. (1996). Food choice: A conceptual model of the process. Appetite, 26(3), 247-266. http://dx.doi.org/10.1006/appe.1996.0019 Gottlieb, R., & Joshi, A. (2010). Food justice. Cambridge, Massachusetts: MIT Press. Guthman, J. (2007). The Polanyian way? Voluntary food labels as neoliberal governance. Antipode, 39(3), 456-478. http://dx.doi.org/10.1111/j.1467- 8330.2007.00535.x Guthman, J. (2008). "If they only knew": Color blindness and universalism in California alternative food institutions. The Professional Geographer, 60(3), 387-397. http://dx.doi.org/10.1080/00330120802013679 Hilton, K. (1993). Close down the food banks. Canadian Dimension, 27(4), 22-23. Hinrichs, C. C., & Kremer, K. S. (2002). Social inclusion in a Midwest local food system project. Journal of Poverty, 6(1), 65-90. http://dx.doi.org/10.1300/J134v06n01_04 Holt-Giménez, E. (2011). Food security, food justice, or food sovereignty? Crises, food movements, and regime change. In A. H. Alkon & J. Agyeman (Eds.), Cultivating food justice (pp. 309-330). Cambridge, Massachusetts: MIT Press. Holt-Giménez, E., & Wang, Y. (2011). Reform or transformation? The pivotal role of food justice in the U.S. food movement. Race/Ethnicity: Multidisciplinary Global Contexts, 5(1), 83-102. http://dx.doi.org/10.2979/racethmulglocon.5.1.83 Irwin, J. D., Ng, V. K., Rush, T. J., Nguyen, C., & He, M. (2007). Can food banks sustain nutrient requirements? A case study in Southwestern Ontario. Canadian Journal of Public Health, 98(1), 17- 20. Kloppenburg, Jr., J., Lezberg, S., De Master, K., Stevenson, G. W., & Hendrickson, J. (2000). Tasting food, tasting sustainability: Defining the attributes of an alternative food system with competent, ordinary people. Human Organization, 59(2), 177-186. La Via Campesina. (2011). Defending food sovereignty. Retrieved 9 November 2012 from http://viacampesina.org/en/index.php/ organisation-mainmenu-44 Levkoe, C. (2006). Learning democracy through food justice movements. Agriculture and Human Values, 23(1), 89-98. http://dx.doi.org/10.1007/s10460- 005-5871-5 Levkoe, C. Z., & Wakefield, S. (2012). The Community Food Centre: Creating space for a just, sustainable, and healthy food system. Journal of Agriculture, Food Systems, and Community Development, 2(10), 249-268. Light, R. J., Singer, J., & Willett, J. (1990). By design: Conducting research on higher education. Cambridge, Massachusetts: Harvard University Press. Lipsky, M. (1985). Prepared statement before the Subcommittee on Domestic Marketing, Consumer Relations, and Nutrition of the Committee on Agriculture of the U.S. House of Representatives, 99th Cong., 2nd session. McEntee, J. C. (2010). Contemporary and traditional localism: A conceptualisation of rural local food. Local Environment, 15(9), 785-803. http://dx.doi.org/10.1080/13549839.2010.509390 McEntee, J.C. (2011a). Shifting rural food geographies and the spatial dialectics of just sustainability. (Doctoral dissertatiaon). Cardiff, UK: CardiffUniversity. http://library.cardiff.ac.uk/vwebv/ holdingsInfo?bibId=945965 McEntee, J. C. (2011b). Realizing rural food justice: Divergent locals in the northeastern United States. In A. H. Alkon and J. Agyeman (Eds.), Cultivating food justice (pp. 239-260). 
Molnar, J. J., Duffy, P. A., Claxton, L., & Conner, B. (2001). Private food assistance in a small metropolitan area: Urban resources and rural needs. Journal of Sociology and Social Welfare, 28(3), 187-209.
Monsivais, P., & Drewnowski, A. (2007). The rising cost of low-energy-density foods. Journal of the American Dietetic Association, 107(12), 2071-2076. http://dx.doi.org/10.1016/j.jada.2007.09.009
Morgan, D. L., & Krueger, R. A. (1998). The focus group kit. Thousand Oaks, California: Sage Publications.
Morgan, K., Marsden, T., & Murdoch, J. (2006). Worlds of food. Oxford: Oxford University Press.
Morton, L. W., & Blanchard, T. C. (2007). Starved for access: Life in rural America's food deserts. Rural Realities, 1(4), 1-10.
Mosley, J., & Tiehen, L. (2004). The food safety net after welfare reform: Use of private and public food assistance in the Kansas City metropolitan area. Social Service Review, 78(2), 267-283. http://dx.doi.org/10.1086/382769
New Hampshire Food Bank [N.H. Food Bank]. (2010). New Hampshire Food Bank. Retrieved 1 April 2010 from http://www.nhfoodbank.org/index.php?option=com_content&view=frontpage&Itemid=1
Nord, M., Andrews, M., & Carlson, S. (2008). Measuring food security in the United States: Household food security in the United States, 2007 (Food Assistance and Nutrition Research Report No. 66). Washington, D.C.: United States Department of Agriculture.
Poppendieck, J. (1998). Sweet charity? Emergency food and the end of entitlement. New York: Penguin.
Procter & Gamble. (2010). P&G and Feeding America: Fighting hunger. Retrieved 2 April 2010 from http://www.pg.com/en_US/sustainability/social_responsibility/feeding_america.shtml
Purcell, M. (2002). Excavating Lefebvre: The right to the city and its urban politics of the inhabitant. GeoJournal, 58(2-3), 99-108.
QSR International. (2010). NVivo [qualitative research software]. Cambridge, Massachusetts: QSR International.
Rock, M., McIntyre, L., & Rondeau, K. (2009). Discomforting comfort foods: Stirring the pot on Kraft Dinner and social inequality in Canada. Agriculture and Human Values, 26(3), 167-176. http://dx.doi.org/10.1007/s10460-008-9153-x
Rossman, G., & Rallis, S. F. (1998). Learning in the field: An introduction to qualitative research. Thousand Oaks, California: Sage.
Sharkey, J. R. (2009). Measuring potential access to food stores and food-service places in rural areas in the U.S. American Journal of Preventive Medicine, 36(4S), S151-S155.
Sims, R. (2009). Food, place and authenticity: Local food and the sustainable tourism experience. Journal of Sustainable Tourism, 17(3), 321-336. http://dx.doi.org/10.1080/09669580802359293
Stewart, H., Blisard, N., & Jolliffe, D. (2006). Let's eat out: Americans weigh taste, convenience, and nutrition (Economic Information Bulletin No. EIB-19). Washington, D.C.: United States Department of Agriculture.
Tarasuk, V., & Eakin, J. M. (2005). Food assistance through "surplus" food: Insights from an ethnographic study of food bank work. Agriculture and Human Values, 22(2), 177-186. http://dx.doi.org/10.1007/s10460-004-8277-x
Tiehen, L. (2002). Issues in food assistance: Private provision of food aid: The Emergency Food Assistance System (Food Assistance and Nutrition Research Report No. 26-5). Washington, D.C.: United States Department of Agriculture.
U.S. Census Bureau. (2008). Census 2000, American FactFinder. Retrieved from http://factfinder.census.gov/home/saff/main.html?_lang=en
U.S. Department of Agriculture. (2012). Census of Agriculture. Available from http://www.agcensus.usda.gov/Publications/2007/index.php
U.S. Department of Agriculture, Agricultural Marketing Service [USDA AMS]. (2012). Retrieved from http://search.ams.usda.gov/farmersmarkets/
U.S. Department of Agriculture, Food and Nutrition Service [USDA FNS]. (2010). The Emergency Food Assistance Program. Retrieved from http://www.fns.usda.gov/fdd/programs/tefap/
USDA FNS. (2012a). SNAP Retailer Locator. Retrieved from http://www.snapretailerlocator.com/
USDA FNS. (2012b). Program data. Retrieved from http://www.fns.usda.gov/pd/cnpmain.htm/
U.S. Food Sovereignty Alliance. (n.d.). About the Alliance. Retrieved 1 June 2012 from http://www.usfoodsovereigntyalliance.org/about
Verpy, H., Smith, C., & Reicks, M. (2003). Attitudes and behaviors of food donors and perceived needs and wants of food shelf clients. Journal of Nutrition Education and Behavior, 35(1), 6-15. http://dx.doi.org/10.1016/S1499-4046(06)60321-7
Walmart. (2010). Walmart Corporate: Feeding America. Retrieved 1 March 2010 from http://walmartstores.com/CommunityGiving/8803.aspx
Warshawsky, D. N. (2010). New power relations served here: The growth of food banking in Chicago. Geoforum, 41(5), 763-775. http://dx.doi.org/10.1016/j.geoforum.2010.04.008
Wekerle, G. R. (2004). Food justice movements: Policy, planning, and networks. Journal of Planning Education and Research, 23(4), 378-386. http://dx.doi.org/10.1177/0739456X04264886
Welsh, J., & MacRae, R. (1998). Food citizenship and community food security. Canadian Journal of Development Studies, 19, 237-255. http://dx.doi.org/10.1080/02255189.1998.9669786
Winne, M. (2005). Waste not, want not? Agriculture and Human Values, 22(2), 203-205. http://dx.doi.org/10.1007/s10460-004-8279-8
Winter, M. (2003). Embeddedness, the new food economy and defensive localism. Journal of Rural Studies, 19(1), 23-32. http://dx.doi.org/10.1016/S0743-0167(02)00053-0
Wu, C., & Eamon, M. K. (2007). Public and private sources of assistance for low-income households. Journal of Sociology & Social Welfare, 34(4), 121-149.

Author Affiliation
Jesse C. McEntee,a Food Systems Research Institute and Tufts Initiative for the Forecasting and Modeling of Infectious Diseases
Elena N. Naumova,b Department of Civil and Environmental Engineering, Tufts University, and Tufts Initiative for the Forecasting and Modeling of Infectious Diseases
Submitted 2 May 2012 / Revised 28 June and 26 July 2012 / Accepted 27 July 2012 / Published online 4 December 2012
a Corresponding author: Jesse C. McEntee, PhD, Managing Partner, Food Systems Research Institute LLC; P.O. Box 1141; Shelburne, Vermont 05482 USA; +1-802-448-2403; www.foodsystemsresearchinstitute.com; jmcentee@foodsri.com
b Elena N. Naumova, PhD, Associate Dean for Research, School of Engineering; Professor, Department of Civil and Environmental Engineering, Tufts University; also Tufts Initiative for the Forecasting and Modeling of Infectious Diseases (InForMID) (http://informid.tufts.edu/); elena.naumova@tufts.edu
Acknowledgments: The authors are grateful to the Economic and Social Research Council's Centre for Business Relationships, Accountability, Sustainability and Society at Cardiff University as well as the Center for Rural Partnerships at Plymouth State University for financial support during this research. The authors are also grateful to the three anonymous reviewers who provided constructive feedback on earlier drafts of this article.
Copyright New Leaf Associates, Inc. Fall 2012
Building capacity between the private emergency food system and the local food movement: Working toward food justice and sovereignty in the global North
McEntee, Jesse C.; Naumova, Elena N. Journal of Agriculture, Food Systems, and Community Development; Ithaca, Vol. 3, Iss. 1 (Fall 2012): 235-253.

Abstract
One area of food system research that remains overlooked in terms of making urban-rural distinctions explicit is the private emergency food system of food banks, food pantries, soup kitchens, and emergency shelters that exists throughout the United States. This system is an important one for millions of food-insecure individuals and today serves nearly as many individuals as public food assistance. In this article, we present findings from exploratory research on the private emergency food system of a rural county in northern New England, U.S. Specifically, we examine the history of this national network to contextualize our findings and then discuss possibilities for collaboration between this private system and the local food movement (on behalf of both the public and the state). These collaborations present an opportunity in the short term to improve access to high-quality local foods for food-insecure populations, and in the long term to challenge the systemic income and race-based inequalities that increasingly define the modern food system and are the result of prioritizing market-based reforms that re-create inequality at the local and regional levels.
We propose alternatives to these approaches that emphasize the ability to ensure adequate food access for vulnerable populations, as well as the right to define, structure, and control how food is produced, not through food consumerism (i.e., voting with our dollars) but through efforts increasingly aligned with a food sovereignty agenda.

Keywords
emergency food, food justice, food sovereignty, rural and urban

Introduction
The rural private emergency food system is an overlooked area of research. The popularity of local food has increased in urban and rural areas alike, yet despite the social and economic capital driving this innovative food movement, food-insecure populations remain ignored to a large degree. We know that the rural food environment is substantively different from the urban food environment (Sharkey, 2009). People in rural areas generally have less money to spend on food and they live further from markets where local food producers sell their products (Morton & Blanchard, 2007). Producers are predominantly located in rural areas where land and water resources are abundant, yet the most profitable markets for their products are more often than not located in urban centers, where producers can more easily access a concentrated population with greater financial capital.

These urban-rural distinctions can be made about multiple aspects of food systems research. For instance, early applications of the food desert concept (and the corresponding efforts to identify them) were overwhelmingly situated in urban places. Today, there is recognition that there is not a single food desert definition that can be universally applied. Researchers as well as government authorities have recognized this; for instance, the United States Department of Agriculture (USDA) has adopted different criteria for urban and rural food deserts. In examinations of local food, some have identified key urban-rural distinctions. For example, McEntee's (2010) contemporary and traditional conceptualization has been used to distinguish between a broad base of activities that are local in terms of geographical scale, but potentially exclusive in terms of their social identity and obstacles to adequate access. Access in this sense is not represented by a Cartesian notion of physical proximity, however; it is also indicative of access barriers in terms of financial ability as well as structural and historical (e.g., institutional racism) processes that privilege some, but harm others (McEntee, 2011a).1 These concerns are increasingly recognized as part of growing food justice and food sovereignty agendas.

The private emergency food system (PEFS) is a national network of food banks, food pantries, soup kitchens, and shelters that operate largely to redistribute food donated by individuals, businesses, and the state. This is a tremendously important system that serves both urban and rural food-insecure populations. Based on a review of this system's functionality, urban-based critiques of this system, and findings from an exploratory qualitative study, we propose that there are key distinctions between the urban and rural PEFSs that have been overlooked (in the same manner that urban and rural local food systems are conflated). The PEFS serves as a safety net for many, yet it struggles financially and lacks access to the high-quality foods (e.g., fresh produce and meat) that clients of this system often prefer.
In this article we present emergent opportunities to develop the collaborative capacity between the PEFS and the rural local food system in ways that address the needs of the PEFS and utilize the assets of the burgeoning local food movement. Furthermore, we explain how these synergies potentially contribute to food justice by providing high-quality food to low-income populations. We begin the article with a review of pertinent literatures. This is followed by a depiction of the PEFS, a summary of existing critiques, and a presentation of our data. We propose that livelihood strategies related to traditional localism (McEntee, 2010) contribute to food justice and food sovereignty agendas by focusing on the natural and social assets of rural communities. We conclude with a discussion of the possibilities for not only remediating the PEFS, but challenging the corporate food regime that currently institutionalizes it.

Local Foods, Food Justice, and Food Sovereignty
Consumer confidence in the conventional food sector has decreased as a result of food scares (Morgan, Marsden, & Murdoch, 2006), with consumers feeling alienated from modern-day food production (Sims, 2009). From these consumer-based concerns over food safety and a general alienation from modern-day food production, alternative food initiatives and movements have surfaced (including local food initiatives). Feenstra (1997) made the case for local foods as an economically viable alternative to the global industrial system by providing specific steps to be taken by citizens to facilitate the transition between the local and the global; it is these forces that have become the focus of food provisioning studies (Winter, 2003). These efforts include more sustainable farming methods, fair trade, and food and farming education, among others; these have been reviewed extensively elsewhere, such as by Kloppenburg, Lezberg, De Master, Stevenson, and Hendrickson (2000) and Allen, FitzSimmons, Goodman, and Warner (2003). Essentially, all are characterized by a desire to create socially just, economically viable, and environmentally sustainable food systems (Allen et al., 2003), and the majority are now collectively referred to as the dominant food movement narrative (Alkon & Agyeman, 2011). It is from this narrative that the local food movement emerges.

Food justice efforts have successfully utilized food localization efforts to improve food access opportunities for low-income and minority communities. These efforts typically occur in urban areas and target low-income minority populations (Alkon & Norgaard, 2009; Gottlieb & Joshi, 2010; Wekerle, 2004; Welsh & MacRae, 1998). The concept of food justice supports the notion that people should not be viewed as consumers, but as citizens (Levkoe, 2006); by linking low-income and minority populations with alternative modes of food production and consumption, advocates prioritize human well-being above profit and alongside democratic and social justice values (Welsh & MacRae, 1998). This represents "more than a name change" departure from conventional food security concerns; it is rather a systemic transformation that alters people's involvement in food production and consumption (Wekerle, 2004, p. 379). Increasingly substantiated by racial and income-based exclusion, food justice operates to prioritize just production, distribution, and access to food within the communities being impacted.
This is the focus of the food justice movement, though environmental and economic benefits often result from these efforts as well. A recently published volume edited by Alkon and Agyeman (2011) unpacks various forms of food justice, ranging from issues of production (e.g., farmworker rights) to distribution, consumption, and access. In this article we are concerned with the consumption element of the food chain; food justice efforts in this realm often take the form of alternative food initiatives that create new market-based or charity-based solutions to inadequate food access (e.g., farm-to-school programming that links schools and local farmers, sliding-scale payment plans for low-income consumers at farmers' markets that are subsidized by wealthier patrons, or agricultural gleaning programs) that stress social equity and solutions that are implemented by and for the people impacted by inadequate access to food. This latter element is a definitive characteristic of food justice initiatives.

Most recently, Alkon and Mares (2012) situated food justice in relation to food sovereignty, finding that although food justice and community food security frameworks often challenge conventional agricultural and food marketing systems, the food sovereignty framework is the only one to explicitly underscore "direct opposition to the corporate food regime" (p. 348). This is because both contemporary food justice and (community) food security frameworks often operate within traditional markets that are agents of the industrial agricultural system representative of a neoliberal political economy. This marks a departure between food justice and food sovereignty; La Via Campesina, a major proponent of food sovereignty, defines the concept as:

the right of peoples to healthy and culturally appropriate food produced through sustainable methods and their right to define their own food and agriculture systems. It develops a model of small scale sustainable production benefiting communities and their environment. It puts the aspirations, needs and livelihoods of those who produce, distribute and consume food at the heart of food systems and policies rather than the demands of markets and corporations. (La Via Campesina, 2011, para. 2)

Whereas food justice often works to create solutions in sync with market structures by filling the gaps in government services, food sovereignty focuses on dismantling the corporate food regime.

History and Structure of the PEFS
An area of the food system where food justice advocates have increasingly engaged in an urban setting is the PEFS. Operating on a charity basis, emergency food assistance provides food to individuals whose earnings, assets, and social insurance options have not met their needs (Wu & Eamon, 2007). Public government-run assistance programs include welfare, the Supplemental Nutrition Assistance Program (SNAP), Medicaid, and subsidized housing. Private emergency food assistance is provided by nonprofit organizations and includes soup kitchens, food pantries, food banks, food rescue operations (Poppendieck, 1998), and "emergency shelters serving short-term residents" (emphasis added) (Feeding America, 2010a, p. 1).

Largely in reaction to dissatisfaction with the federal food stamp program, Congress passed the Omnibus Budget Reconciliation Act in 1982. This act allowed federally owned surplus commodity food to be distributed by the government for free to needy populations.
Prior to its passage, the vast majority of food assistance in the U.S. was provided by the government through the food stamp program (now the Supplemental Nutrition Assistance Program [SNAP]), and the majority of food that food pantries received came from individuals and businesses. The act's success was followed by the Temporary Emergency Food Assistance Act (TEFAP) in 1983, which began the process of routinely distributing excess commodities through private emergency food programs, such as food banks and food pantries (Daponte & Bade, 2006). Food pantries flourished as a result of commodity sourcing, since they now began receiving a reliable stream of food. Businesses that previously did not want to be involved in emergency food provisioning activities could now dispose of unwanted inventory much more cheaply by giving it away (Daponte & Bade, 2006) (see figure 1). In fiscal year 2009, Congress appropriated USD299.5 million for the program, made up of USD250 million for food purchases and USD49.5 million for administrative support (USDA FNS, 2010).

In the U.S., companies defined as C corporations by the tax code (the majority of U.S. companies) can collect an enhanced tax deduction for donating surplus property, including food. Thus when food businesses donate food to a charity, including food banks and pantries, the businesses can take a deduction equal to 50 percent of the donated food's appreciated value. In addition, the Bill Emerson Good Samaritan Food Donation Act of 1996 provides safeguards for entities donating food and groceries to charitable organizations by minimizing the risk of legal action against donors. Companies are not required to publicly disclose deductions for food donations, though in 2001 corporations wrote off USD10.7 billion in deductions (Alexander, 2003). Feeding America received USD663,603,071 in charitable donations in 2006. In a 2003 Chicago Tribune article, Delroy Alexander described how America's Second Harvest received USD450 million in donated provisions in 2001, USD210 million of which came from just 10 major food companies, such as Kraft, Coca-Cola, General Mills, ConAgra Foods, Pfizer, and Tropicana (Alexander, 2003). The top five donors each gave more than USD20 million in food, with the top contributor at USD38 million. Current figures are unavailable, though many companies proudly display pounds of food donated on their websites. For instance, Walmart's website states:

From November 2008 to November 2009, the Walmart stores and Sam's Club locations have already donated more than 90 million pounds [41,000,000 kg] of food....By giving nutritious produce, meat, and other groceries, we've become Feeding America's largest food donor. (Walmart, 2010)

This arrangement allows for unwanted food (food that would otherwise be considered waste) to be utilized; it acts as a vent for unwanted food, allowing large corporate entities to dump surplus product of questionable nutritional quality upon the PEFS. Simultaneously, these corporations are receiving tax breaks and benefiting from policies that minimize their legal risk. Approximately 80 percent of food banks belong to Feeding America, a member organization that acts as an advocate and mediator in soliciting food from major food companies and bulk emergency food providers. This network has 205 food bank members that distribute food and grocery products to charitable organizations. Nationwide, more than 37 million people accessed Feeding America's private food assistance network in 2009 (up 46 percent from 2005), while 127,200 accessed it in New Hampshire (Feeding America, 2010b).
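To make the donation incentive described above concrete, the following is a minimal sketch in Python of the arithmetic, assuming the simplified 50 percent rule stated in this section; the dollar figures, the avoided disposal cost, and the tax rate are all hypothetical values chosen only for illustration.

    # Hypothetical sketch of the corporate donation incentive described above.
    # The 50 percent rule follows this section's simplified characterization;
    # every dollar figure below is invented for illustration.

    def donation_benefit(appreciated_value, avoided_disposal_cost, tax_rate):
        deduction = 0.5 * appreciated_value          # simplified enhanced deduction
        tax_savings = deduction * tax_rate           # cash value of the write-off
        return tax_savings + avoided_disposal_cost   # plus dumping costs avoided

    # A pallet of surplus product: USD20,000 appreciated value, USD1,500 in
    # avoided landfill fees, 35 percent marginal tax rate (all hypothetical).
    print(donation_benefit(20000, 1500, 0.35))  # -> 5000.0

On these assumed numbers, the donor realizes USD5,000 in combined benefit from food it could not sell; this is the structural incentive that the critiques below take aim at.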
Critiques of the PEFS
Critical assessments of the PEFS range from those focused on political-economic relations to the on-the-ground implementation of this redistributive system. In the following section we have grouped these appraisals into four main points. First, the PEFS is largely "emergency" in name only. Second, distribution of food in the PEFS is largely unregulated. Third, the nutritional content of donated items is frequently overlooked for the sake of its quantity. Fourth, because of their limited budgets and food-storage capacity, PEFS entities request nonperishable and, as a result, low-nutrition donations. Related to this point, perpetuation of the PEFS as it currently operates supports a short-term food strategy that meets immediate caloric need while sacrificing long-term health (and ignoring its associated costs).

A prominent critique of the PEFS is that it is "emergency" in name only, and examples highlight the emergency programmatic emphasis of programs even though their services appear to be operating in a nonemergency manner. The U.S. government describes TEFAP as a program that "helps supplement the diets of low-income needy persons...by providing them with emergency food" (USDA FNS, 2010). Feeding America, "the nation's largest organization of emergency food providers," describes food pantries as "distributing food on a short-term or emergency basis" (the New Hampshire Food Bank shares this definition) (Feeding America, 2010a, p. 13). According to Feeding America's Hunger in America 2010 report, approximately 79.2 percent of clients interviewed reported that they had used a pantry in the past year, indicating that they were not new clients. Multiple researchers have observed that many food pantries are being used on a regular, long-term basis (Beggs, 2006; Bhattarai, Duffy, & Raymond, 2005; Daponte, Lewis, Sanders, & Taylor, 1998; Hilton, 1993; Molnar, Duffy, Claxton, & Conner, 2001; Mosley & Tiehen, 2004; Tarasuk & Eakin, 2005; Warshawsky, 2010).

Along these lines, others have cited how the PEFS is unregulated to its detriment; for instance, many private donations are not subject to any federal or state laws regulating their distribution (Bhattarai et al., 2005). The unregulated nature of any charity brings both benefits and burdens, and one benefit to the PEFS has been the ability to utilize the efforts of a large volunteer base. However, it has been proposed that pantries that operate with a largely volunteer workforce employ subjective eligibility criteria and a "they should be satisfied with whatever they get" mindset on behalf of workers (volunteers as well as paid staff) (Tarasuk & Eakin, 2005, p. 182). Food pantry clients may have limited rights and entitlement to the food being distributed, "further reinforcing that people are unable to provide for themselves" (Molnar et al., 2001, p. 189) in this redistributive system. In fact, it has been shown that workers "routinely eschew the aesthetic values that dominate our retail system," where "distribution of visibly substandard or otherwise undesirable products is achieved because clients have few if any rights" and "are in desperate need of food" (Tarasuk & Eakin, 2005, p. 184).

The belief of some workers that clients should be satisfied with whatever items they receive underlies the non-nutritional focus threaded throughout the private emergency food system. This is especially evident from the supply side. Government commodities serve as a major source of food for the PEFS.
Commodity foods are provided to food banks, directly to independent agencies, and to Feeding America (Feeding America, 2012c). The original intent of this commodity program was to distribute surplus agricultural commodities and reduce federal food inventories and storage costs, while simultaneously helping food-insecure populations. In 1988, however, much of the federal government's surplus had been exhausted, and as a result the Hunger Prevention Act of 1988 appropriated funds for the purchase of commodities for TEFAP (USDA FNS, 2010).

The PEFS's other major contributor, private corporations, do not explicitly concentrate on the nutritional content of their donations. Corporations benefit from considerable tax incentives along with liability protection; they can donate food that would otherwise be wasted, forgoing dumping costs while engaging in what many of these entities now call "corporate social responsibility." For instance, pounds of donated food are showcased and used as progress markers to show how successfully hunger is being combated. Feeding America states that it distributes 3 billion pounds (1.4 billion kg) of food every year (Feeding America, 2012a). Clicking on a few of Feeding America's "Leadership Partners" on its homepage (Feeding America, 2012b) yields similar language. For instance, ConAgra states that "In the last dozen years, ConAgra Foods has provided more than 166 million pounds of food to families in need" (ConAgra, 2009, para. 5), Food Lion (part of the Delhaize Group) has "donated more than 21 million pounds of food" (Food Lion, 2010), and "just last year, Procter & Gamble contributed nearly 30 million pounds of product" (Procter & Gamble, 2010). These figures provide no indication of nutritional content, although one pound of naturally flavored drink boxes has a different nutritional composition than one pound of fresh produce. If success is measured in terms of quantity, then this will be the criterion that drives emergency food provisioning.

Charities are easy targets for critique; they often operate on a shoestring, use labor with varying levels of knowledge and experience, and much of the time are put in a financially and socially powerless position, at the whims of donors. One result is that nonperishable or low-perishability items are preferred (Tiehen, 2002; Verpy, Smith, & Reicks, 2003); these last longer and do not require refrigeration. Their long shelf life means handling and transport are not time-sensitive. These products cost less and are more likely to be donated. Nutrient-poor foods are less healthy overall (Monsivais & Drewnowski, 2007); previous food pantry investigations discovered the poor nutrient composition of donated items, especially in regard to adequate levels of calcium, vitamin A, and vitamin C (Akobundu, Cohen, Laus, Schulte, & Soussloff, 2004; Irwin, Ng, Rush, Nguyen, & He, 2007). Donating large amounts is important since donation quantity is prioritized by agency recipients. Rock, McIntyre, and Rondeau (2009) found a misalignment between donor intent and client preference indicative of the "ignorance among food-secure people of what it is like to be food-insecure" (p. 167). Food banks and food pantries are pressured to accept foods on unfair grounds, just as clients are pressured to accept whatever food is handed to them. In at least one other case, food pantry donors "did not consciously consider nutrition when deciding which foods to donate" (Verpy et al., 2003, p. 12).
A demand-side perspective of private emergency food provisioning reveals somewhat complementary conditions that support the acquisition and distribution of low-quality foods. The long-term health consequences associated with the consumption of low-quality foods can be overlooked to satisfy immediate food needs, thereby reinforcing the value placed on the low-quality supply being donated. While expenses like shelter, heat, and medical expenses are relatively inelastic, food is flexible and can be adjusted based on these demands. On a limited budget, it is often the case that whatever money is left over is used for food (Furst, Connors, & Bisogni, 1996; McEntee, 2010). As reported by McEntee, a homeless shelter resident commented:

It's likes this, your oil's almost out, your electricity's high and they're going to shut it off, what are going to do? Well, we're going to have to cut down on our food budget. Do what you gotta do...you can buy your family packs and suck it up and eat ramen noodles. (McEntee, 2010, p. 795)

Sometimes these types of food are chosen out of necessity (that is the only type of food offered) and other times it is out of habit (they are used to eating it).2 With the recent recession in the U.S. economy, purchases of cheap, ready-to-eat processed foods have increased. An Associated Press article entitled "ConAgra Foods 3Q profit rises, maintains outlook" (Associated Press, 2010, para. 1) states:

Strong sales of low-priced meals such as Banquet and Chef Boyardee and lower costs pushed ConAgra Foods Inc.'s third-quarter profit up 19 percent. Cheap prepared foods like those that ConAgra offers have appealed to customers during the recession as they look for ways to save money and eat at home more.

Methods and Research Setting
Approximately 7.7 percent of New Hampshire's population is food-insecure (Nord, Andrews, & Carlson, 2008); 8 percent of the state's population lives in poverty, while 9.4 percent of Grafton County's population lives in poverty (U.S. Census Bureau, 2008). Grafton County was selected as the research site based on proximity to the researchers as well as the existence of food insecurity. Grafton County (figure 2) has a population of 81,743 and a population density of 47.7 people per square mile (18.4 people per square kilometer) (U.S. Census Bureau, 2008). Unlike the other two primarily rural northern counties of New Hampshire (Carroll and Coos counties), Grafton County contains two institutions of higher education that serve as educational and cultural centers (Dartmouth College in Hanover and Plymouth State University in Plymouth). Accordingly, these areas attract residents with above-average educational attainment and income, thus offering a variegated set of social and economic conditions differentiated from the rest of the county. There are 14 registered food pantries in Grafton County (of a total of 165 in New Hampshire) (New Hampshire Food Bank, 2010). In 2012, there were 92 SNAP-authorized stores within the county, marking a 13 percent increase from 2008 (USDA FNS, 2012a). Approximately 16 percent of students were free-lunch eligible in 2008 (USDA FNS, 2012b). In terms of local food potential, there were 10 farmers' markets in 2010 (USDA AMS, 2012), with 3.3 percent of farm sales attributable to direct-to-consumer sales (U.S. Census of Agriculture, 2012). A purposive sampling method (Light, Singer, & Willett, 1990) was used to identify respondents (N = 16) who work regularly in Grafton County's PEFS.
This included state employees, although the majority were workers and volunteers at food banks, soup kitchens, food pantries, and homeless shelters. These respondents were selected based on their above-average knowledge about hunger, food insecurity, and private emergency food provisioning in Grafton County (beyond their personal experience). Although some questions were specific to the respondent's area of expertise, the same general open-ended question template was used to facilitate informative discussion on topics related to food access, such as affordability, nutrition, and food provisioning (see table 1). The one-on-one semistructured interviews (Morgan & Krueger, 1998) with this group of respondents lasted between 60 and 90 minutes and took place in an office setting, a community center, or over the phone (when in-person meetings were difficult to arrange). All interviews were recorded, transcribed, and coded. Participant observation (Flowerdew & Martin, 1997) was conducted at a Plymouth-area soup kitchen that served weekly hot meals for free to attendees. Data from interviews as well as field notes were coded and analyzed using NVivo, qualitative analysis software (QSR International, 2010). After the data were cleaned, they were examined as a whole to gain a general sense of overall meaning and depth. Open coding was undertaken, where material was organized into groups or segments of related information (Rossman & Rallis, 1998). We developed a qualitative codebook for efficient and consistent code assignment. Codes were examined, as well as the overall corpus of information. We identified underlying themes based primarily on respondent narratives. Over time, themes and trends emerged. Overlaps and differences between themes were identified, thus allowing their properties to be refined, ultimately resulting in progressively clearer theme categories. Following theme assessment, interconnections and relations between themes were identified through concept mapping and triangulation (Fielding & Fielding, 1986). The authors conducted all interviews and observation, processed all data, and conducted all analysis. Institutional Review Board approval was obtained and all standard research protocols were used.
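For readers unfamiliar with this kind of workflow, the sketch below is a deliberately minimal Python analogue of one small bookkeeping step within it: tallying how often codebook terms appear across transcripts. Real open coding in software such as NVivo is interpretive rather than keyword-driven, and the themes, terms, and transcript fragments here are invented placeholders, not data from this study.

    # Toy analogue of codebook-based tallying; real qualitative coding is
    # interpretive, and the themes and fragments below are invented examples.
    from collections import Counter

    codebook = {
        "eligibility": ["income", "qualify", "proof"],
        "food quality": ["fresh", "spoil", "canned"],
    }

    transcripts = [
        "they need to bring proof of income to qualify",
        "the fresh produce goes fast before it can spoil",
    ]

    counts = Counter()
    for text in transcripts:
        for theme, terms in codebook.items():
            counts[theme] += sum(text.count(term) for term in terms)

    print(counts)  # Counter({'eligibility': 3, 'food quality': 2})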
Findings from Grafton County
Some of the data emerging out of the Grafton County case echo previous observations about the PEFS. The preliminary data we present in this article are the product of field work, policy evaluation, and literature review. We do not claim that these findings are externally generalizable, although we do see similarities between our observations and those of other researchers, indicating that our data may be indicative of trends elsewhere, especially in rural areas of the northeastern United States where similar demographic and cultural traits exist. In this way, we also see potential in terms of research trajectories and policy reforms for those looking to build capacity between the PEFS and the local food system.

Reliance upon Volunteers
In relation to the existing criticisms that the PEFS is actually serving a long-term and sustained need and not a short-term or emergency one, many food pantry workers indicated that long-term usage by clients was common. For instance, one pantry worker explained that "most of the people that come in here are...I don't know if I would say chronic, but regulars" (0607).3 In these pantries, representatives talked about getting to know clients over the course of months and years of use; some clients stay and talk with pantry workers for emotional support during food pickups. This long-term usage has been critiqued and connected to the fact that the PEFS is so heavily reliant upon volunteer labor that, as a result, there are opportunities for inconsistencies to develop (Lipsky, 1985; Molnar et al., 2001). Ad hoc administration of private emergency food distribution has consequences, such as inconsistent eligibility requirements and quality control (Daponte & Bade, 2006). In Grafton County pantries, eligibility was determined through a combination of criteria, such as a pantry worker's personal judgment and preset income criteria. In one large pantry, more refined conditions were followed by staff and volunteers. In this pantry, if it was a client's first visit, then they were allowed to get food no matter what. However, in order to get food on subsequent visits they would need to bring proof of income (their income had to be below a certain amount based on the number of household members). The director of this pantry explained, "the only time I turn them away is if they're using the other food pantries....Most of the time they trip themselves up" (0505). When asked about the consequences of using more than one pantry, the same respondent said, "I turn them off for a whole year....To me, that's stealing food because that's government food involved in both places" (0505). This was not a set rule or policy of the pantry, but a guideline created by the director. Another worker explained that clients needed to fill out a TEFAP form (which determines eligibility under the rubrics of "Program" (already receiving a form of public assistance) and "Income" (one-person weekly income at or below USD370)), but that "it [the form] doesn't turn anybody away" (1215).

The downside of a more subjective, informal system is that pantries can be run in a potentially inequitable manner (Daponte & Bade, 2006). In addition, a client who offended a staff member or volunteer in the past will not be safeguarded against this as they would be in a government-run system. A pantry director from a small church-run pantry was asked about assistance eligibility and replied:

We don't ask a lot of questions...We don't take any financial information and you don't need to qualify. I just tell people, "if you need it, you can use it."...You can tell by looking at them, you know? The car they drive, their clothes, you could tell they're not living high off the hog, so to speak. (0607, emphasis added)

In New Hampshire, 92 percent of food pantries and 100 percent of soup kitchens use volunteer labor, while 64 percent of pantries and 46 percent of soup kitchens rely completely on volunteer labor (Feeding America, 2010b). Volunteers partnered with pantry staff to perform tasks. Food has to be inspected, sorted, organized, and in some cases cleaned before it is handed out; how these tasks are carried out varies by pantry. In all pantries visited as part of this research, clients waited in line with other recipients (visible to each other) where nonpantry visitors to the agency could see them openly.
In one venue, while pantry clients picked their food from a closet in a church, people working to set up a church dinner worked in the same room; these individuals and the pantry clients were openly visible to each other. These patterns show that by engaging in this private form of food assistance, clients give up any right to confidentiality they may be afforded through other forms of assistance, such as those offered by federal or state forms of food assistance. Another consequence of reliance on volunteer labor is that food standards are frequently disregarded. A set of pantry workers explained how they went to great lengths to utilize some squash donated from a nearby farm:

We discovered a couple years ago that he can't keep it here [the pantry] because it will spoil...and then I said I'll take it, I got a place....So now I've got squashes and I keep an eye on them to make sure they aren't spoiling....So I have a room downstairs [in her house] that has no windows and it's about 55 [degrees]. And I put them down in the basement and then I bring them up into the garage and they're stored in the garage where it doesn't freeze. (0506)

Pantry and food bank workers often clean and repackage food that is inconveniently packaged (e.g., in bulk) or has been broken open.4 These findings not only underscore the role of volunteer subjectivity, but they more broadly illustrate the negative externalities that can emerge in this unregulated system.

Food Preferences: "Change Your Taste Buds"
Depending on the agency, the food preferences of clients may have minimal influence over the foods received. Nutritional, cultural, or taste preferences can be disregarded, while pantry staff beliefs dictate allotments. A volunteer who worked at a pantry and soup kitchen and also served on the board of the pantry said, "the younger ones [clients] are very, very fussy, they are turning their nose up at different things....Whereas if you're hungry, you accept and you learn to do it and change your taste buds" (0506, emphasis added). In the same interview, this respondent reflected that "we're a spoiled society" and "there's a lot of honest need, but I think there's also those that are needy who don't help themselves" (0506). This respondent seems to believe that clients should be thankful for whatever they get, no matter what, since it is better than nothing. This is similar in a sense to how pantries are pressured into being thankful for all donations out of fear that refusal of items would jeopardize future giving (for an example, see Winne (2005)). Believing that clients should "change their taste buds" to accommodate the food available at the pantry represents a misalignment between clients' nutritional well-being and the pantry objective of efficiently distributing all donated food. This respondent held a position of power within the pantry and was able to make managerial-level decisions. Following through on her sentiments means that clients should adjust their personal taste preferences to whatever donors decide to donate.

Client preferences are interpreted by pantry staff in a number of ways; consider the experience of this employee who worked at a smaller pantry in a northern part of the county:

I had a guy call me today and wanted me to take his name off the list here and I said "OK." I said "did you get a job?" I know he was looking for a job, "no, but I can't eat that crap." He said, "I like to eat organic now, natural food." He said, "I can't eat this stuff, processed kind of food."
He said, "not that I don't appreciate what you're doing for me, but I just can't eat that kind of food." I said, "well, get a job" or that's what I felt like saying....Do you know how much that stuffcosts? We're not the end all, we're just supplemental here, we can't provide food for you for the week. I mean its just not going to happen. (0607) This employee appeared offended by this man's decision to stop accessing the pantry. By participating in the PEFS, these individuals relinquish rights and standards they may have in the public retail sphere (i.e., where federally and state enforced food safety regulations are upheld) and as a result are forced to gamble on the whims of the largely unregulated PEFS . This removal of food rights places food-insecure individuals in an even more food-precarious state, disempowering them beyond that which is accomplished through retail markets. One pantry worker explained that when individuals donate food, "lots of times it's ramen noodles because you can donate a lot at a low price" (0709). Food-pantry representatives working with a food-insecure population indicated that this group prefers quick and easy meals in the form of processed products, and also lacks adequate knowledge about nutrition and cooking to make informed food selections. Simultaneously, those accessing pantries revealed that food was a flexible budget item that could be adjusted according to the demands of other expenses. This often leads to trading down of items purchased - from more expensive, healthy items to cheaper, less healthy items. Food pantry representatives commented on how clients, especially young ones, prefer quick and easy products because "it's so much easier to open a can...things that are quick" (0506). Another pantry worker commented that "it's great when they say they cook....It just makes it so much easier to give them bags of nutritional food, but sometimes they'll just want the canned spaghetti, macaroni and cheese, hot dogs...foods that are easy to prepare for families," which she acknowledged as "a problem" (0709). Efforts to reform these eating habits were evident; one pantry worker reflected on how they had tried to switch from white to wheat bread, but found that "the wheat bread was not a hit" (1215). A nutrition professional working at a nonprofit described an attempt to change her clients' eating habits. She explained that her efforts were aimed at making people more nutritionally informed by showing them that eating healthier can be more affordable: We will do a comparison and we will make a meal with Hamburger Helper and we'll make basically homemade Hamburger Helper....I'll do a comparison of what Hamburger Helper costs and what it costs to make it from scratch. It's always of course cheaper to make it from scratch and then we do a taste test. And unfortunately many of the people have grown up with Hamburger Helper so that's what they like....They don't see the difference; how salty and awful it tastes....We'll do a whole cost analysis and they'll see it's about 59 cents a serving if you make it from scratch compared to about 79 cents a serving for Hamburger Helper. (1013) Another pantry worker explained: I think it's pricing, but then we have people, you know I believe it comes from how you grew up. You know, a lot of people shop the way their moms or dads shopped. And some people were just brought up on frozen boxed food and not cooked homemade meals and so that's all they know how to purchase. 
Another pantry worker explained:

I think it's pricing, but then we have people, you know I believe it comes from how you grew up. You know, a lot of people shop the way their moms or dads shopped. And some people were just brought up on frozen boxed food and not cooked homemade meals and so that's all they know how to purchase. (0303)

This may explain why pantries experience a demand for these easy-to-cook processed foods. While some pantries might push more nutritional options, others send contradictory nutritional messages. Not far from where the abovementioned nutrition professional worked, another pantry worker at the same agency remarked that "the stuff that's easy for us to get is pasta, canned stuff, pasta mixes, and it's not highly nutritional....Tuna or some kind of a tinned meat, you know, with a Tuna Helper, that's the kind of stuff we get here because we don't have any way to give them fresh meat" (0607). The food being donated is free for the pantry and free for the clients, made possible through private, often corporate donors. This represents a seemingly collaborative alignment between the need to dispose of unwanted food on behalf of corporate donors and the need for food-insecure clients to consume food, yet this arrangement is rooted in a short-term outlook and power imbalance where corporate food entities are able to dump unwanted food for free upon a food-insecure population, thereby realizing short-term profit gains (for the business) at the cost of the long-term health of food-insecure individuals and the associated costs borne by governments.

Assessing Collaborative Potential
The rural PEFS appears to be similar to the urban PEFS in a number of ways. It is heavily reliant upon volunteer labor and it serves a significant proportion of the population, often on a regular basis. In the rural context there is a dispersed population. While centralized population centers like cities provide efficient and short-distance transportation networks, rural networks are decentralized, with people living in remote areas, often requiring automobile access. This has a few practical consequences. A dispersed population also means that community food-growing opportunities like neighborhood gardens are more difficult to organize and implement when compared to a city where a group of neighbors can have a small vegetable plot within walking distance. Contrastingly, in many rural places the transportation cost of getting to a community space where a garden may be located represents another financial and logistical barrier. Cities are also places where people can more easily congregate to meet and organize reactive and proactive responses to inadequate food access (for example, to grow a neighborhood garden in response to being located in a "food desert"). In urban areas, for instance, these have manifested in food justice efforts. In rural areas, the PEFS is the chief response to hunger and food insecurity (in addition to federal and state mandated programs). However, the rural PEFS operates on a smaller scale, with fewer people accessing it and a high degree of malleability. As described earlier in this article, this informality has been criticized; however, this ability to adapt means that individuals who operate PEFS entities (like food pantries) can take advantage of opportunities without having to obtain approval from higher levels of bureaucracy. In addition, the rural PEFS is often located where the land, soil, water, and air resource base for growing food is abundant. In contrast to the literature that supports the claims that low-income populations prefer processed foods (Drewnowski & Specter, 2004), data from the Grafton County case show that in the pantries that were able to obtain small amounts of fresh, perishable foods (meats and fresh fruits and vegetables), these quickly became the most popular items.
As one pantry worker explained:

Most people know that an apple is healthier than a hot dog, but those [hot dogs] are way cheaper, you know, not that they're the same in any way....Here [at the food pantry] they would go for the things that they don't normally get their hands on, which is why those dairy products go fast and those veggies go fast. But I think in general when they are shopping they go for the cheapest, easiest thing to get through to the next week. (1215)

In another study of Grafton County, a food pantry employee described how a local hunter donated moose meat:

Interviewer: What are the most popular items that you have here in the pantry?
Respondent 1: Meat. It's the most expensive...
Respondent 2: Oh, was it last year we got the moose meat? We got 500 pounds [230 kg]. And we're thinking, what are we gonna do with all this moose meat? And it flew out of here. I mean, people were calling us and asking us for some. (McEntee, 2011b, p. 251)

A key question emerging from this research is, "how do we harness the assets of both the PEFS and the local food system to better serve the needs of food-insecure populations?" There is a demand for locally produced fresh produce and meat on behalf of food-insecure individuals (as others have shown; see Hinrichs and Kremer (2002)). The desires of low-income consumers to eat fresh meat and produce (which often are locally produced) as well as to participate in some local food production activities (whether it be hunting or growing vegetables) have been overlooked by researchers. People accessing the PEFS in rural areas are accessing pantries, but also growing their food because it is an affordable way to obtain high-quality food they may otherwise not be able to afford (McEntee, 2011b). Based on the information provided in this article, potential synergies between the PEFS and the local food system in the rural context exist. Specifically, a traditional localism engages "participants through non-capitalist, decommodified means that are affordable and accessible" where "food is grown/raised/hunted, not with the intention to gain profit, but to obtain fresh and affordable food" (McEntee, 2011b, pp. 254-255). Traditional localism allows for local food to become an asset for many food-insecure and poor communities that are focusing on the need to address inadequate food access. How could the rural PEFS source more food locally, thereby strengthening the local economy? How could private emergency food entities like food pantries and local food advocates promote food-growing, food-raising, and hunting activities as a means to increase grassroots, local, and affordable access to food? Like many places throughout the U.S., Grafton County is home to small-scale local agriculture operations supported by an enthusiastic public and sympathetic state. Simultaneously, there is the presence of food insecurity and a PEFS seeking to remediate this persistent problem. The actual structure of the PEFS could be thoroughly assessed (beyond the borders of Grafton County). If warranted, this system could be redesigned to prioritize privacy and formalize procedures in terms of ensuring that client food choices are respected. A crucial next step in reforming this system to benefit low-income and minority clients is to emphasize the ability to grow, raise, and hunt food for their own needs5 through the traditional local concept.
This would represent a transformation in which these activities could not only be supported by the PEFS, but also draw upon the social capital of communities in the form of memories and practices of rural people from the near past, all while reducing reliance upon corporate waste. If traditional local efforts were organized on a cooperative model, based on community need and not only the needs of individuals, it would benefit all those participating, drawing on collective community resources, such as food-growing knowledges and skills, access to land, and tools, thereby enhancing the range of rural livelihood strategies. In this sense, these activities are receptive to racial and economic diversity as well as alliance-formation across social groups and movements, all of which are characteristic of the food sovereignty movement (Holt-Giménez & Wang, 2011).

Moving forward, additional research is needed. While our findings highlight potential shortcomings, there is a lack of data exploring the rural PEFS experience. Specifically, from the demand side, we need more data about the users of this system, particularly in regard to their satisfaction with the food being given to them. Are they happy with it? Do they want something different that is not available? Do they lack the ability to cook certain foods being handed out by the pantry? Feeding America's Hunger in America survey asks about client satisfaction; in its 2010 report, only 62.7 percent of surveyed clients were "very satisfied" with the overall quality of the food provided.6 Additionally, the fact that this survey is administered by the same personnel who are distributing food donations raises concerns about methodological bias. More needs to be discovered about why such a large proportion of users is not "very satisfied." From the supply side, we need to know more about the food being distributed and its nutritional value. Currently, the food being donated and distributed is unregulated to a large degree, especially in rural pantries. Also on the supply side, the source of food provided to Feeding America as well as individual state food banks and food pantries needs to be inventoried with more information beyond just its weight. Knowing the quantity of specific donated products as well as the financial benefit (in terms of tax write-offs) afforded to donors would add transparency.

Conclusion: Neoliberal Considerations and Future Directions
The findings we have presented in this article are intended to reveal important policy questions about the PEFS and the local food movement; we do acknowledge, however, that our work also raises some important questions. In summary, we see opportunities to move forward in enacting a food sovereignty agenda with both local and global scales in mind. First, value-added, market-based local solutions used to address the inadequacies of the current food system are immediately beneficial. However, these should not be accepted as the end-all solution. Looking beyond them to determine what else can be accomplished to change the structure of the food system to shift power away from the oligarchic food structures of the corporate food regime to food citizens, not only food consumers, would result in systemic change. A key consideration in realizing any reform in the PEFS, and simultaneously challenging and transforming the unsustainable global food regime, is recognizing the neoliberal paradigm in which government and economic structures exist.
Neoliberalism can be defined as a political philosophy that promotes market-based rather than state-based solutions to social problems, while masking social problems as personal deficiencies. The PEFS essentially acts as a vent for unwanted food in this system, one that also provides a financial benefit to the governing food entities (i.e., food businesses). Too often alternatives are hailed as opposing the profit-driven industrial food system simply because they are geographically localized; in reality, they may re-create the classist and racist structures that permeate the larger global system.7

The PEFS is an embedded neoliberal response to food insecurity; while public-assistance enrollment is on the rise, so is participation in the PEFS. This is a shift in who provides assistance to food-insecure populations, from the government to the private sector. In this sense it is a market-based approach to addressing food insecurity (i.e., by dumping food on the private charity sector, market retailers cut their own waste disposal costs), and the result is continual scarcity and the establishment of a system that reinforces the idea that healthy food is a privilege, accessible only to those with adequate financial and social capital. Along these same lines, a form of food localism exists that is arguably detrimental to those without financial and social capital; these efforts have framed, and continue to frame, food access solely as an issue of personal responsibility related to economic status and nutritional knowledge (a narrative thoroughly discussed by Guthman (2007, 2008)). This prioritizes market-based solutions to developing local food systems as well as universal forms of food education that emphasize individual health. As Alkon and Mares (2012) explain,

Neoliberalism creates subjectivities privileging not only the primacy of the market, but individual responsibility for our own wellbeing. Within U.S. food movements, this refers to an emphasis on citizen empowerment, which, while of course beneficial in many ways, reinforces the notion that individuals and community groups are responsible for addressing problems that were not of their own making. Many U.S. community food security and food justice organizations focus on developing support for local food entrepreneurs, positing such enterprises as key to the creation of a more sustainable and just food system. The belief that the market can address social problems is a key aspect of neoliberal subjectivities. (p. 349)

Though elements of both the PEFS and the local food system have arguably been folded into neoliberalization processes through market-based mechanisms, incremental steps to change these dynamics are possible. Reframing issues of food accessibility (including food insecurity, hunger, food deserts, etc.) as issues of food justice moves us beyond an absolute spatial understanding of food issues. For instance, when we only look at physical access to food, we often disregard the more important considerations of class, race, gender (see Alkon and Agyeman, 2011), and sexual orientation that define a person's present position (and over which they often have no control) and which dictate how they engage with the food system. These considerations are present in current food-justice efforts, which seek to ensure that communities have control over the food grown, sold, and consumed there.
Rural food justice has been defined using the traditional localism concept:

Traditional localism in rural areas engages participants through non-capitalist, decommodified means that are affordable and accessible. Food is grown/raised/hunted, not with the intention to gain profit, but to obtain fresh and affordable food. A traditional localism disengages from the profit-driven food system and illustrates grassroots food production where people have direct control over the quality of the food they consume - a principal goal of food justice. (McEntee, 2011b, pp. 254-255)

Utilizing this rural form of food justice involves more than promoting individual food-acquiring techniques; it involves developing organizational and institutional strategies that improve the quality of food available to PEFS entities. This is currently accomplished by some, such as when pantries obtain fresh produce through farmer donations or when a food bank develops its own food-growing capacity.8 But these types of entities are in the minority.

The next stage of realizing food justice, we posit, is to determine how a food sovereignty approach can be utilized in a global North context. Food justice predominantly operates to find solutions within a capitalist framework (and it has been criticized as such), while food sovereignty is explicitly geared toward the dismantling of this system in order to achieve food justice. Regime change and transformation require more than recognition of and control over food-growing resources; they require alliance and partnership-building between groups "to address ownership and redistribution over the means of production and reproduction" (Holt-Giménez & Wang, 2011, p. 98). Adopted by organizations predominantly located in the global South, food sovereignty is focused on the causes of food system failures and subsequently looks toward "local and international engagement that proposes dismantling the monopoly power of corporations in the food system and redistributing land and the rights to water, seed, and food producing sources" (Holt-Giménez, 2011, p. 324). There is an opportunity for people in the global North not only to learn from global South food sovereignty movements, but to form connections and alliances between North and South iterations of these movements.9

As discussed above, the dominant food movement narrative is in sync with the economic and development goals of government (e.g., state-sanctioned buy-local campaigns) as well as the marketing prerogatives of global food corporations (e.g., "local" being used as a marketing label). Building a social movement powerful enough to place meaningful political pressure upon government to support a food system that prioritizes human wellbeing, not profit, is an immediate challenge. Incremental solutions are necessary in order to improve the lives of people now. However, these local solutions, such as innovative farm-to-school programming and the other viable models between the local food environment and the PEFS that we have discussed in this article, would be more effective at effecting long-term systemic change if they were coupled with collective approaches that acknowledge and limit the power of the corporate food regime to prevent injustice, while also holding the state accountable for its responsibility to citizens, which it has successfully "relegated to voluntary and/or market-based mechanisms" (Alkon and Mares, 2012, p. 348). Food sovereignty offers more than an oppositional view of neoliberalism, however.
The food sovereignty movement advances a model of food citizenship that asserts food as a nutritional and cultural right, and the importance of democratic, on-the-ground control over one's food. These qualities resonate with food-insecure and disenfranchised communities, urban and rural, in both the global North and South.

Citation: McEntee, J. C., & Naumova, E. N. (2012). Building capacity between the private emergency food system and the local food movement: Working toward food justice and sovereignty in the global North. Journal of Agriculture, Food Systems, and Community Development, 3(1), 235-253. http://dx.doi.org/10.5304/jafscd.2012.031.012

Copyright © 2012 by New Leaf Associates, Inc.

Footnotes

1. Cartesian understandings of space utilize a grid-based measurement of physical proximity. These types of proximity-based understandings of food access (i.e., food access is primarily a matter of bringing people physically closer to food retailers, as is promoted by the USDA Food Desert Locator) tend to overlook other nuanced forms of food access based on knowledge, culture, race, and class.

2. The amount of processed food, especially in the form of prepared meals and meals eaten outside the home, is steadily increasing in the United States (Stewart, Blisard, & Jolliffe, 2006).

3. The four-digit number indicates interview location and respondent IDs.

4. A leading antihunger effort in New Hampshire is the New Hampshire Food Bank (NHFB), the state's only food bank and a member of Feeding America. In 2008 the NHFB "distributed over 5 million pounds of donated, surplus food to 386 food pantries, soup kitchens, shelters, day care centers and senior citizen homes" (N.H. Food Bank, 2010). In total, N.H. has 441 agencies registered with the NHFB that provide food to 71,417 people annually. Grafton County has 18 food pantries, which "distribute non-prepared foods and other grocery products to needy clients, who then prepare and use these items where they live" and where "[F]ood is distributed on a short-term or emergency basis until clients are able to meet their food needs" (N.H. Food Bank, 2010).

5. A noteworthy example of an organization that has begun to accomplish these objectives is The Stop Community Food Centre in Toronto, which was recently described by Levkoe and Wakefield (2012).

6. The remaining categories are: "Somewhat satisfied" (31.3 percent), "Somewhat dissatisfied" (4.8 percent), and "Very dissatisfied" (1.3 percent).

7. For additional discussion of the political economic transition from government to governance, such as the transfer of state functions to nonstate and quasistate entities, see Purcell (2002).

8. An example of this type of effort is that of the Vermont Food Bank, which purchased a farm in 2008 in order to supply the food bank with fresh, high-quality produce as well as to sell the produce.

9. The U.S. Food Sovereignty Alliance has recognized the importance of building these coalitions: "As a US-based alliance of food justice, anti-hunger, labor, environmental, faith-based, and food producer groups, we uphold the right to food as a basic human right and work to connect our local and national struggles to the international movement for food sovereignty" (US Food Sovereignty Alliance, n.d., para. 1).

References
Akobundu, U. O., Cohen, N. L., Laus, M. J., Schulte, M. J., & Soussloff, M. N. (2004). Vitamins A and C, calcium, fruit, and dairy products are limited in food pantries. Journal of the American Dietetic Association, 104(5), 811-813. http://dx.doi.org/10.1016/j.jada.2004.03.009

Alexander, D. (2003, 25 May). Bigger portions for food banks. Chicago Tribune. Retrieved from http://www.chicagotribune.com

Alkon, A. H., & Agyeman, J. (Eds.). (2011). Cultivating food justice. Cambridge, Massachusetts: MIT Press.

Alkon, A. H., & Mares, T. M. (2012). Food sovereignty in US food movements: Radical visions and neoliberal constraints. Agriculture and Human Values, 29(3), 347-359. http://dx.doi.org/10.1007/s10460-012-9356-z

Alkon, A. H., & Norgaard, K. M. (2009). Breaking the food chains: An investigation of food justice activism. Sociological Inquiry, 79(3), 289-305. http://dx.doi.org/10.1111/j.1475-682X.2009.00291.x

Allen, P., FitzSimmons, M., Goodman, M., & Warner, K. (2003). Shifting plates in the agrifood landscape: The tectonics of alternative agrifood initiatives in California. Journal of Rural Studies, 19(1), 61-75. http://dx.doi.org/10.1016/S0743-0167(02)00047-5

Associated Press. (2010, 25 March). ConAgra Foods 3Q profit rises, maintains outlook. New York: Associated Press. Retrieved from http://www.boston.com/business/articles/2010/03/25/conagra_foods_3q_profit_rises_maintains_outlook/

Beggs, J. J. (2006). Coping with food vulnerability: The role of social networks in the lives of Missouri food pantry clients (Unpublished graduate thesis). University of Missouri, Columbia, Missouri.

Bhattarai, G. R., Duffy, P. A., & Raymond, J. (2005). Use of food pantries and food stamps in low-income households in the United States. The Journal of Consumer Affairs, 39(2), 276-298. http://dx.doi.org/10.1111/j.1745-6606.2005.00015.x

ConAgra. (2009). ConAgra Foods' first corporate responsibility report now available. Retrieved 1 January 2011 from http://media.conagrafoods.com/phoenix.zhtml?c=202310&p=irolnewsArticle&ID=1269902&highlight=

Daponte, B. O., & Bade, S. (2006). How the private food assistance network evolved: Interactions between public and private responses to hunger. Nonprofit and Voluntary Sector Quarterly, 35(4), 668-690.

Daponte, B. O., Lewis, G. H., Sanders, S., & Taylor, L. (1998). Food pantry use among low-income households in Allegheny County, Pennsylvania. Journal of Nutrition Education, 30(1), 50-57. http://dx.doi.org/10.1016/S0022-3182(98)70275-4

Drewnowski, A., & Specter, S. E. (2004). Poverty and obesity: The role of energy density and energy costs. The American Journal of Clinical Nutrition, 79(1), 6-16.

Feeding America. (2010a). Hunger in America 2010 national report. Chicago: Feeding America and Mathematica Policy Research, Inc. Retrieved from http://feedingamerica.issuelab.org/resource/hunger_in_america_2010_national_report

Feeding America. (2010b). Hunger in America 2010: Local report prepared for the New Hampshire Food Bank. Chicago: Feeding America. Retrieved from http://www.nhfoodbank.org/about-hunger/hunger-study.html

Feeding America. (2012a). Food, grocery donations and food drives. Retrieved from http://feedingamerica.org/ways-to-give/foodgrocery-food-drives.aspx

Feeding America. (2012b). Leadership partners. Retrieved from http://feedingamerica.org/howwe-fight-hunger/our-partners/leadershippartners.aspx

Feeding America. (2012c). Programs & services. Retrieved from http://feedingamerica.org/howwe-fight-hunger/programs-and-services.aspx

Feenstra, G. W. (1997). Local food systems and sustainable communities. American Journal of Alternative Agriculture, 12(1), 28-36. http://dx.doi.org/10.1017/S0889189300007165

Fielding, N., & Fielding, J. (1986). Linking data. Beverly Hills, California: Sage.
Flowerdew, R., & Martin, D. (1997). Methods in human geography: A guide for students doing a research project. London: Sage.

Food Lion. (2010). Food Lion community connections. Retrieved 1 January 2011 from http://www.foodlion.com/Charities/Feeding-America

Furst, T., Connors, M., & Bisogni, C. (1996). Food choice: A conceptual model of the process. Appetite, 26(3), 247-266. http://dx.doi.org/10.1006/appe.1996.0019

Gottlieb, R., & Joshi, A. (2010). Food justice. Cambridge, Massachusetts: MIT Press.

Guthman, J. (2007). The Polanyian way? Voluntary food labels as neoliberal governance. Antipode, 39(3), 456-478. http://dx.doi.org/10.1111/j.1467-8330.2007.00535.x

Guthman, J. (2008). "If they only knew": Color blindness and universalism in California alternative food institutions. The Professional Geographer, 60(3), 387-397. http://dx.doi.org/10.1080/00330120802013679

Hilton, K. (1993). Close down the food banks. Canadian Dimension, 27(4), 22-23.

Hinrichs, C. C., & Kremer, K. S. (2002). Social inclusion in a Midwest local food system project. Journal of Poverty, 6(1), 65-90. http://dx.doi.org/10.1300/J134v06n01_04

Holt-Giménez, E. (2011). Food security, food justice, or food sovereignty? Crises, food movements, and regime change. In A. H. Alkon & J. Agyeman (Eds.), Cultivating food justice (pp. 309-330). Cambridge, Massachusetts: MIT Press.

Holt-Giménez, E., & Wang, Y. (2011). Reform or transformation? The pivotal role of food justice in the U.S. food movement. Race/Ethnicity: Multidisciplinary Global Contexts, 5(1), 83-102. http://dx.doi.org/10.2979/racethmulglocon.5.1.83

Irwin, J. D., Ng, V. K., Rush, T. J., Nguyen, C., & He, M. (2007). Can food banks sustain nutrient requirements? A case study in Southwestern Ontario. Canadian Journal of Public Health, 98(1), 17-20.

Kloppenburg, Jr., J., Lezberg, S., De Master, K., Stevenson, G. W., & Hendrickson, J. (2000). Tasting food, tasting sustainability: Defining the attributes of an alternative food system with competent, ordinary people. Human Organization, 59(2), 177-186.

La Via Campesina. (2011). Defending food sovereignty. Retrieved 9 November 2012 from http://viacampesina.org/en/index.php/organisation-mainmenu-44

Levkoe, C. (2006). Learning democracy through food justice movements. Agriculture and Human Values, 23(1), 89-98. http://dx.doi.org/10.1007/s10460-005-5871-5

Levkoe, C. Z., & Wakefield, S. (2012). The Community Food Centre: Creating space for a just, sustainable, and healthy food system. Journal of Agriculture, Food Systems, and Community Development, 2(10), 249-268.

Light, R. J., Singer, J., & Willett, J. (1990). By design: Conducting research on higher education. Cambridge, Massachusetts: Harvard University Press.

Lipsky, M. (1985). Prepared statement before the Subcommittee on Domestic Marketing, Consumer Relations, and Nutrition of the Committee on Agriculture of the U.S. House of Representatives, 99th Cong., 2nd session.

McEntee, J. C. (2010). Contemporary and traditional localism: A conceptualisation of rural local food. Local Environment, 15(9), 785-803. http://dx.doi.org/10.1080/13549839.2010.509390

McEntee, J. C. (2011a). Shifting rural food geographies and the spatial dialectics of just sustainability (Doctoral dissertation). Cardiff University, Cardiff, UK. http://library.cardiff.ac.uk/vwebv/holdingsInfo?bibId=945965
McEntee, J. C. (2011b). Realizing rural food justice: Divergent locals in the northeastern United States. In A. H. Alkon & J. Agyeman (Eds.), Cultivating food justice (pp. 239-260). Cambridge, Massachusetts: MIT Press.

Molnar, J. J., Duffy, P. A., Claxton, L., & Conner, B. (2001). Private food assistance in a small metropolitan area: Urban resources and rural needs. Journal of Sociology and Social Welfare, 28(3), 187-209.

Monsivais, P., & Drewnowski, A. (2007). The rising cost of low-energy-density foods. Journal of the American Dietetic Association, 107(12), 2071-2076. http://dx.doi.org/10.1016/j.jada.2007.09.009

Morgan, D. L., & Krueger, R. A. (1998). The focus group kit. Thousand Oaks, California: Sage Publications.

Morgan, K., Marsden, T., & Murdoch, J. (2006). Worlds of food. Oxford: Oxford University Press.

Morton, L. W., & Blanchard, T. C. (2007). Starved for access: Life in rural America's food deserts. Rural Realities, 1(4), 1-10.

Mosley, J., & Tiehen, L. (2004). The food safety net after welfare reform: Use of private and public food assistance in the Kansas City metropolitan area. Social Service Review, 78(2), 267-283. http://dx.doi.org/10.1086/382769

New Hampshire Food Bank [N.H. Food Bank]. (2010). New Hampshire Food Bank. Retrieved 1 April 2010 from http://www.nhfoodbank.org/index.php?option=com_content&view=frontpage&Itemid=1

Nord, M., Andrews, M., & Carlson, S. (2008). Measuring food security in the United States: Household food security in the United States, 2007 (Food Assistance and Nutrition Research Report No. 66). Washington, D.C.: United States Department of Agriculture.

QSR International. (2010). NVivo [qualitative research software]. Cambridge, Massachusetts: QSR International.

Procter & Gamble. (2010). P&G and Feeding America: Fighting hunger. Retrieved 2 April 2010 from http://www.pg.com/en_US/sustainability/social_responsibility/feeding_america.shtml

Poppendieck, J. (1998). Sweet charity? Emergency food and the end of entitlement. New York: Penguin.

Purcell, M. (2002). Excavating Lefebvre: The right to the city and its urban politics of the inhabitant. GeoJournal, 58(2-3), 99-108.

Rock, M., McIntyre, L., & Rondeau, K. (2009). Discomforting comfort foods: Stirring the pot on Kraft Dinner and social inequality in Canada. Agriculture and Human Values, 26(3), 167-176. http://dx.doi.org/10.1007/s10460-008-9153-x

Rossman, G., & Rallis, S. F. (1998). Learning in the field: An introduction to qualitative research. Thousand Oaks, California: Sage.

Sharkey, J. R. (2009). Measuring potential access to food stores and food-service places in rural areas in the U.S. American Journal of Preventive Medicine, 36(4S), S151-S155.

Sims, R. (2009). Food, place and authenticity: Local food and the sustainable tourism experience. Journal of Sustainable Tourism, 17(3), 321-336. http://dx.doi.org/10.1080/09669580802359293

Stewart, H., Blisard, N., & Jolliffe, D. (2006). Let's eat out: Americans weigh taste, convenience, and nutrition (Economic Information Bulletin No. EIB-19). Washington, D.C.: United States Department of Agriculture.

Tarasuk, V., & Eakin, J. M. (2005). Food assistance through "surplus" food: Insights from an ethnographic study of food bank work. Agriculture and Human Values, 22(2), 177-186. http://dx.doi.org/10.1007/s10460-004-8277-x

Tiehen, L. (2002). Issues in food assistance: Private provision of food aid: The Emergency Food Assistance System (Food Assistance and Nutrition Research Report No. 26-5). Washington, D.C.: United States Department of Agriculture.

U.S. Census Bureau. (2008). Census 2000, American FactFinder. Retrieved from http://factfinder.census.gov/home/saff/main.html?_lang=en
U.S. Department of Agriculture. (2012). Census of Agriculture. Available from http://www.agcensus.usda.gov/Publications/2007/index.php

U.S. Department of Agriculture, Agricultural Marketing Service [USDA AMS]. (2012). Farmers markets search. Retrieved from http://search.ams.usda.gov/farmersmarkets/

U.S. Department of Agriculture, Food and Nutrition Service [USDA FNS]. (2010). The Emergency Food Assistance Program. Retrieved from http://www.fns.usda.gov/fdd/programs/tefap/

USDA FNS. (2012a). SNAP Retailer Locator. Retrieved from http://www.snapretailerlocator.com/

USDA FNS. (2012b). Program data. Retrieved from http://www.fns.usda.gov/pd/cnpmain.htm/

U.S. Food Sovereignty Alliance. (n.d.). About the Alliance. Retrieved 1 June 2012 from http://www.usfoodsovereigntyalliance.org/about

Verpy, H., Smith, C., & Reicks, M. (2003). Attitudes and behaviors of food donors and perceived needs and wants of food shelf clients. Journal of Nutrition Education and Behavior, 35(1), 6-15. http://dx.doi.org/10.1016/S1499-4046(06)60321-7

Walmart. (2010). Walmart corporate: Feeding America. Retrieved 1 March 2010 from http://walmartstores.com/CommunityGiving/8803.aspx

Warshawsky, D. N. (2010). New power relations served here: The growth of food banking in Chicago. Geoforum, 41(5), 763-775. http://dx.doi.org/10.1016/j.geoforum.2010.04.008

Wekerle, G. R. (2004). Food justice movements: Policy, planning, and networks. Journal of Planning Education and Research, 23(4), 378-386. http://dx.doi.org/10.1177/0739456X04264886

Welsh, J., & MacRae, R. (1998). Food citizenship and community food security. Canadian Journal of Development Studies, 19, 237-255. http://dx.doi.org/10.1080/02255189.1998.9669786

Winne, M. (2005). Waste not, want not? Agriculture and Human Values, 22(2), 203-205. http://dx.doi.org/10.1007/s10460-004-8279-8

Winter, M. (2003). Embeddedness, the new food economy and defensive localism. Journal of Rural Studies, 19(1), 23-32. http://dx.doi.org/10.1016/S0743-0167(02)00053-0

Wu, C., & Eamon, M. K. (2007). Public and private sources of assistance for low-income households. Journal of Sociology & Social Welfare, 34(4), 121-149.

Author Affiliations

Jesse C. McEntee, Food Systems Research Institute and Tufts Initiative for the Forecasting and Modeling of Infectious Diseases. Corresponding author: Jesse C. McEntee, PhD, Managing Partner, Food Systems Research Institute LLC; P.O. Box 1141; Shelburne, Vermont 05482 USA; +1-802-448-2403; www.foodsystemsresearchinstitute.com; jmcentee@foodsri.com
Elena N. Naumova, PhD, Associate Dean for Research, School of Engineering; Professor, Department of Civil and Environmental Engineering, Tufts University; also Tufts Initiative for the Forecasting and Modeling of Infectious Diseases (InForMID) (http://informid.tufts.edu/); elena.naumova@tufts.edu

Submitted 2 May 2012 / Revised 28 June and 26 July 2012 / Accepted 27 July 2012 / Published online 4 December 2012

Acknowledgments: The authors are grateful to the Economic and Social Research Council's Centre for Business Relationships, Accountability, Sustainability and Society at Cardiff University as well as the Center for Rural Partnerships at Plymouth State University for financial support during this research. The authors are also grateful to the three anonymous reviewers who provided constructive feedback on earlier drafts of this article.
    1. Both Hobbes and Locke describe human beings as equal in the state of nature.  How do their understandings of equality in the state of nature differ?

      This question works well to help students compare the ideas of Hobbes & Locke (which are part of what I ask students to think of as the "bookshelf," or perhaps today the Twitter feeds, of the founders)! When working with H.S. students, I find it useful to ask them to point out specific evidence or passages from the documents we are using to support their assertions. While this may go without saying for college students, high school students sometimes need that directive to help them develop the habit, rather than offer a more "intuitive" contribution to the discussion or writing prompt.

      For the high school student, many teachers (myself included) often ask the students early in the course to reflect upon and define pivotal concepts such as "equality" to provide an anchor point before wrestling with what Hobbes, Locke, et al. have to say. It has been my experience that they are pleased with themselves any time their own thought process aligns with the political philosophers they are being introduced to. Any traction of that nature usually helps encourage them to entertain the possibility that course content is relevant to their own lives.

      Please advise if my suggestions regarding the high school classroom are not the direction you are seeking in this editing / reflection process.

    1. Richness

      patterns of LENGTH and SHRINKAGE.

      Would this test the hypothesis that older (longer) individuals are more likely to be shrunken because time is the stronger explanatory variable compared with location? (Locations may not be experiencing as much difference in fishing pressure as the sample distribution leads us to believe, i.e., we are sampling the fishery well enough!? Yay.) (In healthy years, would kelp abundance be significantly different among sites? (Location significant?) This could indicate that the whole system is uniformly ailing?) ..... I think there may be preliminary data supporting the idea that variously sized abalone (visible to divers) recovered or died from starvation over equal intervals; therefore, I'm thinking time, not some metabolic cost of being big, is the explainer for now. This is a well documented, multi-year stressed system.
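      A minimal sketch of how that comparison could be run, assuming hypothetical column names and simulated stand-in data (the real abalone survey table would replace the simulation): fit a linear model of shrinkage on site (location) and length (a proxy for age/time), then compare how much variance each term explains. If length carries the weight and site adds little, that is consistent with time, not location, being the explainer.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        # Hypothetical stand-in data; the actual abalone survey table would go here.
        rng = np.random.default_rng(0)
        n = 200
        df = pd.DataFrame({
            "site": rng.choice(["A", "B", "C"], size=n),    # collection location
            "length_mm": rng.normal(150.0, 25.0, size=n),   # shell length as an age proxy
        })
        # Simulated response: shrinkage driven mostly by length, with no real site effect.
        df["shrinkage_mm"] = 0.05 * df["length_mm"] + rng.normal(0.0, 1.0, size=n)

        # Type-II ANOVA: the sums of squares show whether site or length
        # carries the explanatory weight for shrinkage.
        model = ols("shrinkage_mm ~ C(site) + length_mm", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))

      Run on healthy-year kelp abundance instead of shrinkage, the same comparison would show whether site ever matters; a weak site term there too would fit the "uniformly ailing system" reading.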

    1. If we think of someone who is a fluent reader or speaker, we generally do not think of a person who speaks or reads fast. Rather, we are more likely to think of someone who uses their voice to help convey meaning to a listener when speaking or reading orally.

      This is one thing I definitely agree with. I don't really get the concept of "reading fast". I feel that when I "read fast" in the manner they look for in those fluency tests, I don't really take in all the details that I would normally notice while reading. This applies not only to reading but to other subjects as well. I feel as though people should be allowed to go at their own pace so as to further their understanding as completely as possible. When I read aloud quickly it may sound fluent, but honestly I am not comprehending as well as when I read silently at my own pace.

    1. But having said that too, I think the fact that he reaches out to a lot of people outside of the administration and I'm honored to be one of them, the fact that he reaches out to private citizens, I think is a strength of this administration, one of the reasons that we elected a businessman, the first entrepreneur President in American history, very different, very unorthodox, clearly has upset the swamp in a myriad of ways as we see, but I think also in positive ways, for instance, tax cuts and deregulation, what's going on in this country outside of the swamp, outside of the beltway, which is growth and optimism and security.

      Hello, fact-checkers! This statement, taken from CNN interviews and discussions, may be worth your scrutiny. It was selected using ClaimBuster, a tool developed in the computer science department at the University of Texas at Arlington.

    1. Most scholars of hypertext of the time pointed to Vannevar Bush's 1945 article "As We May Think" as an important precursor to the Web and as providing important guidance for necessary development. Bush's model of hypertext was much richer than that of the early Web. Among other things, he envisioned people who would put together articles (or "trails") by finding a sequence of useful pages in different sources, annotating those pages, inserting a few pages of their own, and linking it all together. While the Web had "live links", those links were limited to the original authors of the text, so the Web provided essentially none of the features necessary for Bush's more collaborative model.

      Great summary.

    1. Once you’ve created your masterpiece, you need to think about how you are going to print your books. The way we see it, you have three options:

      Since we've talked extensively about the importance of distribution in the comic book industry, this article offers a unique insight into the production side of book publishing; I'm also curious, though, about the distribution process (from the perspective of a small press) and the types of challenges it might run into.

  12. Mar 2018
    1. The goals of the people in this small community could not be fully grasped through observations of their visible lives. Heath had to uncover their imagined lives as well to understand their perspectives on life, literacy, and education

      This is important to think about as a teacher, because our students may have so much more going on than is visible to us. Although this quote is not saying that exactly, I think it is pointing out that we need to dig deeper than the surface and learn more than the basic information to truly understand.

    1. Complexity Theory - Dynamical Systems Theory

      If we want to make change we should come at a problem from as many different areas as possible.

      We should be wary of the magic bullet. Complexity theory may be seen as post-structuralist or even further?

      This is part of an agency structure debate.

      There are varied factors that contribute to change.

      The connections between neurons are more important than the number of cells for consciousness or the mind. This is a good analogy for why complexity theory is so essential.

      Consciousness emerges when critical mass is reached in a system.

      It's hard to know how much of a factor something can be in a causal system. For example, how much cause do we attribute to butterfly wings for a storm in India?

      What causes change in the education system?

      We need to use words like compounding effects to explain change.

      We need to conceive of change in terms of speed and direction, like a mathematical function.

      We need to be wary of one-dimensional change or one kind of initiative. We need to think of multiple factors.

      Effective intervention means intervention from every possible angle.

      We need to pump resources until we have autocatalysis.

      International Journal of Educational Development, Mark Mason

    1. If anything praiseworthy is to be done, it must be done through unity, and it is for that reason that the Universal Negro Improvement Association calls upon every Negro in the United States to rally to this standard. We want to unite the Negro race in this country.

      I think uniting together is a great way to keep their hopes and spirits strong. Individually they may not seem like much of a threat, but an organized group would surely make the white people at that time very uncomfortable.

    1. Critical realism is not an empirical program; it is not a methodology; it is not even truly a theory, because it explains nothing. It is, rather, a meta-theoretical position: a reflexive philosophical stance concerned with providing a philosophically informed account of science and social science which can in turn inform our empirical investigations. We might think of this in terms of three layers: our empirical data, the theories that we draw upon to explain our empirical data, and our metatheories—the theory and the philosophy behind our theories. While critical realism may be a heterogeneous series of positions, there is one loose genetic feature which unites it as a metatheory: a commitment to formulating a properly post-positivist philosophy. This commitment is often cast in the terms of a normative agenda for science and social science: ontological realism, epistemic relativism, judgmental rationality, and a cautious ethical naturalism.
    1. No doubt there is humorous exaggeration in this picture, but there is gross exaggeration in the frame of mind against which it was directed. You can not have omelettes without breaking eggs; you can not destroy the practises of barbarism, of slavery, of superstition, which for centuries have desolated the interior of Africa, without the use of force; but if you will fairly contrast the gain to humanity with the price which we are bound to pay for it, I think you may well rejoice in the result of such expeditions as those which have recently been conducted with such signal success in Nyassaland, Ashanti, Benin, and Nupé—expeditions which may have, and indeed have, cost valuable lives, but as to which we may rest assured that for one life lost a hundred will be gained, and the cause of civilization and the prosperity of the people will in the long run be eminently advanced. But no doubt such a state of things, such a mission as I have described, involves heavy responsibility. In the wide dominions of the queen the doors of the temple of Janus are never closed, and it is a gigantic task that we have undertaken when we have determined to wield the scepter of empire. Great is the task, great is the responsibility, but great is the honor; and I am convinced that the conscience and the spirit of the country will rise to the height of its obligations, and that we shall have the strength to fulfil the mission which our history and our national character have imposed upon us.

      They are trying to justify the violence.

    2.  We think and speak of them as part of ourselves, as part of the British Empire, united to us, altho they may be dispersed throughout the world, by ties of kindred, of religion, of history, and of language, and joined to us by the seas that formerly seemed to divide us.

      A united British Empire

    1. Going all the way back to the Hermann Ebbinghaus “forgetting curve” experiments of the 1880s, we have known -- and replicated in dozens if not hundreds of experiments -- that no matter how serious, responsible and dedicated we professors are to “covering” our “topic,” students retain and apply subsequently only what is meaningful to them. I like to call this “haunted by the 8 percent.” In experiment after experiment, if you test students with basically matched backgrounds (say, from the same educational institution and major) who took a big introductory course on a topic (say, Psychology 101) six months in the past and compare their results with those of other students who never took the course, the differential in test scores is only about 8 percent. Here’s a variation: recently, at one of the nation’s elite private prep schools, students were given, with no warning, the exact same exams one September that they had taken as final exams the previous May. The average grade on finals was about A-minus/B-plus. On the September retests: F.

      CIP_Point 5 (+link)

    1. Shaun King sees Marvel's Black Panther as a historically significant film.

      It’s going to make well over a billion dollars and may actually do so within a month. From a pure business standpoint, it is to film what Michael Jackson’s Thriller was to music.

      . . .

      There is a movement we call Afro-Futurism, where we imagine a Black way of life free of white supremacy and bigotry. Black Panther, I think, is the first blockbuster film centered in the ethos of Afro-Futurism, where the writers, and directors, and makeup and wardrobe team all imagined a beautiful, thriving Black Africa without colonialism.

      Wakanda showed us our families in one piece. No war on drugs. No mass incarceration. No KKK. No lynching. No racial profiling. No police brutality.

    1. I'd like to start out by proposing an analogy involving weapons. Weapons take on various forms and meanings throughout different mediums of use. When we think of the word "weapon" today, no doubt we think about the easily available lethal weapons of war such as the AR-15, an automatic rifle that has been the catalyst in so many mass shootings in America recently. What we may not immediately think about is how this particular weapon has been used to protect many American lives overseas.

      I wonder if you can drop this analogy to modern-day AR-15s altogether, because it does not add much to the rest of your paper. My original thought was that you could perhaps bring this analogy back at the end, right before your conclusion.

    1. Even more careful readers, we fear, will not take it to heart – or if they do, may not have much sense of what to do about it – because the learning outcomes on which it focuses seem so abstracted from the daily life of teaching. Because of this gap between general skills and disciplinary learning, each of us, while fascinated by Arum and Roksa's powerful research, has for different reasons found the results in Academically Adrift somewhat at odds with her own experiences. We feel that it is important to take this dissonance seriously, and would like to suggest that it arises from differences in where we look for student learning.

      Heiland and Rosenthal agree a gap needs to be closed between general education skills and disciplinary skills, but suggest we need to look for student learning in different places. I think experiential learning is one approach to creating learning opportunities that get both the student and the faculty engaged in assessment and the process of learning. Experiential learning within the disciplines could involve creating reflective writing assignments that require the student to examine their achievement within the discipline as well as the intersection of other general skills used in the learning activity. The authors suggest that faculty need to engage the general education skills within their own discipline through outcomes assessment efforts. I have observed faculty who have never received any training in this area or ever discussed it. Measuring the level of learning achieved by students using standardized tests seems very limiting, and this is an area ripe for additional research. I think another approach to consider is how we get students and faculty to collaboratively measure the learning that they both know is occurring. It is interesting to note that as I reflect on the past year in this program, I am experiencing the learning process, but we rarely discuss it with each other or with the faculty.

    1. Ironically, in this age, one may know much about a subject and yet know little about its ramifications. I for one know decent people who know everything about the chemistry of CFCs and nothing about the ozone layer (Nissani, 1996); everything about internal combustion engines and nothing about global warming; everything about minimum wage legislation and nothing about poverty. Compartmentalization, besides lack of education, is the enemy; an enemy that can only be conquered through holistic scholarship and education:

      I think this stresses the importance of interdisciplinarity in all forms of education. Since we are realizing this more and more, hopefully it will lead to the evolution of humanity as a whole as well.

    1. Comparing their lives with those of their parents, people acknowledge that some things will be more difficult and others easier. By 49 percent to 30 percent, they think they will have more difficulty than their parents did in finding the money to put kids through college, and by 41 percent to 32 percent they think owning a home will be more difficult. Conversely, however, by 47 percent to 14 percent, they believe it will be easier for them to be in good physical health. Of equal importance, by 44 percent to 16 percent they think it will be easier to have an interesting job and even easier (by 42 percent to 31 percent) to earn enough money for a good living.[18]

      These findings show that Americans today realize that they are better off in many ways than their parents were. But at the same time, they are aware that many of the new values and lifestyles threaten the family -- an institution that has come to mean more to them now that it can no longer be taken for granted.

      The Meaning of Success

      One of the most sweeping changes in postwar American cultural values relates to the meaning of success. In the 1950s Americans shared a certain definition. Success meant getting married, raising children who would be better off than oneself, owning a home and an automobile, and working one's way up the ladder of social mobility. The trappings of success were largely external and material, a matter of keeping up with the Joneses.

      When in November 1962 Gallup queried cross-sections of the public on the "formula for success in today's America," two answers dominated all others: get a good education (50 percent) and work hard (31 percent), followed by honesty and integrity (17 percent). Only 6 percent mentioned having a job that one enjoys doing, and a paltry 3 percent cited the importance of self-confidence and self-esteem.

      Then came the 1960s campus rebellion and its iconoclastic attitudes toward the 1950s. For millions of college students the shared meaning of success shifted from owning a Cadillac to fulfilling one's unique inner potential.

      In the 1970s the majority of Americans were intrigued but still unconvinced of the virtues of the new outlook. A January 1971 Harris poll asked people, "Do you think the desire on the part of many young people to . . . turn their backs on economic gain and success . . . is a healthy or unhealthy thing?" Only 13 percent thought it healthy. In mid-decade (April 1974) the leading responses to a Gallup question about "what really matters in your own life" were a decent and better standard of living (31 percent), good health (25 percent), adequate opportunities for one's children (23 percent), a happy marriage and family life (15 percent), and owning one's own home (12 percent) -- all traditional, fifties-like conceptions.

      By the 1980s, however, most Americans were attempting to graft the new values onto the old ones. In response to a May 1983 Gallup question about which factors are most important for "personal success in America today," the top-ranking ones were good health (58 percent), a job one enjoys (49 percent), a happy family (45 percent), a good education (39 percent), peace of mind (35 percent), and good friends (25 percent). Note that the traditional emphasis on education and family remains an important part of the definition of success, but newer values such as having a job one enjoys, peace of mind, and good friends have now been elevated to a status equal to or higher than the traditional ones.
      It is characteristic of the 1980s that people wanted material well-being and the new forms of inner fulfillment extolled in the 1960s and 1970s.[19]

      Now in the 1990s we are witnessing a further evolution of the shared meaning of success. Increasingly, Americans are coming to think of success as self-defined rather than as conformity to the expectations of others. Over a five-year period, the DYG SCAN(SM) has measured a significant increase in the ratio of those embracing a conception of success as self-defined. By 1987 it had already reached a 5:1 ratio (63 percent to 12 percent), and by 1991 it had grown to more than a 7:1 ratio (68 percent to 9 percent).[20] There are some modest demographic differences, mainly related to age and income. People between 40 and 60 with higher incomes (above $50,000 a year) lean slightly more often toward the self-defined measure of

      I agree