- Oct 2017
-
www.theguardian.com
-
Rosenstein, who also helped create Gchat during a stint at Google, and now leads a San Francisco-based company that improves office productivity, appears most concerned about the psychological effects on people who, research shows, touch, swipe or tap their phone 2,617 times a day.
The problem seems to be this: we have great minds focused on efficiency, but no social engineering to help us turn the time that efficiency frees up into happiness. In that void the attention economy has sprung up - a product of good intentions that most now view as socially detrimental.
-
-
-
Such a strategy reflects both business and academic realities, Allan said: Companies need to be able to quickly show investors that their products are being used, then follow that up by rapidly showing evidence of positive impact.
Sometimes. A few ed tech companies I know have private investors more interested in product efficacy than revenue or traction, and those are the ones that seem poised to impact the sector most effectively in the long run.
-
-
www.thoughtleadr.com
-
To be clear, his position is hypocritical at best: The Guardian itself publishes native ads, and even Garfield’s own NPR show “On the Media” is underwritten by MailChimp. What’s the difference between a sponsored radio broadcast and sponsored digital content? Should we distrust the content Garfield presents on his show?
In a world where everything we've ever done is exposed, the old 'glass houses' metaphor might no longer work. If someone makes a reasoned argument, are we really going to call the logic fallible because the person is fallible? Or because the organization they work for is fallible? Seems like puritanical thinking still at work in a world where no one was ever perfect.
-
-
www.nytimes.com
-
These menacing turns of events have been quite bewildering to the public,
Hyperbole.
-
Lately, however, the sins of Silicon Valley-led disruption have become impossible to ignore.
Or you could argue that anyone who did not see this coming had their head in the sand. Perhaps the problem is more about the business model: if investors and public markets drive Facebook to maximize ad revenue, then it's going to do that as best it can. Would we rather have a national money-making machine, or a platform for social connectedness for a fee?
-
-
www.theguardian.com
-
The label "advertising" is almost never applied. Instead they use confusing wiggle words like "sponsored content" or, even more obscurely, "from around the web". The result is not merely deceiving to readers, it bespeaks a conspiracy of deception among publishers, advertisers and their agencies.
This key issue, and the lack of public awareness of the money behind the newsfeed, is the core of this course. Speaking with family and friends, I find that very few people understand the far-reaching effects of news sites shifting to become entertainment sites without any clear announcement.
-
-
www.iab.com
-
In-Feed Sponsored Content
I think this metric is misleading. Are we comparing video ads in feeds to 'sponsored content' in feeds? The state and expectations of the viewer, and the platform they are on, have to be held constant for all ad types to be assessed fairly...
-
In-feed sponsored content is least useful for generating new brand awareness. In-feed sponsored content is most useful for established brands that seek to enhance and differentiate their image, deepen existing consumer relationships, or launch brand extensions.
As a marketer, that's pretty interesting. Then how should a new brand establish cred?
-
Key Takeaways: Despite demographic and content differences, business and entertainment news users are highly receptive to in-feed sponsored content if it is relevant, authoritative and trustworthy.
Relevant I get. Authoritative and trustworthy are subjective, easily manipulated aspects of sponsored content.
-
-
www.washingtonpost.com
-
We found that participants who multitasked on a laptop during a lecture scored lower on a test compared to those who did not multitask, and participants who were in direct view of a multitasking peer scored lower on a test compared to those who were not. The results demonstrate that multitasking on a laptop poses a significant distraction to both users and fellow students and can be detrimental to comprehension of lecture content.
Not sure if anyone else has noticed, but observing laptop use in class this week, I saw a marked decrease in participation with the teachers. The conversation was way less vibrant; at least half the class, more like 75%, were multitasking on their own devices. Compare this with the first week of class, when everyone was paying attention, and you can see a scene where individual distraction hurts everyone's experience and adds to others' distraction.
-
Beeps and pings and pop-ups and icons,
And as I read this, an ad pops up, with a beautiful picture of nature. Hilarious, but horrible.
-
-
www.chronicle.com
-
She managed to dial down that Facebook addiction, but she remained an obsessive e-mail checker—until Mr. Levy's class started to change her habits. It began with an assignment that required students to spend 15 minutes to half an hour each day observing and logging their e-mail behavior. The idea, an outgrowth of meditation, is to note what happens in the mind and body.
I'm surprised that anyone denies this is happening. Plenty of research discusses distraction and its deleterious effects on the human brain. Everyone in our generation should accept that we need to exercise mental control against the myriad ad algorithms vying for our attention and contributing to our mentally unfocused baseline state.
-
Never-Betters and the Better-Nevers. Those camps duke it out over whether the Internet will unleash vast reservoirs of human potential (Clay Shirky) or destroy our capacity for concentration and contemplation (Nicholas Carr).
Clearly it will do both, for different people.
-
-
blogs.britannica.com
-
But here’s the thing: it’s not just Carr’s friend, and it’s not just because of the web—no one reads War and Peace. It’s too long, and not so interesting.
Whatever. I love that book. The Russian authors build an unprecedented level of empathy by drawing out sustained storylines.
-
-
journals.sagepub.com
-
We show that whereas taking more notes can be beneficial, laptop note takers’ tendency to transcribe lectures verbatim rather than processing information and reframing it in their own words is detrimental to learning.
Karin Forsell agrees, as does neuroscientist Bruce McCandliss. Writing by hand activates different areas of the brain...
-
-
www.theatlantic.com
-
I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy.
The brain, and focused attention, are like any muscle - use them or lose them. It's not an algorithm's fault that people don't have self-control. Educating people about the danger of distraction is good, but you can't force self-control.
-
-
search.proquest.com
-
Wikipedia took the idea of peer review and applied it to volunteers on a global scale, becoming the most important English reference work in less than 10 years. Yet the cumulative time devoted to creating Wikipedia, something like 100 million hours of human thought, is expended by Americans every weekend, just watching ads. It only takes a fractional shift in the direction of participation to create remarkable new educational resources.
Super sad to see this platform being gamed by bots, algorithms, and professional profile managers.
-
-
www.chronicle.com
-
The path forward is to learn more about our vulnerabilities and design around them. To do that, we have to clarify our purpose. In education, learning is the focus, and we know that multitasking is not helpful. So it’s up to us to actively choose unitasking.
But how to achieve long-duration unitasking while keeping brain arousal high?
-
Conversations became more relaxed and cohesive.
In Gazzaley's book 'The Distracted Mind', he basically proves that just the presence of a phone keeps students from being present and cognitively focused in class.
-
-
www.science.org
-
into interconnected systems that remember less by knowing information than by knowing where the information can be found.
This is all good - sharing brain space with a computer is exactly what we envisioned. But the cognitive implications of someone else controlling whether we have access, or whether we might lose this brain space, are scary.
-
it appears that believing that one won’t have access to the information in the future enhances memory for the information itself, whereas believing the information was saved externally enhances memory for the fact that the information could be accessed, at least in general.
Theoretically the human brain should be able to remember long-form information, like the Iliad, as well now as the ancient Greeks did. This seems to show that it's not simply a lack of will standing in our way; it's the expectation of information availability.
-
-
www.theverge.com
-
The same experts frequently point out that artificial intelligence poses many genuine threats that already affect us today. These include how the technology can amplify racist and sexist prejudices; how it could upend society by putting millions out of jobs; how it is set to increase inequality; and how it will be used as tool of control by authoritarian governments.
Great points
-
Someone then posted a write-up of Zuckerberg’s Q&A on Twitter and tagged Musk, who jumped into the conversation with the comment below. Musk also linked approvingly to an article on the threat of superintelligent AI by Tim Urban. (The article covers much of the same ground as Nick Bostrom’s influential book Superintelligence: Paths, Dangers, Strategies. Both discuss a number of ways contemporary AI could develop into super-intelligence, including through exponential growth in computing power — something Musk later tweeted about.)
As highlighted in the other articles - these people are, respectively, wildly on the 'Singularity Negative' side of the argument (Bostrom) and under Musk's influence (Urban).
-
“People who are arguing for slowing down the process of building AI, I find that really questionable,” Zuckerberg concludes. “If you’re arguing against AI you’re arguing against safer cars that aren’t going to have accidents.”
Zuckerberg is mixing issues in a way that distorts them. Slowing down the process of building AI safely has zero to little impact on self-driving car technology, which is largely already developed. AGI is not needed for self-driving cars. Frankly, his lumping the two together is clearly manipulative and self-serving. I congratulate his PR team on a job well done, though.
-
I just, I don't understand it. It's really negative and in some ways I think it is pretty irresponsible.”
Zuckerberg is making billions each year from narrow AI... so in terms of financials, he has an incentive to let development continue unchecked.
https://techcrunch.com/2017/02/01/facebook-q4-2016-earnings/
-
Twitter: “Sigh.”
From Wired article: Pedro Domingos, a professor who works on machine learning at the University of Washington, summed up his response to Musk’s talk on Twitter with a single word: Sigh. “Many of us have tried to educate him and others like him about real vs. imaginary dangers of AI, but apparently none of it has made a dent,” Domingos says. America’s governmental chief executives would be better advised to consider the negative effects of today’s limited AI, such as how it is giving disproportionate market power to a few large tech companies, he says. Iyad Rahwan, who works on matters of AI and society at MIT, agrees. Rather than worrying about trading bots eventually becoming smart enough to start wars as an investment strategy, we should consider how humans might today use dumb bots to spread misinformation online, he says.
-
Later, Domingos expanded on this in an interview with Wired, saying: “Many of us have tried to educate [Musk] and others like him about real vs. imaginary dangers of AI, but apparently none of it has made a dent.”
Article he mentions: https://www.wired.com/story/elon-forget-killer-robots-focus-on-the-real-ai-problems/
About Pedro Domingos: a researcher at the University of Washington and author of 'The Master Algorithm' - a known expert. To me, this also means, however, that he profits from having his name in this debate. I wonder how much people are using hyperbole to get their profile out there as a leader in this hot debate...
His take on this debate is that it's overblown. He outlines this in this KDnuggets article: https://www.kdnuggets.com/2017/01/domingos-ten-myths-machine-learning.html
"Machine learning will soon give rise to superhuman intelligence. From the daily news of AI’s advances, it’s easy to get the impression that computers are on the verge of seeing, speaking and reasoning as well as we do, after which they’ll quickly leave us in the dust. We’ve certainly come a long way in the first fifty years of artificial intelligence, and machine learning is the main reason for its recent successes, but we have a much longer way to go. Computers can do many narrow tasks very well, but they still have no common sense, and no one really knows how to teach it to them.
So there you have it. Machine learning is both more powerful than we often assume it to be and less. What we make of it is up to us — provided we start with an accurate understanding of it."
-
The beef (such as it is) goes back to a speech the SpaceX and Tesla CEO made to an assembly of US governors.
Reading Max Tegmark's book Life 3.0 (https://www.theguardian.com/books/2017/sep/22/life-30-max-tegmark-review), it becomes plausible that most of the agreement about AI comes from these world leaders and experts talking at small dinner parties. Why have Zuckerberg and Musk not met and discussed this? Or did they meet and disagree?
-
The war between AI and humanity may be a long way off, but the war between tech billionaire and tech billionaire is only just beginning. Today on Twitter, Elon Musk dismissed Mark Zuckerberg’s understanding of the threat posed by artificial intelligence as “limited,” after the Facebook founder disparaged comments Musk made on the subject earlier this month.
Let's discuss the technological and financial motivations of the people involved here:
Amazon, Google, Facebook, Apple, and IBM are the AI leaders according to multiple sources:
http://fortune.com/2017/02/23/artificial-intelligence-companies/
https://www.datamation.com/applications/top-20-artificial-intelligence-companies.html
Here we see a case of two billionaires with both financial and ethical interests in the subject butting heads publicly.
-
-
www.wired.com
-
In 2012, the Department of Defense set a temporary policy requiring a human to be involved in decisions to use lethal force; it was updated to be permanent in May this year.
Wired does a fairly despicable job of click-baiting with this article, titled "THE NEXT PRESIDENT WILL DECIDE THE FATE OF KILLER ROBOTS—AND THE FUTURE OF WAR"
https://www.wired.com/2016/09/next-president-will-decide-fate-killer-robots-future-war/
"But perhaps the most important decision they will make for overall human history is what to do about autonomous weapons systems (AWS), aka "killer robots." The new president will literally have no choice. It is not just that the technology is rapidly advancing, but because of a ticking time bomb buried in US policy on the issue."
I find this article sensational and misleading.
-
More than 3,000 researchers, scientists, and executives from companies including Microsoft and Google signed a 2015 letter
Spearheaded by Max Tegmark from MIT: https://futureoflife.org/ai-open-letter/
This letter is the document with the largest number of professional signatures from experts and researchers. Essentially they argue for non-militarization, saying research must be "robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today."
They also outline a list of research priorities. https://futureoflife.org/data/documents/research_priorities.pdf?x56934
Specifically, they look for AI to help with labor market forecasting, major disruption forecasting, and the development of legal and ethical frameworks and research.
-
ISIS has already started using consumer quadcopters to drop grenades on opposing forces
Not surprising, but unclear what counter-measure tech is available.
-
As time goes on, improvements in AI and related technology may also shake up balance of international power by making it easier for smaller nations and organizations to threaten big powers like the US. Nuclear weapons may be easier than ever to build, but still require resources, technologies, and expertise in relatively short supply. Code and digital data tend to get cheap, or end up spreading around for free, fast. Machine learning has become widely used and image and facial recognition now crop up in science fair projects.
The study found:
Researchers in the field of Artificial Intelligence (AI) have demonstrated significant technical progress over the past five years, much faster than was previously anticipated.
Future progress in AI has the potential to be a transformative national security technology, on a par with nuclear weapons, aircraft, computers, and biotech
Advances in AI will affect national security by driving change in three areas: military superiority, information superiority, and economic superiority.
-
The report also
Some interesting points from the report:
Lesson #1: As with prior transformative military technologies, the national security implications of AI will be revolutionary, not merely different. Governments around the world will consider, and some will enact, extraordinary policy measures in response, perhaps as radical as those considered in the early decades of nuclear weapons.
Lesson #2: The applications of AI to warfare and espionage are likely to be as irresistible as aircraft. Preventing expanded military use of AI is likely impossible.
Lesson #3: Having the largest and most advanced digital technology industry is an enormous advantage for the United States. However, the relationship between the government and some leading AI research institutions is fraught with tension.
-
edge in drones and uncrewed ground vehicles that has been crucial to the US in Iraq and Afghanistan.
Again, I was under the impression that DJI has a fast-closing monopoly on the drone market and is likely selling/trading data back to its government... need to verify this to the extent one can...
-
Tom Simonite, Business, 07.19.17, 07:00 am - AI Could
Tom Simonite is also a reporter for the MIT Tech Review: https://www.technologyreview.com/profile/tom-simonite/
http://talkingbiznews.com/1/wired-hires-two-names-streshinsky-its-executive-editor/
-
In the near-term, America’s strong public and private investment in AI should give it new ways to cement its position as the world’s leading military power, the Harvard report says.
This goes contrary to other opinions and anecdotes I have heard about how much more integrated China's approach is than ours in the USA.
-
132-page new report on the effect of artificial intelligence on national security
Written through the Harvard Kennedy School
https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf
-
-
waitbutwhy.com
-
The AI Revolution: The Road to Superintelligence
Overall credibility rating: LOW.
Why?
- He is in the market of selling subscriptions. He is not a credible scientist, expert, etc.
- He has disclosed that he is influenced by Elon Musk in writing this article. Elon Musk has financial reasons to take this position. To understand this article, we have to understand Elon's motivations.
- While he cites experts in this article, he only draws on those who believe one side of the story. All are 'singularity' believers. It's not a well-researched or fact-checked article.
-
The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing—optimistic estimates say we can do this by 2030
Another article from Ray Kurzweil: REVERSE-ENGINEERING OF HUMAN BRAIN LIKELY BY 2030, EXPERT PREDICTS
https://www.wired.com/2010/08/reverse-engineering-brain-kurzweil/
-
China’s Tianhe-2, has actually beaten that number
Reuters reports that this computer retains 'top supercomputer rank'
The TOP500 project, started in 1993, issues a list twice a year that ranks supercomputers based on their performance.
There was little change in the top 10 in the latest list and the only new entry was at number 10 – the Cray CS-Storm, developed by Cray Inc, which also developed the Titan.
The United States was home to six of the top 10 supercomputers, while China, Japan, Switzerland and Germany had one entrant each.
The United States remained the top country in terms of overall systems with 231, down from 233 in June and falling near its historical low.
The number of Chinese systems on the list also dropped to 61 from 76 in June, while Japan increased its number of systems from 30 to 32.
-
Google is currently spending billions of dollars trying to do it.
https://www.wired.com/2014/01/google-buying-way-making-brain-irrelevant/
Though Google is out in front of this AI arms race, others are moving in the same direction. Facebook, IBM, and Microsoft are doubling down on artificial intelligence too, and are snapping up fresh AI talent. According to The Information, Mark Zuckerberg and company were also trying to acquire DeepMind.
MEET THE MAN GOOGLE HIRED TO MAKE AI A REALITY https://www.wired.com/2014/01/geoffrey-hinton-deep-learning/
This guy is going to be at the Market for Intelligence conference I am attending this week... OMG
-
Google search is one large ANI brain with incredibly sophisticated methods for ranking pages and figuring out what to show you in particular. Same goes for Facebook’s Newsfeed.
I REALLY disagree with this categorization of Google Search. Google has the largest set of training data in the world. IMHO, this is a hotbed for AGI.
-
AI thinker Nick
Some good counter-points and alternative opinions from researchers, from that same New Yorker article: https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom
Last summer, Oren Etzioni, the C.E.O. of the Allen Institute for Artificial Intelligence, in Seattle, referred to the fear of machine intelligence as a “Frankenstein complex.” Another leading researcher declared, “I don’t worry about that for the same reason I don’t worry about overpopulation on Mars.” Jaron Lanier, a Microsoft researcher and tech commentator, told me that even framing the differing views as a debate was a mistake. “This is not an honest conversation,” he said. “People think it is about technology, but it is really about religion, people turning to metaphysics to cope with the human condition. They have a way of dramatizing their beliefs with an end-of-days scenario—and one does not want to criticize other people’s religions.”
-
Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”
Full article link: https://nickbostrom.com/superintelligence.html
This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieves a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience to figure out enough about how brains work to make this approach work; and how fast we can expect superintelligence to be developed once there is human-level artificial intelligence.
The New Yorker published this article on Bostrom: https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom
In it, the author notes: In moral philosophy, Peter Singer and Derek Parfit have received it as a work of importance, and distinguished physicists such as Stephen Hawking have echoed its warning. Within the high caste of Silicon Valley, Bostrom has acquired the status of a sage. Elon Musk, the C.E.O. of Tesla, promoted the book on Twitter, noting, “We need to be super careful with AI. Potentially more dangerous than nukes.” Bill Gates recommended it, too.
-
Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly
First published in the Wall Street Journal in 1994, this article was titled "Mainstream Science on Intelligence: An Editorial With 52 Signatories, History, and Bibliography"
http://www1.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf
She notes: Since the publication of “The Bell Curve,” many commentators have offered opinions about human intelligence that misstate current scientific evidence. Some conclusions dismissed in the media as discredited are actually firmly supported.
- Intelligence tests are not culturally biased against American blacks or other native-born, English-speaking peoples in the U.S. Rather, IQ scores predict equally accurately for all such Americans, regardless of race and social class. Individuals who do not understand English well can be given either a nonverbal test or one in their native language.
- The brain processes underlying intelligence are still little understood. Current research looks, for example, at speed of neural transmission, glucose (energy) uptake, and electrical activity of the brain.
-
In 1993, Vernor Vinge wrote a famous essay in
https://edoras.sdsu.edu/~vinge/misc/singularity.html
The Coming Technological Singularity: How to Survive in the Post-Human Era. Vernor Vinge, Department of Mathematical Sciences, San Diego State University.
" This article was for the VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, March 30-31, 1993."
-
futurist Ray Kurzweil calls human history’s Law of Accelerating Returns
Kurzweil, a noted author and 'futurist', wrote "The Age of Spiritual Machines: When Computers Exceed Human Intelligence" in 1999, New York, NY: Penguin Books, ISBN 0-670-88217-8
He is known to be a prominent believer in the Singularity: https://singularityhub.com/2016/03/22/technology-feels-like-its-accelerating-because-it-actually-is/
See also him explaining this in a TED Talk https://www.ted.com/talks/ray_kurzweil_on_how_technology_will_transform_us
-
but by far THE most important topic for our future
After some digging, it becomes evident that Tim Urban is highly influenced by Elon Musk. See this post Urban put up about Elon basically telling him what's important. Urban became a believer: https://waitbutwhy.com/2015/05/elon-musk-the-worlds-raddest-man.html
This is interesting: while Urban discloses his relationship with Musk, it's really not clear whether Elon put him up to writing this article about AI, and since Elon quotes it in his own social media, Elon is making Tim Urban look like an expert. Urban has a large subscriber base, especially among technologists in Silicon Valley. It begins to look like this is Elon's way of influencing the technology community, globally. https://waitbutwhy.com/2014/11/from-1-to-1000000.html
-
Tim Urban
Tim Urban was interviewed by Forbes in this article.
He does not come across as an AI expert; he sounds like a casual blogger. We need to figure out what background gives him so much gravitas writing an article like this, and why someone like Elon Musk would believe that what he says is true.
Or, maybe he ghost wrote this for Elon?
-
-
www.newyorker.com
-
Last summer, Oren Etzioni, the C.E.O. of the Allen Institute for Artificial Intelligence, in Seattle, referred to the fear of machine intelligence as a “Frankenstein complex.” Another leading researcher declared, “I don’t worry about that for the same reason I don’t worry about overpopulation on Mars.” Jaron Lanier, a Microsoft researcher and tech commentator, told me that even framing the differing views as a debate was a mistake. “This is not an honest conversation,” he said. “People think it is about technology, but it is really about religion, people turning to metaphysics to cope with the human condition. They have a way of dramatizing their beliefs with an end-of-days scenario—and one does not want to criticize other people’s religions.”
This is super worth noting
-
-
inst-fs-iad-prod.inscloudgate.net
-
Taking Bearings.
I wonder if this will be an individual process soon, or if we will subscribe to a trusted platform or algorithm to vet sites for us...?
-
-
hapgood.us
-
It’s by learning this stuff on a granular level that we form the larger understandings — when you know the difference between a fake news site and an advocacy blog, or understand how to use the Wayback Machine to pull up a deleted web page — these tools and process raise the questions that larger theories can answer.
As they say - it's a mindset. The age of investigative journalism where we can sit back and let others vet facts for us is over. Too many algorithms are playing against us.
-
But let me tell you what is about to happen. We are faced with massive information literacy problems, as shown by the complete inability of students and adults to identify fake stories, misinformation, disinformation, and other forms of spin.
It was super interesting to see how many Stanford students assumed the validity or quasi-accuracy of the daisy image. Definitely revealed how thinking needs to shift for our generation, even among those 'highly educated'.
-
-
www.aft.org
-
In fact, checklists may make students more vulnerable to scams, not less.
Back to my comment above - I get the desire to get students into an inquisitive mindset, as well as the reality that professionals cheat the system when possible using site metadata, tags, etc., but I wonder who is working on automating the collection of this information? Maybe if it were a private service, sites could/would not be able to manipulate it? Is anyone from Google Chrome working on this or able to comment?
-
and that the owner of that firm has a record of creating “official-sounding nonprofit groups” to promote information on behalf of corporate clients.8
Surely we can run a bot/script that gives us the salient facts about any site we visit - if a human can crawl the web and discover who owns the site, who cites it, and known political affiliations, can't we present that as an 'information' tab in the browser? Is this already happening somewhere? It must be automatable.
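As a rough sketch of how simple a first pass could be (my own illustration, assuming a Unix machine with the standard `whois` command installed; the parsing is deliberately naive and the fields vary by registrar):

```python
import subprocess

def site_facts(domain: str) -> dict:
    """Pull a few salient registration facts for a domain via the Unix `whois` tool."""
    raw = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
    facts = {}
    for line in raw.splitlines():
        # WHOIS output is roughly "Key: value"; keep only the fields a reader would care about.
        for key in ("Registrant Organization", "Creation Date", "Registrar"):
            if line.strip().startswith(key):
                facts[key] = line.split(":", 1)[1].strip()
    return facts

if __name__ == "__main__":
    print(site_facts("example.com"))
```

A browser 'information' tab could surface exactly this kind of output, plus inbound citations and Wayback Machine history, without the reader having to dig.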
-
civic online reasoning—the ability to evaluate digital content and reach warranted conclusions about social and political issues: (1) identifying who’s behind the information presented, (2) evaluating the evidence presented, and (3) investigating what other sources say.
Agreed - this is a really helpful framework. I just wonder how we can implement it realistically. Even if we provide something like Hypothes.is, people/special interests will begin to optimize and game that system too.
-
-
opinionator.blogs.nytimes.com
-
The worry is no longer about who controls content. It is about who controls the flow of that content.
I'll add a third idea: it's about the volume of information, or the 'noise in the machine'. There is a point at which people just switch off if they are being bombarded with too much political ranting. If 90% of this is entertainment-oriented political news and only 10% is informed, expert opinion or journalism, you could rationally blame the hyperbolic entertainment-news for making the citizenry push mute.
-
we need to know the facts about the candidates’ records.
This is why it's highly problematic that Fox News describes itself as entertainment, has no real ethics statement I've found (so far), yet a large number of people view it as journalistic truth.
Interesting debate on this on Quora: https://www.quora.com/Is-Fox-News-registered-as-a-news-organization-with-the-FCC
-
-
www.chronicle.com
-
It is "absolutely untrue" that young people understand how the Internet works when they enroll in college, he says. "That myth is in the direct interest of education-technology companies and Silicon Valley itself.
And even the educated don't really understand what's happening all the time - Ghostery being a prime example... Understanding which sites have your profile, and your data, is a new form of literacy. Most tech users don't even realize that sites claiming to protect their information or their inbox are crawling and selling their personal data right back to advertisers.
-
she had no idea that the professor already knew of her affinity for pink cars and Olive Garden breadsticks—
Not sure about you guys, but I definitely google everyone I meet, especially bosses, professors, etc.
-
They usually emphasize enhanced privacy settings on social-media accounts and scary case studies of career-ending YouTube videos.
I think this is a good example of an edge case on social media, though. Most younger 'natives' seem to genuinely care less about behavior the older generation finds shocking, so the line is moving. The 'so what' seems to be, by and large, pretty minimal...
-
-
www.theguardian.com
-
Fred Sanger, who published very little in the two decades between his 1958 and 1980 Nobel prizes, may well have found himself out of a job.
Definitely calls into question the value and role of unbiased research, who pays for it, and who gets access to it.
-
and scientists, knowing exactly what kind of work gets published, align their submissions accordingly.
Most public/private good initiatives suffer from some amount of tension. If there is a model for tenure that is in the public good, why haven't we transitioned more? Also - self-publishing must impact this dynamic somewhat.
-
-
www.nybooks.com
-
Palantir,
'Save the Shire' is such an ironic slogan for them...
-
(According to The New York Times, Cambridge Analytica was advising the Trump campaign.)
But doubtful that they signed a noncompete or exclusivity agreement, so all the data is for sale to anyone who pays.
-
“Although the group did not build the algorithm to treat light skin as a sign of beauty,” Sam Levin wrote in The Guardian, “the input data effectively led the robot judges to reach that conclusion.”
There should be a way to deal with this input data issue. Sounds like an oversight.
-
Someone at the Cambridge Psychometrics Centre decided that people who read The New York Review of Books are feminine and people who read tech blogs are masculine. This is not science, it is presumption. And it is baked right into the algorithm.
In fact, Facebook is building profiles on skewed data, and news is becoming even more extreme and biasing readers on their platform...
-
That I am interested in the categories of “farm, money, the Republican Party, happiness, gummy candy, and flight attendants” based on what Facebook says I do on Facebook itself.
Re the news about the automatically generated 'Jew hater' ad category...
-
The company also buys personal information from some of the five thousand data brokers worldwide, who collect information from store loyalty cards, warranties, pharmacy records, pay stubs, and some of the ten million public data sets available for harvest.
While not surprised, I did not know this... and frankly I expected that Facebook operated in a vacuum with the data I've given it. The cross-data implications are pretty intense.
-
-
www.tandfonline.com
-
Wisdom,
wis·dom /ˈwizdəm/, noun: the quality of having experience, knowledge, and good judgment; the quality of being wise. Synonyms: sagacity, intelligence, sense, common sense, shrewdness, astuteness, smartness, judiciousness, judgment, prudence, circumspection; logic, rationale, rationality, soundness, advisability. "We questioned the wisdom of the decision."
-
-
www.nytimes.com
-
In urging the United States Supreme Court not to hear the case, Wisconsin’s attorney general, Brad D. Schimel, seemed to acknowledge that the questions in the case were substantial ones. But he said the justices should not move too fast.
Takes us back to court evidence standards - if a defendant cites character evidence, will this sort of data be admitted?
-
The Compas report, a prosecutor told the trial judge, showed “a high risk of violence, high risk of recidivism, high pretrial risk.” The judge agreed, telling Mr. Loomis that “you’re identified, through the Compas assessment, as an individual who is a high risk to the community.”
Read 'the Psychopath Inside' by Fallon (2013). Not all criminal tendencies lead to crime. Many can be sculpted into high-functioning individuals.
https://www.amazon.com/Psychopath-Inside-Neuroscientists-Personal-Journey/dp/1591846005
-
-
www.nature.com
-
utka in a News & Views (article no. 0033).
End of radiology as a profession?!
-
which are determined a priori by training the network with known input and output volumes.
training often happens iteratively, not always a priori...
-
recognize objects in pictures and discriminate subtle differences in otherwise similar patterns.
See Silicon Valley on HBO - hot dog computer vision scenario.
-
-
royalsociety.org
-
y as possible.
Personally I like the puppy/muffin challenge more
-
Using these algorithms, computers can figure out what to learn in order to solve a problem, given lots and lots of examples. They can detect patterns in data, and make decisions or predictions on the basis of this
Caveat being with varying degrees of accuracy and intelligence.
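To make that caveat concrete, here is a minimal sketch (my own illustration using scikit-learn, not from the Royal Society piece): a model is fit on many labelled examples, detects patterns, makes predictions, and its held-out accuracy is measurably imperfect.

```python
# Minimal illustration of "given lots of examples, detect patterns, make predictions" -
# with accuracy reported explicitly, since it is never perfect.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                              # lots and lots of labelled examples
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # "detect patterns in data"
print("held-out accuracy:", model.score(X_test, y_test))         # varying degrees of accuracy
```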
-
This means they can analyse all of this data very quickly.
Sure, from a purely computational perspective. Still need humans to direct analysis and make sense of the data...so this is kind of misleading.
-
IBM estimates that 90% of all the world's data has been created in the last two years
Not just IBM - Google, McKinsey, etc. also cite similar stats. I'll try to find some.
-
-
learnweb.harvard.edu (Gardner, H.)
-
"multiple windows leading into the same room.”
This is exactly what Bruce/Adam seem to be discussing in terms of interventions and learning styles.
-
Similarly, emphasis on such capacities as taking into account the feelings of others, being able to plan one's own life in a reflective manner, or being able to find one's way around an unfamiliar terrain are likely to result in an emphasis on the cultivation of interpersonal, intrapersonal, and spatial intelligences respectively
Ask Karin - how is this incorporated in schools now?
-
the use of a "multiple intelligences curriculum" in order to facilitate communication between youngsters drawn from different cultures or the conveying of pivotal principles in biology or social studies through a dramatic performance designed and staged by students
Wonder how broadly this is being discussed/applied.
-
set of "talents" in the linguistic and/or logical-mathematical spheres
How is this different from style?!
-
genetic/environmental
I feel like several teachers have been skirting around the (political) issue of genetics, and intelligence. Would be good to understand the nuances of how people talk about it, at Stanford and in other cultures.
-
the relation between my concept of intelligence and the various conceptions of style needs to be worked out empirically, on a style-by-style basis.
I really disagree, initially. An intelligence can be a mix of styles, which add up to a general style...His definition makes style sound un-dynamic, but 'style' and tendency might be the best indication of underlying values that drive intelligence and cognition.
-
lesser extent as a consequence of the experiential, cultural, and motivational factors that affect a person.
So are we building AI with this fundamental idea in mind? Are we building potential? I argue not; we're thinking about it as an outline to be filled in, a domain to be mapped, not the proverbial 'lighting of a fire' as Yeats expressed.
-
An intelligence is a new kind of construct, and it should not be confused with a domain or a discipline.
Never more a propos than when one considers constructing AI.
-
But when it is necessary or advisable to assess an individual's intelligences, it is best to do so in a comfortable setting- with materials (and cultural roles) that are familiar to that individual.
Super - this resonates. The message of existing exams is 'we're trying to test how you deal with stress'. If a test is really about intelligence, the person has to be in a comfortable, or normal, state in order to get a true read on the metrics sought.
-
As such, it becomes crucial that intelligences be assessed in ways that are "intelligent-fair" that is, in ways that examine the intelligence directly rather than through the lens of linguistic or logical intelligence (as ordinary paper-and-pencil tests do)
Another example of the medium is the message. The construct of the test is the message about value. He's breaking this down/reorienting it.
-
MI theory represents a critique of "psychometrics-as-usual." A battery of MI tests is inconsistent with the major tenets of the theory.
Clearly need to read the original work to understand this...
-
The commerce between theory and practice has been ready, continuous, and, for the most part, productive
I have to imagine this is fairly unique
-
A silence of a decade’s length is sometimes a good idea.
Word.
-
-
stateof.creativecommons.org
-
The British Museum releases 128 models to Sketchfab, providing greater access and interaction with the museum’s 3D collection than ever before.
Wonder if we can use some of this for the Junior Museum and Zoo mobile project...
-
Due to the reach of Geonet, there is increasing information on a variety of safety protocols like where one must move to avoid tsunamis and advice about what size after-shock to expect.
Very true. A number of friends and family have used this resource during/directly after earthquakes.
-
-
www.benkler.org
-
very high positive externalities.
But billions of people can't access the internet, whereas anyone can walk a road. Are we creating a dual-track economy/world...?
-
Internet:
How is this supposed to work vis a vis undesirable common asset usage, like distribution of pornography, hacking, and public monitoring by governments?
-
ystems embodied local knowledge,
So the system is superior only if there is expert knowledge?
-
open access commons are tragic:
" users acting independently according to their own self-interest behave contrary to the common good of all users by depleting or spoiling that resource through their collective action."
Lloyd, William Forster (1833). Two lectures on the checks to population. England: Oxford University. Retrieved 2016-03-13.
-
-
fairuse.stanford.edu
-
April 11, 2017 Stanford Copyright & Fair Use – Key Overview Updates By Mary Minow. Attorney at law, Nolo Legal Editor, Blogger — Dear Rich: Nolo’s Patent, Copyright and Trademark Blog, Author, Nolo Q: Thank you for updating the copyright overview on this site. What are the most important changes that you want us to know? A: Because the update reflects changes from 2014 through 2016 it includes a few decisions that readers may be familiar with such as the Google book scanning decision (Author’s Guild v. Hathitrust, discussed below), the sequel rights to Catcher in the Rye, (Salinger v. Colting), the use of news – including business news and video clips – for transformative purposes (Swatch Grp. Mgmt. Servs. Ltd. and Fox News v. TVEYES, Inc.), the use of pop culture references (the “Who’s on First” comedy routine) within a play (Fox News v. TVEYES, Inc), and the ability to parody a popular movie (Point Break).
Does fair use extend to material that is itself unauthorized? In this case, the court said yes, extending and further liberalizing the standard. The author of “Point Break Live!” filed suit for copyright infringement and breach of contract, and the court held that "if the creator of an unauthorized work stays within the bounds of fair use and adds sufficient originality, she may claim protection under the Copyright Act, 17 U.S.C. 103, for her original contributions." (http://fairuse.stanford.edu/case/keeling-v-hars/)
-
transformative purposes
TVEyes is a media-monitoring service. In 2013, Fox News sued for copyright infringement, noting TVEyes was a commercial, on-demand service and, as such, should not have protection under the fair use doctrine. TVEyes argued their product is more like a searchable database. A judge agreed with TVEyes in 2014, saying turning television broadcasts into a searchable database was a transformative fair use of the footage [1]. But the same judge ruled in 2015 that protection did not extend to allowing users to download or email the clips.
[1] https://dockets.justia.com/docket/new-york/nysdce/1:2013cv05315/415525/
-
(Salinger v. Colting),
The issue with fair use in the case of Salinger v. Colting is what effects injunctions have on free speech and public interest. In this case, a group of universities issued an amicus brief after a preliminary injunction was put in place banning the publication of Colting’s book, arguing that injunctions must be closely considered, not automatically issued. They argued that the plaintiff would have to suffer irreparable harm to warrant an injunction. (http://cyberlaw.stanford.edu/our-work/cases/salinger-v-colting-et-al)
-
other good news for academics was the ruling in Author’s Guild v. Hathitrust. Most of your readers are probably aware of this case, in which the Second Circuit ruled that digital scans of a book constituted a fair use when used for two purposes: a full-text search engine, and electronic access for disabled patrons who could not read the print versions
In Authors Guild v. HathiTrust, a group of authors claimed that the HathiTrust Digital Library of scanned books infringed their copyrights. A federal court, and subsequently the Second Circuit, called this fair use for accessibility and search [1]. Fair use in this case was debatable: if HathiTrust were deemed a library, it would be governed by 17 U.S.C. 108 and could not claim a fair use defense [2].
The court ruled that the special rights granted to libraries in §108 are in addition to fair use rights, and that there are four independent factors to address in any fair use evaluation: whether the purpose of the use is commercial in nature or for nonprofit educational purposes; the nature of the copyrighted work; the amount of content used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work [2].
[1] Authors Guild v. HathiTrust (2d Cir. June 10, 2014). LINK [2] https://www.law.cornell.edu/uscode/text/17/108
-
That seems unlikely based on the Eleventh Circuit rejection of the “10% rule” in Cambridge University Press v. Patton. The District court had allowed copying of 10% of a work as recommended by the Code of Best Practices, a set of fair use guidelines established by a group of publishers and academics. But the Eleventh Circuit rejected that standard and instead emphasized the importance of a flexible case-by-case fair use analysis. The good news for the academics was that on remand the majority of copying at issue was permitted under fair use
In Cambridge University Press v. Patton, Cambridge University Press sued Georgia State University for distributing content through its e-reserves system, though Georgia State asserted that its uses were fair use, a legal concept that allows for distribution of copyrighted content in the interest of the wider public good. Interestingly, the litigation was almost 50% funded by the Copyright Clearance Center, a licensing company with a clear private, monetary interest in the outcome. (Andrew Albanese, "Publishers Appeal 'Flawed' Decision in GSU E-Reserves Case", Publishers Weekly, 11 September 2012.)
-
-
www.nytimes.com
-
To repeat, The Times is a subscription-first business; it is not trying to maximize pageviews.
I think there is a persistent market for this. The Information is a good example of innovation in this model.
-
That must change.
Harnessing user generated content and pairing it with respected journalism will be very cool.
-
The Times has an unparalleled reputation for excellence in visual journalism.
Wonder if they will acquire Nat Geo.
-
employer of choice for top journalists
Opinion, vs. reporting of facts, will become the differentiated news in a world where NLP drives automated reporting of current events.
-
between mission and tradition:
I'll point out that Fox News has the most lightweight ethics and information standards statement I've seen, after assessing NPR, the NYTimes, the WSJ, and CNN.
-
-
www.pewresearch.org
-
Facebook, is now a common news source
Interesting to consider different attitudes to news based on social platform. I'll argue that Twitter and LinkedIn are more likely to be viewed as news sources, ergo their content is taken as such, with associated ads and trust in verified sources like the NYTimes and NPR accounts. Facebook blurs the line: does one really 'friend' the NYTimes? Are Facebook users more intellectually vulnerable because they still consider Facebook a safe place?
-
-
www.pewresearch.org
-
This growth hasn’t necessarily cannibalized the audience for traditional radio, however; 91% of those ages 12 and older listened to terrestrial radio in the past month.
Interesting - so readers are becoming listeners. Predict podcasts will be inundated with ads next.
-
local television news revenue is relatively steady at $18.6 billion – at least for now
Which astounds me. I'm not their target market clearly...
-
- Sep 2017
-
we.riseup.net
-
“He understood the grammar of gun-powder.”
Lao Tsu says "the master wins the battle before it begins" - if you explore the playing field (medium), the battle (content) becomes arbitrary.
-
-
drive.google.com
-
Good definition, but I'll argue we need both kinds of inquiry for a healthy society.
-
Can someone opine on what he means by provocation?
-
Don't all societies cycle back into this over time, in a variety of ways? Making it interesting to ask "where are we in our cultural cycle, and where have societies historically gone next?"
-
-
drive.google.com
-
We are also designers of assessments
Is this kind of GED/golden thread/known system stuff?
-
To begin with the end in mind means to start with a clear understanding of your destination.
But then you don't get innovation...
-
-
www.marxists.org
-
pragmatists
prag·ma·tism /ˈpraɡməˌtizəm/, noun: 1. a pragmatic attitude or policy. "ideology was tempered with pragmatism" 2. PHILOSOPHY: an approach that assesses the truth of meaning of theories or beliefs in terms of the success of their practical application.
-
-
uhra.herts.ac.uk
-
but browsing through the web, the problem caused by the dephysicalization and typification of individuals as unique and irreplaceable entities starts eroding our sense of personal identity as well.
As evidenced in dating-app culture
-
artificial, synthetic or engineered
Re SuperIntelligence - is AI destruction homicide?
-
We do not know whether we may be the only intelligent form of life.
Fermi Paradox...
-
And following Sigmund Freud (1856–1939), we acknowledge nowadays that the mind is also unconscious and subject to the defence mechanism of repression.
Worth discussing how this is extrovert, vis a vis the other examples.
-
The lack of balance is obvious and a matter of daily experience in the life of millions of citizens.7
Kind of a bold statement without facts to back it up...
-
Pirolli [2007])
-
(ab urbe condita)
a Latin phrase meaning "from the founding of the City (Rome)", https://en.wikipedia.org/wiki/Ab_urbe_condita
-
metaphysics of the infosphere
Reminds me of the metasphere from Hyperion.
-
-
pollev.com
-
Television
LOL
-