189 Matching Annotations
  1. Apr 2019
    1. The alternative to "growth" is not stasis or passivity, but a growing aliveness to the actual change that you're undergoing in the process of generating responses to specific life challenges.

      Growth as controlled development, vs aliveness as uncontrolled development

  2. Mar 2019
    1. With Rockchip, the deal was partly to enable distribution into China.

      Intel usually manufactures its own processors and acts as its own supplier, without going through third-party integrators that incorporate its chips. But recently it signed a deal with Rockchip to supply it with Atom processors, letting Rockchip take care of integrating Atom with the rest of the Chinese hardware ecosystem.

    2. Multi-band, RF switching and roaming – you’re actually building something high-end.

      The high-end 3G/LTE chips have native capability to support multiple communications bands, and even do seamless switching.

    3. started SKU-ing our RF engine

      A complex chip that can juggle multiple bands may be overkill in countries without such a complex telecommunications infrastructure. So companies like Intel may create simpler, lower-end chips that are cheaper to manufacture—each separate chip design is usually called an SKU (Stock-Keeping Unit).

    4. At the same time we were also going to the next process node, so we had to make choices.

      Intel’s manufacturing division does not have the manpower to migrate both Atom and Core processor lines to 14nm, so they had to choose one to work on first.

      The one that is worked on later may have a delayed launch, if the migration process drags on. So choices have to be made.

    5. we are having to optimize and pay attention to the ratio of downlink versus uplink.

      Communications hardware used to be designed with a greater focus on download throughput than upload (typically with a ratio of 10:1). But these days, with more and more sharing going on, upload throughput is starting to rise in importance, and hardware manufacturers have to adapt to these new needs.

    6. you might still be behind with each cadence

      There is a cycle to new processor and hardware releases, typically a yearly one. Intel can release better and better products but so can its competitors, so there is the risk that each new product may not grab more market share.

    7. If you look at where we were four years ago, we basically had Wi-Fi and WiMAX in total.

      4 years ago Intel literally had no presence in 3G/LTE hardware. Their maiden entry into the market was with their acquisition of Infineon.

    8. Taking the entire portfolio and moving it to 14nm is also pretty aggressive.

      Not only does Intel support a variety of communications protocols with their XMM series of chips, but each time they move to a new manufacturing process they port the entire portfolio too! That is a lot of work, and it brings power efficiency improvements as well as lower cost (due to the smaller silicon wafer space needed for each chip).

    9. we will be able to do things that others cannot easily do,

      Such as create single-chip solutions, or fully-digital solutions—right now most communications chips still have some analog parts, which consume quite a lot of power.

    10. We have to hit it on 14 nanometers.

      The Atom x3 line is Intel’s budget line, competing directly with ARM-based competitors like Qualcomm and Samsung. If they can manufacture on 14nm first they will enjoy power efficiency advantages (better performance at the same power, or better battery life and lower temperatures at the same performance), perhaps gaining market dominance for up to a year before the rest catch up.

      Intel has the advantage of speed because they are well-integrated vertically; the design and manufacturing departments work closely together and can sort out issues faster than most other hardware partners can. Other firms have to work with incumbent manufacturing giants TSMC, Samsung, and GlobalFoundries.

    11. The current Atom x3 SoFIA line are currently listed with ARM Mali graphics, with the x5 and x7 line featuring Intel’s own Generation 8 graphics.

      Intel has its own graphics chip, currently just known as "Intel HD graphics" at the lower end, and "Iris/Iris Pro graphics" at the higher end. But none of these are being used in the x3 Atom.

    12. IoT things

      “Internet of Things”, basically the idea that every appliance and device in your house, car, etc will be connected to the internet.

    13. it is very easy to build one device and a hit but wireless is about pipelining

      Intel doesn’t just want to score a one-hit wonder, they want to master the ecosystem and entrench themselves somewhere in the hardware integration pipeline that lets them benefit year after year.

    14. reference platforms

      These are complete platforms, basically something like an unbranded tablet or smartphone, that hardware manufacturers can point to and say “this is what we have in mind for our hardware”. E.g. Nvidia Tegra Note

    15. for the developers that require SDKs

      Hardware manufacturers usually release software development kits (SDKs) with pre-programmed examples of how to write code for their hardware, to make software developers’ jobs easier.

    16. Bluetooth Low Energy (LE)

      A.k.a. Bluetooth 4.0, which introduced a low-power mode that greatly extends battery life. The throughput of BLE is not very high, but it is enough for devices where battery life is more critical, such as Bluetooth headsets, keyboards, etc.

    17. P2P

      peer-to-peer

    18. FUD
    19. peer-to-peer

      transferring data directly from device to device, without going through the cloud, or a router.

    20. Two of the main battlegrounds for designing SoCs are performance and power.

      The basic sliding scale for hardware: something more powerful generally uses more power and hence runs hotter and has lower battery life. A successful processor has to strike a balance between the two.

    1. system-on-chip (SoC)

      A chip that contains computational cores, memory management, and peripheral management hardware all on the same chip. A decade ago these specialised functions were divided among separate chips: the CPU contained mainly computational cores, while memory and peripheral management were offloaded to controller chips on the motherboard.

    2. data analysis

      i.e. market analysis

    3. every square millimeter costs you 20 cents

      the cost of a silicon wafer is fixed, so the more functional processors a manufacturer can make with each wafer (by shrinking the processor design), the cheaper each chip will be in terms of material cost.
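      The arithmetic behind that quote can be sketched in a few lines. This is a minimal illustration, assuming a constant $0.20/mm² wafer cost as quoted; the die areas are made-up illustrative numbers, not real Atom die sizes, and yield losses are ignored.

```python
COST_PER_MM2 = 0.20  # dollars of wafer cost per square millimetre (quoted figure)

def die_cost(area_mm2):
    """Material cost of one die, ignoring yield losses and packaging."""
    return area_mm2 * COST_PER_MM2

original = die_cost(100)  # a hypothetical 100 mm^2 design
shrunk = die_cost(60)     # the same design after a process shrink
print(round(original, 2), round(shrunk, 2))  # the shrink cuts material cost per chip
```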

    4. If you want to perform any non-trivial amount of computation, even just on a single input, you're going to need a design that builds on top of foundational blocks.

      Any complex product needs a hierarchical building process. C.f. Koestler, The Ghost in the Machine

    5. semiconductor behemoth like Intel

      The only other behemoth that has this kind of capability is Samsung, which designs its own Exynos chips (based on ARM microarchitecture) and also manufactures them. AMD used to have this capability but its manufacturing arm was split off, and became GlobalFoundries.

    6. not just the CPU like it used to be, where everything else in the system lies on the other end of a connected bus

      A decade ago CPUs used to be just computing cores and little else; memory and connectivity came from controller chips on the motherboard. As photolithography manufacturing processes shrunk the size of transistors, these capabilities came to be integrated into the processor chip, providing throughput and latency improvements as well as lowering power consumption.

    7. their ability to vertically integrate as much as they can

      Apple not only designs their own chips (although they are manufactured by someone else, usually Samsung), but also integrates these chips into their own products, and then does the marketing and software development for those chips, outsourcing little else (only assembly). The only other company that does this is Samsung.

    8. you have vendors that are master integrators but design none of the blocks themselves

      original equipment manufacturers, a.k.a. OEMs. Asus, Acer, Dell, HP, and most other brands you are familiar with.

    1. the blocks are basically black boxes

      A “black box” meaning that you don’t know what it looks like inside or how things are arranged, but you do know what the expected result should be for any input into the box.

      E.g. any calculator, no matter what it is made of inside (electronics, or gears, or levers), must always produce the result “2” from the input “1+1”.

    2. will design those blocks with a common interface to the outside world that's shared with other similar blocks in the same family

      This is why standardisation, and by extension standards (such as USB and Wifi), are so important.
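      The idea can be sketched in software (this is an illustrative analogy, not real silicon IP tooling; all class and method names here are invented): two different "blocks" expose the same interface, so the integrator can swap one for the other without touching the rest of the design.

```python
from abc import ABC, abstractmethod

class SerialBlock(ABC):
    """The shared interface every block in this family must expose."""
    @abstractmethod
    def transmit(self, data: bytes) -> int:
        """Send data; return the number of bytes accepted."""

class UartBlock(SerialBlock):
    def transmit(self, data: bytes) -> int:
        return len(data)            # pretend the UART accepts everything

class SpiBlock(SerialBlock):
    def transmit(self, data: bytes) -> int:
        return min(len(data), 16)   # pretend a 16-byte FIFO limit

def send(block: SerialBlock, data: bytes) -> int:
    # Integrator code depends only on the shared interface,
    # never on the internals of any particular block.
    return block.transmit(data)
```

      The integrator treats each block as a black box: only the interface contract matters, as with USB or Wi-Fi.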

    3. So the simulated software design acts for all intents and purposes like the hardware design, it just doesn't perform the same. In reality, it's many orders of magnitude slower for complex designs.

      This has close parallels to the famous brain paradox: “Can the human brain ever fully understand itself?”

      “Can a processor ever fully simulate itself?”

    1. This means that it has a higher performance ceiling than the NUC, which is limited by its small size.

      Generally, a CPU has to draw more power, and therefore dissipate more heat, to reach higher performance levels. The TDP (thermal design power) of a chip is a measure of how much heat the manufacturer expects it to dissipate while under load. The higher the TDP, the more effective a heatsink (i.e. one with greater surface area) is required.

    1. We might model people as having not just two sets of preferences, but two natures. (Recall that dichotomies are particularly gratifying ways for humans to organize the world.) Expanding from Kuran’s public-private dichotomy, the two natures might be described as on the one hand social, long-term in focus, sacredness-respecting belonging-maintaining, and with high overlap of “public,” and on the other hand, short-term, moment-to-moment preference not connected to others or to cultural expectations, which must ordinarily be kept more “private.”

      The self-transcending and self-asserting tendencies of the human holon.

      C.f. Koestler, The Ghost in the Machine

    1. The government needs to place tough restrictions on data collection and storage by businesses to limit the amount of damage in the event of a cyber breach.

      I find it hard to imagine how this could be usefully implemented. How is monitoring of data collection going to be done?

      Even simpler ideas, like the Do Not Call registry, have difficulty clamping down on businesses that breach regulations.

    2. requiring them to incorporate encryption into their products

      Simply requiring it is not sufficient, if there is any link along the entire transmission chain that provides an entry point. And the weakest entry point is usually on the client (user) side.

    3. Thirdly, the government should make it mandatory for all local websites and servers to utilize the more secure “https” prefix.

      This is probably the most feasible measure of all the ones mentioned about cybersecurity.

    4. by facilitating a more efficient waste collection regime, smart waste bins will eliminate the need for many sanitation workers.

      What does this mean? Efficient in what way? Smart in what way?

      Currently, when the adjective “smart” is prepended to a device, it just means it has sensors that can report its status remotely. If that is the case here, I fail to see how a smart bin can reduce or even eliminate the need for sanitation workers.

      If by “smart bins” the author means a humanless waste collection system, that’s a different matter altogether.

    5. Likewise, self-driving vehicles will reduce the need for many transportation workers.

      Not likely. Our information systems are not as comprehensive as we’d like to think for fully-driverless systems.

      A driverless taxi system still requires lots of informational infrastructure: a list of valid pickup points/zones, drop-off points/zones, digital speed limits, accurate navigational maps (which none of the currently available maps are), …

      Not to say it won’t happen, but I don’t see us moving forward on this anytime soon without a paradigm change in the government’s thinking about open data.

    6. If used freely in smart Singapore

      This qualifier already nullifies the rest of the sentence. If 3D printers, drones, and advanced robotics are used in future Singapore, it will be in any manner but freely.

    7. Foreign migrant workers in Singapore can simply return to their countries when they are made redundant but local Singaporean workers will be in a bind since they do not have that option.

      Example of the kind of unthinking statement that reflects lack of nuance in this article.

      The only way most foreign migrant workers will return to their countries (permanently) is if they are forced to do so, whether it’s because of structural unemployment or a clamping down on foreign labour import.

      Singaporeans are in a bind because the available jobs are not jobs they want, the jobs they want don’t come with the pay they want, and the pay they want isn’t available in the country they want. They do “have the option” to return to their country when they are “made redundant”, and it has been exercised.

    8. Singapore must be prepared to tackle this paradigm shift in labour trend.

      Seems to be a blanket statement without considering what these shifts in labour trends are, and what they imply.

      A large part of the 140 million workers are in large-scale manufacturing industries, where capital investment goes into scaling fully automated systems. That kind of investment does not make sense in Singapore at the scales we are operating at.

      In 2014, manufacturing was 18.4% of GDP, and trending down; business services was 15.8%, and trending up. (Singstat.gov.sg)

      The shift in labour trend will happen, it’s just not going to be homogeneous worldwide.

    9. an unintended consequence of a smart nation might well be the emergence of a persistent underclass and a “digital divide” between those who thrive in the new hyper-connected economy and those who lose out.

      Disingenuous. This consequence is fully intended by those who design the systems (although they would never admit to it). Bits of this attitude leak out in quotes like Eric Schmidt’s (2009) interview: “If You Have Something You Don't Want Anyone To Know, Maybe You Shouldn't Be Doing It”.

      Any designed system produces an underclass which doesn’t thrive under it, and an elite class which does. I do not see how Singapore’s authoritarian high modernist state is going to be any different.

    10. Instead of fostering social inclusiveness by bringing different communities together

      The very idea of a community is one that separates its identity from others by classification. Forcing different communities together is not inclusiveness, it’s asking for trouble.

      A nuanced treatment of the idea can be found in updated urban design textbooks.

    11. One only has to look at how much wealth was accumulated by a handful of entrepreneurs in the global IT industry to see that such a digital divide is a very real possibility.

      Accumulation of wealth by a “handful of entrepreneurs” does not hint at a divide.

      When it spreads beyond “a handful” and reaches a certain critical mass (e.g. the kind that would spark off Occupy Wall Street, which is probably closer to 10% than 1%), and when “the ladder is pulled up” preventing the non-elites from climbing up into the elite class, that is when the divide happens.

      It takes a lot more than hyper-connectivity to make that happen. One also needs a user-design zeitgeist that leaves behind those who won’t get on board with the latest updates, an education system that actively discourages its students from picking up relevant skills, and other attitudes that “pull up the ladder”. Some of these are gradually creeping in, though.

    12. Of course, the answer is not to reject technology but rather to manage it and contain its pernicious effects.

      For a nuanced treatment, see What Technology Wants, and its related idea of The Technium, by Kevin Kelly.

    13. introduce a range of financial aid and skills re-training programmes to mitigate the impact of hyper-connectivity on low-skilled workers.

      Missing the point about causes of the divide. See above. This is hardly nuanced.

    14. In theory, a cost-benefit analysis can be carried out to help establish whether a high-tech system should be incorporated into the national infrastructure.

      Some nuances to consider:

      1) Cost-benefit analyses (CBAs) are high-modernist, in that a paradigm of cost-and-benefit must be adopted before the analysis can be done. This involves a lot of subjective judgement, and naïve CBAs can be just as harmful as no CBA at all.

      2) This presupposes that “high-tech system” is A Thing™ that one can decide whether or not to adopt. (Again, one sees the pernicious influence of high modernism.) Technology is like a meme in the way it spreads; its effects are only localised, like a benign cluster of cells, until it hits a major transmission pipeline and metastasises.

      The level of technology employed by the government, in the absence of deliberate instruction, arises from what is chosen by the IT vendor (which I’ll bet is NCS), which in turn depends on what is available in the commercial packages chosen by said vendor. Commercial packages are not static code, made once and frozen for sale; they are living projects, morphing and evolving with the demands of their customers and with the zeitgeist of the era. In this way, technology makes its way into the government whether they like it or not; it is not a decision to be made.

      What the government can choose, is whether to stick to a legacy system—by which I mean systems that have fallen behind the times, and hence incur higher maintenance costs due to the lack of widely available parts—or “go along with the times” and adopt a more up-to-date system from a list of choices. I am not implying that the latter is the right choice, just that it is the natural choice for a cost-concerned government.

    15. However, due to the difficulty of assessing all costs and benefits in a precise manner, it makes sense to build in layers of failsafe and redundancy into high-tech systems even if their cost-benefit ratios appear favourable.

      It is the height of hubris to suppose that one—be one a person or organisation—could design failsafes for a (high-modernist monolithic) system and satisfy all stakeholders.

      See Crash-Only Thinking for an alternative failure-resilience model. We can’t grow/test/evolve redundancy systems unless we provide enough opportunities for it to happen.

      Helpful to keep in mind, especially as the author says later,

      Rather than aim for a perfect system that never fails, [multiple layers of failsafe] might indeed be a more realistic approach.

    16. The idea of a more resilient society

      See The Design of Crash-Only Societies for an alternative view.

    17. if the estimated cost outweighs the estimated benefits of the system,

      To the best of my knowledge, I have never seen the Singapore Government carry out a cost-benefit analysis for completely new and untested systems.

      The question to ask for any new high-tech system is not “do the estimated costs outweigh the estimated benefits”, but “has it been successful elsewhere on our kind of scale, and how can we replicate that”?

      I am not criticising this approach, just pointing out the reality of novelty-adoption. It is senseless to ask if driverless-car systems make sense for Singapore when no such system in the world has been verifiably successful yet. (The closest one to keep your eye on is probably MCity.)

    18. As key public services become fully automated, there is little doubt that Singapore will reduce its dependence on low-skilled workers.

      Demonstrates a lack of understanding of what it is low-skilled workers do, and the reasons for automation.

      Automation does not happen for its own sake; it is usually a response to a need for lower production/service cost, or higher efficacy (greater production speed, more consistent quality, and so on).

      The need for lowering the cost of key public services often leads to them being carried out less frequently, or to wage depression of low-wage workers. It is far easier for small contractors to do this than to invest in the kind of infrastructure required for full automation. This is not a mere investment of money, but more importantly of attention, mindfulness, and a willingness to adopt a new business model.

      Full-automation systems will most likely come about when a conglomerate successfully acquires the required technology, integrates it vertically, and offers the complete package as a series of contracts to be completed over a decade or two. The technology is almost there, but the level of integration with ministries (MEWR, MND, MTI, MOT) and stat boards (URA, HDB, BCA, NEA, among others) this requires means that it will spend more time clearing red-tape than actually being engineered.

      In the meantime, low-skilled workers remain the obvious choice.

    19. 3D printers, drones and advanced robotics will ultimately destroy the jobs of many service workers.

      And create more jobs, the author forgets to add. It will be a long time before these jobs are fully automated; the difficulty in automation lies not in the programming and engineering (which is actually alarmingly fast by industry norms), but in figuring out what exactly needs to be automated.

      In the meantime, new income opportunities arise from the new technology that is becoming available. Consultants, operators, designers are growing in numbers. If they act as mediators of technology that carries out service needs, do we consider them service workers? Are cleaners no longer in the service industry if they drive floor-cleaning vehicles?

    1. HC problems can only be solved by increasing the complexity of your life in a specific way: by progressively embodying responses to novelty.

      The way to keep being alive rather than merely living is to keep accepting change and novelty.

    2. Perhaps there are anti-hard problems that require negatively increasing amounts of stupidity below zero — or active anti-intelligence, rather than mere lack of intelligence — to solve.

      Problems that involve doing nothing—meditation and focus. The AI equivalent = ?

    3. Anti-intelligence in this sense is not really stupidity. Stupidity is the absence of positive intelligence, evidenced by failure to solve challenging problems as rationally as possible. Anti-intelligence is the ability to imaginatively manufacture and inflate non-problems into motives containing absurd amounts of “meaning,” and choosing to pursue them (so lack of anti-intelligence, such as an inability to find humor in a juvenile joke, would be a kind of anti-stupidity).

      Anti-intelligence: choosing not to see the sense in a situation, or rejecting the commonsense analysis of the situation.

    4. If you can get your AI anti-intelligent enough to suffer boredom, depression and precious-snowflake syndrome, then we’ll start getting somewhere.

      AI that is anti-intelligent enough to “reject the premise of the question”

    1. The “reward” associated with anticipating the experience is far more powerful than the experience itself.

      Dopamine triggered by anticipation—the neural equivalent of stock market derivatives?

    2. Our brains’ ability to trick itself into doing things it doesn’t even like can be a bit discouraging, but also presents us with a promising possibility: making new habits enjoyable not through visceral pleasure, but through finding new ways to be curious about them.

      Use framing narratives to program the brain and change its perception

    3. You can’t even understand an already existing emergent pattern by analyzing its components — because it is more than the sum of its parts, disassembling the parts will not reveal the essence, the “more.” Thus the pointlessness of all the books and websites chronicling the habits of successful people in tedious detail: success is an emergent pattern of emergent patterns, even more resistant to imitation.

      “Growing” emergent success rather than copying emerged success

    4. As any product designer will tell you, your main competition is not another new product. It is the status quo. Our existing habits are so stable that it takes an outside force — a disturbance — to destabilize the system just long enough for new solutions to establish themselves. This study reported that 36% of successful changes in behavior were associated with a move to a new place (nearly three times the rate associated with unsuccessful changes).

      Activation energy (chaos) is required to disturb stable arrangements (habits) and allow new ones.

    5. The primary risk of entrepreneurship and other free agent lifestyles is not financial or even social — it is the risk to a person’s very self-concept as someone who does what they set out to do. In entrepreneurial endeavors that depend just as much on luck and timing as intelligence and hard work, this feels like a terrible gamble. And it’s a gamble with odds you can’t improve through careful preparation and planning (in fact, too much planning will probably worsen your odds).

      People fear the implication of conceptual, existential failure more than the direct consequences of financial failure?

    6. The reason lifestyle experimentation is so risky for individuals is that, unlike a company or a product, you can’t just fail fast, walk away, and try again. There is no exit— this is your life. Self-concepts are not disposable.

      Lifestyles are vulnerable to crashing; there is no smooth exit/transition from one lifestyle to another, only the abrupt termination of one and bootstrapping of the other.

    7. Stories, like emergent systems, only move in one direction. They cannot be rolled back and played again. This irreproducibility suggests the importance of another form of psychological capital that is also highly correlated with successful behavior change: self-compassion. They are two sides to the same coin — you need self-efficacy to believe you can do it, but you equally need self-compassion to be ok when you don’t.

      Stories as paths in a non-conservative field; they are irreversible.

    8. Self-compassion aids change by removing the veil of shame and pain that keeps you from examining the causes of your mistakes (and often, leads you to indulge in the very same bad habit as a way of forgetting the pain). Self-forgiveness is the first step in fostering an invitational attitude that is open to feedback and learning, from yourself and others.

      To help people change, help them accept their mistakes rather than treating them as a source of shame.

    1. Good questions are a resource.

      “Questions are places in your mind where answers fit. If you haven’t asked the question, the answer has nowhere to go.”

      Clayton Christensen

    1. where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides. 

      What they (Google’s execs) see as beneficial for Google, and for a visible subset of the world’s population.

      What commitment will they make to exploring benefits and downsides for different stakeholders, or encouraging their participation in discourse?

    2. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies.

      What happens when those cultures and societies do not agree with Google’s goals?

    3. provide appropriate opportunities for feedback, relevant explanations

      Given what we know about machine learning, the difficulty of this may be insurmountable. But it is always worthwhile to try.

    4. We aspire to high standards of scientific excellence as we work to progress AI development

      This was not demonstrated in the way they handled James Damore’s memo.

    5. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

      Sounds like Google will still pursue military applications, but will wait until they have an appropriate story to tell.

  3. Jan 2019
    1. E. Glen Weyl Talks to Jaron Lanier About How to Live with Technology.

      Jaron Lanier is a writer, musician, and pioneering computer scientist who helped create modern virtual reality. He is also the author of several books about technology, most recently Ten Arguments for Deleting Your Social Media Accounts Right Now. E. Glen Weyl is a Principal Researcher at Microsoft Research New England, where he works on the intersection of technology and economics. He is also the co-author of Radical Markets.

    1. Let’s see how predictable the powerlessness of power posing was ahead of time. We want to find out how cortisol fluctuates throughout the day (which would affect measurement error) and how it responds to interventions (to estimate effect size).

    1. All our impact on the environment, beyond that of other species with a comparable planetary range and biomass, can be traced to our runaway brain development and the impact of the associated cognitive surplus.

    1. With an infusion of hundreds of millions of dollars from Paul Allen,4 Etzioni and his team are trying to develop a layer of common-sense reasoning to work with the existing style of neural net. The first problem they face is answering the question, What is common sense?

    1. Karl Friston’s free energy principle might be the most all-encompassing idea since Charles Darwin’s theory of natural selection. But to understand it, you need to peer inside the mind of Friston himself.

  4. Sep 2018
    1. There have been a few different experiments with putting a tail on a person in virtual reality. But the best, most rigorous work came out of Mel Slater's lab at University College London. That particular tail was a really good tail. It was a long tail. And the task was to get it to whip around in front of you to hit a target. So it was a pretty non-trivial bit of athleticism with your tail. And people can just do it.
    1. This is somewhat mythical, but he certainly carried out detailed experiments with metal balls by rolling them down sloping planks to discover the basic laws of acceleration.

      The source of the myth is Galileo's first biographer, Vincenzo Viviani. Viviani is not considered a credible source because of other errors in the biography. There is documented evidence of two Tower of Pisa experiments during Galileo's time, neither conducted by Galileo.

      highlight: https://hypothes.is/a/uUpFrroYEeiszw-CjTl8sw

      from http://www.scientus.org/Galileo-Myths.html

  5. Oct 2017
    1. It was coming together in her mind: “Bullies, Bystanders and Bravehearts.” It would be personal; there would be research; she would write, and she would talk, and she would interview people who had suffered fates worse than her own and bounced back.

      Should the narrative of science have a say in the outcome of science: the collective body of knowledge that is acknowledged as reliable?

    2. When I asked Gelman if he would ever consider meeting with Cuddy to hash out their differences, he seemed put off by the idea of trying to persuade her, in person, that there were flaws in her work.“I don’t like interpersonal conflict,” he said.
    3. She also knew what was coming, a series of events that did, in fact, transpire over time: subsequent scrutiny of other studies she had published, insulting commentary about her work on the field’s Facebook groups, disdainful headlines about the flimsiness of her research.

      How do we do replication studies without the unpleasant shaming that tends to follow when a finding doesn’t replicate?

    4. Cuddy felt as if Simmons had set them up; that they included her TED talk in the headline made it feel personal, as if they were going after her rather than the work.

      Are talks/seminars based on (unreplicated) findings fair game for mentioning?

    5. as scientists adjusted to new norms of public critique

      What were the old norms of public critique?

    6. fellow academics have subjected her research to exceptionally high levels of public scrutiny

      Exceptionally high for a psychology study? Or exceptionally high for a replication study? Or exceptionally high for a psychology study that failed to replicate?

  6. Jun 2017
    1. formally extending into cyberspace the human rights to privacy, freedom of expression and improved economic well-being

      How much of a say should citizens have in how their cybersecurity rights are to be protected?

      For instance, in Singapore, many government services require a Singpass for access. Singpass offers 2-factor authentication (2FA) by two means only: SMS and a key dongle.

      SMS 2FA is known to be insecure, but a key dongle is cumbersome to carry around (especially once other companies start implementing them as well. Banks have already done so.)

      There are better security options available, such as hardware keys. To what extent should governments be required to explore the use of all security options?

      A government that actively updates its security measures also requires a citizenry that is willing to learn and update its knowledge of cybersecurity. To what extent should citizens (of any age) be required to do so?

    2. and following in its wake may well be cybersecurity

      How would people enforce their right to cybersecurity?

      Would they sue firms or companies that fail to provide an adequate level of cybersecurity?

      Would cybersecurity be legislated, like our imports, and insecure services offered by individuals be banned under law?

    3. the actions of governments cutting off internet access

      Is this comparable to jailing? We jail criminals to ensure they do not enter the same spaces as us and harm us. Should we do that to hackers who pose a threat to our data?

    4. Those of us who have regular internet access often suffer from cyber-fatigue: We’re all simultaneously expecting our data to be hacked at any moment and feeling powerless to prevent it.

      We are responsible for securing our physical spaces by ensuring our windows are closed, doors are locked, and other openings are secured. Nonetheless, under the law, the physical space bounded within the house is considered private property, and breaking into this space is a crime—burglary—punishable by law.

      How would this extend to cyberspace, where the boundaries are not so clearly drawn? Who "owns" the data? Even if we are supposed to "own" the data, it is stored physically elsewhere and managed by someone else—to what extent are they responsible for protecting our information?

  7. Apr 2017
    1. major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more

      I am not confident that AI will help us sort that better. Even humans have a hard time deciding the gray areas; AI simply helps us to scale up such decision-making.

    2. and each person should be able to share what they want while being told they cannot share something as little as possible

      Will they be able to share what they want, how they want to? Even if it’s not as a Facebook post?

    3. that each person should see as little objectionable content as possible

      This is part of what leads to “loss of common understanding”, and what helps to build up filter bubbles.

    4. Any system will always have some mistakes

      Correction: “any single system that attempts to govern billions or trillions of entities will have lots of mistakes.”

      That thing about evolving towards a system of local governance is likely a better idea. But is Facebook comfortable with the idea of ceding control of its platform?

    5. need to evolve towards a system of personal control over our experience

      It’s the eternal tension between a filter bubble and a “public life” (in the sense of involuntary civic engagement).

    6. we need to evolve towards a system of more local governance

      Much agreed, and I hope this pursuit never ceases.

    7. With this cultural shift, our Community Standards must adapt to permit more newsworthy and historical content, even if some is objectionable. For example, an extremely violent video of someone dying would have been marked as disturbing and taken down. However, now that we use Live to capture the news and we post videos to protest violence, our standards must adapt. Similarly, a photo depicting any child nudity would have always been taken down -- and for good reason -- but we've now adapted our standards to allow historically important content like the Terror of War photo. These issues reflect a need to update our standards to meet evolving expectations from our community.

      It sounds like Facebook has suddenly realised it is a political entity, much to its chagrin, and while figuring out how to handle all the stones being thrown at it, is also starting to revel at the sheer power it has, and just how much it can influence people’s lives. With great power comes great responsibility, perhaps too great for a single man—or company—to bear.

    8. Our guiding philosophy for the Community Standards is to try to reflect the cultural norms of our community.

      I believe the communities of people on Facebook are sufficiently diverse that it is a fallacy to attempt to shoehorn all of them into a single set of cultural norms.

      It would be hegemony if Facebook decides to stop finding better ways to accommodate multiple cultures and instead forces a single set of norms on its users, excusing it as “operational scaling issues”.

    9. That means we need Community Standards that reflect our collective values for what should and should not be allowed.

      It is striking that Facebook mediates what goes into the Community Standards, whereas such things are traditionally arrived at by consensus. And there is plenty of disagreement about the way Facebook enforces its version of that consensus.

    10. we face global problems that span national boundaries.

      There are also some problems that span platform boundaries. When will Facebook facilitate sharing to other apps and open up their APIs so that we can tap on the collective resources and minds of developers worldwide?

    11. As the largest global community, Facebook can explore examples of how community governance might work at scale.

      This is a pretty scary proclamation of Zuckerberg’s dreams: to simply dominate decision-making with the sheer size of people on a platform he can influence.

    12. In general, if you become less likely to share a story after reading it, that's a good sign the headline was sensational. If you're more likely to share a story after reading it, that's often a sign of good in-depth content. We recently started reducing sensationalism in News Feed by taking this into account for pieces of content, and going forward signals like this will identify sensational publishers as well.

      Statements like this worry me. Zuckerberg just went from saying “read-share correlation is a good sign the headline was sensational” to saying “we reduce sensationalism by taking this into account” to “signals like this will identify sensational publishers” in 3 consecutive sentences.

      It sounds like a quick path to circular reasoning: “articles that people are less likely to share after reading are sensational, and sensational articles are less likely to be shared if people read them.”

    13. and we lose common understanding

      This is misleading. The “common understanding” had not existed to begin with, and thus cannot be lost. And often nuance is not something that can be stuffed into people’s minds.

    14. provide a complete range of perspectives

      How will the line between fake news and “misinformed but valuable perspectives” be drawn? Where does tolerance of falsehood in the name of subjectivity end?

    15. leading to a loss of common understanding

      If there is a powerful effect leading to loss of common understanding, it would be the Filter Bubble.

    16. even more powerful effects we must mitigate

      I.e. the effect of clickbait, especially when it leads to fake news

    17. But the past year has also shown it may fragment our shared sense of reality.

      Probably the closest that Facebook will come to admitting the part they played in the results of the 2016 election.

    18. raised $15 million

      Social infrastructure to monitor the use of funds and resources would be really handy here, too, and not just for raising money.

    19. the Facebook community

      I’m not sure there is actually some tangible community there. Perhaps many scattered communities, brought together for a single purpose at a single point in time, but calling that a community sounds like a stretch.

    20. To prevent harm, we can build social infrastructure to help our community identify problems before they happen. When someone is thinking of suicide or hurting themselves, we've built infrastructure to give their friends and community tools that could save their life. When a child goes missing, we've built infrastructure to show Amber Alerts -- and multiple children have been rescued without harm. And we've built infrastructure to work with public safety organizations around the world when we become aware of these issues. Going forward, there are even more cases where our community should be able to identify risks related to mental health, disease or crime.

      The Facebook PA system. This is actually a good idea, if it can figure out how to prevent abuse.

    21. In a world where this physical social infrastructure has been declining, we have a real opportunity to help strengthen these communities and the social fabric of our society.

      And this would require Facebook to start making active decisions about what kind of social infrastructure to build up, rather than letting the data shape where it wants to go. Are they ready to pivot?

    22. We can design these experiences not for passive consumption but for strengthening social connections.

      How does Facebook plan to do this when its metric is based around increasing user interaction with the interface, and not necessarily with each other?

    23. Today, Facebook's tools for group admins are relatively simple. We plan to build more tools to empower community leaders like Monis to run and grow their groups the way they'd like, similar to what we've done with Pages.

      What about ways for people to contest claims of representation by a Facebook group they didn't create?

    24. Going forward, we will measure Facebook's progress with groups based on meaningful groups, not groups overall.

      This is dangerous if chased as a global, persistent goal. Chasing a measure makes it no longer an accurate gauge of the thing that's being measured.

      a.k.a. Goodhart's Law

    25. Facebook stands for bringing us closer together and building a global community.

      What if the way to do so is not through Facebook? Would Facebook stand for that?

    26. Today I want to focus on the most important question of all: are we building the world we all want?

      Who gets to build the public space? The platform owners have the chance to shape it, and the affordances of the platform very well shape the kind of world that can be achieved. Is Facebook building the world they think people want, or the world they want?

    1. The people who work on News Feed aren’t making decisions that turn on fuzzy human ideas like ethics, judgment, intuition or seniority. They are concerned only with quantifiable outcomes about people’s actions on the site. That data, at Facebook, is the only real truth. And it is a particular kind of truth: The News Feed team’s ultimate mission is to figure out what users want — what they find “meaningful,” to use Cox and Zuckerberg’s preferred term — and to give them more of that.

      Facebook algorithmically prioritises user interactions with Facebook

  8. Mar 2017
    1. In general, parents are not overly worried about their young children taking public transport alone, except for the possible risk of being pushed in by the crowd when entering the train, or steep and fast escalators

      Safety concerns of parents for children

    2. Wheelchair-bound commuters travelling alone are concerned about the wheels of their wheelchairs getting caught in the platform gap, while many seniors are afraid of steep and fast-moving escalators.

      Design for differing levels of mobility

    3. “Now, because of the new model of Maxicabs, all the wheelchairs have to go up from the back. Then a lot of those terminally ill (passengers)...say only coffins go up from the back. Superstitious lah...We rather carry him up from the side and then put the wheelchair behind.”

      UX considerations: superstitions on the ground do need to be addressed as well.

    4. At the moment, diaper-changing tables are only available in wheelchair-accessible toilets and/or female toilets. It would be more user-friendly if diaper-changing tables could be incorporated in both the male and female toilets, apart from the wheelchair-accessible ones. Comparatively, Tokyo has 254 ‘multi-purpose’ toilets built in train stations that are wheelchair-accessible and have diaper-changing and baby seat amenities. In Seoul and Taipei, dedicated nursing rooms are provided at selected train stations. Onboard the trains, mothers in Seoul are able to identify a ‘baby’ icon on information panels to find out where they can alight to use a nursing room. PTC recommends LTA and public transport operators look into implementing some of the best family-friendly practices from other cities.

      Recommended family-friendly measures

    5. PTC recommends that LTA review its policy on open strollers on public buses in consultation with parents, public transport operators, and experts,
    6. “We drive different bus models everyday. The way to drive each bus is different. Scania - need to tap lightly. If you tap too hard, the passengers will fly from the back of the bus to the front. Another is double decker. Volvo will require harder tapping of the accelerator. If you are used to driving Volvo, Scania would be very hard to drive.”

      On adapting to different bus models


  9. Sep 2015
    1. more disruptions resulting from hyper-connectivity.

      A broad generalisation that lacks a nuanced understanding of information hyper-connectivity.

      These disruptions so far arose from hardware failures: power lines, track signalling, etc, none of which were the fault of hyper-connectivity. Software issues, of the sort that lead to immediate breakdowns, are generally much easier to spot, isolate/contain, and rectify.

      If anything, it was hyper-connectivity that helped alleviate some of the effects of these disruptions: people searching alternative routes on Google Maps, looking up bus arrival timings to pick a different route, SMRT informing commuters of line breakdowns, and so on.

      Much harder to spot are systemic issues—the “unknown unknowns”—which lead to sub-optimality, or even systemic breakdown under certain conditions. This is not the fault of hyper-connectivity, it is the fault of authoritarian high modernism that fails to acknowledge its shortcomings and allow other entrants to fill in its cracks.

    2. with the public still reeling from recent subway breakdowns,

      “Reeling”? Why the sensationalism? The recovery was pretty quick once the fault was isolated, and aside from the initial delays, things were back to normal rather quickly.

      That’s what I’m thinking about when I see someone claiming that “the public is still reeling”.

    3. But the danger with high-tech systems is that they are often inter-connected so much so that a malfunction in one part can potentially set off a chain reaction that leads to a system-wide shutdown.

      This happens with low-tech systems too. Just saying. It happens because of hyper-connectivity (think technology-mediated globalisation), and not because of technology per se.

    4. The third major concern of hyper-connectivity is technical malfunctions and how the general public will react to the inconvenience and disruption caused.

      Technical malfunctions are just one part of it. They are at best unmitigated, at worst amplified, by systemic malfunctions.

    5. the advent of social media and affordable mobile computing devices – though beneficial in terms of facilitating informational exchanges – has also added a level of coldness to our communications with others; and a smart nation might just make things a lot colder.

      Disingenuous. If our interactions over social media are cold, it is because we haven’t picked up on the rituals of social media (the FB-like, the emoji, the hashtag, the meme-image, appropriate replies that fit in 140 characters, etc).

      Face-to-face interactions are mediated by social rituals, and social media is no different. Interactions are enriched when participants are well-versed in these social rituals and know when and how to employ them.

    6. As people retreat into their digital worlds and take each other for granted,

      I just want to point out that people have been doing this for centuries. When our forebears didn’t have digital worlds to retreat to, they had summer villas, office work, hunting trips, all sorts of other activities to lose themselves in before they could no longer avoid the mundanity of their everyday lives.

    7. it might provide one with the excuse that personal interactions have become unnecessary.

      So the author is blaming “hyper-connectivity” because it can be used as an excuse? Well, well.

      If I don’t want to meet someone and use technology as an excuse, does that make the reduction in interaction technology’s fault?

      Hyper-connectivity is not the reason for limited face-to-face contact. Long working hours and other family-hostile work policies are the reason for that. And communications technology is helping to mitigate some of those effects by allowing interactions between family members when face-to-face interaction is impossible.

      From interacting with teenagers over the past two years, I can tell you that if a teenager doesn’t want to hang out with her family, it has less to do with the smartphone that is always in her hand, and more to do with the fact that most teenagers just don’t want to spend more than a few hours with family over a weekend. They want to do things, try new activities, eat food, hang out with friends … there is a lot to distract teenagers these days and that is not the fault of hyper-connectivity.

    8. A fitness tracker that enables one to keep an eye on a loved one from a distance is axiomatic of that kind of effect. Simply watching a loved one over cyberspace is clearly no substitute for face-to-face communication.

      Disingenuous. If you know anyone who has used such technology (home cameras for monitoring kids/pets, for example) and considers it a perfectly valid replacement for face-to-face communication, I would like to meet them.

      People don’t use such technology to replace face-to-face communication. Video calls replace phone calls, not personal visits. Technology augments interaction rather than displacing it.

    9. we have also become more detached from one another at the personal level.

      Naïve, presupposes that we were less detached from one another in a nostalgic past.

    10. How cyber and personal interactions might be affected by hyper-connectivity is also a mystery at this point.

      Not really. The author just needs to read more sci-fi, not the predictive kind, but the kind that examines the effects of technology on psychology and sociology.

      It’s not about understanding hyper-connectivity, it’s about understanding communities and cultures.

  10. Jun 2015
    1. This is why the choice is so hard. Everything can do everything, and people will tell you that you should use everything to do everything. So you need to figure out for yourself what kind of team you have, what kind of frameworks you like using, where people can be most productive, so they will stick around through the completion of the project.

      “Picking” a programming language.

    2. tests and version control are now the trigger for actually shipping code. If you can follow a process like this, you can release software several times a day—which in the days of shrink-wrapped software would have been folly.

      How deployments are automated and scaled.

    3. This is the experience of using version control. It’s a combination news feed and backup system. GitHub didn’t invent version control. It took a program called git, which had been developed at first by Linus Torvalds, the chief architect of Linux, and started adding tools and services around it.

      Why everyone loves git. And why everyone (who’s anyone) ought to know git.

    4. Tests are just code, of course. They check the functions in other code. They run, you hope, automatically, so you can find out if the day’s work you did breaks things or not. Relentless testing is one way to keep an eye on yourself and to make sure the other person’s bugs and your bugs don’t find each other one wintry night when everyone is home by the fireplace and crash the server right before Christmas, setting up all kinds of automated alarms and forcing programmers into terrible apology loops with deeply annoyed spouses.

      The importance of being tested.
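      The kind of automated check the passage describes can be sketched in a few lines of Python; the function under test and its checks are hypothetical, not from the book:

```python
# A minimal, hypothetical example of an automated test: the function
# under test and its checks both live in code, so a test runner can
# re-verify behaviour after every change.

def slugify(title):
    """Turn a post title into a URL-friendly slug."""
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced   out  ") == "spaced-out"

test_slugify()  # a runner like pytest would discover and run this automatically
```

      In a real project the test would live in its own file and run on every commit, which is exactly the "relentless testing" the excerpt is talking about.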

    5. Odds are, if you’re doing any kind of programming, especially Web programming, you’ve adopted a framework. Whereas an SDK is an expression of a corporate philosophy, a framework is more like a product pitch. Want to save time? Tired of writing the same old code? Curious about the next new thing? You use a graphics framework to build graphical applications, a Web framework to build Web applications, a network framework to build network servers. There are hundreds of frameworks out there; just about every language has one.

      What is a framework, and how is it different from a programming language.

    6. It’s up to an IDE to help you connect your ideas into this massive, massive world with tens of thousands of methods so you can play a song, rewind a song, keep track of when the song was played (meaning you also need to be aware of the time zones), or keep track of the title of the song (which means you need to be aware of the language of the song’s title—and know if it displays left-to-right or right-to-left). You should also know the length of the song, which means you need a mechanism for extracting durations from music files. Once you have that—say, it’s in milliseconds—you need to divide it by 1,000, then 60, to get minutes. But what if the song is a podcast and 90 minutes long? Do you want to divide further to get hours? So many variables. Gah!

      Programmer problems.
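      The milliseconds-to-minutes arithmetic the passage walks through (divide by 1,000 for seconds, then by 60 for minutes, with an optional hours step for a 90-minute podcast) can be sketched in Python. This is a toy illustration, not code from the book:

```python
def format_duration(ms, show_hours=False):
    """Convert a duration in milliseconds to a mm:ss or h:mm:ss string."""
    total_seconds = ms // 1000                    # divide by 1,000 for seconds
    minutes, seconds = divmod(total_seconds, 60)  # then by 60 for minutes
    if show_hours:                                # e.g. for a 90-minute podcast
        hours, minutes = divmod(minutes, 60)
        return f"{hours}:{minutes:02d}:{seconds:02d}"
    return f"{minutes}:{seconds:02d}"

print(format_duration(215_000))          # a 215-second song -> 3:35
print(format_duration(5_400_000, True))  # a 90-minute podcast -> 1:30:00
```

      Even this tiny sketch dodges the harder problems the excerpt lists (time zones, text direction, extracting durations from music files), which is rather the point.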

    7. Within Xcode are whole worlds to explore. For example, one component is the iOS SDK (Software Development Kit). You use that to make iPhone and iPad apps. That SDK is made up of dozens and dozens of APIs (Application Programming Interfaces). There’s an API for keeping track of a user’s location, one for animating pictures, one for playing sounds, and several for rendering text on the screen and collecting information from users. And so forth.

      What programmers use to make iOS apps.

    8. Poor, sad, misbegotten, incredibly effective, massively successful PHP. Reading PHP code is like reading poetry, the poetry you wrote freshman year of college. I spent so many hundreds, maybe thousands, of hours programming in PHP, back when I didn’t know what I was doing and neither did PHP. Reloading Web pages until my fingers were sore. (I can hear your sympathetic sobs.) Everything was always broken, and people were always hacking into my sites.

      Why many programmers dislike PHP.

    9. In 2008 a developer named Ryan Dahl modified the V8 engine, which was free software, and made it run outside the browser. There had been freestanding versions of JavaScript before (including some that ran inside Java, natch), but none so fast. He called this further fork Node.js, and it just took off. One day, JavaScript ran inside Web pages. Then it broke out of its browser prison. Now it could operate anywhere. It could touch your hard drive, send e-mail, erase all your files. It was a real programming language now. And the client … had become the server.

      Why Javascript has become so big in the past decade.

    10. Relational databases represent the world using tables, which have rows and columns. SQL looks like this:

      I hope the programmer you get understands what SQL is.
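      The excerpt cuts off before the book’s actual SQL snippet, so that gap stays as-is. Purely as an illustration of the rows-and-columns model, a hypothetical table and query might look like this using Python’s built-in sqlite3 module:

```python
import sqlite3

# A generic rows-and-columns illustration (not the snippet from the book):
# a table with columns, some rows, and a query that filters them.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
conn.executemany("INSERT INTO products (name, price) VALUES (?, ?)",
                 [("mug", 9.50), ("shirt", 24.00)])
cheap = conn.execute("SELECT name, price FROM products WHERE price < 20").fetchall()
print(cheap)  # [('mug', 9.5)]
conn.close()
```

      The table and column names here are made up; the shape (tables, rows, columns, a SELECT with a WHERE clause) is what every relational database shares.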

    11. For a truly gifted programmer, writing code is a side effect of thought. Their skill isn’t in syntax; it’s how they perceive time and computation. They can see the consequences of their actions more quickly than the next programmer; they spend less time in the dark. Their code still has bugs, it still needs to be optimized—they’re not without flaws. But for every candle we own, they have three or four flashlights and a map.

      Gifted programmers.

    12. C is a simple language, simple like a shotgun that can blow off your foot. It allows you to manage every last part of a computer—the memory, files, a hard drive—which is great if you’re meticulous and dangerous if you’re sloppy. Software made in C is known for being fast. When you compile C, it doesn’t simply become a bunch of machine language in one go; there are many steps to making it really, ridiculously fast. These are called optimizations, and they are to programming what loopholes are to taxes. Think of C as sort of a plain-spoken grandfather who grew up trapping beavers and served in several wars but can still do 50 pullups.

      Most of the time you don’t really need a specialised C programmer.

    13. It’s possible for a C programmer and a Java programmer to read each other’s code, but it’s harder to make C code and Java code work together. C and Java represent the world in different ways, structure data in different ways, and address the components of the computer in different ways. There are true benefits to everyone on a team using the same language. They’re all thinking the same way about how to instruct the computer to process data. It’s not necessary for every team across a big organization to use the same language. In fact, it’s often counterproductive. Large organizations have lots of needs and use many languages and services to meet them. For example, Etsy is built atop PHP—but its product-search service uses Java libraries, because the solutions for search available in Java are great.

      How programming languages are used together in a project.

    14. It’s not simply fashion; one’s career as a programmer depends on demonstrating capacity in one or more languages. So there are rankings, frequently updated, rarely shocking. As of April 15, the world’s most-used computer languages, according to the Tiobe index (which uses a variety of indicators to generate a single ranking for the world of programming), are Java, C, C++, Objective-C, and C#, followed by JavaScript, PHP, and Python. The rankings are necessarily inexact; another list, by a consulting firm called RedMonk, gives JavaScript the top spot, followed by Java. There are many possible conclusions here, but the obvious one is that, all things being equal, a very good Java programmer who performs well in interviews will have more career options than a similar candidate using a more obscure language.

      On the importance of programming languages in the working world.

    15. The average programmer is moderately diligent, capable of basic mathematics, has a working knowledge of one or more programming languages, and can communicate what he or she is doing to management and his or her peers.

      “Minimum requirements” for programming.

    16. “I watch the commits,” TMitTB says. Meaning that every day he reviews the code that his team writes to make sure that it’s well-organized. “No one is pushing to production without the tests passing. We’re good.”

      Every programmer today should know about version control, and git/github. If they don’t know what a commit is, be very wary.

    17. But what does it matter? Every day he does a 15-minute “standup” meeting via something called Slack, which is essentially like Google Chat but with some sort of plaid visual theme, and the programmers seem to agree that this is a wonderful and fruitful way to work.

      Recent working tools.

    18. Some parts of the functional specification refer to “user stories,” tiny hypothetical narratives about people using the site, e.g., “As a visitor to the website, I want to search for products so I can quickly purchase what I want.” Then there’s something TMitTB calls wireframe mock-ups, which are pictures of how the website will look, created in a program that makes everything seem as if it were sketched by hand, all a little squiggly—even though it was produced on a computer. This is so no one gets the wrong idea about these ideas-in-progress and takes them too seriously. Patronizing, but point taken.

      The parts of “programming” that don’t actually involve programming, but are very helpful for programmers to know.

    19. Most programmers aren’t working on building a widely recognized application like Microsoft Word. Software is everywhere. It’s gone from a craft of fragile, built-from-scratch custom projects to an industry of standardized parts, where coders absorb and improve upon the labors of their forebears (even if those forebears are one cubicle over).

      Standard libraries (parts) and toolboxes speed up work a lot. Don't write from scratch unless you're trying to learn, trying to optimise, or trying to make something that doesn't exist yet.

    20. Coders are people who are willing to work backward to that key press. It takes a certain temperament to page through standards documents, manuals, and documentation and read things like “data fields are transmitted least significant bit first” in the interest of understanding why, when you expected “ü,” you keep getting “�.”

      What it takes to do programming: the tenacity to dig until you get at the root of the problem.

    21. The World Wide Web is what I know best (I’ve coded for money in the programming languages Java, JavaScript, Python, Perl, PHP, Clojure, and XSLT), but the Web is only one small part of the larger world of software development.

      The programming languages you were asking for. But note the caveat.

    22. “My people are split on platform,” he continues. “Some want to use Drupal 7 and make it work with Magento—which is still PHP.” He frowns. “The other option is just doing the back end in Node.js with Backbone in front.” You’ve furrowed your brow; he eyes you sympathetically and explains: “With that option it’s all JavaScript, front and back.”

      Choice of programming language, choice of framework, choice of backend will keep changing; it is more important to gather a team that works well together.

    1. completed survey cards

      “Kids stared at their teachers for hundreds of hours a year, which might explain their expertise. Their survey answers, it turned out, were more reliable than any other known measure of teacher performance—including classroom observations and student test-score growth.”

      – From Why Kids Should Grade Teachers, The Atlantic

    2. witnessed by

      “A new evaluation system in Washington, D.C., for example, requires five observations each year, compared with the previous systems that required one or two at most, and in many cases none at all. Starting next fall, a Tennessee law will require at least four observations a year, rather than one every five years.”

      – From Teacher Ratings Get New Look, Pushed by a Rich Watcher, The New York Times

    3. student feedback and evaluation

      Room to Grow

      – From Infinite Sums, Jonathan Claydon.

  11. May 2015
    1. a well-reputed 80 Plus Gold PSU

      See reviews from Jonnyguru, HardwareSecrets.

      Not only does this unit not care about the hot box, this unit gave me the exact same voltage readings as it did cold for almost every single test. Seriously, I can't remember the last time this happened. The 3.3V rail and the 5V rail were so solid that every number I got in the hot box progressive tests matched up exactly with the corresponding cold test. The heavy 3.3V/5V crossload test was the only thing that got either rail to budge, and even then it only moved the 5V rail down by 0.01V compared to the cold test. That's... amazing. Even the 12V rail did its best to stick to its cold numbers, again moving only 0.01V on any one test. Where's my lower jaw? Oh, there it is... on the floor. Folks, this may be the best X series unit I've tested yet. Voltage readings so solid you couldn't budge them with a Peterbilt. Ripple results so low an ant couldn't trip over the waveforms. And to top it all off, you have that Gold level efficiency, a fan that turns off when not needed, and complete indifference to high temperatures. I have to do a 10 here.

      – Jonnyguru

      The Seasonic X-Series 560 W can really deliver its labeled wattage at high temperatures. Efficiency was extremely high, above 87% at light load (20% load, 112 W) and at 89% at full load, peaking 90.3% at 60% load (336 W). Really impressive.

      Voltage regulation was another highlight of this product, with all voltages within 3% of their nominal values – including the -12 V output. The ATX12V specification allows voltages to be up to 5% from their nominal values (10% for the -12 V output). Therefore this power supply presents voltages closer to their nominal values than necessary. Noise and ripple levels were always extremely low. Below you can see the results for the power supply outputs during test number five. The maximum allowed is 120 mV for the +12 V and -12 V outputs, and 50 mV for the +5 V, +3.3 V, and +5VSB outputs. All values are peak-to-peak figures.

      – HardwareSecrets
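      A quick sanity check of those ATX12V tolerances in Python — ±5% on most rails, ±10% on the -12 V output, as the review states. The measured values below are made-up examples, not Jonnyguru’s readings.

```python
# ATX12V tolerances quoted above: +/-5% on most rails, +/-10% on -12 V.
def within_spec(nominal: float, measured: float) -> bool:
    tolerance = 0.10 if nominal == -12.0 else 0.05
    return abs(measured - nominal) <= abs(nominal) * tolerance

print(within_spec(12.0, 12.24))   # 2% high: inside +/-5%
print(within_spec(12.0, 12.70))   # ~5.8% high: out of spec
print(within_spec(-12.0, -13.0))  # ~8.3% off: still inside +/-10%
```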

    2. it can dispense with this cooling fan
    3. I live in the tropics

      The daily temperature range here is 26–32°C!

    4. all-in-ones

      e.g. Lenovo Ideacentre


    5. slim tower systems

      e.g. Alienware X51


    6. smaller-form-factor PSUs

      Such as SFX, TFX.


      – From Wikipedia, ATX#SFX and ATX#TFX

    7. Thin-ITX motherboards with onboard DC-DC converters
    8. the picoPSU

      picoPSU plugged into a 24-pin motherboard connector housing.

    9. 24-pin power connector

      A 24-pin power connector on the motherboard.

    10. 80 Plus testing

      More detail on the testing process can be found in HardwareSecrets’ Understanding the 80 Plus Certification.

    11. a Seasonic X560

      Seasonic X560

      – From Anandtech, Seasonic X-Series 560W

    12. additional caveat

      PSU efficiency also varies with the input voltage—they are slightly more efficient with 230V input voltage, and less so with 115V input. But this is not relevant for the purpose of this article.

    13. buy the PSU with the highest efficiency

      Much more detail can be found in in-depth articles like HardwareSecrets’ Everything You Need to Know About Power Supplies.

    1. “novels set in imaginary futures are necessarily about the moment in which they are written”

      INTERVIEWER You’ve written that science fiction is never about the future, that it is always instead a treatment of the present.

      GIBSON There are dedicated futurists who feel very seriously that they are extrapolating a future history. My position is that you can’t do that without having the present to stand on. Nobody can know the real future. And novels set in imaginary futures are necessarily about the moment in which they are written. As soon as a work is complete, it will begin to acquire a patina of anachronism. I know that from the moment I add the final period, the text is moving steadily forward into the real future.

      – From The Paris Review, The Art of Fiction No. 211

    1. the screaming fits of reality denial that Hollywood prefers

      See: 13 Going on 30, et al.

    2. the conscious watcher in my mind

      Schiller wrote of a ‘watcher at the gates of the mind’, who examines ideas too closely. He said that in the case of the creative mind ‘the intellect has withdrawn its watcher from the gates, and the ideas rush in pell-mell, and only then does it review and inspect the multitude.’ He said that uncreative people ‘are ashamed of the momentary passing madness which is found in all real creators . . . regarded in isolation, an idea may be quite insignificant, and venturesome in the extreme, but it may acquire importance from an idea that follows it; perhaps in collation with other ideas which seem equally absurd, it may be capable of furnishing a very serviceable link.’

      From Impro, Keith Johnstone, quoted in Ritual and the Consciousness Monoculture.

    3. who is “fronting”

      Being multiple also provided subtle comic relief from an insufferable boss. "During meetings, as he repeated the same stuff for the umpteenth time, we'd poke fun inside where he couldn't hear us," she says.

      From Are Multiple Personalities Always a Disorder?.

    4. animating it

      1530s, "to fill with boldness or courage," from Latin animatus, past participle of animare "give breath to," also "to endow with a particular spirit, to give courage to," from anima "life, breath" (see animus).

      From EtymologyOnline

    5. refer to themselves as systems

      A "multiplicity system" refers to the group within the body itself (i.e., "I'm part of a multiplicity system"). The system might consist of two people, or it might consist of 200. The "outer world" is this physical plane that we're all stumbling around in, while "inner worlds" are the subjective realms where their system members spend time when they're not "fronting," or running the body in the outer world.

      From Are Multiple Personalities Always a Disorder?.

    6. the protagonist possesses a peripheral

      I use the term “possess” here because I think it more appropriate than “control”; if you don’t like the connotations, you may consider “inhabit” as an alternative term.

    1. “cluttered-desk-cluttered-mind” paradigm

      I am well aware of what Einstein said about this quote; his desk was cluttered, but he knew where his things were even if others didn’t. Most Downloads folders are cluttered in a way that nobody, not even the owner, knows where anything is without a Ctrl-F search.

    1. you are not working directly with one or two big OEMs or Tier 1s

      i.e. if you have big customers like ASUS or Apple. OEMs (Original Equipment Manufacturers) are companies that manufacture the devices, sourcing subcomponents from other companies. Tier 1s are companies that manufacture components for other companies, e.g. LG & Samsung making LCD panels for tablet OEMs.

    2. Atom x3/SoFIA

      SoFIA: Smart or Feature phone with Intel Architecture

      Most phones are expensive because they have to include a cocktail of chips providing different capabilities: audio processing, wifi/bluetooth, 3G/LTE, etc. SoFIA is Intel’s attempt to make a chip that integrates as many of these capabilities as possible so as to make entry-level phones even cheaper.

      Silicon wafers are expensive, so the material cost of each chip depends on its size. The fewer chips you need to include, the lower the material cost (excluding R&D costs for manufacturing research).
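      To make the size–cost link concrete, here is a rough dies-per-wafer estimate in Python, using a common back-of-envelope approximation that ignores yield and scribe lines. All numbers are illustrative, not Intel’s.

```python
import math

# Back-of-envelope dies-per-wafer estimate: the wafer's area divided
# by the die area, minus a correction for partial dies at the edge.
def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Halving the die area more than doubles the dies per 300 mm wafer,
# which is why integrating radios on-die lowers the cost per chip:
print(dies_per_wafer(300, 100))  # 640
print(dies_per_wafer(300, 50))   # 1319
```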

    3. working on the foundry process

      making sure that the modem blueprint can actually be manufactured with the process they have in mind (currently 14nm node lithography; each new process requires a new round of testing)

    4. working on a discrete modem cadence and roadmap

      working on the modem roadmap, showing when each new generation of modem chips is coming out and what capabilities they will have. But this is not integrated with the computing processors (Atom, Core) yet.

    5. Intel now has an integrated LTE on die.

      LTE (4G) chips are separate microchips in a smartphone/tablet, but Intel has managed to bundle this capability with one of their processors. It is not yet on the market at this point, but is being tested.

      This is something that few competitors can match, because many of them do not make processors and hence cannot bundle their communications chips inside. The engineering and design expertise required to achieve this is immense.

    6. So for Intel to be going into all of these segments with the compute but without the connect means there is no business.

      Intel has traditionally been focused on computing microchips—processors meant for general-purpose computation, rather than communications chips. They’re a latecomer to the field but are quickly catching up.

      The incumbents in communications chips are Qualcomm, Broadcom, Marvell, Realtek, etc.

    7. Everything that Computes, Connects

      Everything that has a microchip inside should be connected to the internet, sharing information through it and controllable through it. This is possible now with microchips that have inbuilt capability to communicate wirelessly (previously they had to have a separate chip included).

    1. In most cases, the brothels’ facades are homey and cheery in their designs--your first guess might be that cute little grandmas lived inside.

      The social civilising of brothels