943 Matching Annotations
  1. Feb 2019
    1. Allocation of emissions among independent products that share the same process: for example, multiple products sharing the same transport process (vehicle); multiple telecommunication services sharing the same network; multiple cloud services (email, data storage, database applications) sharing the same data center

      k8s makes this a pain, if it's designed to co-mingle services on the same boxen

    2. Depending on the goal and scope of the assessment, a rule of thumb may be used for assessing ICT products where the emissions from a specific life cycle stage or element are determined by the screening assessment to be less than 5 percent of the total emissions. In this case, a detailed assessment for that stage or element is not required. The emissions for that stage or element are then calculated using the percentage determined in the screening assessment. The sum of the emissions calculated in this way (i.e., based on the percentage from the screening estimate) should not exceed 20 percent of the total emissions.

      Less than 5%? Skip it
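
      To make the rule concrete, here is a quick sketch of how the 5%/20% rule of thumb might play out. The stage names and percentages are hypothetical, not from the guidance:

        # Hypothetical screening results: share of total emissions per life cycle stage.
        screening = {
            "raw materials": 0.30,
            "manufacturing": 0.40,
            "transport": 0.04,    # < 5%: detailed assessment not required
            "use": 0.22,
            "end of life": 0.04,  # < 5%: detailed assessment not required
        }

        # Stages under 5% can keep their screening estimate...
        skipped = {stage: share for stage, share in screening.items() if share < 0.05}
        # ...but such stages must together stay within 20% of the total.
        assert sum(skipped.values()) <= 0.20
        print("use screening estimate for:", sorted(skipped))  # ['end of life', 'transport']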

    3. A “screening assessment” is an initial assessment of a product to understand its significant and relevant sources of emissions. This assessment is described in the Product Standard in section 8.3.3.

      Entry level

    4. This chapter provides software developers and architects guidance to benchmark and report the GHG emissions from software use in a consistent manner and make informed choices to reduce greenhouse gas emissions. The chapter is in two parts. Part A provides guidance on the full life cycle assessment of software, while Part B relates specifically to the energy use of software, and covers the three categories of software: operating systems (OS), applications, and virtualization.

      actual formal guidance!

    5. 2015 GeSI published the SMARTer 2030 report, extending the analysis out to 2030. This study predicted that the global emissions of the ICT sector will be 1.25 Gt CO2e in 2030 (or 1.97% of global emissions), and emissions avoided through the use of ICT will be 12 Gt CO2e, which is nearly 10 times higher than ICT’s own emissions.

      there's a 2030 report now. I did not know

    6. the total abatement potential from ICT solutions by 2020 is seven times its own emissions.

      increasing confidence in the potential then

    7. Rebound Effects

      🤯

    8. The Product Standard (sections 11.2 and 11.3.2) states that “avoided emissions shall not be deducted from the product’s total inventory results, but may be reported separately.”

      ah, so it's not like a magic offset. more transparent. good.

    9. The Product Standard defines products to be both goods and services, thus for the ICT sector it covers both physical ICT equipment and delivered ICT services. This Sector Guidance, however, focuses more on the assessment of ICT services. In this Sector Guidance the definition of products includes both networks and software as ICT services.

      this makes me think that services like e-commerce or ride sharing might not count, on the first read-through

  2. Dec 2018
    1. As the chief executive of the world’s biggest cement company observed, “we know how to make very low carbon cement – but why would we? There is no incentive.”

      FUCKING HELL

    2. In the growing trade war between China and the US, it seems the world is unwilling even to think about the entirely legitimate use of consumption-based or border carbon pricing either to encourage cleaner production in China, or to deter the Trump administration from using discriminatory trade measures to re-industrialize drawing partly on older and more carbon-intensive technologies.

      How would border carbon pricing work? You pay a tax on the CO2 emissions 'imported'?

    3. The European utilities that tried to ignore the energy transition are now economic zombies; some split their companies in two to try and isolate the assets that have already turned into liabilities in a decarbonizing system.

      E-on as an example?

    4. Averaged over the full 35 years, a constant percentage reduction would require c = -10%/yr to reach the same end-point – almost impossible at the starting point, but entirely feasible and easily observed in the latter stages of sunset industries.

      Okay, so the argument as I see it so far: the average change might be 3.68% per year, but assuming it's a straight line is a mistake, as substitution of high carbon energy for low carbon looks more like an S-shaped curve
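
      A quick sanity check on the compounding maths, assuming a constant -10%/yr (illustrative numbers, not from the article):

        # A constant percentage cut compounds: after 35 years at -10%/yr,
        # only ~2.5% of starting emissions remain - roughly the same
        # end-point as a near-total straight-line phase-out.
        years = 35
        c = -0.10
        remaining = (1 + c) ** years
        print(f"constant {c:.0%}/yr leaves {remaining:.1%} of starting emissions")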

    5. their analysis leads both teams to the – only slightly caveated – conclusion that the emission reductions required to deliver the Paris Aims (“well below 2 deg.C”) are implausible, by almost any standard of macroeconomic evidence – and still more so for the most ambitious “1.5 deg.C” end of the spectrum.

      Ah, so this is a response to the "we're doomed" papers from before

    1. Electricity Intensity of Internet Data Transmission: Untangling the Estimates

      This is the HTML version of the PDF I was referring to before.

  3. Oct 2018
    1. Video streaming service Netflix is the world's most data-hungry application, consuming 15% of global net traffic, according to research from bandwidth management company Sandvine.

      Ah, there's a new Sandvine report for 2018

  4. Sep 2018
  5. Apr 2018
    1. By eliminating cold servers and cold containers with request-based pricing, we’ve also eliminated the high cost of idle capacity and helped our customers achieve dramatically higher utilization and better economics.

      Cold servers and cold containers is a term I haven't heard before, but it sums up the waste of excess capacity nicely

    1. Robust: It is this flirty declarative nature that makes HTML so incredibly robust. Just look at this video. It shows me pulling chunks out of the Amazon homepage as I browse it, while the page continues to run. Let’s just stop and think about that, because we take it for granted. I’m pulling chunks of code out of a running computer application, AND IT IS STILL WORKING. Just how… INCREDIBLE is that? Can you imagine pulling random chunks of code out of the memory of your iPhone or Windows laptop, and still expecting it to work? Of course not! But with HTML, it’s a given.
  6. Mar 2018
    1. For the five studies that satisfy our criteria, the electricity intensity of transmission networks has declined by a factor of 170 between 2000 and 2015

      It's become 170x more energy efficient in 15 years

    2. Figure 2: Example of daily variation of Internet traffic in 2012, based on number of page views per 15-minute interval for part of the Akamai network (Peill-Moelter 2012, reprinted with permission).

      This looks similar to the curve in The Power of Wireless Cloud. I wonder if it's the same now?

    3. A white paper released by Cisco (2015) predicts Internet traffic growth of 42% per year to 2020.

      42% compounding, year on year?
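
      Assuming it does compound (my reading; the quote doesn't spell it out), a quick check of what that implies:

        # 42% year-on-year growth compounds to nearly 6x over five years.
        growth = 1.42
        print(f"{growth ** 5:.2f}x after 5 years")  # ~5.77x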

    4. the broader trends identified by Koomey and colleagues (2011) and Koomey and Naffziger (2015, 2016) are suggestive of the rates of change we would expect to see in networking devices constructed from silicon microprocessors and related components.

      So Moore's law-style assumptions about increasing energy efficiency can be applied

    5. Williams and Tang (2012)estimate the carbon intensity

      Oh, so they've gone the other way here

    6. Estimates based on specific or state-of-the-art equipment, such as Baliga and colleagues (2009), omit the less efficient legacy equipment (i.e., equipment with higher electricity use per GB of data transferred) in use within country-wide Internet networks, resulting in a substantial underestimate of electricity intensity at the lower end of the observed range (0.004 kWh/GB for 2008).

      Ah, so that's why it's so low - they assumed all the network kit was new, shiny and frugal

    7. Existing estimates for the electricity intensity of Internet data transmission, for 2000 to 2015, vary up to 5 orders of magnitude, ranging from between 136 kilowatt-hours (kWh)/GB in 2000 (Koomey et al. 2004) and 0.004 kWh/GB in 2008 (Baliga et al. 2009). While increased efficiency over time can account for 2 orders of magnitude of this variation (based on results presented below), alone it does not explain the spread of results.
    8. For example, Mayers and colleagues (2014) applied electricity intensity estimates as part of an LCA study comparing different methods of games distribution, concluding that the carbon-equivalent emissions arising from an Internet game download (for an average 8.8-gigabyte [GB] game) were higher than those from Blu-ray Disc distribution in 2010

      I still have a hard time getting my head around this

    9. This article derives criteria to identify accurate estimates over time and provides a new estimate of 0.06 kWh/GB for 2015.
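
      A back-of-envelope using the paper's 2015 figure, applied to the Mayers game-download example above. Note this covers transmission only, excluding data centers and end devices:

        intensity_kwh_per_gb = 0.06  # the paper's 2015 estimate
        game_gb = 8.8                # average game size from Mayers et al.
        print(f"{game_gb * intensity_kwh_per_gb:.2f} kWh per download")  # ~0.53 kWh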

    1. However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions-strong. Facebook’s “platform policy” allowed only collection of friends data to improve user experience in the app and barred it being sold on or used for advertising.

      HOLY SHIT

    1. He uses the PURE Method for ease of use.

      I like this idea. I'm curious about how they split the steps, and how each step is ranked

    2. Let’s acknowledge the five purposes of research:

      This is worth comparing to mydleton's list

    1. One argument I've heard against this approach is that if everyone did this, then we would run out of pink, sparkly marbles. We'll know this is something to be worried about when women are paid significantly more than men for the same work.

      Touché

  7. Feb 2018
    1. The extraterritorial nature of these two frameworks — they protect the privacy rights of people in Europe regardless of where their data is collected — means that they will become the de facto standard for privacy around the world.

      I'm not totally clear on how this would be enforced yet, but jeepers

    2. Your privacy testing procedures should predict the ways unauthorized users would access actual data on your system. Would a suspicious search for user data, or an alteration to a record, be logged as a security vulnerability? Is data stored in login cookies? Could someone gain access to data by intentionally triggering an error?

      This sounds a lot like threat modelling.

    3. Data should be deleted, either automatically or through user actions, when it is no longer needed. Take care to think of how deleted data may still be present in archives and backups. You will also need to work with third parties whom you pass data to or receive it from, such as a SAAS or a cloud service, to ensure that a request for data deletion on your end also removes the data on their end, and to verify that this has been done.

      I would love to see what an agreement for this looks like, when Postgres, Cassandra etc. essentially use an append-only log to capture new data

    4. The European term “personal data” differs from the American term “personally identifiable information.” The latter pertains to a much more limited set of information than the European model. It also does not see information as contextual, whereas the European framework emphasizes the risks inherent in data aggregation.

      Important distinction. This is a useful article

    1. Pollution, broadly, is the number one source of unrest and citizen dissatisfaction, and it’s actually an essential threat to the rule of the Chinese Communist Party because they have to do something about it to keep their people content.

      Source of unrest too?

    2. In the late 1970s, it cost $100 a watt for solar panel material, but the price has dropped 300-fold over the last 40 years. The first 100x price drop didn’t matter because solar was still more expensive than coal or gas. So all through that incredible price drop, people could say, “It’s a toy. It’s never going to make sense.”

      TODO: Find the source for this quote

    1. The AutoGrid Flex platform interfaces with a wide variety of IoT devices, from residential to industrial-scale energy applications. In addition to energy-consumption data, typical residential appliances may also provide telemetry about air temperature, humidity, water temperature, and occupancy. Industrial devices often generate a variety of interesting process-specific data, but some of the most common and useful measurements include wind speed, solar irradiance, and thermal limits. These data streams can be leveraged by the AutoGrid machine learning algorithms to enhance forecasting and optimization of flexible energy resources throughout the network.
    2. If, for example, an OhmConnect consumer saves one kilowatt hour (kWh) of electricity, the California ISO will reward OhmConnect as if that consumer generated one kWh. OhmConnect in turn passes a significant portion of that savings to its end user.
    3. Winn said that solar plant operators can also attach thermal cameras to drones to help identify solar cells that are less efficient, perhaps even broken: A solar cell that’s absorbing all the energy and producing electricity is going to be much cooler than one that is not.
    4. The Heila IQ box runs powerful software that presents an abstract view to the operator. Instead of directly controlling the individual assets, the operator describes higher-level goals and constraints such as “reduce emissions” or “avoid using gas-based generators because they are expensive.” Then, as the microgrid is operating, the Heila IQ automatically controls the assets to try to optimize for these goals and satisfy the constraints. Later, if the operator adds new assets to the microgrid, they don’t need to configure the individual assets or try to rebalance the system. As long as they specify the higher-level goals and constraints, the Heila IQ-based microgrid continues to control the assets appropriately.

      wow, this is possible now?

    1. Smaller data centers—servers stashed in closets or rooms in office buildings under 5,000 square feet—barely apply these efficiency strategies. That’s how small and medium-sized data centers end up consuming 49 percent of the electricity used in U.S. data centers each year, despite owning just 40 percent of the total number of servers, according to a 2014 report by the nonprofit Natural Resources Defence Council (NRDC).

      The other argument for cloud. It's like running your own power station in a closet, when you could just pay for it on a meter

  8. Jan 2018
    1. No more retention scams that allow online signups but demand users phone a call centre to delete their accounts.

      Holy caw, this covers opt-out after subscriptions too? Eeeenteresting...

    1. Douglas Hofstadter observed that, no matter how much work went into developing computer programs to play chess against Grand Masters, the winning program always seemed to be 10 years away

      I didn't know this came from chess games

    2. Slicing Features into separate functional parts helps us actively manage the scope by creating different implementation options that are often implicit and non-negotiable when we have larger Features in the backlog

      On a second read-through, this appears to be a key thing for this to work - decomposing larger units of work into smaller things, but not diving into the deliver-in-a-day scale of what he's referring to as user stories here

    3. At the heart of the rolling wave forecast is the acceptance of uncertainty. This, in turn, allows us to keep our options open. To be flexible and able to respond to change – just like the Agile Manifesto says.

      okay, this is a nice way to present it

    4. what did you say about rolling wave forecast

      Moar new terminology for me…

      okay, so it feels a bit like a weird cross between a release plan and a roadmap

    5. By forecasting and showing progress (or the lack thereof) very early on, Carmen is bringing the scope discussion to the first few days of the project.

      So basically, acknowledge the self-deception both parties played along with to get here, as soon as you can, because it will come up either way

    6. Move to Story Points. Even if this is just another way of estimating, getting rid of ‘hours’ and ‘days’ has too many benefits to ignore. We already discussed previously in this book the problems of using time-based metrics, which are an indication of cost, to measure project progress. Even if it’s just another proxy metric for productivity, Story Point-based estimation gives a better understanding of things like risk, complexity, and expected dependencies for each Story. Given that a large amount of the time it takes to deliver one Story is spent waiting, Story Point estimation is more likely to help you assess the true impact of one Story in your project.

      Surely when you have story points it's now really hard to compare across teams and projects though, right? A 3 pointer for one team is not a 3 pointer for another.

    7. Mandating the maximum calendar duration for an item is also used for User Stories. In my practice I advise teams to have 1-day User Stories. The reason is simple. If you were wrong about the time it takes to develop your User Story you will know it already to

      So this is similar to the idea in Reinertsen's book, where he describes the round-robin approach if you can't reliably estimate work

    8. Both these metrics will help you forecast the progress for your project. While the User Story velocity metric will help you assess when a certain Feature might be ready, the Feature velocity will help you assess when the project might be ready.

      This seems to assume that Carmen understands all the technology and the problem domain well enough to split a big feature into meaningful stories of more or less uniform size for devs to deliver. This feels like a different skill set to project management

    9. In my research I’ve normally used the progress data from 3 to 5 iterations (or weeks if you are using Kanban/flow based software development) in order to define the initial progress rate. Many expect that you need many more data points before you can make a useful prediction, but that is not the case. Three to 5 iterations are typically enough

      The German tank problem referenced as a justification for this is fascinating
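
      A minimal sketch of the forecasting idea as I understand it: take the Feature velocity observed over the first 3-5 iterations and project the remaining backlog. All the numbers here are made up:

        # Observed Features completed in the first four iterations.
        features_done_per_iteration = [2, 1, 3, 2]
        backlog_remaining = 30  # Features still to deliver

        velocity = sum(features_done_per_iteration) / len(features_done_per_iteration)
        iterations_left = backlog_remaining / velocity
        print(f"velocity ~{velocity:.1f} Features/iteration, "
              f"~{iterations_left:.0f} iterations to go")  # ~15 iterations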

    10. “Absolutely correct! In fact you will not know how long the whole project will take until you have either the whole backlog of INVEST Stories sliced up (a bad idea) or until you have enough historical information that you can infer the cycle time for every backlog item, independently of size,” Herman explained
    11. Early in each project, your top priority is not to ship something meaningful to your customer, but to obtain information on capacity, throughput, and bac

      Okay, this is interesting. There's lots around about optimising for learning, but this is the first time I've seen it explicitly phrased like this

    12. Even if each Story may not be “sellable”, it must be testable and final, i.e. the team can make sure that a particular User Story has been successfully completed according to a Definition of Done. This Definition of Done is a litmus test that will allow you to classify tiny parts of the whole project as completed, before the whole project is done.
    13. Each Story can be dropped from the project without affecting the overall project delivery.

      This seems to contradict the earlier point about E meaning 'essential'. If I can drop a story then surely, it wasn't essential, right?

    14. Essential, meaning that every story is absolutely required for the product to be viable. To be Essential, a story must not only be valuable, but its removal must make the product unusable or unsellable. Earlier INVEST definitions included ‘Estimatable’ in the sense that there would be some understanding and specific definition of the story that allowed us to cast an estimate if we wanted to. #NoEstimates focuses on value instead. The goal is to do only what is essential to the project’s success.

      I'm struggling with this, as when you're making trade-offs between stories to work on in a given timebox, you'd be deliberately deciding not to have certain things that you've just deemed essential.

    15. Gedanken or Gedankenexperiment. Ángel Medinilla, this book’s fantastic illustrator,

      Ah, THAT'S where they came from

    16. At Toyota, the production engineers would simultaneously start to design the production line and prepare manufacturing long before the new car designs were finished (hence, concurrent engineering), instead of waiting until all decisions about the design were closed. This, in the end, provided Japanese manufacturers with an astonishing competitive advantage that let them design and produce the Toyota Prius in about 3 years, from idea to first sale!

      Only 3 years? Cripes

    17. Project Management Body of Knowledge (PMBO

      AH, this is the PM Book he was mentioning last night

    18. But for complex environments, where estimates come mostly from personal experience and knowledge, these estimates will be different for every person. Experts might estimate some work as easy and fast, while novices might estimate the same work as difficult and long lasting. Some team members may see risks and estimate the impact on the schedule, while others may ignore those risks. Hence, in most environments estimates are personal.

      And presumably not comparable across teams then, if you're managing a portfolio of projects or products, and trying to work out where to focus your efforts?

    19. So, if h(a) is much larger than g(e) the cost of a feature cannot be determined by relative estimation. In turn, this means that the most common estimation approach, Story Point estimation, cannot work reliably.

      If this is the second 'social' complexity analysis, and it's a much larger factor, then I missed this part in the talk. Then again telling people to factor in how dysfunctional their org is might be a hard sell in an evening

    20. Some researchers have already proposed what a “good” estimate should be. In 1986, they proposed that a good estimation approach would provide estimates “within 25% of the actual result, 75% of the time”.

      Okay, this figure is what we need to beat, with Reinertsen's cost of delay question, tracking the cost of the project being 60 days late

    1. Among the options available are two for missile alerts, according to the Washington Post. One is labelled “test missile alert”, which will test the notification system is working without actually sending an alert to the public.

      Microcopy matters, yo.

  9. inclusive-components.design
    1. For a consistent experience between users, we need to be deliberate and focus() an appropriate element

      Deliberate decisions about where focus goes next provide a nicer UX

    2. <use xlink:href="#bin-icon">

      Ah… so THAT's what the hidden SVG at the beginning of the piece was for

    3. Many kinds of users often feel the need to scale/zoom interfaces, including the short-sighted and those with motor impairments who are looking to create larger touch or click targets.

      Nice argument for leveling up in SVG

    4. In this example, &times; is used to represent a cross symbol. Were it not for the aria-label overriding it, the label would be announced as “times” or “multiplication” depending on the screen reader in question.

      So aria-labels overrule clever submit typography. Useful to know

    5. In my version, I just add a minor enhancement: a line-through style for checked items. This is applied to the <label> via the :checked state using an adjacent sibling combinator.

      Clever CSS tricks abound in this piece

    6. It’s quite valid in HTML to provide an <input> control outside of a <form> element. The <input> will not succeed in providing data to the server without the help of JavaScript, but that’s not a problem in an application using XHR.

      Did not know this.

    7. all the state information we need is actually already in the DOM, meaning all we need in order to switch between showing the list and showing the empty-state is CSS.

      Wow - never thought of this. It's not as obvious as the approach above though if you were working on the code base - how expensive is a check for todos.length?

    8. If you do use a <section> element, you still need to provide a heading to it, otherwise it is an unlabeled section.

      unexpected accessibility gotcha!

    1. Cierge sends a magic link as well as a magic code that a user can manually enter into the login screen to continue as an alternative to clicking the link. Magic codes are short, volatile, & memorable (eg. 443 863). For example, you can look up the code on your phone then enter it into your browser on desktop.

      This is the use case for magic codes

    1. This means our problem with 1% of requests, could affect 20% of pageviews (20 requests x 1% = 20% = ⅕). And 60% of users (3 pages x 20 objects x 1% = 60% ≈ ⅔).

      This is one of the counter-intuitive things about large numbers.
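
      The quote's arithmetic is the linear approximation; the exact independent-failure calculation is only slightly kinder:

        p_fail = 0.01       # failure rate per request
        per_page = 20       # requests per pageview
        pages_per_user = 3  # pageviews per user

        pageview_hit = 1 - (1 - p_fail) ** per_page
        user_hit = 1 - (1 - p_fail) ** (per_page * pages_per_user)
        print(f"{pageview_hit:.1%} of pageviews, {user_hit:.1%} of users")
        # -> ~18.2% and ~45.3%: still a huge blast radius for a "1%" problem.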

    1. For example, it’s easy to repair, we have a take back program and we’ve researched the best recycling methods. Some time back we also discussed alternative business models for consumers to incentivize take back.

      When you place the incentives here, it's in Fairphone's interest to make it easy to service and fix. This is smart. I like it.

  10. citeseerx.ist.psu.edu
    1. Wireless communications has been recognized as a key enabler to the growth of the future economy. There is an unprecedented growth in data volume (10x in last 5 years) and associated energy consumption (20%) in the Information and Communications Technology (ICT) infrastructure. The challenge is how to: meet the exponential growth in data traffic, deliver high-speed wide-area coverage to rural areas, whilst reducing the energy consumed. This paper focuses on the cellular wireless communication aspect, which constitutes approximately 11% of the ICT energy consumption. The paper shows that with careful redesign of the cellular network architecture, up to 80% total energy can be saved. This is equivalent to saving 500 TWh globally and 1.4 TWh in the United Kingdom.

      Where is the date for this paper?

    1. Data usage on the internet is estimated to be 20,151 PetaBytes per month (Cisco 2011). This is equivalent to 241 billion GB per year. Applying these figures to the average power estimate yields a figure of 5.12 kWh per GB.

      Okay, so this is a top down figure, essentially dividing one huge number by another
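
      Sanity-checking the unit conversion, and what the headline intensity figure then implies:

        pb_per_month = 20_151
        gb_per_year = pb_per_month * 12 * 1e6  # 1 PB = 1e6 GB
        print(f"{gb_per_year:.3g} GB/yr")      # ~2.42e+11, i.e. ~241 billion GB
        # At 5.12 kWh/GB, total energy comes out around:
        print(f"{gb_per_year * 5.12 / 1e9:.0f} TWh/yr")  # ~1238 TWh/yr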

    2. An example transmission activity might begin on a desktop computer when an end user requests to download a song.

      These next two paras explain pretty much the entire life cycle. Woot!

    3. Many people are familiar with Moore’s law, which states that computational speeds are increasing at an exponential pace (Wikipedia 2012). There is also a corollary to this relationship known as Koomey’s law, which states that computational energy efficiency is also increasing at an exponential rate (Koomey 2009).

      Koomey's law, the second new law I've come across this week after Wirth's law

    4. Our major finding is that the Internet uses an average of about 5 kWh to support the utilization of every GB of data, which equates to about $0.51 of energy costs. Only 38% of those costs are borne by the end-user, while the remaining costs are thinly spread over the global Internet through which the data travels; in switches, routers, signal repeaters, servers, and data centers (See Figure 1 below). This creates a societal “tragedy of the commons,” where end users have little incentive to consider the other 62% of costs and associated resources.

      5 kWh per GB in 2012 for the whole system

    1. Competition may also come from China, where hardware makers specializing in Bitcoin mining want to enter the Artificial Intelligence focused GPU space.

      Weird cryptocurrency dividend. ML will get massively cheaper after the crash?

    1. Indeed, our national companies further increased their shares of electricity from renewable energy, coming to a total group-wide average of almost 33 percent by the end of 2016.

      Is there already a list of all the mobile providers and the energy mix they use?

    1. The E-Fan X hybrid-electric technology demonstrator is anticipated to fly in 2020 following a comprehensive ground test campaign, provisionally on a BAe 146 flying testbed, with one of the aircraft’s four gas turbine engines replaced by a two megawatt electric motor. Provisions will be made to replace a second gas turbine with an electric motor once system maturity has been proven.

      I wonder what kind of range this would offer

    1. The lower the frequency of the band the further it can travel, so the 800MHz band is the most adept of the three at travelling over long distances, which means users can get a 4G signal even when they’re a long way from a mast. This becomes particularly useful in rural areas where masts are likely to be quite spread out.

      Hmm? I assumed 4G was lower range than 3G. From what I read here, 4G can work in longer-range, lower-capacity scenarios and in shorter-range, higher-capacity scenarios.

      But only if the cell phone provider has both low frequencies and high frequencies

    1. Infrastructure Electricity Use for All Scenarios

      From ~32 to ~7 billion kWh per year for infra

    2. The impact on the installed base in hyperscale data centers is smaller since, on average, one server in a hyperscale data center can replace 3.75 servers in non-hyperscale data centers. This is because servers in hyperscale data centers are assumed to run at roughly 3 times the utilization of non-hyperscale data centers and have no redundancy requirements.
    3. The formulas in the “Redundancy” column represent the total number of servers needed for a data center containing N functional servers. For example, redundancy of “N+1” means that there is one redundant server present in each data center, while redundancy of “N+0.1N” means that there is one redundant server for every 10 functional servers. For data centers where the number of redundant servers scales with server count (i.e. closets, mid-tier, and high-end enterprise), consolidation of servers reduces the number of redundant servers required.

      I'm not sure how the design for failure approach fits into this - it's an implicitly higher N, as you typically build in redundancy at the application level instead
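
      A rough reconstruction of the 3.75x claim: the 3x utilization is stated, and the remaining 1.25x would follow from an assumed N+0.25N redundancy factor in the non-hyperscale fleet. That factor is my reverse-engineering; the report's redundancy formulas vary by space type:

        utilization_ratio = 3.0   # hyperscale runs ~3x the utilization (stated)
        redundancy_factor = 1.25  # assumed: N + 0.25N in non-hyperscale DCs
        print(utilization_ratio * redundancy_factor)  # 3.75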

    4. The percent decrease in service provider data centers is assumed to be smaller because these data centers tend to have a lower rate of inactive servers due to better management practices that avoid the institutional problems of dispersed responsibility between IT and facility departments which often plagues internal data centers.

      Basically, cloud gets better efficiency because providers have a very good, direct reason to pursue it

    5. Infrastructure savings result from the reduced amount of IT equipment that require cooling and electrical services as well as the decrease in industry-wide average PUE, brought down by the growth in data centers with very low PUE values (i.e., hyperscale data centers).

      Where would we be without late-stage surveillance capitalism industrialising servers?

    6. Historical Data Center Total Electricity Use

      So, in the US at least, and according to this report, it's not as gloomy as it looked before

    7. Total Electricity Consumption by Technology Type

      First graph I've seen showing the breakdown by tech type. Infra here presumably means HVAC and the like?

    8. PUE by Space Type

      Handy table

    9. Consequently, smaller data centers are still being measured with PUE values greater than 2.0 while large hyperscale cloud data centers are beginning to record PUE values of 1.1 or less.
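
      PUE is total facility energy divided by IT equipment energy, so the overhead per useful kWh differs starkly between those two ends:

        def overhead(pue: float) -> float:
            """Non-IT energy (cooling, power losses) per kWh of IT load."""
            return pue - 1.0

        print(overhead(2.0))  # small DC: 1.0 kWh of overhead per IT kWh
        print(overhead(1.1))  # hyperscale: 0.1 kWh - a 10x difference
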
    10. Total Server Installed Base by Data Center Space Category

      Everything stable apart from explosive growth in cloud

    11. Total U.S. Data Center Network Equipment Electricity Consumption

      When I look at this graph, it looks like energy efficiency is outpacing network traffic growth - at least over wired connections

    12. Total U.S. Data Center Storage Electricity Consumption

      As disks get larger and larger, you need fewer of them, and because you have fewer drives to power, the total energy usage falls

    13. The values shown in Table 1 represent the average of active servers, and therefore the inclusion of inactive servers (assumed to be 10% of internal and 5% of service provider and hyperscale data centers) slightly lowers the overall averages.

      15-45% difference assumed based on how industrialised the data center is

    14. Volume Server Installed Base 2000-2020

      This graph shows the projected growth between cloud (non-branded) servers and non-cloud (branded) servers really well

    15. Observation of data showed that for any given year, the number of servers in the installed base was more than the sum of the previous 4 years’ shipments, but less than the previous 5.
    16. Similar to previous U.S. data center energy estimates, this study uses data provided by the market research firm International Data Corporation (IDC) to derive numbers of data center servers, as well as storage and network equipment, installed in the United States. Power draw assumptions are then applied to the estimated installed base of equipment to determine overall IT equipment energy consumption.

      Does IDC publish this data anywhere or is it all private?

    17. Figure ES-1 shows that these five scenarios yield an annual saving in 2020 up to 33 billion kWh, representing a 45% reduction in electricity demand when compared to current efficiency trends.

      That graph shows the cumulative advantages, and you can see the impact cloud (i.e. hyperscale DCs) has

    18. The resulting electricity demand, shown in Figure ES-1, indicates that more than 600 additional billion kWh would have been required across the decade.

      How much electricity use has been avoided thanks to energy-saving measures since 2010

    19. From 2000-2005, server shipments increased by 15% each year resulting in a near doubling of servers operating in data centers. From 2005-2010, the annual shipment increase fell to 5%, partially driven by a conspicuous drop in 2009 shipments (most likely from the economic recession), as well as from the emergence of server virtualization across that 5-year period. The annual growth in server shipments further dropped after 2010 to 3% and that growth rate is now expected to continue through 2020.

      Virtualisation and the move to the cloud mean small-scale, inefficient DCs are less common now?

  11. Dec 2017
    1. The new high-speed LTE networks that accelerate the mobile Internet require up to three times more data per hour per task compared to the previous slower 3G networks, and thus more energy. And compared to 2G networks, LTE energy consumption is 60 times greater to offer the same coverage.

      Holy biscuits. 4G is 3 times as much as 3G per hour, and 3G is in turn roughly 20 times more than 2G for the same area.

      And 5G is even shorter range than 4G, meaning you need many more transmitters 0_o
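
      Checking my own arithmetic from the note. Caveat: the quote's two ratios measure different things (data per task-hour vs energy for the same coverage), so dividing one by the other is loose:

        lte_vs_3g = 3    # LTE: ~3x the data (and energy) per task-hour vs 3G
        lte_vs_2g = 60   # LTE: 60x the energy of 2G for the same coverage
        print(lte_vs_2g / lte_vs_3g)  # ~20x: the implied 3G-vs-2G factor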

    1. You can use the value event to read a static snapshot of the contents at a given path, as they existed at the time of the event. This method is triggered once when the listener is attached and again every time the data, including children, changes. The event callback is passed a snapshot containing all data at that location, including child data.

      So adding a ref too close to the root means the entire snapshot is sent, not just the diff

    1. Projects by IF is a limited company based in London, England. We run this website (projectsbyif.com) and its subdomains. We also use third party services to publish work, keep in touch with people and understand how we can do those things better. Many of those services collect some data about people who are interested in IF, come to our events or work with us. Here you can find out what those services are, how we use them and how we store the information they collect. If you’ve got any questions, or want to know more about data we might have collected about you, email hello@projectsbyif.com This page was published on 25 August 2017. You can see any revisions by visiting the repository on Github.

      As you'd expect, IF's privacy page is fantastic

  12. Nov 2017
    1. A Chinese company has built a 2,000 metric-ton (2,204 tons) all-electric cargo ship, which was launched from the southern Chinese city of Guangzhou in mid-November, according to state-run newspaper People’s Daily.

      How does this compare to the ships of the kind produced by Maersk?

    1. Average land use area needed to produce one unit of protein by food type, measured in metres squared (m²) per gram of protein over a crop's annual cycle or the average animal's lifetime. Average values are based on a meta-analysis of studies across 742 agricultural systems and over 90 unique foods.

      Beef is nearly 6 times the impact of Pork.

      This is worth referring to in the background section to provide context, on why you need more than just changes to the web

    1. We measured the mix of advertising and editorial on the mobile home pages of the top 50 news websites – including ours – and found that more than half of all data came from ads and other content filtered by ad blockers. Not all of the news websites were equal.

      This has some good stats on different news pages

    1. It was also achieved with the support of some of Scotland's biggest industries, including the whiskey industry.

      The whiskey industry? Was there a campaign to get behind renewables?

    1. At some point in the following decade json-head.appspot.com stopped working. Today I’m bringing it back, mainly as an excuse to try out the combination of Python 3.5 async, the Sanic microframework and Zeit’s brilliant Now deployment platform.

      Oh neat, I had no idea Now supported Python. Another option available!

    1. Yes we do have a Wordpress plugin, available here: http://wordpress.org/extend/plugins/cloudinary-image-management-and-manipulation-in-the-cloud-cdn/. While you don't need to install any image software on your server, you will need to register for a (free) Cloudinary account to use the plugin and start uploading images to the cloud.

      If you have existing images, presumably you need to re-upload them

    1. ImageOptim makes images load faster. Removes bloated metadata. Saves disk space & bandwidth by compressing images without losing quality.
    1. Figure 4: Typical diurnal cycle for traffic in the Internet. The scale on the vertical axis is the percentage of total users of the service that are on-line at the time indicated on the horizontal axis. (Source: [21])

      I can't see an easy way to link to this graph itself, but this reference should make it easier to get to this image in future

    2. Our energy calculations show that by 2015, wireless cloud will consume up to 43 TWh, compared to only 9.2 TWh in 2012, an increase of 460%. This is an increase in carbon footprint from 6 megatonnes of CO2 in 2012 to up to 30 megatonnes of CO2 in 2015, the equivalent of adding 4.9 million cars to the roads. Up to 90% of this consumption is attributable to wireless access network technologies, data centres account for only 9%.

      Wow, these numbers. More than 90% in transmission? This makes CDNs and other web performance optimisation techniques much more relevant than I first thought.

    1. “I want to suggest to you that there’s a different type of critical infrastructure, and that’s critical infrastructure that’s in motion, of which aviation is one of the third of that,” Hickey said. The others are surface and maritime transportation, he said.

      New term to me: "critical infra in motion"

    1. There is a new factor; at the core of the global Internet all traffic ultimately moves through high-speed fiber-optic Internet exchange points (IXPs). Engineers have achieved a 10,000-fold improvement in IXP speeds since the 1980s. But the rate of improvement hit a physics wall around 2005. Future traffic growth will require new, different and more hardware.

      We were getting really good at making wired networks more efficient, then physics got in the way

    2. Such highly dispersed networks may increase overall energy use when counting both the in-building network energy, and the energy to manufacture millions of picocells.

      Again, compounded by 5G?

    3. An EU project directed at reducing cellular energy use – because the “networks are increasingly contributing to global energy consumption” – identified technologies that can yield a 70% reduction in energy per byte transported. But, global mobile traffic is forecast to rise 20-fold in five years

      Note - find the EU project mentioning this

    4. Listening just once to a song stored in the Cloud uses less energy than purchasing and shipping a CD, taking into account manufacturing and transport energy. Listening to the song a couple of dozen times leads to more overall energy used, largely because of greater use of the networks. The Cloud uses more energy streaming a high-def movie just once than does fabricating and shipping a DVD.

      That high def movie example here. Streaming uses more than making and shipping a DVD? SRSLY?

    5. Most current estimates likely understate global ICT energy use by as much as 1,000 TWh since up-to-date data are unavoidably “omitted”. At the mid-point of the likely range of energy use, the total ICT ecosystem now consumes about 10% of world electricity supplied for all purposes. For ICT energy use to ‘only’ double over the next decade (as illustrated below), huge gains in efficiency will be needed – at a time when efficiency gains in ICT have slowed. ICT will likely consume triple the energy of all EVs in the world by 2030 (assuming an optimistic 200 million EV goal). Or, in other terms, transporting bits now uses 50% more energy than world aviation, and will likely use twice as much by 2030

      Twice as much as aviation by 2030 here, not 2020?

    6. For a smartphone, the embodied energy ranges from 70 to 90% of the electricity the phone will use over its life, counting recharging its battery. Thus, the energy use of the smartphone itself (i.e., excluding networks and data centers) is totally dominated by manufacturing, not by the efficiency of say the phone’s wall-charger or battery. This is quite unlike other consumer products.

      These seem V different from the Fairphone stats

    7. It takes energy, dominantly electricity, to manufacture ICT hardware. Building one PC uses about the same amount of energy as making a refrigerator, for example. Annualized, the energy to fabricate a PC is three to four times that of a refrigerator because the latter is used three to four times longer

      First example I've seen comparing non-mobile hardware upgrade cycles

    8. Global traffic on mobile networks is expanding at historically unprecedented rates, rising from today’s 20 to over 150 exabytes a year within a half decade. While today’s network energy use ranges from 1.5 to over 15 kWh/GB of traffic, overall network energy efficiency will need to improve nearly 10-fold in five years to keep total system energy use from rising substantially

      A 10-fold range per GB downloaded
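
      The implied arithmetic behind "nearly 10-fold", as I read it:

        traffic_growth = 150 / 20  # 20 -> 150 EB/yr within half a decade
        print(f"{traffic_growth}x traffic")  # 7.5x: energy per GB must fall by
        # roughly the same factor just to hold total energy flat, so "nearly
        # 10-fold" leaves only a little headroom.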

    9. Reduced to personal terms, although charging up a single tablet or smart phone requires a negligible amount of electricity, using either to watch an hour of video weekly consumes annually more electricity in the remote networks than two new refrigerators use in a year

      So watching Discovery each week for a year is the same as two fridges

  13. Sep 2017
    1. Yet Katherine Philips and Margaret Neale found that if two groups are given the exact same pieces of information, the diverse groups share all their information, while the homogenous ones often don’t, because they assume their members already have the same perspective and information.

      so it's about sharing all the info, rather than making lazy assumptions?

    1. In 1944-45 a lethal famine struck the island of Java, where Bandung is located, killing some 2.4 million people.

      Holy caw. 2.4 million?

    1. Throughout the night of the general election results, we marked candidates as having been elected or not elected. We did not add vote counts. Particular thanks are due to Mark Longair of mySociety for a marathon effort here. The data populated mySociety’s theyworkforyou.com/mps, which gradually filled with newly elected MPs throughout the night. This data also enabled Facebook’s ‘You have newly elected representatives’ notification to their users the following morning. Facebook users could then also choose to follow news from their new MP. This kind of feedback loop — you voted, here’s what happened, now here’s how you connect with them — is an exemplar of the use of open democracy data and we hope Facebook will continue this practice for other elections. We will encourage other popular platforms to borrow this approach.

      I wonder if you'd ever see something like this in Germany with the BPB

  14. Aug 2017
    1. People points determine how an employee allocates their time, and it also determines their salary—some skill sets are still more valuable than others within a Holacracy.) Employees who have too many unallocated people points are sent to “The Beach” where they either need to find new roles within the company or are let go. The overwhelming feeling of instability (worrying about people points, or whether they’ll be sent to The Beach) has sparked the fight-or-flight response that Brown spoke about in her keynote.

      Jeez, this sounds like something from Black Mirror

    1. The Media Lab Prado call-for-projects platform helps spread the word about workshops and experiments related to the city and shared spaces – urban agriculture, data visualisations, cultural events, urban economics, etc. The Media Lab Prado digital façade provides real-time information on research, workshops, and on-going experiments to residents of the Letras district are updated on programs, and also enables them to publish their own announcements for events as well as neighbourhood news.

      What is the closest thing to this in Germany or the UK?

    2. Citizen laboratories use digital tools and “hacker ethics” to reclaim and coproduce in Madrid’s vacant spaces. Some twenty laboratorios ciudadanos have emerged over the last few years, including La Tabacalera, Esta es une plaza or Campo de la Cebada. Each specialises in a particular field, such as agriculture and urban economy, social and cultural integration, collaborative art or digital economy.

      Does this assume vacant spaces won't immediately be turned into housing developments?

    3. These platforms serve as a “middle ground”, connecting the “underground” of residents, users, hackers and artists, with the “upper world” of administrations, businesses and engineers.

      underground AND upper world

  15. Jun 2017
    1. I think it's natural for like-minded people to group together but the longer that process continues the more of an echo chamber it becomes. What's worse is the longer you wait to try to get people involved in the project that would naturally not try to join the harder it will be. When your team is 4 men, the first woman which joins will make a significant impact. When your team is already 20 men you need to get a lot more women on board to have the same impact. But it's not just gender that is making a difference, it's in particular cultural backgrounds. The reason Unicode is hard is not because Unicode is hard, but because a lot of projects start out with a lack of urgency since many of the original developers might live in ASCII constrained environments (It took emojis to become popular for people to develop a general understanding of why Unicode is useful in the western world).

      First time I've seen the slowness of emoji presented as a diversity issue. Given how well used they are, it's a good example of how homogeneous teams miss features that seem obvious in retrospect.

  16. May 2017
    1. This may be through extending the life of an existing garment by design interventions over time, or through the development of hyper-recyclable short-life products, enabling efficient recovery of virgin fabrics over multiple lifetimes.

      This sounds like a cool idea, and adds something to the whole fast fashion issue without just waving fingers at people, but do these materials exist at a 'fast fashion' price point?

  17. Apr 2017
    1. In response to the racism faced by Britain's former colonial subjects, the phrase "We are here because you were there" became a striking anti-racist slogan.

      Powerful

  18. Dec 2015
    1. AWS Footprint Because Netflix relies more heavily on AWS regions that are powered primarily by renewable energy (including the carbon-neutral Oregon region), our energy mix is approximately 50% from renewable sources today. We mitigate all of the remaining carbon emissions, which added up to approximately 10,200 tons of CO2e in 2014, by investing in renewable energy credits (RECs) in the geographic areas that host our cloud footprint; last year, the majority went to RECs for wind projects in North America, with the remainder going to Guarantees of Origin (GOs) for hydropower in Europe.

  19. Jul 2015
    1. I wish I could take a browser-based red pen to articles and be able to leave edits visible to others.

      This is the closest thing I've found to what you're asking for. No edits, but comments aren't too bad, right?