12 Matching Annotations
  1. Sep 2018
  2. May 2018
    1. Negative values included when assessing air quality: In computing average pollutant concentrations, EPA includes recorded values that are below zero. EPA advised that this is consistent with NEPM AAQ procedures. Logically, however, the lowest possible value for air pollutant concentrations is zero: either a pollutant is present, even if in very small amounts, or it is not. Negative values are an artefact of the measurement and recording process. Leaving negative values in the data introduces a negative bias, which potentially under-represents actual concentrations of pollutants. We noted a considerable number of negative values recorded. For example, in 2016, negative values comprised 5.3 per cent of recorded hourly PM2.5 values, and 1.3 per cent of hourly PM10 values. When we excluded negative values from the calculation of one-day averages, there were five more exceedance days for PM2.5 and one more for PM10 during 2016.
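      As a rough illustration of the effect described above, here is a minimal sketch in Python: it compares a day's average computed with and without the negative hourly readings, against an assumed 24-hour threshold of 25 µg/m³ (roughly the NEPM AAQ PM2.5 standard). The readings themselves are made up for illustration.

      ```python
      # Sketch: including vs. excluding negative hourly readings when
      # computing a one-day average pollutant concentration.
      # The threshold (25 µg/m³) is an assumed 24-hour PM2.5 standard.

      def daily_average(hourly, drop_negatives=False):
          """Average a day's hourly readings, optionally excluding negatives."""
          values = [v for v in hourly if v >= 0] if drop_negatives else hourly
          return sum(values) / len(values)

      # Hypothetical day of hourly PM2.5 readings with sub-zero artefacts.
      day = [24.8, 25.3, -1.2, 26.0, -0.8, 25.1] * 4  # 24 hourly values
      THRESHOLD = 25.0  # µg/m³

      incl = daily_average(day)                        # ~16.5, no exceedance
      excl = daily_average(day, drop_negatives=True)   # ~25.3, exceedance
      print(f"incl. negatives: {incl:.1f}, excl. negatives: {excl:.1f}")
      print("exceedance day?", incl > THRESHOLD, "->", excl > THRESHOLD)
      ```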
  3. Jun 2017
    1. in sync replicas (ISRs) should be exactly equal to the total number of replicas.

      ISRs are a very important metric
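      A minimal sketch of this check, assuming confluent-kafka-python and a broker reachable at localhost:9092: it flags any partition whose in-sync replica set is smaller than its full replica set.

      ```python
      # Sketch: flag under-replicated partitions by comparing each partition's
      # in-sync replica (ISR) set against its full replica set.
      # Assumes confluent-kafka-python and a broker at localhost:9092.
      from confluent_kafka.admin import AdminClient

      admin = AdminClient({"bootstrap.servers": "localhost:9092"})
      metadata = admin.list_topics(timeout=10)

      for topic in metadata.topics.values():
          for partition in topic.partitions.values():
              if len(partition.isrs) < len(partition.replicas):
                  print(f"{topic.topic}[{partition.id}]: "
                        f"ISR {partition.isrs} != replicas {partition.replicas}")
      ```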

    2. Kafka metrics can be broken down into three categories: Kafka server (broker) metrics, producer metrics, and consumer metrics.

      3 Metrics:

      • Broker
      • Producer (Netty)
      • Consumer (SECOR)
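      As a sketch of the client-side categories (assuming confluent-kafka-python; broker metrics are normally scraped from JMX on the servers instead), librdkafka can emit consumer metrics through its statistics callback:

      ```python
      # Sketch: collect consumer-side Kafka metrics via librdkafka's
      # statistics callback. Broker-side metrics would typically come
      # from JMX on the Kafka servers instead.
      import json
      from confluent_kafka import Consumer

      def on_stats(stats_json):
          stats = json.loads(stats_json)
          # Top-level counters; per-topic/partition detail is also in the JSON.
          print("messages consumed:", stats.get("rxmsgs"),
                "bytes consumed:", stats.get("rxmsg_bytes"))

      consumer = Consumer({
          "bootstrap.servers": "localhost:9092",
          "group.id": "metrics-demo",
          "statistics.interval.ms": 5000,  # emit stats every 5 s
          "stats_cb": on_stats,
      })
      consumer.subscribe(["my-topic"])
      for _ in range(30):
          consumer.poll(1.0)  # stats_cb fires from poll()
      consumer.close()
      ```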
  4. May 2017
  5. Feb 2017
    1. AZGFD developed a draft Sonora chub monitoring plan and the CNF proposed a linear habitat sampling protocol for Sycamore Canyon in 1993. Neither protocol has been finalized as of 2012.

      Wow. 19 years and the agencies can't finalize their monitoring protocol. Impressive.

  6. Dec 2016
  7. Apr 2016
    1. closer to what my own parents experienced than you might guess.

      Here it comes, the plug for algorithmic love or at least the comparison to arranged marriage. Are the services akin to parent arrangements?

  8. May 2015
  9. Feb 2014
    1. API Services: During my monitoring of the API space, I came across a new API monitoring service called AutoDevBot, which monitors all your API endpoints and notifies you when something goes wrong. That is a pretty standard feature in the new wave of API integration tools and services I’m seeing emerge, but what is interesting is that they use GitHub as a central place to store the settings for the API monitoring service. AutoDevBot has you clone their settings template, make the changes you need to monitor your APIs, then register and fire up AutoDevBot to monitor. It seems like a pretty simple way for API service providers to engage with API providers, allowing them to manage all the configuration for API services alongside their own internal API operations.
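      The general pattern such a service automates looks roughly like this sketch; the endpoint list and webhook URL are hypothetical placeholders, not AutoDevBot's actual GitHub settings format.

      ```python
      # Sketch: poll a list of API endpoints and send a notification when one
      # stops responding as expected. The endpoint list and webhook URL are
      # hypothetical placeholders, not AutoDevBot's settings format.
      import requests

      ENDPOINTS = [
          "https://api.example.com/v1/status",
          "https://api.example.com/v1/users",
      ]
      WEBHOOK = "https://hooks.example.com/notify"  # hypothetical alert target

      def is_healthy(url, timeout=5):
          try:
              return requests.get(url, timeout=timeout).status_code == 200
          except requests.RequestException:
              return False

      for url in ENDPOINTS:
          if not is_healthy(url):
              requests.post(WEBHOOK, json={"text": f"API check failed: {url}"})
      ```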