30 Matching Annotations
  1. May 2019
    1. Publishers who aren’t media partners with Facebook, Snapchat and Twitter, aren’t highlighted prominently on these platforms, don’t receive a heads up about new products and never have a direct line to support at these companies.

      The relationship between authors, book publishers, and even the Big 5 publishing companies provides a reasonable model for what all of this looks like down the road. All the publishers are generally screwed if they're reliant on one distributor that they don't control.

  2. Apr 2019
    1. We make our branded beverage products available to consumers throughout the world through our network of independent bottling partners, distributors, wholesalers and retailers as well as Company-owned or -controlled bottling and distribution operations — the world's largest beverage distribution system.
  3. Feb 2019
    1. What happened is that Spotify dragged the record labels into a completely new business model that relied on Internet assumptions, instead of fighting them: if duplicating and distributing digital media is free (on a marginal basis), don’t try to make it scarce, but instead make it abundant and charge for the convenience of accessing just about all of it.
  4. Aug 2017
    1. Thus, predicting species responses to novel climates is problematic, because we often lack sufficient observational data to fully determine in which climates a species can or cannot grow (Figure 3). Fortunately, the no-analog problem only affects niche modeling when (1) the envelope of observed climates truncates a fundamental niche and (2) the direction of environmental change causes currently unobserved portions of a species' fundamental niche to open up (Figure 5). Species-level uncertainties accumulate at the community level owing to ecological interactions, so the composition and structure of communities in novel climate regimes will be difficult to predict. Increases in atmospheric CO2 should increase the temperature optimum for photosynthesis and reduce sensitivity to moisture stress (Sage and Coleman 2001), weakening the foundation for applying present empirical plant–climate relationships to predict species' responses to future climates. At worst, we may only be able to predict that many novel communities will emerge and surprises will occur. Mechanistic ecological models, such as dynamic global vegetation models (Cramer et al. 2001), are in principle better suited for predicting responses to novel climates. However, in practice, most such models include only a limited number of plant functional types (and so are not designed for modeling species-level responses), or they are partially parameterized using modern ecological observations (and thus may have limited predictive power in no-analog settings).

      Very nice summary of some of the challenges to using models of contemporary species distributions for forecasting changes in distribution.

    2. In eastern North America, the high pollen abundances of temperate tree taxa (Fraxinus, Ostrya/Carpinus, Ulmus) in these highly seasonal climates may be explained by their position at the edge of the current North American climate envelope (Williams et al. 2006; Figure 3). This pattern suggests that the fundamental niches for these taxa extend beyond the set of climates observed at present (Figure 3), so that these taxa may be able to sustain more seasonal regimes than exist anywhere today (eg Figure 1), as long as winter temperatures do not fall below the −40°C mean daily freezing limit for temperate trees (Sakai and Weiser 1973).

      Recognizing where species are relative to the observed climate range will be important for understanding their potential response to changes in climate. This information should be included when using distribution models to predict changes in species distributions. Ideally this information could be used in making point estimates, but at a minimum understanding its impact on uncertainty would be a step forward.

  5. Jun 2017
  6. Mar 2017
    1. “One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.
  7. Jan 2017
    1. To simulate equilibrium sagebrush cover under projected future climate, we applied average projected changes in precipitation and temperature to the observed climate time series. For each GCM and RCP scenario combination, we calculated average precipitation and temperature over the 1950–2000 time period and the 2050–2098 time period. We then calculated the absolute change in temperature between the two time periods (ΔT) and the proportional change in precipitation between the two time periods (ΔP) for each GCM and RCP scenario combination. Lastly, we applied ΔT and ΔP to the observed 28-year climate time series to generate a future climate time series for each GCM and RCP scenario combination. These generated climate time series were used to simulate equilibrium sagebrush cover.

      This is an interesting approach to forecasting future climate values with variation.

      1. Use GCMs to predict long-term change in climate condition
      2. Add this change to the observed time-series
      3. Simulate off of this adjusted time-series

      Given that short-term variability may be important, that it is not the focus of long-term GCM models, and that the goal here is modeling equilibrium (not transitional) dynamics, this seems like a nice compromise approach for capturing both long-term and short-term variation in climate (see the sketch below).
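      A minimal sketch of this delta-change procedure, assuming the observed series and GCM output are available as NumPy arrays (the function and variable names are my own, not the authors'):

      ```python
      import numpy as np

      def future_climate_series(obs_temp, obs_precip,
                                gcm_temp_hist, gcm_temp_future,
                                gcm_precip_hist, gcm_precip_future):
          """Apply GCM-derived deltas to an observed climate time series."""
          # Absolute change in temperature between the two averaging periods
          delta_t = gcm_temp_future.mean() - gcm_temp_hist.mean()
          # Proportional change in precipitation between the two averaging periods
          delta_p = gcm_precip_future.mean() / gcm_precip_hist.mean()
          # Shift/scale the observed 28-year series to generate a future series
          return obs_temp + delta_t, obs_precip * delta_p
      ```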

    2. Our process model (in Eq. (2)) includes a log transformation of the observations (log(y_{t−1})). Thus, our model does not accommodate zeros. Fortunately, we had very few instances where pixels had 0% cover at time t − 1 (n = 47, which is 0.01% of the data set). Thus, we excluded those pixels from the model fitting process. However, when simulating the process, we needed to include possible transitions from zero to nonzero percent cover. We fit an intercept-only logistic model to estimate the probability of a pixel going from zero to nonzero cover: y_i ∼ Bernoulli(μ_i) (8); logit(μ_i) = b_0 (9), where y is a vector of 0s and 1s corresponding to whether a pixel was colonized (>0% cover) or not (remains at 0% cover) and μ_i is the expected probability of colonization as a function of the mean probability of colonization (b_0). We fit this simple model using the “glm” command in R (R Core Team 2014). For data sets in which zeros are more common and the colonization process more important, the same spatial statistical approach we used for our cover change model could be applied and covariates such as cover of neighboring cells could be included.

      This seems like a perfectly reasonable approach in this context. As models like this are scaled up to larger spatial extents, the proportion of locations with zero abundance will increase, so generalizing this method will require a different way of handling zeros.
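      For intuition, here is a minimal sketch of the intercept-only colonization model described in the quote, assuming a binary vector of colonization outcomes (names are my own; the authors fit this with R's glm):

      ```python
      import numpy as np

      def fit_intercept_only_logistic(colonized):
          """colonized: NumPy 0/1 array, 1 if a zero-cover pixel became >0% cover."""
          p_hat = colonized.mean()            # MLE of the colonization probability
          b0 = np.log(p_hat / (1 - p_hat))    # the equivalent intercept on the logit scale
          return b0, p_hat

      # When simulating, a zero-cover pixel is then colonized with probability p_hat,
      # e.g. np.random.binomial(1, p_hat).
      ```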

    3. Our approach models interannual changes in plant cover as a function of seasonal climate variables. We used daily historic weather data for the center of our study site from the NASA Daymet data set (available online: http://daymet.ornl.gov/). The Daymet weather data are interpolated between coarse observation units and capture some spatial variation. We relied on weather data for the centroid of our study area.

      This seems to imply that only a single environmental time series was used across all of the spatial locations. This is reasonable given the spatial extent of the data, but location-specific environmental time series will be necessary to generalize this approach to larger spatial extents.

    4. Because SDMs typically rely on occurrence data, their projections of habitat suitability or probability of occurrence provide little information on the future states of populations in the core of their range—areas where a species exists now and is expected to persist in the future (Ehrlén and Morris 2015).

      The fact that most species distribution models treat locations within a species' range as being of equivalent quality for the species, regardless of whether there are 2 or 2000 individuals present, is a core weakness of the occupancy-based approach to modeling these problems. Approaches, like those in this paper, that attempt to address this weakness are really valuable.

  8. Dec 2016
    1. An open source infrastructure for a centralized network now provides almost the same level of control as federated protocols, without giving up the ability to adapt. If a centralized provider with an open source infrastructure ever makes horrible changes, those that disagree have the software they need to run their own alternative instead. It may not be as beautiful as federation, but at this point it seems that it will have to do.

      I'm not sure this comparison really works: what if I really do take the Signal software, because I have reasons to do so? How can I stay upstream-compatible, if "upstream" means the master branch of Open Whisper Systems? There will soon be two communities, at least for me: the one I left behind and the one that came with me to the new installation. But how can they communicate with each other? With the installation of the second instance, the "Signal" communication has in effect become distributed, except that the two instances cannot talk to each other. As moxie0 says above: "When someone recently asked me about federating an unrelated communication platform into the Signal network, I told them that I thought we'd be unlikely to ever federate with clients and servers we don't control."

    2. This reduced user friction has begun to extend the implicit threat that used to come with federated services into centralized services as well. Where as before you could switch hosts, or even decide to run your own server, now users are simply switching entire networks. In many cases that cost is now much lower than the federated switching cost of changing your email address to use a different email provider.

      There it is again: convenience as the main driver of the ecosystem's development.

      The "cost" mentioned here is the freedom of not having to send my personal social graph to a server that might belong to someone else tomorrow.

      The two things being compared don't match: switching networks on the basis of a phone number is comparable to switching between similar services using the equivalent of an email address, whereas changing your email provider is comparable to changing your phone company without being able to take your phone number with you.

    3. If anything, protecting meta-data is going to require innovation in new protocols and software. Those changes are only likely to be possible in centralized environments with more control, rather than less. Just as making the changes to consistently deploy end to end encryption in federated protocols like email has proved difficult, we're more likely to see the emergence of enhanced metadata protection in centralized environments with greater control.

      This is only true under the premise of a quickly moving ecosystem and need not be the case in general. A quickly moving ecosystem can be found in the field of social media, for example. But, on the other hand, a "moving ecosystem" can also be seen in the global surveillance structures that put pressure on developers to react quickly; these can also be seen as the competition here.

      What if the protocol is federated, but the development of the app that implements that protocol is centralized?

    4. It creates a climate of uncertainty, never knowing whether things will work or not.

      This is not a technological problem but, in the first place, a social one. It demands new solutions to reduce uncertainty. The automatic update mechanism of Let's Encrypt might be a hint at what we should look at.

    5. If XMPP is so extensible, why haven't those extensions quickly brought it up to speed with the modern world?

      Is extensibility the only paradigm for updating protocols alongside the moving ecosystem? In other open source tools like WordPress, the update mechanisms are more convenient (although updates happen too often with WP).

    6. A recorded album can be just the same 20 years later, but software has to change.

      For a physical record or tape that's correct. But if you look at cover versions of songs from the past, it is obvious that there is a desire to reinterpret them, to hear the musical idea of a song in the contemporary cultural context. Thus, "cover protocols" would be the reinterpretation and reimplementation of a protocol idea. No one would say a cover song is an update of the original song. It is a concurrent version, a concurrent implementation of a musical idea, and can be understood whether or not one knows the original. Certainly, music is not software, but if code is the foundation of software, notes are the foundation of music.

  9. Nov 2016
    1. My thoughts on Climatic Associations of British Species Distributions Show Good Transferability in Time but Low Predictive Accuracy for Range Change by Rapacciuolo et al. (2012).

    2. Whilst the consensus method we used provided the best predictions under AUC assessment – seemingly confirming its potential for reducing model-based uncertainty in SDM predictions [58], [59] – its accuracy to predict changes in occupancy was lower than most single models. As a result, we advocate great care when selecting the ensemble of models from which to derive consensus predictions; as previously discussed by Araújo et al. [21], models should be chosen based on aspects of their individual performance pertinent to the research question being addressed, and not on the assumption that more models are better.

      It's interesting that the ensembles perform best overall but more poorly for predicting changes in occupancy. It seems possible that ensembling multiple methods is basically resulting in a more static prediction, i.e., something closer to a naive baseline.

    3. Finally, by assuming the non-detection of a species to indicate absence from a given grid cell, we introduced an extra level of error into our models. This error depends on the probability of false absence given imperfect detection (i.e., the probability that a species was present but remained undetected in a given grid cell [73]): the higher this probability, the higher the risk of incorrectly quantifying species-climate relationships [73].

      This will be an ongoing challenge for species distribution modeling, because most of the data appropriate for these purposes is not collected in such a way as to allow the straightforward application of standard detection probability/occupancy models. This could potentially be addressed by developing models for detection probability based on species and habitat type. These models could be built on smaller/different datasets that include the required data for estimating detectability.
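      For intuition, a minimal sketch of how the false-absence probability behaves under a constant per-visit detection probability (a standard occupancy-modeling identity; the detection probability and visit count below are illustrative assumptions, not values from the paper):

      ```python
      def prob_false_absence(detect_prob, n_visits):
          """Probability that a species which is present goes undetected in every visit."""
          return (1 - detect_prob) ** n_visits

      # e.g. a species detected 30% of the time per visit, surveyed 3 times:
      # prob_false_absence(0.3, 3) == 0.343, so ~34% of occupied cells would look empty.
      ```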

    4. an average 87% of grid squares maintaining the same occupancy status; similarly, all climatic variables were also highly correlated between time periods (ρ>0.85, p<0.001 for all variables). As a result, models providing a good fit to early distribution records can be expected to return a reasonable fit to more recent records (and vice versa), regardless of whether relevant predictors of range shift have actually been captured. Previous studies have warned against taking strong model performance on calibration data to indicate high predictive accuracy to a different time period [20], [24]–[26]; our results indicate that strong model performance in a different time period, as measured by widespread metrics, may not indicate high predictive accuracy either.

      This highlights the importance of comparing forecasts to baseline predictions to determine the skill of the forecast vs. the basic stability of the pattern.

    5. Most variation in the prediction accuracy of SDMs – as measured by AUC, sensitivity, CCRstable, CCRchanged – was among species within a higher taxon, whilst the choice of modelling framework was as important a factor in explaining variation in specificity (Table 4 and Table S4). The effect of major taxonomic group on the accuracy of forecasts was relatively small.

      This suggests that it will be difficult to know whether a forecast for a particular species will be good or not, unless a model is developed that can predict which species are likely to be forecast well.

    6. The correct classification rate of grid squares that remained occupied or remained unoccupied (CCRstable) was fairly high (mean±s.d.  = 0.75±0.15), and did not covary with species’ observed proportional change in range size (Figure 3B). In contrast, the CCR of grid squares whose occupancy status changed between time periods (CCRchanged) was very low overall (0.51±0.14; guessing randomly would be expected to produce a mean of 0.5), with range expansions being slightly better predicted than range contractions (0.55±0.15 and 0.48±0.12, respectively; Figure 3C).

      This is a really important result and my favorite figure in this ms. For cells that changed occupancy status (e.g., a cell that was occupied at t_1 and unoccupied at t_2), most models had about a 50% chance of getting the change right (i.e., a coin flip).
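      A minimal sketch of how these two correct classification rates can be computed, assuming 0/1 occupancy arrays for the two time periods (my own reconstruction of the metric definitions, not the authors' code):

      ```python
      import numpy as np

      def ccr_stable_changed(occ_t1, occ_t2, pred_t2):
          """CCR for squares whose occupancy stayed the same vs. squares that changed."""
          stable = occ_t1 == occ_t2    # squares that remained occupied or remained unoccupied
          changed = ~stable            # squares that were colonized or vacated
          ccr_stable = np.mean(pred_t2[stable] == occ_t2[stable])
          ccr_changed = np.mean(pred_t2[changed] == occ_t2[changed])
          return ccr_stable, ccr_changed   # ~0.5 for changed squares means coin-flip accuracy
      ```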

    7. The consensus method Mn(PA) produced the highest validation AUC values (Figure 1), generating good to excellent forecasts (AUC ≥0.80) for 60% of the 1823 species modelled.

      Simple unweighted ensembles performed best in this comparison of forecasts from SDMs for 1823 species.

    8. Quantifying the temporal transferability of SDMs by comparing the agreement between model predictions and observations for the predicted period using common metrics is not a sufficient test of whether models have actually captured relevant predictors of change. A single range-wide measure of prediction accuracy conflates accurately predicting species expansions and contractions to new areas with accurately predicting large parts of the distribution that have remained unchanged in time. Thus, to assess how well SDMs capture drivers of change in species distributions, we measured the agreement between observations and model predictions of each species’ (a) geographic range size in period t2, (b) overall change in geographic range size between time periods, and (c) grid square-level changes in occupancy status between time periods.

      This is arguably the single most important point in this paper. It is equivalent to comparing forecasts to simple baseline forecasts, as is typically done in weather forecasting. In weather forecasting it is typical to talk about the "skill" of the forecast, which is how much better it does than a simple baseline. In this case the baseline is a species range that doesn't move at all. This would be equivalent to a "naive" forecast in traditional time-series analysis, since we only have a single previous point in time and the baseline is simply the prediction that this value does not change.
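      A minimal sketch of this kind of skill calculation for occupancy forecasts, using the no-change (naive) baseline described above (an illustration of the general idea, not the paper's method):

      ```python
      import numpy as np

      def skill_vs_no_change(obs_t1, obs_t2, pred_t2):
          """Skill of a 0/1 occupancy forecast relative to a 'range doesn't move' baseline."""
          model_error = np.mean(pred_t2 != obs_t2)      # misclassification rate of the forecast
          baseline_error = np.mean(obs_t1 != obs_t2)    # error of assuming no change since t1
          # Positive skill means the model beats the naive baseline
          # (assumes at least some squares changed, i.e. baseline_error > 0)
          return 1 - model_error / baseline_error
      ```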

    9. Although it is common knowledge that some of the modelling techniques we used (e.g., CTA, SRE) generally perform less well than others [32], [33], we believe that their transferability in time is not as well-established; therefore, we decided to include them in our analysis to test the hypothesis that simpler statistical models may have higher transferability in time than more complex ones.

      The point that providing better or worse fits on spatially held-out data is not the same as providing better forecasts is important, especially given the argument that simpler models may have higher transferability in time.

    10. We also considered including additional environmental predictors of ecological relevance to our models. First, although changes in land use have been identified as fundamental drivers of change for many British species [48]–[52], we were unable to account for them in our models – like most other published accounts of temporal transferability of SDMs [20], [21], [24], [25] – due to the lack of data documenting habitat use in the earlier t1 period; detailed digitised maps of land use for the whole of Britain are not available until the UK Land Cover Map in 1990 [53].

      The lack of dynamic land cover data is a challenge for most SDMs, and certainly for SDM validation using historical data. It would be interesting to know, in general, how much better modern SDMs perform on held-out data when land cover is included.

    11. Great Britain is an island with its own separate history of environmental change; environmental drivers of distribution size and change in British populations are thus likely to differ somewhat from those of continental populations of the same species. For this reason, we only used records at the British extent to predict distribution change across Great Britain.

      This restriction to Great Britain for model building is a meaningful limitation, since Great Britain will typically represent a small fraction of the total range for many of the species involved. However, this is a common issue for SDMs, so I think it's a perfectly reasonable choice given the data availability. It would be nice to see this analysis repeated using alternative data sources that cover spatial extents closer to that of the species' ranges. This would help determine how well these results generalize to models built at larger scales.

  10. Oct 2015
    1. advocate for the preservation, archiving, and free c

      This is a really important question about the relative invisibility of e-lit. Distribution tends to rely on models that mimic academia and fandom (funny linking those two, I suppose).

  11. Jan 2014
    1. Distribution of departments with respect to responsibility spheres. Ignoring the "Myself" choice, consider clustering the parties potentially responsible for curation mentioned in the survey into three "responsibility spheres": "local" (comprising lab manager, lab research staff, and department); "campus" (comprising campus library and campus IT); and "external" (comprising external data repository, external research partner, funding agency, and the UC Curation Center). Departments can then be positioned on a tri-plot of these responsibility spheres, according to the average of their respondents' answers. For example, all responses from FeministStds (Feminist Studies) were in the campus sphere, and thus it is positioned directly at that vertex. If a vertex represents a 100% share of responsibility, then the dashed line opposite a vertex represents a reduction of that share to 20%. For example, only 20% of ECE's (Electrical and Computer Engineering's) responses were in the campus sphere, while the remaining 80% of responses were evenly split between the local and external spheres, and thus it is positioned at the 20% line opposite the campus sphere and midway between the local and external spheres. Such a plot reveals that departments exhibit different characteristics with respect to curatorial responsibility, and look to different types of curation solutions.

      This section contains an interesting diagram showing the distribution of departments with respect to responsibility spheres:

      http://www.alexandria.ucsb.edu/~gjanee/dc@ucsb/survey/plots/q2.5.png
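      As a rough illustration of how such a tri-plot position could be computed from the three responsibility-sphere shares (the vertex layout and names below are my assumptions, not taken from the survey):

      ```python
      import numpy as np

      # Triangle vertices for the three responsibility spheres (layout is arbitrary)
      VERTICES = {
          "local":    np.array([0.0, 0.0]),
          "campus":   np.array([1.0, 0.0]),
          "external": np.array([0.5, np.sqrt(3) / 2]),
      }

      def triplot_position(local_share, campus_share, external_share):
          """Barycentric placement of a department from its three shares."""
          shares = np.array([local_share, campus_share, external_share], dtype=float)
          shares /= shares.sum()    # normalize to a 100% split
          return (shares[0] * VERTICES["local"]
                  + shares[1] * VERTICES["campus"]
                  + shares[2] * VERTICES["external"])

      # Example from the passage: ECE with 20% campus and the remaining 80%
      # split evenly between local and external.
      print(triplot_position(0.4, 0.2, 0.4))
      ```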