9 Matching Annotations
  1. Sep 2020
    1. Had it not been for the attentiveness of one person who went beyond the task of classifying galaxies into predetermined categories and was able to communicate this to the researchers via the online forum, what turned out to be important new phenomena might have gone undiscovered.

      Sometimes our attempts to improve data quality in citizen science projects can actually work against us. Pre-determined categories and strict regulations could prevent the reporting of important outliers.

    1. However, very little has been published in the academic literature about the factors that influence people to take part in citizen science projects and why participants continue their involvement, or not.

      What do we know so far, and where are the clear gaps that further research could address to improve our understanding of participant motivation?

    1. A training session was not carried out by the authors for the Asian Longhorned Beetle Swimming Pool Survey, a project specific to New York State.

      Why was there not a training session for this specific project?

  2. Jun 2020
  3. Feb 2020
    1. "We are at a time where some people doubt the validity of science," he says. "And if people feel that they are part of this great adventure that is science, I think they're more inclined to trust it. And that's really great."

      These citizen scientists in Finland helped identify a new type of "northern light". Two people managed to photograph the same display at the same second from points 60 miles apart, allowing the display's depth to be resolved.

  4. Dec 2019
    1. Four databases of citizen science and crowdsourcing projects — SciStarter, the Citizen Science Association (CSA), CitSci.org, and the Woodrow Wilson International Center for Scholars (the Wilson Center Commons Lab) — are working on a common project metadata schema to support data sharing with the goal of maintaining accurate and up-to-date information about citizen science projects. The federal government is joining this conversation with a cross-agency effort to promote citizen science and crowdsourcing as a tool to advance agency missions. Specifically, the White House Office of Science and Technology Policy (OSTP), in collaboration with the U.S. Federal Community of Practice for Citizen Science and Crowdsourcing (FCPCCS), is compiling an Open Innovation Toolkit containing resources for federal employees hoping to implement citizen science and crowdsourcing projects. Navigation through this toolkit will be facilitated in part through a system of metadata tags. In addition, the Open Innovation Toolkit will link to the Wilson Center's database of federal citizen science and crowdsourcing projects. These groups became aware of their complementary efforts and the shared challenge of developing project metadata tags, which gave rise to the need for a workshop.

      Sense Collective's Climate Tagger API and Pool Party Semantic Web plug-in are well suited to support the Wilson Center's metadata schema project. Creating a common metadata schema used across multiple organizations working in the same domain, with similar (and overlapping) data and data types, is an essential step toward realizing collective intelligence. Significant redundancy consumes limited resources when organizations each perform the same kind of data structuring, and interoperability gaps between organizations' metadata semantics and serialization methods prevent cumulative progress as a community. Sense Collective's MetaGrant program is working to provide shared infrastructure for NGOs, social impact investment funds, and social impact bond programs, complementing this excellent Wilson Center project. The next step is to extend coordinated metadata semantics to a thousand more organizations and to incentivize the citizen science volunteers who make this possible by connecting them more closely to the local benefits their efforts produce. With integration into Social Impact Bond programs and public/private partnerships, we can incentivize collective action at the scope and scale of the problems we face.
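
A minimal sketch of what checking a project record against a shared metadata schema might look like. The field names and the example record below are illustrative assumptions, not the actual schema the workshop produced.

```python
# Hypothetical required fields for a shared citizen-science project schema.
# These names are assumptions for illustration, not the real schema.
REQUIRED_FIELDS = {"title", "description", "topics", "url", "status"}

def validate_record(record):
    """Return the set of required fields missing from a project record."""
    return REQUIRED_FIELDS - record.keys()

# Example record (hypothetical values, including the URL).
record = {
    "title": "Asian Longhorned Beetle Swimming Pool Survey",
    "description": "Volunteers check pool filters for invasive beetles.",
    "topics": ["ecology", "invasive species"],
    "url": "https://example.org/alb-survey",
}

missing = validate_record(record)  # the record lacks a "status" field
```

A shared validator like this is the practical payoff of a common schema: each database can check incoming records the same way instead of maintaining its own ad hoc rules.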

  5. May 2018
    1. Negative values included when assessing air quality: In computing average pollutant concentrations, EPA includes recorded values that are below zero. EPA advised that this is consistent with NEPM AAQ procedures. Logically, however, the lowest possible value for air pollutant concentrations is zero. Either it is present, even if in very small amounts, or it is not. Negative values are an artefact of the measurement and recording process. Leaving negative values in the data introduces a negative bias, which potentially under-represents actual concentrations of pollutants. We noted a considerable number of negative values recorded. For example, in 2016, negative values comprised 5.3 per cent of recorded hourly PM2.5 values, and 1.3 per cent of hourly PM10 values. When we excluded negative values from the calculation of one-day averages, there were five more exceedance days for PM2.5 and one more for PM10 during 2016.
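
The bias the audit describes is easy to see numerically. Below is a minimal sketch with hypothetical hourly PM2.5 readings (not EPA data) showing how including negative artefacts drags the one-day average down, which near a threshold can change whether the day counts as an exceedance. The 25 µg/m³ threshold is used here purely for illustration.

```python
# Hypothetical hourly PM2.5 readings (µg/m³). Negative values are
# measurement artefacts: true concentrations cannot be below zero.
hourly_pm25 = [30.0, 28.5, -1.2, 27.0, 26.4, -0.8, 29.1, 25.3,
               24.9, 26.7, -2.0, 28.8, 27.5, 26.1, 25.0, 24.2,
               23.8, 26.6, 27.9, 28.3, -1.5, 25.7, 26.2, 27.4]

EXCEEDANCE_THRESHOLD = 25.0  # illustrative daily PM2.5 standard

def daily_average(values, drop_negatives):
    """One-day average, optionally excluding negative artefacts."""
    if drop_negatives:
        values = [v for v in values if v >= 0]
    return sum(values) / len(values)

avg_with_neg = daily_average(hourly_pm25, drop_negatives=False)
avg_without_neg = daily_average(hourly_pm25, drop_negatives=True)

# Dropping the negatives removes the downward bias, so the average rises.
# Here the day only exceeds the threshold once artefacts are excluded.
```

With this toy data the average including negatives sits below the threshold while the cleaned average sits above it, mirroring the audit's finding of extra exceedance days once negative values were excluded.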
  6. Sep 2017
    1. Through Open Humans, you can gather valuable data about yourself and find cool projects to share it with.
  7. Aug 2017
    1. But Dunkin emphasized having data isn’t enough: EPA’s working to make it more accessible. To help crowdsource data from citizen scientists and regulators at the state, local and tribal level, it’s critical to create data standards, she said.

      How can EPA's data standards be made interoperable with RDF-based efforts such as researchobject.org?