67 Matching Annotations
  1. Apr 2020
    1. is that classic NHST does not apply to exploratory studies (Flueck and Brown 1993)

      This seems to be what I come across in a lot of the research I read. A lot of aerosol studies are 1 project, 1 year, 1 month, etc. It's all very exploratory, and I don't usually see a statistical analysis on much of the data.

    2. Gardner and Altman (1989) describe methods for calculating confidence intervals for correlations, regressions, differences of means, and a variety of other statistical measures.

      Take a look at this - it might be a better way for me to look at my data?
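
      As a concrete illustration of the kind of interval Gardner and Altman describe, here is a minimal Python sketch of a 95 % confidence interval for a correlation coefficient using the Fisher z-transform. The x and y arrays are made-up placeholders, not data from any of these papers.

        import numpy as np
        from scipy import stats

        # Placeholder data standing in for two measured quantities.
        x = np.array([0.91, 0.95, 0.88, 0.97, 0.93, 0.90, 0.96, 0.94])
        y = np.array([0.12, 0.08, 0.15, 0.05, 0.10, 0.13, 0.06, 0.09])

        r, p = stats.pearsonr(x, y)      # sample correlation and its p value
        n = len(x)

        z = np.arctanh(r)                # Fisher z-transform of r
        se = 1.0 / np.sqrt(n - 3)        # standard error of z
        z95 = stats.norm.ppf(0.975)      # ~1.96 for a 95 % interval

        lo, hi = np.tanh([z - z95 * se, z + z95 * se])
        print(f"r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], p = {p:.3f}")

      The interval carries the same information as the p value but also shows how precisely the correlation is estimated, which is the point the quoted passage is making.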

    3. H0 being "rejected" if p < 0.05. As Rosnow and Rosenthal (1989) noted however, "...surely, God loves the .06 nearly as much as the .05." In other words, why should a sample correlation strong enough to return p < 0.06, but not p < 0.05, be denoted as not statistically significant?

      An interesting thought. I like that they're questioning who decided on this .05 threshold.

    1. Comparing the papers in the two examined issues, it appears that papers with a more dynamical focus generally do not stray as much into significance testing as papers with a more geographical, diagnostic focus.

      Interesting

  2. Feb 2020
    1. Students in many fields, for example astronomy, oceanography, and genomics, are often already proficient in analyzing large, complex data sets. That’s an attractive skill to many employers.

      50% agree, 50% disagree. While scientific papers are a niche, I still think there is great value in the process of learning how to write and present results. That is a skill we cannot lose as scientists. And I don't think this article is strictly suggesting we step away 100% from journal articles, but I do think journals may reach a broader audience. As a student, we read a lot of papers, and those papers contribute to our learning; sometimes we just want a quick look at the results, or the methods, or simply the introduction. Personally, I don't want to run code or a software program to see results if I'm just looking to write a sentence or two for an introduction or an abstract. Again, I think maybe the shift should be to have this code/software available to complement the paper, not the other way around.

    2. That’s why, rather than maximizing their publication output, scientists should spend more time and effort on software development. This could mean contributing to existing open-source projects on which their research depends or developing new software related to their research. In both activities, it’s crucial to recognize that "software," as opposed to just "code," includes many additional components, such as documentation, examples, and tests, which provide rich context for the code itself.

      I actually really like this idea. I think this would also shift the focus from "competition" to collaboration.

    1. Additionally, within the geosciences sample sizes are typically limited, which means that the available samples might not capture the full range of possible outcomes and thereby might also not be representative of the true underlying physics driving the relationship between the inputs and outputs. In this scenario, the network may therefore fail to model the relationship correctly from a physical perspective, even if it accurately captures a relationship between the inputs and outputs given the provided training data. Thus, the ability to interpret neural networks is important for ensuring that the reasoning for a network's outputs are consistent with our physical understanding of the earth system.

      Still curious as to how much data is needed to train. I think this raises a good point: with a human - like a graduate student in training - you can interact and understand where the failure occurred, but that seems much harder to do with a neural network.
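
      A toy numerical sketch of the limited-sample problem described in the quote: a flexible model fit on a narrow slice of the input space matches its training data but fails badly outside that range. The data are synthetic and purely illustrative; no atmospheric content is implied.

        import numpy as np

        rng = np.random.default_rng(0)
        x_train = rng.uniform(0.0, 1.0, 30)                 # samples cover only part of the input space
        y_train = np.sin(2 * np.pi * x_train) + 0.05 * rng.normal(size=30)

        coeffs = np.polyfit(x_train, y_train, deg=9)        # over-flexible fit to the limited sample
        x_test = np.linspace(0.0, 2.0, 200)                 # includes the unseen range (x > 1)
        y_true = np.sin(2 * np.pi * x_test)
        y_pred = np.polyval(coeffs, x_test)

        inside = np.abs(y_pred[x_test <= 1.0] - y_true[x_test <= 1.0]).mean()
        outside = np.abs(y_pred[x_test > 1.0] - y_true[x_test > 1.0]).mean()
        print(f"mean abs error inside training range:  {inside:.2f}")
        print(f"mean abs error outside training range: {outside:.2f}")

      The fit looks fine where data exist, which is exactly why interpretability (or physical constraints) is needed to catch the failure outside the sampled range.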

    1. Integrating scientific knowledge into AI approaches greatly improves transparency because the more scientific knowledge is used, the easier it is to follow the reasoning of the algorithm. Generalization, robustness, and performance are also improved because the scientific knowledge can fill many gaps left by small sample sizes, as explained below.

      How much data do we need to provide? The more the better?

    2. “training data”

      I have questions about the training data - how does it work? If I supply the training data, could the model then become biased towards my preferences? Is this an issue people worry about?

    3. approached with vigilance

      I agree. I don't know a lot about AI, but the field we work in is already very predictive to begin with, and I can see where adding AI could introduce a lot more uncertainty. BUT I also see how using AI could save scientists time and manpower, which could allow for faster advancement in the field?

    1. !"#$%&'()*+)*

      Naomi's academic background is super interesting. She definitely offers a different perspective on modeling, as someone who does not do modeling or active prediction work. I enjoy the outside perspective but thought this chapter was slightly biased. It kind of left a sour taste in my mouth - it felt like she was saying the work we do doesn't matter because we can't predict the future...

    2. All the available evidence suggests that we need to prepare for unprecedented flooding." The outcome, both for the reputation of the Weather Service and, more importantly, for the people of Grand Forks, might have been very different.

      Nope, I don't like this. I know I said 'hope for the best, expect the worst' up there, but statements like this seem to cry wolf. I think there needs to be some type of quantitative output. For, say, emission factors - how will we know how much to limit or reduce? I think quantitative predictions can be good, if taken with a grain of salt.

    3. When these "facts" proved false, blame for the disaster was laid at the feet of the Weather Service

      Unfortunately, these are some of the consequences we face. It's also our job as scientists to communicate that our predictions are not the be-all and end-all. It should be more of a 'hope for the best, expect the worst' mentality. Again - worst case scenario, we were too safe?

    4. $/3"/3"B$,.".$%"X.'(%6,/5",3.7'-'6%73"2/2;".$%8",22%2"%:/585(%3"+:'-"%:/585(%3".'"G3,0%".$%":$%-'6%-'-HW.'"6,?%".$%"6'2%(")/.".$%"2,.,W,((".$%"B$/(%":7%3%70/-<","383.%6".$,."B,3"5'-5%:.+,((8")(,B%2",."/.3"7''.@

      A band-aid over a bullet hole - which is what I think we need to be very careful about in science. When our predictions/models are wrong, do we just say that changing this one thing makes it work, or do we need to go back to the bones of the model and make sure that our foundation is solid?

    5. However, uncertainty increased as the measurement errors on individual parameters accumulated. Each added variable added uncertainty to the model, which, when promulgated in a Monte Carlo simulation, contributed to the uncertainty of the model prediction.

      Double-edged sword.

    6. /-".$%"(,9'7,K.'78"B%"$,0%".$%"6%,-3".'"5'-.7'("%C.%7-,("5'-2/./'-3"/-"B,83".$,.",7%"-'.",0,/(,9(%"/-"'72/-,78"(/)%@

      In the atmospheric chemistry field, chamber experiments are very popular because they are extremely cost effective. They give great insight into some properties, but a chamber cannot replicate natural conditions - black lights do not equate to actual sunlight. Field experiments also have huge limitations, though. I think it's important to have both.

    7. Predictive success in science, as in other areas of life, usually ends up being a matter of learning from past mistakes.

      Which is still science. Is that not a great way to learn? I think everyone should understand that there are errors associated with models and we should not expect them to be perfect, but the fact that we can now alert entire cities to evacuate or not before a hurricane is amazing. And so maybe the prediction is wrong, but what's the worst case - you were too safe?

    8. +3/-<"6'2%(3".'":7%2/5.".$%")+.+7%"6,8"-'."2'".$,.>"%/.$%7@"

      I mean I think this is all relative? It is 100% dependent on what your goals are. I think as a society these models are important because they can at least provide an idea of what conditions in the future are going to be like. If sea level is going to rise, we shouldn't build as close to the coast. If CO2/CH4 levels are going to rise and cause health problems for future generations we should exercise caution on our current day emissions. I think there's a lot of good that can come from model predictions

  3. weather.rsmas.miami.edu
    1. These sciences were understood to be different from geology (and biology, for that matter) because they dealt with repetitive patterns and events. For them, the future was accessible,

      ...refer to my previous comment...

    2. Geologists were deeply concerned with temporal matters, but their concern was the past, not the future.

      It feels like this chapter is comparing apples and oranges. Of course methods in the 1800s were more about observation and explanation - they didn't know anything yet. Also, the fields they are describing (especially geology) are, in my opinion, very observation based: we have this rock, how did it get here 9 million years ago? In Earth science (i.e., meteorology, oceanography, etc.), there's a lot more room for prediction-based science. We know that in the Jurassic period CO2 levels were X ppmv from ice cores, and we are approaching levels of X ppmv - let's see how this will affect our near(ish) future.

    3. uniformitarianism.

      the theory that changes in the earth's crust during geological history have resulted from the action of continuous and uniform processes.

  4. Jan 2020
    1. 

      Back to Tyler's point - it is just bad science to not consider causation.

      Update: after reading the whole chapter, I still stand by my previous statement, BUT correlation and causation need to be balanced. You can't go looking for a correlation just because you think A caused B, and just because A caused B doesn't mean there's a scientific explanation for it.

    2. 

      I also think this is really important for science. And that it's a delicate balance of correlation and causation.

    3. 

      Ha! interesting.

    1. We performed a two-tailed t test with the null hypothesis that predicted values are the same from both regression models and found a two-tailed p value of 0.748 at 532 nm.

      First article I think I've ever read that says they've done this!
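
      For reference, a minimal Python sketch of the kind of test quoted above: a paired, two-tailed t test of the null hypothesis that two regression models predict the same values. The prediction arrays are invented placeholders, not the paper's 532 nm results.

        import numpy as np
        from scipy import stats

        pred_model_a = np.array([0.82, 0.78, 0.91, 0.66, 0.74, 0.88])
        pred_model_b = np.array([0.80, 0.79, 0.90, 0.68, 0.73, 0.89])

        # Two-tailed test of H0: both models give the same predicted values.
        t_stat, p_value = stats.ttest_rel(pred_model_a, pred_model_b)
        print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.3f}")
        # A large p value (like the paper's 0.748) gives no grounds to reject H0.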

    2. To compare the performance of the different regression models discussed in this study, we calculated the root mean square error (RMSE)

      The RMSD represents the square root of the second sample moment of the differences between predicted values and observed values or the quadratic mean of these differences. These deviations are called residuals when the calculations are performed over the data sample that was used for estimation and are called errors (or prediction errors) when computed out-of-sample.
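
      A minimal sketch of the RMSE calculation described above, with placeholder arrays for the observed and predicted values.

        import numpy as np

        observed  = np.array([0.95, 0.88, 0.92, 0.85, 0.90])
        predicted = np.array([0.93, 0.90, 0.91, 0.88, 0.89])

        residuals = predicted - observed
        rmse = np.sqrt(np.mean(residuals**2))   # quadratic mean of the differences
        print(f"RMSE = {rmse:.3f}")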

    3. The error bars on SSA are the error calculated by adding (in quadrature) the uncertainty in the absorption measurement, the uncertainty in the extinction measurement, and 1 standard deviation of the average value for room burns, whereas for stack burns it is the propagation error calculated from uncertainty in PAS and CRDS measurements

      Adding all the uncertainty sources in quadrature, i.e., as the square root of the sum of their squares (rough sketch below).
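
      A minimal sketch of adding uncertainties in quadrature. The three sigma values below are made-up placeholders, not the paper's actual error budget.

        import numpy as np

        sigma_absorption = 0.020   # uncertainty in the absorption measurement
        sigma_extinction = 0.015   # uncertainty in the extinction measurement
        sigma_burn_std   = 0.030   # 1 standard deviation of the averaged room burns

        # "In quadrature" = square root of the sum of the squared uncertainties.
        sigma_ssa = np.sqrt(sigma_absorption**2 + sigma_extinction**2 + sigma_burn_std**2)
        print(f"SSA error bar = {sigma_ssa:.3f}")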

    4. Additionally, EC/OC for the same fuel was much more variable between lab burns than MCE. Eight different sawgrass burns yielded MCE ranging in the narrow interval of 0.949 to 0.963, while EC/OC for the same burns varied from 0.010 to 4.358.

      YES

    5. data from the FLAME-3 set of experiments. Panels a and b of Fig. 1 show a least-squares fit to our FLAME-4 data along with the parameterizations proposed by Liu et al. (2014).

      Would like to do this (rough fitting sketch below).
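
      A hedged sketch of a least-squares fit like the one in the quoted figure (e.g., SSA as a function of MCE). The linear functional form and the data points below are assumptions for illustration only; they are not the FLAME-4 values or the Liu et al. (2014) parameterization.

        import numpy as np
        from scipy.optimize import curve_fit

        def linear(mce, a, b):
            return a * mce + b

        # Invented (MCE, SSA) pairs, just to show the mechanics of the fit.
        mce = np.array([0.90, 0.92, 0.94, 0.95, 0.96, 0.97, 0.98])
        ssa = np.array([0.45, 0.55, 0.68, 0.74, 0.82, 0.88, 0.93])

        params, cov = curve_fit(linear, mce, ssa)
        perr = np.sqrt(np.diag(cov))            # 1-sigma uncertainties on the fit parameters
        print(f"slope = {params[0]:.2f} +/- {perr[0]:.2f}, "
              f"intercept = {params[1]:.2f} +/- {perr[1]:.2f}")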

    6. For fires without backup filters or those that were below the detection limit, the average OC correction for that fuel type was applied: rice straw (2.0 ± 0.4 %), ponderosa pine (1.2 %), black spruce (2.9 ± 1.6 %), and peat (3.1 ± 0.8 %). For fuel types without backup filters collected, the study average OC artifact (2.4 ± 1.2 %) was subtracted

      How? Where did these corrections come from?

    7. Errors for SSA in this study were calculated by propagating the uncertainties described above for the extinction and absorption measurements and adding the standard deviation of the actual fire measurements in quadrature.

      In this context, "adding in quadrature" means combining the uncertainties as the square root of the sum of their squares - not the geometric sense of constructing a square with an area equal to that of a circle.

    8. al. (2014), which have the same functional form as the red-line fits. The error bars on SSA are the error calculated by adding (in quadrature) the uncertainty in the absorption measurement, the uncertainty in the extinction measurement, and 1 standard deviation of the average value for room burns. For stack burns, the error is the quadrature sum of the uncertainty in the PAS and CRDS measurements. (d) Absorption Ångström exponent as a function of MCE. The error bars for AAE are 1 standard deviation of the least-squares fit to the averaged data for a given fuel. Green lines are the 95 % confidence interval of the fits.

      What is in quadrature?

    1. For the analysis we assumed a collection efficiency equal to unity following Hennigan et al. (2011), Ortega et al. (2013), Eriksson et al. (2014), and Bruns et al. (2015)

      CE of 1 or .5?

    1. a collection efficiency of 1 was assumed due to the high fraction of OA constituting submicron aerosol; high resolution analysis was used to generate elemental composition data

      CE of 1... this surprises me, but if they're confident that it's OA and not a lot of bouncing or charring has occurred, I suppose this could work?

    2. Atmospheric Photooxidation Diminishes Light Absorption by Primary Brown Carbon Aerosol from Biomass Burning

      Annotating for class but also annotating for myself because this is relevant to my research.

    3. scattering albedo from 0.85 to 0.90.

      Hmm - opposite of our findings. BUT we think our OA is bleached, so maybe what they observe is the bleaching effect, and we see a decrease in SSA because we're already past the bleaching max?

    1. As such, this is a very small probability p-value (< significance level of 0.05) for the mean of a sample to take a value of 2 or more. And so we can reject our Null hypothesis. And we can call our

      I found this to be an interesting way to do this example. The null hypothesis was that exercising does not lead to weight loss; because the p value (roughly, the probability of seeing data like this if the null were true) is small, the null hypothesis is rejected. BUT on the other hand, if we had framed it as "weight loss is due to exercise" and gotten a larger p value, then we would fail to reject the null hypothesis. A rough sketch of this kind of test is below.
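
      A minimal Python sketch of the weight-loss example: a one-sample t test of H0 "exercising does not lead to weight loss" (true mean change = 0). The sample values are invented for illustration.

        import numpy as np
        from scipy import stats

        weight_change = np.array([2.1, 1.8, 2.5, 1.6, 2.3, 2.0, 1.9, 2.4])  # kg lost per subject

        t_stat, p_value = stats.ttest_1samp(weight_change, popmean=0.0)
        print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.4f}")
        # p < 0.05 -> reject H0; a larger p would mean we fail to reject H0,
        # which is not the same as proving it true.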

    2. Null hypothesis

      general statement or default position that there is nothing significantly different happening, like there is no association among groups or variables, or that there is no relationship between two measured phenomena.

    1. VI. Comparing Science and the Law: Science and the law differ both in the language they use and the objectives they seek to accomplish.

      I took a science communication course, and this was a hot topic. As scientists, it is our job to present our work to the community in ways they will understand. Would like to discuss this if we have time.

    2. A certain number of the papers in any issue of a scientific journal will have titles that begin with “Evidence for (or against)…” What that means is, the authors were not able to prove their point, but are presenting their results anyway.

      interesting

    3. If a theory makes novel and unexpected predictions, and those predictions are verified by experiments that reveal new and useful or interesting phenomena, then the chances that the theory is correct are greatly enhanced. And, even if it is not correct, it has been fruitful in the sense that it has led to the discovery of previously unknown phenomena that might prove useful in themselves and that will have to be explained by the next theory that comes along.

      YES

    4. Most scientists will say that there is a paradigm shift in their laboratory every 6 months or so (or at least every time it becomes necessary to write another proposal for research support). That is not exactly what Kuhn had in mind.

      Agreed - there's no set time frame, but I think that's the nature of science. The more we tease apart the data, the more data we wish we had, because now we want to look at a-z, and the way to get more data is through $$$$.

    5. As time goes on, difficulties and contradictions arise that cannot be resolved, but the tendency among scientists is to resist acknowledging them. One way or another they are swept under the rug, rather than being allowed to threaten the central paradigm. However, at a certain point, enough of these difficulties accumulate to make the situation intolerable. At that point, a scientific revolution occurs, shattering the paradigm and replacing it with an entirely new one.

      I like to think of science as progress. But maybe we should be better and not sweep as much under the rug. Maybe progress shouldn't be forced because of all the difficulties and things swept under it. That seems to be human nature: wait until things get so bad and then change them?

    6. if they are shown by experiment not to be correct, will serve to render the theory invalid.

      Seems too rigid for science. If the theory is incorrect, that's also insightful information.

    7. There have been many attempts at formulating a general theory of how science works, or at least how it should work, starting, as we have seen, with the theory of Sir Francis Bacon.

      I don't think there is one general way in which science works.

    8. “That’s exactly how a Lord Chancellor would do science,” William Harvey is said to have grumbled.

      I find this to be pretty funny. Growing up, we learned about the scientific method pretty early in school. We wrote lab reports and conducted home experiments using this method, but now I almost find the chronological sequence of the scientific method really hard to follow. Not because it doesn't make sense, but because of the way we do research now. Do we design an experiment based on a hypothesis, or do we make the hypothesis based on an experiment? The former can cause issues because you're already looking for answers to a question, which can lead to biases in the experiment. Or do you do the latter and perform the experiment without a solidified hypothesis to see what you can find? Will you receive funding without a solidified hypothesis?

    9. Daubert decision

      The Daubert standard is a rule of evidence regarding the admissibility of expert witnesses' testimony during United States federal legal proceedings. Pursuant to this standard, a party may raise a Daubert motion, which is a special case of motion in limine raised before or during trial to exclude the presentation of unqualified evidence to the jury.

  5. Nov 2019
    1. Drizzle falling from the cloud deck and evaporating into the unsaturated air below it also affects the boundary-layer heat balance. The condensation of water vapor within the cloud deck releases latent heat, and the evaporation of the drizzle drops in the subcloud layer absorbs latent heat. The thermodynamic impact of the downward, gravity-driven flux of liquid water is an upward transport of sensible heat, thereby stabilizing the layer near cloud base

      This is a cool thing to note. I didn't understand why we cared about rain, drizzle, or virga when we were flying. I always related it to cloud/aerosol properties but didn't think about the dynamic consequences it would have in terms of changing the heat balance.

    2. In regions of strong large-scale subsidence where the boundary layer is shallow (i.e., 500 m–1 km in depth), convection-driven heating from below and cooling from above intermingle to form a unified turbulent regime that extends through the depth of the boundary layer. As the boundary layer deepens there is a tendency for the two convective regimes to become decoupled.

      Yes - saw this.

    3. Once formed, marine cloud decks tend to persist because of this positive feedback. In the cloud-topped boundary layer, convection may be driven by heating from below or by cooling from above. The heating from below is determined by the surface buoyancy flux (i.e., the sensible heat flux and the moisture flux to the extent that it affects the virtual temperature) as in the diurnal cycle over land. The cooling from above is determined by the flux of longwave radiation at the top of the cloud layer (see Fig. 4.30). Heating from below drives open cell convection (Figs. 1.21 and 9.26), with strong, concentrated, buoyant, cloudy plumes interspersed with more expansive regions of weak subsidence that are cloud-free. In contrast, cooling from above drives closed cell convection (Figs. 1.7 and 9.26) with strong, negatively buoyant, cloud-free downdrafts, interspersed with more expansive regions of slow ascent

      Good to understand these dynamics when I begin to look at polluted vs. non-polluted MBL.

    4. Boundary-Layer Growth

      BL growth - dependence on SST over the ocean? Colder SST = deeper BL? Could be wrong, but I think that's what we saw? Need to look at the data (ask Sara).

    5. Vertical cross-section sketch of net radiative input to the surface flux, F*, and resulting heat fluxes into the air and ground for different scenarios (a) Daytime over a moist vegetated surface. (b) Nighttime over a moist vegetated surface. (c) Daytime over a dry desert. (d) Oasis effect during the daytime, with hot dry wind blowing over a moist vegetated surface. (See Fig. 9.10 for explanation of symbols.) [Adapted from Meteorology for Scientists and Engineers, A Technical Companion Book to C. Donald Ahrens’ Meteorology Today, 2nd Ed., by Stull, p. 57. Copyright 2000. Reprinted with permission of Brooks/Cole, a division of Thom

      I like this figure - I think W&H did a good job describing this. Grant Petty's Radiation book also does a good job

    6. Fig. 9.8 Illustration of how to anticipate the sign of turbulent heat fluxes for small-eddy (local) vertical mixing across a region with a linear gradient in the mean potential temperature (thick colored line). Assuming an adiabatic process (no mixing), air parcels (sketched as spheres) preserve their potential temperature (as indicated by their color) of the ambient environment at their starting points (1), even as they arrive at their destinations (2). (a) Statically unstable lapse rate. (b) Statically stable lapse rate. [Adapted from Meteorology for Scientists and Engineers, A Technical Companion Book to C. Donald Ahrens’ Meteorology Today, 2nd Ed., by Stull, p. 87. Copyright 2000. Reprinted with permission of Brooks/Cole, a division of Thomson Learning: www.thomsonrights.com Fax 800-730-2215.

      I don't like this figure. Paquita explained this really well in her class, and we have a nice set of 4 figures illustrating these conditions.

    7. For continuous emissions of smoke from a smoke stack, smoke plumes loop up and down and spread more rapidly in the vertical than in the horizontal. Under statically neutral conditions, turbulence is almost isotropic, and smoke plumes spread equally in the vertical and in the horizontal, yielding a conical envelope, a behavior referred to as coning.

      Wonder what we have in the SEA. H2O vapor indicates a lack of turbulence and mixing, but we see quite a bit of structure in PT.

    8. This figure was referred to in one of the HW problems. Just wanted to clarify that the bottom temp is the one with the least variance. It is noisy, but in terms of deviating from the mean, I would think it has the smallest deviations, whereas the upper temps have more dramatic deviations from the mean?