112 Matching Annotations
  1. Last 7 days
  2. Jun 2021
    1. I passed all of them except for my math. My senior year I actually passed it, but I didn't graduate. I just would go to school, literally eat lunch, just get out. It got boring for me and I was really good. I should have never started.

      Time in US - education - dropping out - not graduating

    2. And it sucked because other people looked at my potential and I put myself so low that I didn't even look at that. Every time they're like, "Dude, you've got so much potential." And I'm like, "Yeah, right dude, what are you talking about? You just trying to butter me up man."

      Time in US - immigration status - lost opportunities

    3. We didn't know what to do. There was no... Just got to go back to the old things that we were doing. But luckily, I was able to cut hair and do tattoos, and I was able to get by.

      Time in US - employment - job - responsibility

    4. It kind of messed me up, got me depressed a little bit. I started hanging out with bad people, doing the wrong things, and I dropped out my senior year.

      Time in the US - Immigration status - being secretive - lost opportunities - sadness, disillusionment

    5. Once you realize that it's not really how you were taught to believe, or not for you in that case, I feel like a lot of kids just give up and lose hope, because it's already hard as it is. Not being able to get a job and still trying to do things right without breaking the law. And then when you realize it's never going to change for you, man, you just like, "Whatever. Okay." Or, "If I can't get it like this, I'm going to get it like that."

      Time in US - losing hope loss of dreams

    6. I passed all of them except for my math. My senior year I actually passed it, but I didn't graduate. I just would go to school, literally eat lunch, just get out. It got boring for me and I was really good. I should have never started.

      Time in the US - Dropping out of school - higher education

  3. May 2021
    1. Today, as we head into the Anthropocene, we are in the dying days of an era of ice that has lasted for 3m years, as we transition into an era of fire, a Pyrocene that may persist for tens of thousands of years
    1. Estimates derived from satellite measurements[5] show that between 1992 and 2017, the Greenland and Antarctic Ice Sheets lost a total of 6.4 trillion tonnes of ice, contributing 18 mm to global mean sea level rise. Ice losses from the Antarctic Ice Sheet have accelerated during recent decades, rising from 49 ± 67 Gt/yr between 1992 and 1997 to 219 ± 43 Gt/yr between 2012 and 2017. Ice losses from the Greenland Ice Sheet have also accelerated, rising from 46 ± 37 Gt/yr in the 1990s to 244 ± 28 Gt/yr between 2012 and 2017. This means a current global loss from both ice sheets of around 460 Gt per year, which roughly translates to 10 times the volume of Lake Garda, in Italy.
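      As a quick sanity check of the Lake Garda comparison in the quote above (a sketch with assumed constants, neither from the source: ice density ≈ 917 kg/m³ and a commonly cited Lake Garda volume of ≈ 49 km³):

```python
# Convert the quoted 460 Gt/yr of ice-sheet loss into a volume of ice
# and compare it to Lake Garda. Assumed constants (not from the article):
ICE_DENSITY = 917.0       # kg per m^3 (typical glacier ice)
LAKE_GARDA_KM3 = 49.0     # approximate volume of Lake Garda, km^3

loss_gt = 460.0           # combined ice-sheet loss, Gt per year (from the quote)
# 1 Gt = 1e12 kg; 1 km^3 = 1e9 m^3
ice_volume_km3 = loss_gt * 1e12 / ICE_DENSITY / 1e9

ratio = ice_volume_km3 / LAKE_GARDA_KM3
print(f"{ice_volume_km3:.0f} km^3 of ice per year, about {ratio:.1f}x Lake Garda")
```

      With these assumptions the result lands close to the article's "roughly 10 times the volume of Lake Garda".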
    1. The mass of water is enough to put England under 2 m of water every year. That is 47% more than the amount of water melting off Greenland, and more than double the amount released in Antarctica. Within 20 years, the average thickness loss has doubled from one third of a metre to two thirds of a metre. The loss in the Alps is twice the global average.

      If these losses continue, 80–90% of alpine glaciers will have melted by 2050. In Switzerland, for example, there may then no longer be any fresh grass. The worst consequences of the glacier melt concern the continuous water supply of the great river systems of Asia (Yangtze, Mekong, Salween and Brahmaputra), on which around one billion people depend. 200 million people live in coastal regions threatened by sea level rise.

      Speed at which world’s glaciers are melting has doubled in 20 years | Glaciers | The Guardian

    1. The Guardian reports on a new study of glacier loss. Glaciers (excluding those on Greenland and in Antarctica) contribute roughly 20% of the global rise in sea level, currently about 0.74 mm per year. The rate at which they are thinning has doubled within 20 years. Losses are particularly high in the Alps. On average, the glaciers have lost 267 gigatonnes per year.

      'We Need to Act Now': Glaciers Melting at Unprecedented Pace, Study Reveals - EcoWatch

    1. According to an international study published in the journal Nature, glaciers lost an average of 267 billion tonnes (gigatonnes) of ice per year between 2000 and 2019, with the greatest losses in the past five years. The melting ice now accounts for more than 20 percent of the rise in sea level.
  4. Apr 2021
  5. Mar 2021
  6. Jan 2021
    1. And Unity ditching for something that’s still not on par with it, had already broken a bit my trust in Ubuntu as a stable option at work. Now snap is coming closer and broader…
  7. Oct 2020
    1. Before you start a weight-loss program, it’s crucial to identify and create a treatment plan for any obesity related illnesses or diseases.

      Find out more about medical weight loss here.

  8. Sep 2020
    1. Knowing this, if you want someone to make a decision they might consider risky (like abandoning an age-old software platform for something that works), it helps to talk about the bad things that will happen if they don’t take the risk. They’re more apt to respond to that than if you talk about the good things that will happen if they take the risk. In fact, talking about positive outcomes makes people more risk-averse (http://bkaprt.com/dcb/03-12/).
    2. loss aversion. We are way more scared of losing what we have than excited about getting something new.

  9. Aug 2020
    1. Altig, D., Baker, S. R., Barrero, J. M., Bloom, N., Bunn, P., Chen, S., Davis, S. J., Leather, J., Meyer, B. H., Mihaylov, E., Mizen, P., Parker, N. B., Renault, T., Smietanka, P., & Thwaites, G. (2020). Economic Uncertainty Before and During the COVID-19 Pandemic (Working Paper No. 27418; Working Paper Series). National Bureau of Economic Research. https://doi.org/10.3386/w27418

  10. Jul 2020
    1. "that text has been removed from the official version on the Apache site." This itself is also not good. If you post "official" records but then quietly edit them over time, I have no choice but to assume bad faith in all the records I'm shown by you. Why should I believe anything Apache board members claim was "minuted" but which in fact it turns out they might have just edited into their records days, weeks or years later? One of the things I particularly watch for in modern news media (where no physical artefact captures whatever "mistakes" are published as once happened with newspapers) is whether when they inevitably correct a mistake they _acknowledge_ that or they instead just silently change things.
  11. Jun 2020
    1. Barry, D., Buchanan, L., Cargill, C., Daniel, A., Delaquérière, A., Gamio, L., Gianordoli, G., Harris, R., Harvey, B., Haskins, J., Huang, J., Landon, S., Love, J., Maalouf, G., Matthews, A., Mohamed, F., Moity, S., Royal, D.-C., Ruby, M., & Weingart, E. (2020, May 27). Remembering the 100,000 Lives Lost to Coronavirus in America. The New York Times. https://www.nytimes.com/interactive/2020/05/24/us/us-coronavirus-deaths-100000.html

    1. but it launched with a plethora of issues that resulted in users rejecting it early on. Edge has since struggled to gain traction, thanks to its continued instability and lack of mindshare, from users and web developers.
  12. May 2020
  13. Apr 2020
  14. Oct 2019
    1. the generator and discriminator losses derive from a single measure of distance between probability distributions. In both of these schemes, however, the generator can only affect one term in the distance measure: the term that reflects the distribution of the fake data. So during generator training we drop the other term, which reflects the distribution of the real data.

      Loss of GAN - how the two loss functions work in GAN training
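      The dropped-term argument above can be sketched in a few lines (hypothetical numbers, plain NumPy rather than any real GAN framework):

```python
import numpy as np

def bce(p, label):
    # Binary cross-entropy for discriminator outputs p in (0, 1).
    eps = 1e-12
    return -(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps)).mean()

# Pretend discriminator outputs on a batch of real and generated samples.
d_real = np.array([0.9, 0.8, 0.95])   # D(x) for real x
d_fake = np.array([0.2, 0.1, 0.3])    # D(G(z)) for generated samples

# The discriminator loss uses BOTH terms of the distance measure:
d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)

# The generator loss keeps only the fake-data term: the generator cannot
# influence the real-data distribution, so that term is a constant with
# zero gradient w.r.t. the generator and is dropped during its training.
g_loss = bce(d_fake, 1.0)   # non-saturating form: push fakes toward "real"

print(d_loss, g_loss)
```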

  15. Apr 2019
    1. We all know health is wealth. Protein plays an important role in keeping our body healthy and perfect. Whey protein isolate is the perfect source for protein and other essential nutrients which our body required. To help your body to get in proper shape, elements like exercise, food, and water are not enough. Whey protein isolate powder contains less than 1% lactose which helps the people who have lactose intolerance. Whey protein powder helps for weight loss and also to give shape your body.

  16. Feb 2019
    1. Deep Learning on Small Datasets without Pre-Training using Cosine Loss

      In contemporary deep learning, two things seem beyond dispute:

      1. categorical cross-entropy loss after a softmax activation is the method of choice for classification;
      2. training a CNN classifier from scratch on a small dataset does not work well. In this paper the authors show that, with small per-class sample sizes, the cosine loss delivers better performance than cross-entropy.
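      A minimal sketch of what a cosine loss for classification looks like (my own illustration, not the paper's code): one minus the cosine similarity between the L2-normalized prediction and the one-hot target.

```python
import numpy as np

def cosine_loss(logits, target_onehot):
    # 1 - cos(prediction, target); the one-hot target is already unit-norm.
    pred = logits / np.linalg.norm(logits)
    return 1.0 - float(pred @ target_onehot)

logits = np.array([2.0, 0.5, -1.0])   # raw network output (made-up numbers)
target = np.array([1.0, 0.0, 0.0])    # one-hot ground truth

loss = cosine_loss(logits, target)
print(loss)
```

      Unlike cross-entropy, the loss is bounded in [0, 2] and depends only on the direction of the prediction, not its magnitude.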
    2. Towards a Deeper Understanding of Adversarial Losses

      The paper studies the losses used in various adversarial (generative) training schemes, and also lets us know what makes one adversarial loss better than another.

  17. Jan 2019
    1. Training Neural Networks with Local Error Signals

      After GoogLeNet, the idea of a local loss is probably nothing new~

    2. Eliminating all bad Local Minima from Loss Landscapes without even adding an Extra Unit

      A bold paper: the full text is just one page, with only two references! One simple idea: introduce two auxiliary parameters so that every local minimum of the new loss is a global minimum of the original loss. Minor quibbles:

      1. Not a single experiment is run; everything is left to the reader. Is that really appropriate? [laughing-crying]

      2. The figures look nice; mainly, it feels a lot like "Neural Ordinary Differential Equations"~

    3. Learning with Fenchel-Young Losses

      Another paper by Blondel on Fenchel-Young losses; I can't follow it, but it clearly seems impressive~

  18. Nov 2018
    1. Here is how you reach net profit on a P&L (Profit & Loss) account:

      Sales revenue = price (of product) × quantity sold
      Gross profit = sales revenue − cost of sales and other direct costs
      Operating profit = gross profit − overheads and other indirect costs
      EBIT (earnings before interest and taxes) = operating profit + non-operating income
      Pretax profit (EBT, earnings before taxes) = operating profit − one off items and redundancy payments, staff restructuring − interest payable
      Net profit = pre-tax profit − tax
      Retained earnings = profit after tax − dividends

      $$Sales Revenue = (Price Of Product) \times (Quantity Sold)$$

      $$Gross Profit = (Sales Revenue) - (Cost)$$

      $$Operating Profit = (Gross Profit) - (Overhead)$$

      Earnings Before Interest and Taxes (EBIT): $$EBIT = (Operating Profit) + (Non-Operating Income)$$

      Earnings Before Taxes (EBT): $$EBT = (Operating Profit) - (One Off Items, Redundancy Payments, Staff Restructuring) - (Interest Payable)$$

      $$Net Profit = (EBT) - (Tax)$$

      $$Retained Earnings = (Net Profit) - (Dividends)$$
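      The whole chain above can be checked with a tiny script (all figures are made up for illustration):

```python
# Walk the P&L chain from sales revenue down to retained earnings.
price, quantity = 10.0, 1_000

sales_revenue    = price * quantity            # price x quantity sold
gross_profit     = sales_revenue - 4_000       # minus cost of sales / direct costs
operating_profit = gross_profit - 2_000        # minus overheads / indirect costs
ebit             = operating_profit + 500      # plus non-operating income
ebt              = operating_profit - 300 - 200  # minus one-off items, interest payable
net_profit       = ebt - 700                   # minus tax
retained         = net_profit - 500            # minus dividends

print(net_profit, retained)
```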

    1. Smooth Loss Functions for Deep Top-k Classification

      Actually quite creative~ By generalizing the multi-class SVM loss they construct a smooth (infinitely differentiable) version that can reduce back to the cross-entropy loss; experiments show better robustness to label noise, and the paper also discusses how to curb the algorithmic complexity introduced by the smoothing.

    1. For house staff in internal medicine, the introduc-tion of hospitalists may mean a greater likelihood ofbeing supervised by attending physicians who arehighly skilled and experienced in providing inpatientcare. House staff have long enjoyed a certain amountof autonomy, because many of their faculty supervi-sors have been relatively unfamiliar with moderninpatient care. Such autonomy may be diminishedwith the new approach to inpatient care. Althoughthere is bound to be transitional pain, we believethat the potential for improved inpatient teachingwill more than compensate for it. Moreover, thischange will help answer public calls for closer andmore effective faculty oversight of house staff andstudents.34
  19. May 2018
  20. Mar 2018
  21. Jul 2017
  22. www.cell.com