30 Matching Annotations
  1. Nov 2021
    1. local variables such as x are often kept in registers rather than stored in memory locations. Register access is much faster than memory access.

      Where are local variables usually stored, and why?
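      A loose analogy in CPython terms rather than machine registers (a sketch, assuming CPython's bytecode model): local names live in a fixed-size array on the stack frame and are read with LOAD_FAST, while global names go through a slower dictionary lookup via LOAD_GLOBAL.

      ```python
      import dis

      x_global = 1

      def f():
          x_local = 1
          return x_local + x_global

      # LOAD_FAST reads the local from a fixed slot in the frame's array;
      # LOAD_GLOBAL performs a dictionary lookup by name, which is slower.
      dis.dis(f)
      ```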


  2. Oct 2021
  3. Sep 2021
  4. Jun 2021
    1. I've seen (and fixed) Ruby code that needed to be refactored for the client objects to use the accessor rather than the underlying mechanism, even though instance variables aren't directly visible. The underlying mechanism isn't always an instance variable - it can be delegations to or manipulations of a class you're hiding behind a facade, or a session store with a particular format, or all kinds. And it can change. 'Self-encapsulation' can help if you need to swap a technology, a library, an object specification, etc.
    2. a principle I use is: If you have an accessor, use the accessor rather than the raw variable or mechanism it's hiding. The raw variable is the implementation, the accessor is the interface. Should I ignore the interface because I'm internal to the instance? I wouldn't if it was an attr_accessor.
    3. I have been wrapping instance variables in accessor methods whenever I can though.
    4. Also, Sandi Metz mentions this in POODR. As I recall, she also advocates wrapping bare instance variables in methods, even when they're only used internally. It helps avoid mad refactoring later.
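      A minimal Python sketch of the self-encapsulation idea from the Ruby comments above (class and method names are hypothetical): internal code goes through the accessor, so the raw storage can change without touching callers.

      ```python
      class Account:
          def __init__(self, balance_cents: int):
              # Implementation detail: balance stored as integer cents.
              self._balance_cents = balance_cents

          @property
          def balance(self) -> float:
              # The accessor is the interface; the raw attribute is the
              # implementation, and could later become a delegation, a
              # cached lookup, or a different storage format.
              return self._balance_cents / 100

          def deposit(self, amount: float) -> None:
              # Internal code also uses the accessor rather than touching
              # self._balance_cents directly.
              self._set_balance(self.balance + amount)

          def _set_balance(self, amount: float) -> None:
              self._balance_cents = round(amount * 100)

      acct = Account(1000)
      acct.deposit(2.50)
      print(acct.balance)  # 12.5
      ```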
  5. May 2021
  6. Mar 2021
  7. Feb 2021
    1. Local variables can even be declared with the same name as a global variable. If this happens, there are actually two different variables with the same name: one local and one global. This helps ensure that an author writing a local variable doesn’t accidentally change the value of a global variable they aren’t even aware of.
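      The same rule sketched in Python (names hypothetical): assigning to x inside the function creates a brand-new local that shadows the global of the same name, leaving the global untouched.

      ```python
      x = 10  # global

      def f():
          x = 99      # a new local x that shadows the global
          return x

      print(f())  # 99
      print(x)    # 10 -- the global was never changed
      ```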
  8. Dec 2020
  9. Nov 2020
    1. Thirty-nine right-handed (Snyder & Harris, 1993) healthy adults

      Participants: 39 adults (no children, since their plasticity would be highly varied; adults are known for lower plasticity). Being right-handed is an interesting note: right-handed adults are typically left-hemisphere dominant, so this looks like the researchers' attempt to keep variation low and the experimental group uniform.

  10. Oct 2020
  11. Sep 2020
    1. Otherwise, please take the time to read about declarations hoisting (MDN, Adequately Good) and variable shadowing, as these concepts are key to fully understanding TDZ.
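      TDZ here is JavaScript's temporal dead zone: a let/const binding exists from the top of its block but throws if read before its declaration executes. Python has no TDZ, but a loosely analogous effect exists (a sketch, not JS semantics): an assignment anywhere in a function makes the name local for the entire body, so an earlier read raises UnboundLocalError instead of falling back to the shadowed global.

      ```python
      x = "global"

      def f():
          # The assignment below makes x local for the whole function body,
          # so this read fails rather than finding the global.
          print(x)
          x = "local"

      try:
          f()
      except UnboundLocalError as e:
          print("UnboundLocalError:", e)
      ```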
  12. Aug 2020
  13. Jul 2020
  14. Jun 2020
  15. May 2020
  16. Dec 2019
    1. PHP variables begin with the dollar symbol $ and PHP variable names adhere to the following rules:

       • Names are case sensitive
       • Names may contain letters, numbers, and the underscore character
       • Names may not begin with a number
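      A quick way to sanity-check names against exactly these rules, sketched as a Python regular expression (ASCII letters only; real PHP also permits some extended characters in names):

      ```python
      import re

      # $, then a letter or underscore, then letters, digits, or underscores.
      PHP_VAR = re.compile(r"^\$[A-Za-z_][A-Za-z0-9_]*$")

      for name in ["$count", "$_total", "$user2", "$2user", "count"]:
          print(name, "->", bool(PHP_VAR.match(name)))
      # $count, $_total, $user2 match; $2user starts with a digit and
      # count lacks the dollar symbol, so both fail.
      ```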
  17. Jul 2019
    1. However, the gain ratio is the most important metric here, ranging from 0 to 1, with higher being better.
    2. en: entropy measured in bits; mi: mutual information; ig: information gain; gr: gain ratio
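      A hedged sketch of how these quantities relate for a discrete feature (the toy data is made up): for a categorical split, information gain equals the mutual information I(X; Y), and the gain ratio divides it by the feature's own entropy, which keeps it in [0, 1].

      ```python
      from collections import Counter

      import numpy as np
      from scipy.stats import entropy
      from sklearn.metrics import mutual_info_score

      x = ["a", "a", "b", "b", "b", "c", "c", "a"]  # feature
      y = [0, 0, 1, 1, 0, 1, 1, 0]                  # outcome

      # en: entropy of the feature in bits (base-2 logarithm).
      p_x = np.array(list(Counter(x).values())) / len(x)
      en = entropy(p_x, base=2)

      # mi / ig: mutual information; sklearn returns nats, so convert to bits.
      # For a discrete feature, information gain equals I(X; Y).
      ig = mutual_info_score(x, y) / np.log(2)

      # gr: gain ratio = information gain / feature entropy.
      gr = ig / en if en > 0 else 0.0
      print(f"en={en:.3f} bits, ig={ig:.3f} bits, gr={gr:.3f}")
      ```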
    1. Feature predictive power will be calculated for all features contained in a dataset along with the outcome feature. This works for binary classification, multi-class classification, and regression problems, and can also be used when exploring a feature of interest to determine correlations of independent features with the outcome feature. When the outcome feature is continuous in nature (a regression problem), correlation calculations are performed. When the outcome feature is categorical in nature (a classification problem), the Kolmogorov-Smirnov distance measure is used to determine predictive power. For multi-class classification outcomes, a one-vs-all approach is taken, which is then averaged to arrive at the mean KS distance measure. The predictive power is sensitive to the manner in which the data has been prepared and will differ should that preparation change.
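      A sketch of the classification case described above, assuming scipy's ks_2samp as the KS implementation (the feature and labels are synthetic): one KS distance per class against the rest, then averaged.

      ```python
      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(0)

      # Synthetic numeric feature whose distribution shifts by class.
      feature = rng.normal(size=300) + np.repeat([0.0, 0.5, 1.5], 100)
      outcome = np.repeat([0, 1, 2], 100)

      # One-vs-all KS distance per class, averaged into a single score.
      ks_per_class = []
      for cls in np.unique(outcome):
          in_class = feature[outcome == cls]
          rest = feature[outcome != cls]
          stat, _ = ks_2samp(in_class, rest)
          ks_per_class.append(stat)

      print("mean KS distance:", np.mean(ks_per_class))
      ```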
    1. Mutual information is one of many quantities that measures how much one random variable tells us about another. It is a dimensionless quantity with (generally) units of bits, and can be thought of as the reduction in uncertainty about one random variable given knowledge of another. High mutual information indicates a large reduction in uncertainty; low mutual information indicates a small reduction; and zero mutual information between two random variables means the variables are independent.
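      A quick demonstration of the two extremes with synthetic coin flips: mutual information near zero when the variables are independent, and equal to the full one bit of entropy when one variable completely determines the other.

      ```python
      import numpy as np
      from sklearn.metrics import mutual_info_score

      rng = np.random.default_rng(1)
      x = rng.integers(0, 2, size=100_000)        # fair coin
      indep = rng.integers(0, 2, size=100_000)    # independent coin
      dep = x                                     # fully determined by x

      # sklearn returns nats; divide by ln(2) to get bits.
      print(mutual_info_score(x, indep) / np.log(2))  # ~0 bits: independent
      print(mutual_info_score(x, dep) / np.log(2))    # ~1 bit: no uncertainty left
      ```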