18 Matching Annotations
  1. Jan 2019
  2. Dec 2018
    1. Large Public University 1 (LPU1) is a large public institution serving more than 44,000 students, 26,000 of whom are full-time undergraduates. At the time of the interviews, 53 sections of Calculus I were taught in the fall by 48 instructors, who were mostly graduate students. The current approach to Calculus I at LPU1 can be traced back to the early 1990s reform movement. For more than two decades, the department has focused on conceptual understanding and student engagement. Calculus I is taught in small sections (of approximately 32 students) using the Harvard Consortium Calculus text (Hughes-Hallett, 2012). The graduate students who teach calculus are called Graduate Student Instructors (GSIs) and have autonomy over their in-class instruction, but are encouraged to feature small-group work. All students take common midterms and a common final. Homework is also common, with students doing online homework that is procedurally focused and team homework problems that are conceptually focused. The GSIs participate in a robust training program that includes instruction in the ambitious teaching practices that characterize the program. Training is used to explain the benefits of the ambitious practices and to provide practical instruction in implementing them. Each semester, one of the GSIs helps to coordinate the course. This GSI coordinator is responsible for writing some of the team homework problems and conducting classroom observations of new GSIs.

      Dope pope

  3. Nov 2018
    1. A list is the Python equivalent of an array, but is resizeable and can contain elements of different types:

      wow testing
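      A minimal sketch of the point being quoted: a Python list resizes dynamically and can mix element types (the values below are illustrative, not from the quoted source):

      ```python
      xs = [3, 1, "foo"]   # a list can hold elements of different types
      xs.append(True)      # and it grows as needed
      xs[2] = xs[2] + "!"  # elements can be replaced in place
      print(xs)            # → [3, 1, 'foo!', True]
      ```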

    1. Theorem 54.5. The fundamental group of S^1 is isomorphic to the additive group of integers.

      to generalize
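      A compact restatement of the theorem, together with the standard generalization the note may be gesturing at (both are textbook facts, not claims from the quoted source):

      ```latex
      % Theorem 54.5: the isomorphism sends a loop to its winding number.
      \[ \pi_1(S^1) \cong (\mathbb{Z}, +) \]
      % One standard generalization: for the n-torus,
      \[ \pi_1(T^n) \cong \mathbb{Z}^n \]
      ```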

    Annotators

    1. Empathize. Start with a genuine desire to connect to your audience and to help them understand your message. (This tip is from my very empathetic friend Youngsun.)

      uck!

    1. depth ℓ ∈ {1, ..., 6} and width h_l = β_0(D) when 1 ≤ l ≤ ℓ, and h_l ∈ {1, ..., 500} when l = 0. We will denote individual architectures by the pair (ℓ, h_0)

      messy

    2. We compute persistent homology of each of the two labeled classes therein and accept topological features with lifespans greater than two standard deviations from the mean for each homological dimension

      how is this summary statistic on the labeled classes obtained?
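      The filtering rule the excerpt describes can be sketched as follows. This is a hypothetical pure-Python illustration; the lifespan values are made-up stand-ins for real persistence-diagram output, not data from the paper:

      ```python
      from statistics import mean, stdev

      def significant_features(lifespans):
          """Keep features whose lifespan exceeds mean + 2 * stddev,
          applied separately within each homological dimension."""
          mu, sigma = mean(lifespans), stdev(lifespans)
          return [s for s in lifespans if s > mu + 2 * sigma]

      # Hypothetical H_0 lifespans: mostly short-lived noise, one long-lived feature.
      h0 = [0.1, 0.12, 0.09, 0.11, 0.1, 0.13, 2.5]
      print(significant_features(h0))  # → [2.5]
      ```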

    3. If [ε] is chosen such that certain topologically noisy features (Fasy et al., 2014) are included in the estimate of h_phase, then at worst the architecture is overparameterized, but still learns

      overfitting the noise comes at what cost? https://machinelearning.subwiki.org/wiki/Overfitting

    4. constructing a filtration on the Heaviside step function of the difference of the outputs

      where is this filtration described? what difference of outputs?

    5. CIFAR-10. We compute the persistent homology of several classes of CIFAR-10 using the Python library Dionysus

      TODO

    6. In the context of architecture selection, the foregoing minimality condition significantly reduces the size of the search space by eliminating smaller architectures which cannot even express the 'holes' (persistent homology) of the data H(D). This allows us to return to our original question of finding suitably expressive and generalizable architectures, but in the very computable language of homological complexity: let F_A be the set of all neural networks with 'architecture' A; then: Given a dataset D, for which architectures A does there exist a neural network f ∈ F_A such that H_S(f) = H(D)?

      main idea

    1. Probing the Pareto frontier for basis pursuit solutions. E. van den Berg, M. P. Friedlander. SIAM Journal on Scientific Computing 31 (2), 890–912.

      woah! so popular

    1. Figure 5: Accuracy improvement or reduction in choosing pre-trained classifiers with topological complexity close to the dataset versus complexity far from the dataset. Complexity measures used: (a) sum of total lifetimes of H_0 and H_1 groups, (b) total lifetimes of H_0 groups, (c) total lifetimes of H_1 groups. Blue bars show the accuracy difference when using only pre-trained classifiers with less topological complexity than the dataset, orange bars correspond to those with greater complexity, and green bars correspond to using all pre-trained classifiers. The black lines show the 95% confidence interval.

      TODO

    2. In contrast to [8], which simply applies known, standard persistent homology inference methods to different classes of data separately and does not scale to high dimensions, we introduce new techniques and constructions for characterizing decision boundaries and apply them to several commonly used datasets in deep learning.

      yikes

    1. [Eldan & Shamir] Figure 1: The left figure represents φ(x) in d = 2 dimensions. The right figure represents a cropped and re-scaled version, to better show the oscillations of φ beyond the big origin-centered bump

      add graphic to fy19soml?

    1. Feature selection is the process of selecting the set of features of a given input datum that we will use to predict the corresponding output value.

      compare to "architecture selection"
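      To make the contrast with "architecture selection" concrete, here is a minimal, hypothetical sketch of one feature-selection strategy (variance thresholding); the threshold and toy data are illustrative assumptions, not taken from the annotated text:

      ```python
      from statistics import pvariance

      def select_features(rows, min_variance=0.01):
          """Return indices of feature columns whose variance exceeds
          min_variance; near-constant features carry little signal."""
          n_features = len(rows[0])
          columns = [[row[j] for row in rows] for j in range(n_features)]
          return [j for j, col in enumerate(columns)
                  if pvariance(col) > min_variance]

      # Toy data: feature 1 is constant, so only features 0 and 2 survive.
      data = [[1.0, 5.0, 0.2],
              [2.0, 5.0, 0.9],
              [3.0, 5.0, 0.1]]
      print(select_features(data))  # → [0, 2]
      ```

      Architecture selection asks the analogous question one level up: instead of choosing input columns, it chooses the model family itself.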

    1. Construct an explicit deformation retraction of the torus with one point deleted onto a graph consisting of two circles intersecting in a point, namely, longitude and meridian circles of the torus.

      woot! turn a bike tube into a figure 8