15 Matching Annotations
  1. Oct 2022
    1. noting that differentiation is linear

      I guess this is because differentiation is defined through the difference quotient [ f(x + h) − f(x) ] / h where h approaches 0, and that limit respects sums and scalar multiples. So this operation is a linear operation, not an exponential one.
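
      A quick numerical sanity check of that linearity (just an illustrative sketch; the functions, point, and step size are my own choices):

      ```python
      import math

      def diff(fn, x, h=1e-6):
          """Central-difference approximation of fn'(x)."""
          return (fn(x + h) - fn(x - h)) / (2 * h)

      f, g = math.sin, math.exp
      a, b, x = 2.0, -3.0, 1.3

      lhs = diff(lambda t: a * f(t) + b * g(t), x)  # derivative of the combination
      rhs = a * diff(f, x) + b * diff(g, x)         # combination of the derivatives

      print(lhs, rhs)  # the two values agree up to numerical error
      ```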

    2. Differentiation Rules

      In the courselink discussion, Sayana posted more rules for derivatives that can be helpful.

    3. by collecting these partial derivatives

      This gives the Jacobian matrix, defined right below.
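
      As a small worked example (the function is my own choice, not the book's): for $f(x_1, x_2) = (x_1^2 x_2, \sin x_2)$, collecting the partial derivatives row by row gives

      $$J = \begin{pmatrix} \partial f_1/\partial x_1 & \partial f_1/\partial x_2 \\ \partial f_2/\partial x_1 & \partial f_2/\partial x_2 \end{pmatrix} = \begin{pmatrix} 2 x_1 x_2 & x_1^2 \\ 0 & \cos x_2 \end{pmatrix}$$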

    4. The gradient is then the collection of these partial derivatives

      the difference between the gradient and a partial derivative
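
      To make the distinction concrete (my own example, not the book's): for $f(x_1, x_2) = x_1^2 + 3 x_2$, each partial derivative is a single scalar function,

      $$\frac{\partial f}{\partial x_1} = 2 x_1, \qquad \frac{\partial f}{\partial x_2} = 3,$$

      while the gradient is the row vector that collects them all, $\nabla f = (2 x_1 \;\; 3)$.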

    5. The generalization of the derivative to functions of several variables is the gradient

      the difference between a derivative and a gradient

  2. Sep 2022
    1. We typically write 〈x, y〉 instead of Ω(x, y)

      noting the change in syntax/notation

    2. Figure 3.3 For different norms, the red lines indicate the set of vectors with norm 1. Left: Manhattan norm; Right: Euclidean distance

      Does anyone know what the axes are for this? x1 and x2, and with more dimensions, xi?
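
      For reference, the red sets in the figure are the unit circles $\{x \in \mathbb{R}^2 : \|x\| = 1\}$ of the two norms, so the axes are simply the vector coordinates $x_1$ and $x_2$ (in higher dimensions there would be one $x_i$ per coordinate):

      $$\|x\|_1 = |x_1| + |x_2|, \qquad \|x\|_2 = \sqrt{x_1^2 + x_2^2}$$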

    3. Throughout this book, we will use the Euclidean norm (3.4) by default if not stated otherwise

      noting the Euclidean norm will be used as the default for the rest of the book

    4. The Manhattan norm is also called ℓ1 norm.

      I have had a few courses refer to these norms as L1 and L2. They mention which norm it is at the bottom of the definition.
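
      A quick sketch of computing both norms with NumPy (the vector here is just an arbitrary example):

      ```python
      import numpy as np

      x = np.array([3.0, -4.0])

      # Manhattan / L1 norm: sum of absolute values
      l1 = np.linalg.norm(x, ord=1)  # 7.0

      # Euclidean / L2 norm: square root of the sum of squares
      l2 = np.linalg.norm(x, ord=2)  # 5.0

      print(l1, l2)
      ```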

    5. AA⁻¹ = I = A⁻¹A

      I think this would have been useful to see in Section 2.3. In it, they use a matrix B.
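
      A small numerical check of that identity (the matrix is an arbitrary invertible example, not from the book):

      ```python
      import numpy as np

      A = np.array([[1.0, 2.0],
                    [3.0, 4.0]])
      A_inv = np.linalg.inv(A)

      # Both products should be (numerically) the 2x2 identity matrix
      print(np.allclose(A @ A_inv, np.eye(2)))  # True
      print(np.allclose(A_inv @ A, np.eye(2)))  # True
      ```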

    6. A′

      Does A′ have a name? It seems kind of random how they got this.

    7. Definition 2.2 (Identity Matrix). In R^{n×n}, we define the identity matrix

      Does anyone remember the functional use case of the identity matrix? It mentions multiplication below; however, that appears to be just multiplying by 1.
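
      One way to see its role (a sketch with an arbitrary matrix): the identity matrix is the neutral element of matrix multiplication, the matrix analogue of multiplying a number by 1, and it is what the definition of the inverse (AA⁻¹ = I) is stated against.

      ```python
      import numpy as np

      A = np.array([[2.0, 1.0],
                    [0.0, 3.0]])
      I = np.eye(2)  # 2x2 identity matrix

      # Multiplying by the identity leaves A unchanged, like multiplying a scalar by 1
      print(np.allclose(I @ A, A))  # True
      print(np.allclose(A @ I, A))  # True
      ```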

    8. From the first and third equation, it follows that x1 = 1

      How did this get x1 = 1? I don't see how this follows from the 1st and 3rd equations. I assumed we add equations (1) and (3), but that does not give x1 = 1; there would still be an x2 left over.
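
      One possible reading (an assumption about the system's form, since the full equations are not quoted here): if the third equation contains only x2 and x3, then subtracting it from the first equation, rather than adding, eliminates both of those variables and leaves x1 on its own. Schematically, with placeholder right-hand sides a and b:

      $$x_1 + x_2 + x_3 = a \quad (1), \qquad x_2 + x_3 = b \quad (3), \qquad (1) - (3): \; x_1 = a - b$$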

  3. Oct 2021
    1. Example 4.1 (Testing for Matrix Invertibility). Let us begin with exploring if a square matrix A is invertible (see Section 2.2.2). For the smallest cases, we already know when a matrix is invertible. If A is a 1 × 1 matrix, i.e., it is a scalar number, then A = a ⟹ A⁻¹ = 1/a. Thus a · 1/a = 1 holds, if and only if a ≠ 0. For 2 × 2 matrices, by the definition of the inverse (Definition 2.3), we know that AA⁻¹ = I. Then, with (2.24), the inverse of A is

      Directly relates to figure 1 and the paragraphs above
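
      A small sketch of the 2 × 2 case (my own numbers): a 2 × 2 matrix is invertible exactly when its determinant a11·a22 − a12·a21 is nonzero, mirroring the a ≠ 0 condition in the 1 × 1 case.

      ```python
      import numpy as np

      A = np.array([[4.0, 2.0],
                    [1.0, 3.0]])

      det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]  # 4*3 - 2*1 = 10

      if det != 0:
          # Explicit 2x2 inverse formula: swap diagonal, negate off-diagonal, divide by det
          A_inv = np.array([[ A[1, 1], -A[0, 1]],
                            [-A[1, 0],  A[0, 0]]]) / det
          print(np.allclose(A @ A_inv, np.eye(2)))  # True
      else:
          print("A is not invertible")
      ```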

    2. Figure 4.1 A mind map of the concepts introduced in this chapter, along with where they are used in other parts of the book. (Concepts shown: Determinant, Invertibility, Cholesky, Eigenvalues, Eigenvectors, Orthogonal matrix, Diagonalization, SVD.)

      As mentioned on the discussion board, I found this diagram very helpful for understanding the chapter. Hopefully it helps others.