2 Matching Annotations
    1. In Section 3.1, we already discussed norms that we can use to compute the length of a vector. Inner products and norms are closely related in the sense that any inner product induces a norm.

      I was in Neil Bruce's computer vision class, and he was highlighting how, in extremely high-dimensional spaces, Euclidean distance can become more of an abstract concept: it can behave weirdly globally while remaining consistent on a local level. I do not really understand why yet, but I look forward to learning more.
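A small numerical sketch of that "weird global behaviour" (my own experiment, not from the book or the class): as the dimension grows, the Euclidean distances from a random query to a cloud of random points concentrate, so the nearest and farthest points become nearly equidistant.

```python
import numpy as np

def distance_contrast(dim, n_points=200, seed=0):
    # Sample points uniformly in the unit hypercube and measure the
    # relative spread between the nearest and farthest Euclidean
    # distances from a random query point.
    rng = np.random.default_rng(seed)
    points = rng.random((n_points, dim))
    query = rng.random(dim)
    d = np.linalg.norm(points - query, axis=1)
    return (d.max() - d.min()) / d.min()

# The relative spread collapses as the dimension grows: globally,
# "near" and "far" stop being very different.
for dim in (2, 100, 10_000):
    print(dim, distance_contrast(dim))
```

The exact numbers depend on the seed, but the contrast shrinks by orders of magnitude between 2 and 10,000 dimensions.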

    2. Example 3.1 (Manhattan Norm) The Manhattan norm on ℝⁿ is defined for x ∈ ℝⁿ as ∥x∥₁ := ∑ᵢ₌₁ⁿ |xᵢ|, (3.3) where | · | is the absolute value. The left panel of Figure 3.3 shows all vectors x ∈ ℝ² with ∥x∥₁ = 1. The Manhattan norm is also called ℓ1 norm.

      Interesting to see crossover from a topics class I did in Mathematics on artificial intelligence. We used Manhattan distance for tasks involving greedy search strategies in checkers. I wonder what the limits will be down the line for training AI if we use a non-differentiable function.
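A quick sketch (my own, not from the book) that computes the Manhattan norm from Equation (3.3) and shows the non-differentiability the note worries about: the one-sided slopes of | · | disagree at 0, so no gradient exists there.

```python
import numpy as np

def manhattan_norm(x):
    # ∥x∥₁ = Σᵢ |xᵢ|  (Equation 3.3)
    return np.sum(np.abs(x))

print(manhattan_norm(np.array([1.0, -2.0, 3.0])))  # → 6.0

# |x| has slope -1 approaching 0 from the left and +1 from the right,
# so the one-sided derivatives disagree at 0.
h = 1e-6
left = (abs(0.0) - abs(-h)) / h    # slope from the left: -1.0
right = (abs(h) - abs(0.0)) / h    # slope from the right: +1.0
print(left, right)
```

In practice, gradient-based training handles this with subgradients (any value in [-1, 1] at 0), which is why ℓ1 penalties are still usable as loss terms.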