6 Matching Annotations
  1. Aug 2025
    1. A.cumsum(axis=0)

      A.sum(axis=1, keepdims=True): sums across the columns of each row (axis=1 means “go along each row”); keepdims=True keeps the result as a column of shape (n, 1) rather than collapsing it to a 1-D vector.

      A.cumsum(axis=1): cumulative sum along each row. The highlighted A.cumsum(axis=0) instead accumulates down each column; unlike sum, cumsum reduces nothing, so the output keeps A’s shape. A sketch follows below.
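      A minimal NumPy sketch of the three calls above (the matrix A here is a made-up 2×3 example, not the book’s):

      ```python
      import numpy as np

      A = np.arange(6).reshape(2, 3)       # [[0, 1, 2],
                                           #  [3, 4, 5]]

      print(A.sum(axis=1, keepdims=True))  # [[3], [12]] -- shape (2, 1), row sums kept as a column
      print(A.cumsum(axis=0))              # [[0, 1, 2], [3, 5, 7]] -- running sum down each column
      print(A.cumsum(axis=1))              # [[0, 1, 3], [3, 7, 12]] -- running sum along each row
      ```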

    1. %%tab pytorch
       A = X.numpy()
       B = torch.from_numpy(A)
       type(A), type(B)

      X.numpy() → returns a NumPy ndarray backed by the same memory buffer as the PyTorch tensor (this works only for CPU tensors).

      torch.from_numpy(A) → wraps that NumPy array without copying data, creating a PyTorch tensor view of the same memory.

      So A (the NumPy array) and B (the PyTorch tensor) share the same underlying memory. Changing A will change B, and vice versa, unless you explicitly .clone() one of them in PyTorch or .copy() in NumPy. A sketch of the sharing is below.
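      A minimal sketch of that sharing, assuming X is a CPU tensor (the values here are made up):

      ```python
      import torch

      X = torch.arange(4)              # stand-in for the book's X
      A = X.numpy()                    # NumPy view of X's buffer (CPU tensors only)
      B = torch.from_numpy(A)          # PyTorch view of the same buffer, no copy

      A[0] = 99                        # mutate through the NumPy view
      print(B[0].item(), X[0].item())  # 99 99 -- B and the original X both see the change

      C = torch.from_numpy(A.copy())   # .copy() breaks the link
      A[0] = 0
      print(C[0].item())               # still 99 -- C has its own buffer
      ```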

    2. %%tab tensorflow
       A = X.numpy()
       B = tf.constant(A)
       type(A), type(B)

      X.numpy() → gives you a NumPy ndarray (A), usually sharing memory with X if possible (for tf.Tensor in eager mode).

      tf.constant(A) → creates a new TensorFlow tensor from the NumPy data.

      This copies the data into TensorFlow’s internal memory, so B does not share memory with A: mutating A afterwards leaves B unchanged, as the sketch below checks.
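      A minimal check of the copy, using a plain NumPy array as a stand-in for X.numpy() (the values are made up):

      ```python
      import numpy as np
      import tensorflow as tf

      A = np.arange(4)         # stand-in for A = X.numpy()
      B = tf.constant(A)       # copies A into TensorFlow-owned memory

      A[0] = 99                # mutate the NumPy array
      print(B.numpy()[0])      # 0 -- B kept its own copy, no shared memory
      ```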

    3. Y = X + Y

      In both TensorFlow and PyTorch, a tensor variable is essentially a reference (pointer-like) to an underlying block of memory; the data is not copied every time you assign it. So Y = X + Y allocates a new tensor for the result and rebinds the name Y to it rather than writing into Y’s old buffer, as the sketch below makes visible.
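      A quick way to see the rebinding, sketched here in PyTorch with made-up tensors (the first id() check behaves the same in TensorFlow; the in-place += does not, since tf.Tensor is immutable and += also rebinds there):

      ```python
      import torch

      X = torch.ones(3)
      Y = torch.zeros(3)

      before = id(Y)
      Y = X + Y                 # allocates a new tensor, rebinds the name Y
      print(id(Y) == before)    # False: Y now refers to fresh memory

      before = id(Y)
      Y += X                    # in-place add, reuses Y's existing buffer
      print(id(Y) == before)    # True
      ```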