tf.norm(tf.ones((4, 9)))  # Frobenius (ℓ2) norm of a 4×9 matrix of ones: sqrt(36) = 6.0
B = tf.constant([[3.0, 4.0], [0.0, 0.0]])
print(tf.norm(B).numpy())  # sqrt(3² + 4² + 0² + 0²) = sqrt(25) = 5.0
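For comparison, the same computations in PyTorch; a minimal sketch using torch.norm, which also returns the ℓ2 (Frobenius) norm by default:

import torch

print(torch.norm(torch.ones(4, 9)).item())   # sqrt(36) = 6.0
C = torch.tensor([[3.0, 4.0], [0.0, 0.0]])
print(torch.norm(C).item())                  # sqrt(3² + 4² + 0² + 0²) = 5.0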
A.cumsum(axis=0): Cumulative sum down each column (a running total along axis 0).
A.sum(axis=1, keepdims=True): Sums across columns (axis=1 means “go along each row”); keepdims=True keeps the reduced axis, so the result has shape (n, 1) instead of (n,).
A.cumsum(axis=1): Cumulative sum along each row (see the worked example below).
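A small worked example, assuming A is a 2×3 float tensor; shown in PyTorch, where axis/keepdims are NumPy-style aliases for dim/keepdim:

import torch

A = torch.arange(6, dtype=torch.float32).reshape(2, 3)
# A = [[0., 1., 2.],
#      [3., 4., 5.]]

print(A.sum(axis=1, keepdims=True))  # [[ 3.], [12.]]  one sum per row, shape (2, 1)
print(A.cumsum(axis=0))              # [[0., 1., 2.], [3., 5., 7.]]   running sums down each column
print(A.cumsum(axis=1))              # [[0., 1., 3.], [3., 7., 12.]]  running sums along each row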
A.numel(): Total number of elements in a tensor (the product of its shape).
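For example (a quick check in PyTorch with a hypothetical 3×4 tensor A):

import torch

A = torch.arange(12).reshape(3, 4)
print(A.shape)    # torch.Size([3, 4])
print(A.numel())  # 12, i.e. 3 * 4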
%%tab pytorch
A = X.numpy()
B = torch.from_numpy(A)
type(A), type(B)
X.numpy() → returns a NumPy ndarray backed by the same memory buffer as the PyTorch tensor (this works for CPU tensors).
torch.from_numpy(A) → wraps that NumPy array without copying data, creating a PyTorch tensor view of the same memory.
So A (NumPy array) and B (PyTorch tensor) share the same underlying memory: changing A will change B, and vice versa, unless you explicitly .clone() one of them in PyTorch or .copy() in NumPy.
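A minimal sketch of that sharing, assuming X is a CPU tensor (X.numpy() raises an error for CUDA tensors):

import torch

X = torch.arange(6, dtype=torch.float32).reshape(2, 3)
A = X.numpy()            # NumPy view of the same buffer (CPU tensors only)
B = torch.from_numpy(A)  # PyTorch tensor wrapping that same buffer

A[0, 0] = 99.0                         # mutate through the NumPy array
print(X[0, 0].item(), B[0, 0].item())  # 99.0 99.0: all three see the change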
%%tab tensorflow
A = X.numpy()
B = tf.constant(A)
type(A), type(B)
X.numpy() → gives you a NumPy ndarray (A), usually sharing memory with X if possible (for tf.Tensor in eager mode).
tf.constant(A) → creates a new TensorFlow tensor from the NumPy data.
This copies the data into TensorFlow’s internal memory, so B does not share memory with A.
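A quick way to see the copy, sketched with a plain NumPy array as the source (so it can be mutated freely):

import numpy as np
import tensorflow as tf

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = tf.constant(A)        # copies the data into TensorFlow's own buffer

A[0, 0] = 99.0            # mutate the NumPy array afterwards
print(B.numpy()[0, 0])    # still 1.0: B was not affected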
Y = X + Y  # allocates a new tensor for the result and rebinds the name Y to it
In both TensorFlow and PyTorch, a tensor variable is essentially a reference (pointer-like) to an underlying block of memory; assigning it does not copy the data. Running Y = X + Y therefore allocates fresh memory for the result and points the name Y at it, so Y no longer refers to its original buffer.
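A minimal sketch of that rebinding, assuming X and Y are PyTorch tensors of the same shape; id() reports which object a name currently refers to:

import torch

X = torch.ones(3)
Y = torch.ones(3)

before = id(Y)
Y = X + Y                # allocates a new tensor and rebinds the name Y
print(id(Y) == before)   # False: Y now refers to a different object

before = id(Y)
Y[:] = X + Y             # writes the result into Y's existing buffer (in place)
print(id(Y) == before)   # True: same object, contents updated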