405 Matching Annotations
  1. Sep 2023
    1. Dempe, S. 2002. Foundations of Bilevel Programming. Kluwer Academic Publishers, Dordrecht, The Netherlands

    Annotators

    1. Eliminating the parameter s,

      It means presenting it in implicit form instead of the parametric form.

    Annotators

  2. May 2023
    1. maximal element

      maximal element of a partial order.

      A maximal element is taken in the set \(M\), not the set \(W\).

    2. upper bound

      Upper bound of a subset of a partially ordered set.

      Observe that an upper bound need not belong to the subset \(W\subseteq M\) of the partial order.

    3. 4.1-6 Zorn's lemma.

      Useful for abstract vector spaces and the Hahn-Banach theorem.

    4. 4.7-3 Uniform Boundedness Theorem.

      A sequence of bounded linear operators that is pointwise bounded (the values \(T_n x\) are bounded for each \(x\)) is bounded uniformly in the operator norm.
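      In symbols, the way I read the statement (with \(X\) a Banach space and \(T_n \in B(X, Y)\)):

```latex
\sup_{n} \|T_n x\| < \infty \ \text{ for every } x \in X
\qquad \Longrightarrow \qquad
\sup_{n} \|T_n\| < \infty .
```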

    5. 4.4-1 Riesz's Theorem (Functionals on C[a, b]).

      Every bounded linear functional on C[a, b] can be represented as a Riemann-Stieltjes integral.

    6. 4.8-4 Theorem (Strong and weak convergence).

      Strong convergence, equivalences and converse.

    7. 4.8-3 Lemma (Weak convergence).

      Weak convergence and its basic lemmas.

    8. 4.3-3 Theorem (Bounded linear functionals).

      Bounded Linear Functional Theorems in Banach Space.

    9. 4.3-2 Hahn-Banach Theorem (Normed spaces).

      Hahn Banach Theorem in normed vector spaces

    10. 4.8-2 Definition (Weak convergence).

      weak convergence in a normed space

    11. 4.8-1 Definition (Strong convergence).

      Strong convergence in a normed space

    12. 4.7-4 Space of polynomials.

      The space of polynomials is not a complete space.

    13. 4.6-2 Lemma (Canonical mapping).

      The canonical mapping properly defined

    14. 4.5-1 Definition (Adjoint operator T X )

      Definition of the adjoint operator in normed spaces (not Hilbert!!!)

    15. 4.3-4 Corollary (Norm, zero vector).

      Representing the vector norm using linear functionals from the dual space.

    16. 4.2-1 Hahn-Banach Theorem (Extension of linear functionals).
    17. 2.10-7 Space \(l^p\).

      Example for looking for the dual space of \(l^p\) sequence space.

    18. 2.10-5 Space Rn

      The example that finite-dimensional Euclidean space is its own dual space.

    1. Clear from (5.1).

      The sequence \(\Vert x_n - c\Vert\) is bounded below and monotone decreasing for every \(c \in C\); a bounded, monotone real sequence converges, so the limit exists in norm for each \(c\in C\).
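      Written out (my own transcription of the argument):

```latex
0 \le \Vert x_{n+1} - c\Vert \le \Vert x_n - c\Vert \ \ \forall c \in C
\quad \Longrightarrow \quad
\lim_{n\to\infty} \Vert x_n - c\Vert \ \text{exists for every } c \in C,
```

      since a monotone decreasing real sequence that is bounded below converges.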

    2. Proposition 5.4

      Consequence of Fejer Monotonicity

    Tags

    Annotators

    1. 8.3.1. Integral Convergence Theorem

      The integral of the limit of a uniformly converging sequence of functions equals the limit of the integrals. Pay attention to the conditions: 1. Closed and bounded interval \([a, b]\). 2. Sequence of CONTINUOUS functions. 3. Of course, converges uniformly.
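      A quick numerical sanity check (my own toy example, not from the text): \(f_n(x) = x^2 + 1/n\) converges uniformly to \(x^2\) on \([0, 1]\), and the integrals \(\int f_n\) approach \(\int x^2 = 1/3\).

```python
def riemann(f, a, b, steps=10_000):
    """Midpoint-rule Riemann sum of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

f_limit = lambda x: x ** 2
# f_n(x) = x^2 + 1/n converges to x^2 uniformly (the gap 1/n is independent of x),
# so the integrals should approach riemann(f_limit, 0, 1) ~ 1/3.
integrals = [riemann(lambda x, n=n: x ** 2 + 1.0 / n, 0.0, 1.0)
             for n in (1, 10, 100, 1000)]
```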

  3. Apr 2023
    1. 4.3.8. Theorem.

      A set is open iff its complement is closed, and vice versa. In finite-dimensional Euclidean space.

    2. 4.3.9. Proposition.

      In \(\mathbb R^n\), finite intersections and arbitrary unions of open subsets are still open.

    3. 4.3.3. Proposition.

      Union and intersection preserve closedness of sets in \(\mathbb R^n\) (finite unions, arbitrary intersections).

    1. Theorem 5.7.

      Complexity of the Floyd-Warshall algorithm.

    2. 11.16
      1. Convert the tree structure to a flow.
      2. Compute the potentials and the reduced costs.
      3. Select an arc to pivot, keeping the strong feasibility of the tree when changing the arcs on the tree!

      Strong feasibility of the tree is at pg 421

    3. We now state two alternate definitions of a strongly feasible spanning tree.

      Strongly feasible spanning tree solution definition.

    4. with the first eligible pivot rule

      pg: 417

    5. Theorem 11.9 (Triangularity Property).

      The incidence matrix of a spanning tree can be rearranged to be lower triangular.

    6. the basic idea in the procedure is to start at node 1 and fan out along the tree arcs using the thread indices to compute other node potentials.

      The starting node is the first element from the thread, i.e., the first node of the tree traversal.
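      A minimal sketch of that computation (my own encoding; I use a plain BFS over the tree instead of the book's thread traversal, and assume the convention \(c^\pi_{ij} = c_{ij} - \pi(i) + \pi(j)\) for reduced costs, so tree arcs get zero reduced cost):

```python
from collections import deque

def tree_potentials(n, tree_arcs, root=0):
    """tree_arcs: dict {(i, j): cost}, the directed arcs of a spanning tree.
    Returns potentials pi with pi[root] = 0 and c - pi[i] + pi[j] = 0 on tree arcs."""
    nbrs = [[] for _ in range(n)]
    for (i, j), c in tree_arcs.items():
        nbrs[i].append((j, c, +1))   # arc traversed forward
        nbrs[j].append((i, c, -1))   # arc traversed backward
    pi = [None] * n
    pi[root] = 0
    q = deque([root])
    while q:                         # fan out from the root over the tree
        i = q.popleft()
        for j, c, direction in nbrs[i]:
            if pi[j] is None:
                # forward arc (i, j): pi[j] = pi[i] - c; backward arc: pi[j] = pi[i] + c
                pi[j] = pi[i] - c if direction == +1 else pi[i] + c
                q.append(j)
    return pi
```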

    7. Theorem 11.2 (Spanning Tree Property).
    8. Theorem 11.1 (Cycle Free Property).
    9. tail nodes

      For arc (i, j), the first index is the tail.

    10. 9.16.

      Do (a) - (c)

      (a): If there is no negative cycle, then the flow is optimal; we might need the reduced cost optimality conditions?

    11. Lemma 9.14

      out of kilter algorithm

    12. Lemma 9.13.

      Out of Kilter Algorithm

    13. Theorem 9.5 (Weak Duality Theorem)

      (9.11) is literally the dual LP formulation of the min cost flow problem on the network.

    14. 9.28.

      This is the other way of the proof for theorem 9.4 listed earlier in the chapter.

      Expression (9.8) is on page 310 for the complementary slackness conditions; expression (9.7) is on page 309, Theorem 9.3.

    15. Theorem 9.4 (Complementary Slackness Optimality Conditions)

      Some flow is optimal if and only if there is some potential whose reduced costs satisfy the complementary slackness conditions for the dual of the linear program of the min cost flow problem.

    16. Proof

      We actually didn't show that the cases proved below are mutually exclusive!

    17. 9.44.
    18. 9.41.
    19. 11.12
    20. 9.22.

      Draw the residual graph G(x) with appropriate arc costs and the updated Residual.

    1. 2.10-4 Theorem (Dual space).

      The dual spaces theorem

    2. 2.10-3 Definition (Dual space X').
    3. 2.10-6 Space \(l^1\)

      The dual space of \(l^1\) is \(l^\infty\).

    4. 2.10-2 Theorem (Completeness)
    5. 2.10-1 Theorem (Space B(X, Y)).
    6. The definition will involve a general field K, but in functional analysis, K will be R or C. The elements of K are called scalars; hence in our case they will be real or complex numbers.

      This is probably very important due to the fact that both fields are complete and they are equipped with a norm. And of course, the reals and the complex numbers are technically also vector spaces with dim = 1.

      The total order property is probably important for scalar fields in functional analysis.

    7. extension

      It's just extending the domain of the linear operator to a superset of the original domain.

    8. F is a linearly independent set since

      I don't think we ever defined the concept of linear independence for a set of linear functionals.

    9. (5a)

      It's inner product.

    10. sublinear functional

      A functional that is: 1. subadditive, 2. positively homogeneous (not necessarily linear).

    11. 2.9-2 Lemma (Zero vector)
    12. 2.9-3 Theorem (Algebraic reflexivity).

      The dual of the dual is just the original space (for finite-dimensional spaces).

    13. 2.9-1 Theorem (Dimension of X*).

      The space and its dual space have the same dimension.

    14. dual space

      first mention of dual space in this textbook?

    15. 2.8-3 Theorem (Continuity and boundedness)

      It's an instance of Theorem 2.7-9: for a linear operator, boundedness and continuity are equivalent conditions.

    16. 2.8-2 Definition (Bounded linear functional)

      Very similar to the definition of a bounded linear mapping between two normed spaces.

    17. 2.8-1 Definition (Linear functional)

      A linear functional maps from a vector space to its scalar field.

    18. 2.7-10 Corollary (Continuity, null space)

      For a bounded linear operator \(T\): 1. a sequence converging in the domain of \(T\) converges in the image after applying the linear transformation; 2. the null space of the operator is a closed linear subspace.

    19. Proof.

      The proof makes a special kind of construction. It takes a point in \(Z\) but not in \(Y\), and then finds the closest point in \(Y\) approximating that vector. Then normalizing the difference between the point and its closest approximation yields a unit vector satisfying the stated conditions.

    20. 2.7-1 Definition (Bounded linear operator)

      If we put a vector into the operator, the norm of the output is bounded by a constant multiple of the norm of the input: \(\Vert Tx\Vert \le c\Vert x\Vert\).

    21. 2.7-9 Theorem (Continuity and boundedness)

      Let \(T\) be a linear mapping between two normed spaces. Then:

      1. \(T\) is continuous if and only if it's bounded.
      2. If \(T\) is continuous at a single point, then it's continuous everywhere.
    22. 2.4-1 Lemma (Linear combinations)

      The norm of a linear combination of linearly independent vectors is bounded below by the sum of the absolute values of the scalar weights times a strictly positive constant. Only for finite-dimensional spaces.

      \( \Vert \alpha_1 x_1 + \cdots + \alpha_n x_n\Vert \ge c(|\alpha_1| + \cdots + |\alpha_n|) \)

    23. 2.7-8 Theorem (Finite dimension).

      Linear operators and bounded linear operators coincide when the vector space is finite dimensional.

    24. 2.7-2 Lemma (Norm).

      The operator norm is actually a norm: linear operators, as homomorphisms between vector spaces, form a vector space themselves. See here.

      The properties of the operator norm basically inherit the properties of the normed vector space; this is direct from the definition. However, these results are only for bounded linear operators.

    25. 2.6-11 Lemma (Inverse of product)
    26. 2.6-10 Theorem (Inverse operator)

      (1) The inverse exists iff the null space of the operator is \(\{\vec 0\}\). (2) When the inverse exists, it is a linear operator. (3) If the domain of \(T\) is finite dimensional, then the range and domain of \(T\) have the same dimension.

    27. 2.6-9 Theorem (Range and null space).
      1. Range is a vector space.
      2. If the domain is finite dimensional, then the dim of the range is \(\le\) dim of the domain.
    28. 2.6-1 Definition (Linear operator).

      The zero vector being mapped to the zero vector by a linear operator is not direct from the axioms, but rather a property of the scalar field.

    29. 2.4-4 Definition (Equivalent norms).

      Take note that, if we treat the norm as a type of metric, then the conditions for equivalent norm is strictly stronger than the conditions needed for metric, which is stated in convergence of sequences.

    30. 5
    31. 1.

      Finite-dimensional spaces/subspaces have an equivalence between compactness and being closed and bounded. In this case we have infinite dimensions, so there are some closed and bounded sets that are not compact.

      That was my first thought, that is all.

    32. 12.

      From 11, we have: 1. A norm implies a convex norm ball.

      Hence the contrapositive of the statement: if the norm ball is not convex, then it's not a norm.
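      The classic instance of this contrapositive (my own pick of example, not from the book): for \(p = 1/2\) the \(p\)-norm formula has a non-convex unit ball, so it cannot be a norm.

```python
def half_norm(x, y):
    """The p = 1/2 'norm' formula: (|x|^(1/2) + |y|^(1/2))^2."""
    return (abs(x) ** 0.5 + abs(y) ** 0.5) ** 2

u, v = (1.0, 0.0), (0.0, 1.0)                 # both on the unit 'sphere'
mid = ((u[0] + v[0]) / 2, (u[1] + v[1]) / 2)  # their midpoint
# half_norm(*mid) comes out > 1: the midpoint escapes the unit ball, so the
# ball is not convex and, by the contrapositive, this formula is not a norm.
```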

    33. 11. (Convex set, segment)

      Unit Ball in a Normed space is convex

    34. subspaces of a normed spaceX (of any dimension)

      I just discovered that subspaces of vector spaces are very different compared to subspaces of metric spaces.

      1. A subspace of a metric space just have to be a metric space.
      2. A subspace of a vector space will still have to retain the vector space structure. But if it's viewed as a metric space, this doesn't have to be the case.

      Also take note that this is talking about subspaces of any dimension.

    35. 2.5-6 Theorem (Continuous mapping)

      Continuous mappings preserve compactness.

    36. 2.5-5 Theorem (Finite dimension)

      A compact closed unit ball in a normed space means the space is finite dimensional.

    37. 2.5-4 F. Riesz's Lemma

      Riesz's Lemma, preparing for a theorem about the norm ball in finite dimensional spaces.

    38. 2.5-3 Theorem (Compactness).

      Compactness is equivalent to being closed and bounded in finite-dimensional spaces.

    39. 2.5-2 Lemma (Compactness)
    40. 2.5-1 Definition (Compactness)
    1. 18.1 Theorem.

      Extreme value theorem: a continuous function attains its minimum and maximum over a compact set in its domain.

    2. 11.4 Theorem.

      Monotone Subsequence Theorem for real Sequences.

    3. is beyond all the dominant terms

      Index \(n_1\) is larger than the indices of all the dominant terms in the sequence.

    Tags

    Annotators

    1. 1.9 Theorem (attainment of a minimum)

      The existence of a minimizer for functions.

    Tags

    Annotators

    1. Lemma 2.13

      The cute formula, generalized version.

    Tags

    Annotators

  4. Mar 2023
    1. 2.4-5 Theorem (Equivalent norms).

      In a finite-dimensional space, all norms are equivalent.

    2. 2.4-3 Theorem (Closedness)

      Every finite-dimensional subspace of a normed space is closed.

    3. 2.4-2 Theorem (Completeness).

      Every finite-dimensional subspace of a normed space is complete, and so are its subspaces.

    4. 2.3-2 Theorem (Completion)

      Alternative explanation from Wolfram MathWorld: here.

      In brief, a normed space can be mapped by an isometry onto a dense subspace of a Banach space.

    5. 2.3-1 Theorem (Subspace of a Banach space).

      Similar to 1.4-7

    6. It was given only quite recently, by P. Enflo (1973) who was able to construct a separable Banach space which has no Schauder basis.
    7. A fortiori,

      "from the stronger reasoning, we have:", it basically means: "implied" and not "equivalent. "

    8. 1.5-2 Completeness of \(l^\infty\)

      Classical example, and this feels like uniform convergence of functions, similar principles.

    9. 2.
    10. 8.
    11. 6.
    1. Theorem 9.7.
    2. Theorem 9.6 (Strong Duality Theorem)
    3. Theorem 9.3 (Reduced Cost Optimality Conditions)
    4. Theorem 9.1

      The no-negative-cycle optimality condition for the min cost flow problem.

    5. 7.14

      Undergraduate based.

      We have to consider it recursively to find a formula for the number of saturating and nonsaturating pushes/relabels.

      Ordered by distance label and then the lexicographic order of the node labels.

    6. Thus the maximum capacity augmenting path has residual capacity atleast (v* - v)/m. Now consider a sequence of 2m consecutive maximum capacityaugmentations starting with the flow x. If each of these augmentations augments atleast (v* - v)/2m units of flow, then within 2m or fewer iterations we will establisha maximum flow.

      Why is this \(2m\) what the fuck dude.

      Ok, check: here for more information.

    7. This argument shows that within 2m consecutive iterations, the algorithm either establishes a maximum flow or reduces the residual capacity of the maximum capacity augmenting path by a factor of at least 2.

      This is an unstated claim where, the proof comes before this paragraph.

    8. 6.18

      Graduate based, proof.

      Observe that: A minimum flow will require some lower bound, or else this is trivial because the zero flow will always be an optimal solution.

      We need to convert the problem to a max flow, with/without the lower bound. Two applications of the max flow probably means that we need a lower bound, yes.

    9. this result would imply that the algorithm could saturate any arc at most k/2 times.

      Two relabel operations are required to saturate one arc once.

    10. Theorem 9.10
    11. algorithm cycle-canceling

      algorithm

    12. break ties in favor of a node with the smallest node number

      I think that the set of active nodes can never contain the same \(j\) such that \(j'\) is also an active node.

    13. 7.4

      Undergraduate based.

      Show the distance labels and excesses at each step. Verify you have the maximum flow by exhibiting an appropriate cut.

      There are double arcs on here again, what the hell.

    14. 7.2

      Undergraduate based, algorithm.

      Show your residual Δ network after each augmentation. Verify you have the maximum flow by exhibiting an appropriate cut.

      Why is there a double arc for the graph 7.21(b)? Is it a lower bound for the flow on the network?

    15. Lemma 7.12.

      Upper bound for the distance label of the preflow push algorithm

    16. Lemma 7.11.

      dipath connection for all nodes in the graph.

    17. active

      A node is active in the preflow push algorithm when its excess is strictly positive, during the operations of the preflow push algorithm.

    18. An augmentation on arc (i, j) might, however, create an additional arc (j, i) with rji > 0 and therefore also create an additional inequality d(j) ≤ d(i) + 1 that the distance labels must satisfy.

      This is just using the inductive hypothesis. The inductive hypothesis holds for all the neighbours of the current node \(i\); after the deletion of the arcs, the set of neighboring vertices shrinks, so the valid distance label condition still holds.

    19. Lemma 7.5

      The shortest augmenting path algorithm maintains valid distance labels for all vertices during its iterations. By valid, we mean \(d(t) = 0\) and \(d(i) \le d(j) + 1\) for every arc \((i, j)\) in the residual network.
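      A tiny checker for that validity condition (my own code; it assumes the convention that \(d\) is a distance-to-sink label, so validity reads \(d(t) = 0\) and \(d(i) \le d(j) + 1\) for every residual arc \((i, j)\)):

```python
def labels_are_valid(d, residual, sink):
    """d: list of distance labels; residual: dict {(i, j): residual capacity}."""
    if d[sink] != 0:
        return False
    # every arc with positive residual capacity can shortcut distance by at most 1
    return all(d[i] <= d[j] + 1 for (i, j), r in residual.items() if r > 0)
```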

    20. Property 7.2.
    21. Working with Reduoed Costs

      A reduced cost is generated from a potential assignment on the vertices of the graph.

    22. (2.3)

      This is the definition of reduced costs with respect to a potential on the nodes.

    23. We have seen earlier that some feasible circulation \(x^1\) in \(G(x^0)\) satisfies the property that \(x = x^0 + x^1\).

      Applying the flow decomposition theorem from before, we decompose any flow into a sum of path flows and cycle flows.
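      A hedged sketch of that decomposition (my own code, assuming the flow is given as a dict and satisfies conservation except at \(s\) and \(t\)): follow positive-flow arcs; reaching the sink yields a path flow, revisiting a node yields a cycle flow; subtract the bottleneck and repeat.

```python
def decompose(flow, source, sink):
    """flow: dict {(i, j): units}. Returns (paths, cycles), each a list of
    (node_list, units) pairs."""
    flow = {a: f for a, f in flow.items() if f > 0}
    out = {}
    for (i, j) in flow:
        out.setdefault(i, []).append(j)
    paths, cycles = [], []
    while flow:
        # start at the source while it still carries outgoing flow, else anywhere
        if any(flow.get((source, j), 0) > 0 for j in out.get(source, [])):
            start = source
        else:
            start = next(iter(flow))[0]
        walk, seen = [start], {start: 0}
        while True:
            i = walk[-1]
            j = next(k for k in out[i] if flow.get((i, k), 0) > 0)
            if j in seen:                      # closed a cycle
                piece, bucket = walk[seen[j]:] + [j], cycles
                break
            walk.append(j)
            seen[j] = len(walk) - 1
            if j == sink:                      # completed an s-t path
                piece, bucket = walk, paths
                break
        arcs = list(zip(piece, piece[1:]))
        units = min(flow[a] for a in arcs)     # bottleneck on the piece
        bucket.append((piece, units))
        for a in arcs:                         # subtract the bottleneck
            flow[a] -= units
            if flow[a] == 0:
                del flow[a]
    return paths, cycles
```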

    24. admissible arc

      \((i, j)\) is an admissible arc when \(d(i) = d(j) + 1\), where \(d\) is the distance label from the algorithm, and \(r_{i, j} > 0\).

    25. Property 7.7.

      The complexity of the memoized shortest augmenting path max flow.

    26. 7.4 SHORTEST AUGMENTING PATH ALGORITHM

      Not quite Dinic, Not quite Edmonds Karp. It's something in between.

    27. blocking flows
    28. 6.43.

      Graduate based, proof; see page 195 for Theorem 6.11.

    29. Figure 6.22 Examples for Exercises 6.12 and 6.13.

      Undergraduate based:

      Using the shortest augmenting path algorithm, solve the maximum flow problem given in Figure 6.22(b). Describe the augmenting paths at each iteration as well as any advances and/or retreats and the distance labels at each iteration. Verify you have the maximum flow by exhibiting an appropriate cut.

      In brief, run the Dinic's algorithm for this graph.

    30. Establishing a Feasible Flow

      Feasibility search for the max flow problem with lower bound constraint.

      Didn't mention what happens to the \(s, t\) vertices for the mass balance constraints.

    31. 6.34

      Undergraduate based. (a) F, (b) F. (c) F, (d) F, (e) T, (f) F.

      \(x_{i, j}= 0 \dot\vee x_{j, i} = 0\)

    32. Consider a node j that is connected to the source node by a shortest path \(s = i_0 - i_1 - i_2 - \cdots - i_k - i_{k+1} = j\) consisting of k + 1 arcs, but has no shortest path containing fewer than k + 1 arcs.

      shortest path complexity proof is actually around here.

    33. Theorem 7.10
    34. If we apply the modified label-correcting algorithm to a problem with nonnegative arc lengths and we always examine a node from LIST with the minimum distance label, the resulting algorithm is the same as Dijkstra's algorithm discussed in Section 4.5

      Dijkstra's interpretation from the generic labeling algorithm.

    35. Theorem 5.3

      The complexity of the FIFO label correcting algorithms.

    36. Modified Label-Correcting Algorithm

      Not Bellman-Ford, but a generic version of the FIFO algorithm, it looks like.

    37. Consequently, we can store all nodes whose distance labels change during a pass, and consider (or examine) only those nodes in the next pass.

      This is the key that helps to improve the labeling algorithm and develop the idea of FIFO.
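      That idea, sketched in code (my own toy version, with a deque-based LIST and a node-examination counter for negative cycle detection; the graph format is my assumption):

```python
from collections import deque

def fifo_label_correcting(n, arcs, source):
    """arcs: dict {(i, j): cost}. Returns distance labels, or None when a node is
    examined more than n times (a negative cycle is reachable)."""
    adj = [[] for _ in range(n)]
    for (i, j), c in arcs.items():
        adj[i].append((j, c))
    d = [float('inf')] * n
    d[source] = 0
    queue, in_queue = deque([source]), [False] * n
    in_queue[source] = True
    count = [0] * n
    while queue:
        i = queue.popleft()
        in_queue[i] = False
        count[i] += 1
        if count[i] > n:                  # examined too often: negative cycle
            return None
        for j, c in adj[i]:
            if d[i] + c < d[j]:
                d[j] = d[i] + c
                if not in_queue[j]:       # store only nodes whose label changed
                    queue.append(j)
                    in_queue[j] = True
    return d
```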

    38. Dinic's algorithm

      The BFS for max flow with retreat.

    39. Theorem 7.4
    40. Property 7.1
    41. Property 7.3
    42. Proof

      The proof uses the algorithm.

    43. Theorem 3.5 (Flow Decomposition Theorem).

      We implicitly assume that the flow is positive due to how the algorithm works.

    44. Property 3.6. A circulation x can be represented as cycle flow along at most m directed cycles.

      Circulation property. Why is it not along \(m\) many closed walks?

      This is only true from the view of the flow decomposition algorithm and not the LP.

    45. We need to show that the distance label d(i) of node i is optimal

      Hypothesis (1) holds for the set \(S\cup \{i\}\).

    46. Potential Functions and Amortized Complexity

      Potential function and its application in Amortized Complexity.

    47. In the Floyd-Warshall algorithm, we detect the presence of a negative cyclesimply by checking the condition d[i, i] < 0

      Detecting negative cycles using the Floyd-Warshall algorithm.
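      A toy version of that check (my own code, not the book's pseudocode): a negative cycle through node i shows up as a negative diagonal entry d[i][i].

```python
INF = float('inf')

def floyd_warshall(n, arcs):
    """n nodes (0..n-1), arcs: dict {(i, j): cost}.
    Returns (dist matrix, has_negative_cycle)."""
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for (i, j), c in arcs.items():
        d[i][j] = min(d[i][j], c)
    for k in range(n):                       # allow intermediate node k
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # a negative cycle through i makes the diagonal entry d[i][i] negative
    has_neg = any(d[i][i] < 0 for i in range(n))
    return d, has_neg
```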

    48. An algorithm is said to be \(\Omega(f(n))\) if for some numbers \(c'\) and \(n_0\) and all \(n \ge n_0\), the algorithm takes at least \(c'f(n)\) time on some problem instance.

      Big Omega definition

    49. An algorithm is said to be \(\Theta(f(n))\) if the algorithm is both \(O(f(n))\) and \(\Omega(f(n))\).

      Definition of Big Theta.

    50. An algorithm is said to run in \(O(f(n))\) time if for some numbers \(c\) and \(n_0\), the time taken by the algorithm is at most \(cf(n)\) for all \(n \ge n_0\).

      Big O definition

    51. 7.13

      Graduate based.

    52. 6.20

      Undergraduate, modeling based:

      Make sure to prove the transformation works after describing it by proving a correspondence between flows of the multiple source/sink problem and the single source/sink problem.

    53. We refer to two directed paths from node s to node t as arc disjoint if they do not have any arc in common.

      Arc disjoint s-t paths.

    54. 6.7 FLOWS WITH LOWER BOUNDS

      A small generalization of the maximum flow problem where lower capacity constraints are introduced for all the arcs.

    55. 6.13.

      undergraduate, hw description:

      Find the longest path in the residual network by inspection. (Note: finding the longest path in a network is NP-complete unless the network is acyclic.)

      Labeling Algorithm Page 184

    56. Theorem 6.12.
    57. Theorem 6.11 (Circulation Feasibility Conditions)

      This is a NECESSARY CONDITION for feasibility of the circulation problem.

    58. Theorem 6.10 (Generalized Max-Flow Min-Cut Theorem).
    59. by definition, the capacity 3 of an augmenting path is always positive

      Positive residual capacity is part of the definition of an augmenting path!

    1. Hence (xn)converges.

      A convergent subsequence of a Cauchy sequence has the same limit as the mother sequence.

    2. and hence \(\sum_{k=1}^{\infty} (x_{N_{k+1}} - x_{N_k})\) converges

      By the statement's hypothesis.

    3. c0 is its completion

      Taking the closure of \(c_{0, 0}\) in \(l^\infty\) adds the limits of its sequences, which completes it.

    4. Example 27

      \(c_0\) is the set of all sequences that converge to \(0\), with norm \(\Vert \cdot\Vert_\infty\).

    5. Proof.

      It's about maximal elements of partially ordered sets:

      It states that a partially ordered set containing upper bounds for every chain (that is, every totally ordered subset) necessarily contains at least one maximal element.

    6. Theorem 26

      Warning, property of metric space, not Banach Space specific.

    7. (xn) converges, so it is Cauchy

      This is not directly from the definition of a complete metric; it's from the fact that a convergent sequence is Cauchy in the same metric!

    8. There exists a sequence (xn) in Ythat converges to y

      By \(y\) being in the closure of \(Y\).

    9. Example 24

      WTF: absolute convergence, i.e., convergence of the series of norms of elements from the vector space, doesn't imply that the series itself converges to a limit in the space.

      Relevant discussion: here. Please think and read carefully about them, yes.

    10. general normed spaces

      Some norm on a vector space, but the vector space might be complete, or incomplete.

    11. sequece in

      Banach Space yes.

    12. Proof

      Why Cauchy when we already know that the limit of the sequence of functions is just the Heaviside function at \(x = 1/2\)?????

      Because converging to some pointwise limit never means that the sequence is Cauchy in the metric. To show that the space is incomplete, we need to start with the metric \(d(x, y)\), show that the sequence of piecewise linear functions is Cauchy in that metric, and then show that its limit is not in the space. Just showing that the limit is not in the space is not entirely valid. But we sure can make use of the fact that a converging sequence in a metric should also be Cauchy, yes.

    Tags

    Annotators

  5. Feb 2023
    1. 5.41.

      Graduate based, proof. Description:

      Give a proof whether the answer is yes or no.

      We would need an arc that goes over \(O(n^2)\) many pairs of paths for all the possible paths; we first have to show that the previous algorithm from 5.40 no longer works, and then we would have no choice but to look over almost all the possible paths to figure out the optimal path for the perturbed costs!

    2. 5.40

      Graduate based, proof. Description:

      Use the answers from 4.20 to help.

    3. 4.20

      Graduate based, proof. Description:

      The statement requires that \(s\neq t\) to be true.

      By length, the question probably mean the total cost of the directed path.

      (a) I think this is just a direct application of the characterization of shortest path distances, \(d(i, j) \le d(i, k) + d(k, j) \; \forall i, j, k \in N\). (b) I have a feeling that this is a similar proof compared to the Floyd-Warshall algorithm as discussed before.

    4. 5.4

      Undergraduate homework question. Description from the assignment sheet:

      Order the nodes 1,...,7, and order the arcs with tail node i, A(i), increasing by their head nodes. See next page for the algorithm.

      The graph is at the same page. The FIFO Label-Correcting algorithm is exactly the same as Moore Bellman Ford Algorithm. Labels at the end should be: [0, -5, 15, -5, -10]

    5. (4.1a)

      Undergraduate based. Problem description:

      Consider the LP formulation of the shortest path problem given on page 94 of the text. Show that shortest path problem LP is unbounded for any feasible network that has a negative cycle.

    6. (a) (b)Figure 4.15 Some shortest path networks.

      Undergraduate, description:

      Solve the all-pairs shortest path problem using the Floyd-Warshall algorithm to the network given in Fig. 4.15(a). Show the distance matrix for each iteration. To make the matrix more readable, relabel the nodes a,b, ..., f for 1,2,... ,6, and z for node 0.

      What the heck is this "z for node 0" under the homework description? No such node exists in the graph, nor are there additional nodes for the FW algorithm.

    7. Assumption 6.4.

      \((i, j)\in A\iff (j, i)\in A\). We allow for arcs with zero capacity. For every single-direction arc, the opposite-direction arc on the graph has a capacity of zero.

      I think this is essentially a residual graph on the zero flow, which is trivially feasible. Residual graph on the network is covered in the later part of the chapter (pg177).
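      The way I'd encode that observation (my own sketch, not the book's): give every arc a zero-capacity mate; the residual capacities of the zero flow are then exactly the original capacities.

```python
def with_reverse_arcs(cap):
    """cap: dict {(i, j): capacity}. Returns a dict where every arc has a mate
    (added with capacity 0 when missing), per Assumption 6.4."""
    full = dict(cap)
    for (i, j) in cap:
        full.setdefault((j, i), 0)
    return full

def residual(cap, flow):
    """Residual capacity r(i, j) = u(i, j) - x(i, j) + x(j, i)."""
    return {(i, j): u - flow.get((i, j), 0) + flow.get((j, i), 0)
            for (i, j), u in cap.items()}
```

      For the zero flow, `residual(full, {})` returns `full` unchanged, which is the sense in which the paired-arc network is already a residual graph.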

    8. Theorem 6.6.

      complexity of the max flow labeling algorithm.

    9. Theorem 6.7.

      Min cut capacity equals the maximum number of arc-disjoint paths from source to destination.

    1. Proof.

      It proves it by showing the existence of a basis that contains infinitely many elements.

      Recall that a function is an element of a vector space, and the inner product of the space is the integral of stuff.

    2. This representation is unique for every nonzero x ∈ X

      I think it's unique also for the zero vector since the set of vectors is linearly independent?

      Also observe that the Hamel basis representation seems to be finite.

    3. e1, e2, ... is not a Hamel basis of l2, but is a Schauder basis of l2

      I am guessing that orthogonality is the key. A Schauder basis is an orthogonal basis and a Hamel basis is not...? Also observe that the Hamel basis representation seems to be finite.

    Annotators

    1. not proven here

      Probably by strong convexity and the fact that the function is differentiable on its entire domain.

      Max monotone operator related; related to the surjectivity of the operator or something.

    2. 〈∇f (x) − ∇f (y ), x − y

      Strong Monotonicity of the gradient/subgradient operator when the function is strongly convex.

    Annotators