Algorithms and Data Structures: Network Flows. 24th & 28th Oct, 2014


Algorithms and Data Structures: Network Flows
24th & 28th Oct, 2014
ADS: lects 10 & 11, slide 1

Flow Networks

Definition 1. A flow network consists of:
- A directed graph G = (V, E).
- A capacity function c : V × V → R such that c(u, v) ≥ 0 if (u, v) ∈ E and c(u, v) = 0 for all (u, v) ∉ E.
- Two distinguished vertices s, t ∈ V, called the source and the sink, respectively.

We read (u, v) to mean u → v.

Assumption: each vertex v ∈ V is on some directed path from s to t. This implies that G is connected (but not necessarily strongly connected), and that |E| ≥ |V| - 1.

Example

[Figure: the running example network on vertices s, r, u, v, w, x, y, z, t, with capacities on the edges.]

For this graph, V = {s, r, u, v, w, x, y, z, t}. The edge set is E = {(s, u), (s, r), (s, x), (u, v), (u, x), (v, x), (v, w), (r, w), (r, y), (x, y), (y, r), (y, z), (z, w), (z, t), (w, t)}. Some examples of capacities are c(v, x) = 20 and c(v, r) = 0 (since there is no arc from v to r).

Network Flows

Definition 2. Let N = (G = (V, E), c, s, t) be a flow network. A flow in N is a function f : V × V → R satisfying the following conditions:
- Capacity constraint: f(u, v) ≤ c(u, v) for all u, v ∈ V.
- Skew symmetry: f(u, v) = -f(v, u) for all u, v ∈ V.
- Flow conservation: for all u ∈ V \ {s, t}, ∑_{v∈V} f(u, v) = 0.
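The three conditions of Definition 2 can be checked mechanically. Below is a minimal Python sketch (the language and all identifiers are my own illustrations, not from the slides), representing c and f as dictionaries keyed by vertex pairs, with missing pairs treated as 0:

```python
from itertools import product

def is_flow(V, c, f, s, t):
    """Check the three flow conditions of Definition 2.

    V: iterable of vertices; c, f: dicts mapping (u, v) pairs to numbers,
    with absent pairs counting as 0.  Illustrative sketch only."""
    cap = lambda u, v: c.get((u, v), 0)
    flow = lambda u, v: f.get((u, v), 0)
    for u, v in product(V, repeat=2):
        if flow(u, v) > cap(u, v):           # capacity constraint
            return False
        if flow(u, v) != -flow(v, u):        # skew symmetry
            return False
    for u in V:
        if u in (s, t):
            continue
        if sum(flow(u, v) for v in V) != 0:  # flow conservation
            return False
    return True
```

Note that skew symmetry forces the stored values on back-edges to be the negations of the forward values, which is why a valid f must list both directions.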

Network Flows (cont'd)

Let N = (G = (V, E), c, s, t) be a flow network and f : V × V → R a flow in N. For u, v ∈ V we call f(u, v) the net flow at (u, v). The value of the flow f is the number |f| = ∑_{v∈V} f(s, v).

Notice that our particular definition of flow (the skew-symmetry constraint) ensures that f(u, v) is truly the net flow in the usual sense of the word (e.g. if (r, y) in the earlier example were to carry flow 3, and (y, r) to carry flow 4, we would have f(r, y) = -1).

Example

[Figure: a flow of value 18 in the example network; only positive net flows are shown.]

The Maximum-Flow Problem

Input: a flow network N.
Output: a flow of maximum value in N.

The problem is to find a flow f such that |f| = ∑_{v∈V} f(s, v) is largest possible (over all legal flows).

The Ford-Fulkerson Algorithm

Published in 1956 by Lester Randolph Ford Jr. and Delbert Fulkerson.

Algorithm Ford-Fulkerson(N)
1. f ← flow of value 0
2. while there exists an s-t path P in the residual network do
3.   f ← f + f_P
4.   update the residual network
5. return f

The residual network is N with the used-up capacity removed. To make this precise, we need notation and proofs: this lecture.

Some Technical Observations

Let N = (G = (V, E), c, s, t) be a flow network, f : V × V → R a flow in N, and u, v ∈ V.

1. f(u, u) = 0 for all u ∈ V.
Proof: f(u, u) = -f(u, u) by skew symmetry, so f(u, u) = 0.

2. For any v ∈ V \ {s, t}, ∑_{u∈V} f(u, v) = 0.
Proof: ∑_{u∈V} f(u, v) = -∑_{u∈V} f(v, u) = 0, by skew symmetry and flow conservation.

3. If (u, v) ∉ E and (v, u) ∉ E, then f(u, v) = f(v, u) = 0.
Proof: either f(u, v) ≥ 0 or f(v, u) ≥ 0, by skew symmetry. Say f(u, v) ≥ 0. Then 0 ≤ f(u, v) ≤ c(u, v) = 0 by the capacity constraint, so f(u, v) = 0. By skew symmetry, this shows f(v, u) = 0.

One More Technical Observation

4. The positive net flow entering v is ∑_{u∈V : f(u,v)>0} f(u, v). The positive net flow leaving v is defined symmetrically. Flow conservation now says: positive net flow in = positive net flow out.

All these observations are just to make it easy for us to talk about flows.

Working with Flows

Implicit summation notation: for X, Y ⊆ V, put

f(X, Y) = ∑_{u∈X} ∑_{v∈Y} f(u, v) = ∑_{(u,v)∈X×Y} f(u, v).

Abbreviations: f(u, Y) stands for f({u}, Y), and f(X, v) stands for f(X, {v}).

Conservation of flow is now: f(u, V) = 0 for all u ∈ V \ {s, t}.

Working with Flows (cont'd)

Lemma 3. Let N = (G = (V, E), c, s, t) be a flow network and f a flow in N. Then for all X, Y, Z ⊆ V:
1. f(X, X) = 0.
2. f(X, Y) = -f(Y, X).
3. If X ∩ Y = ∅, then f(X ∪ Y, Z) = f(X, Z) + f(Y, Z) and f(Z, X ∪ Y) = f(Z, X) + f(Z, Y).

The Lemma lifts the flow properties from single vertices to sets of vertices.

Proof of Lemma 3

1. f(X, X) = ∑_{(u,v)∈X×X} f(u, v)   (by defn. of f(X, X))
 = ∑_{{u,v}⊆X} (f(u, v) + f(v, u))   (take (u, v) and (v, u) together)
 = 0.   (by skew symmetry)

2. f(X, Y) = ∑_{(u,v)∈X×Y} f(u, v)   (by defn. of f(X, Y))
 = ∑_{(u,v)∈X×Y} -f(v, u)   (by skew symmetry)
 = -∑_{(v,u)∈Y×X} f(v, u)   (take the minus sign outside the summation)
 = -f(Y, X).   (by defn. of f(Y, X))

Proof of Lemma 3 (cont'd)

3. f(X ∪ Y, Z) = ∑_{u∈X∪Y, v∈Z} f(u, v)
 = ∑_{u∈X, v∈Z} f(u, v) + ∑_{u∈Y, v∈Z} f(u, v) - ∑_{u∈X∩Y, v∈Z} f(u, v)
   (expand the sum into X and Y, subtract duplicates in X ∩ Y)
 = ∑_{u∈X, v∈Z} f(u, v) + ∑_{u∈Y, v∈Z} f(u, v)
   (but X ∩ Y = ∅, so the third term disappears)
 = f(X, Z) + f(Y, Z).

Moreover, f(Z, X ∪ Y) = -f(X ∪ Y, Z) = -(f(X, Z) + f(Y, Z)) = f(Z, X) + f(Z, Y).

Working with Flows (cont'd)

Corollary 4. Let N = (G = (V, E), c, s, t) be a flow network and f a flow in N. Then |f| = f(V, t).

Proof:
|f| = f(s, V)   (by definition)
 = f(V, V) - f(V \ {s}, V)   (by Lemma 3 (3.))
 = -f(V \ {s}, V)   (by Lemma 3 (1.))
 = f(V, V \ {s})   (by Lemma 3 (2.))
 = f(V, t) + f(V, V \ {s, t})   (by Lemma 3 (3.))
 = f(V, t) + ∑_{v∈V\{s,t}} f(V, v)   (by definition)
 = f(V, t).   (by flow conservation)

Residual Networks

The idea is to capture the possible extra flow given the current flow.

Definition 5. Let N = (G = (V, E), c, s, t) be a flow network and f a flow in N.
1. For all (u, v) ∈ V × V, the residual capacity of (u, v) is c_f(u, v) = c(u, v) - f(u, v).
2. The residual network of N induced by f is N_f = ((V, E_f), c_f, s, t), where E_f = {(u, v) ∈ V × V : c_f(u, v) > 0}.

Notice that E_f may contain edges not originally in E ("back-edges").
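Definition 5 translates directly into code. A dictionary-based Python sketch (illustrative names, not from the slides); note how a positive flow on (u, v) automatically creates a back-edge (v, u) via skew symmetry:

```python
def residual(V, c, f):
    """Residual capacities c_f(u, v) = c(u, v) - f(u, v) and, implicitly,
    the residual edge set E_f = {(u, v) : c_f(u, v) > 0}.
    c, f: dicts keyed by vertex pairs; absent pairs count as 0."""
    cf = {}
    for u in V:
        for v in V:
            r = c.get((u, v), 0) - f.get((u, v), 0)
            if r > 0:            # keep only edges of E_f
                cf[(u, v)] = r
    return cf
```

For example, pushing 2 units along an edge of capacity 3 leaves a forward residual of 1 and a back-edge of residual 2.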

Example

[Figure: a flow in the example network and the corresponding residual network.]

Adding Flows

Lemma 6. Let N = (G = (V, E), c, s, t) be a flow network, f a flow in N, and g : V × V → R a flow in the residual network N_f. Then the function f + g : V × V → R defined by

(f + g)(u, v) = f(u, v) + g(u, v)

is a flow of value |f| + |g| in N.

Proof of Lemma 6

First we have to check that f + g is actually a flow in N.

Capacity constraint: (f + g)(u, v) = f(u, v) + g(u, v) ≤ f(u, v) + c_f(u, v) = f(u, v) + c(u, v) - f(u, v) = c(u, v).

Skew symmetry: (f + g)(u, v) = f(u, v) + g(u, v) = -f(v, u) - g(v, u) = -(f + g)(v, u).

Flow conservation: for every u ∈ V \ {s, t}, ∑_{v∈V} (f + g)(u, v) = ∑_{v∈V} f(u, v) + ∑_{v∈V} g(u, v) = 0 + 0 = 0.

Proof of Lemma 6 (cont'd)

Next we have to check that f + g has the value we claimed for it.

Value: |f + g| = ∑_{v∈V} (f + g)(s, v) = ∑_{v∈V} f(s, v) + ∑_{v∈V} g(s, v) = |f| + |g|.

Augmenting Paths

Definition 7. Let N = (G = (V, E), c, s, t) be a flow network and f a flow in N. An augmenting path for f is a path P from s to t in the residual network N_f. The residual capacity of P is

c_f(P) = min{c_f(u, v) : (u, v) is an edge on P}.

Note that c_f(P) > 0, by definition of E_f (recall that we only keep edges in E_f if their residual capacity is strictly positive).

Example

[Figure: the residual network from the previous example, with an augmenting path highlighted, and the flow obtained by pushing along it.]

Pushing Flow through an Augmenting Path

Lemma 8. Let N = (G = (V, E), c, s, t) be a flow network, f a flow in N, and P an augmenting path. Then f_P : V × V → R defined by

f_P(u, v) = c_f(P)   if (u, v) is an edge of P,
f_P(u, v) = -c_f(P)  if (v, u) is an edge of P,
f_P(u, v) = 0        otherwise,

is a flow in N_f of value c_f(P).

Proof left as an exercise. It is not too difficult: just check that the three conditions of a flow are satisfied (and that the value is c_f(P)). Similar to Lemma 6.

Augmenting a Flow

Corollary 9. Let N = (G = (V, E), c, s, t) be a flow network, f a flow in N, and P an augmenting path. Then f + f_P is a flow in N of value |f| + c_f(P) > |f|.

Proof: follows from Lemma 6 and Lemma 8.

The Ford-Fulkerson Algorithm

Algorithm Ford-Fulkerson(N)
1. f ← flow of value 0
2. while there exists an augmenting path P in N_f do
3.   f ← f + f_P
4. return f

To prove that Ford-Fulkerson correctly solves the Maximum-Flow problem, we have to prove that:
1. The algorithm terminates.
2. After termination, f is a maximum flow.

Cuts

Definition 10. Let N = (G = (V, E), c, s, t) be a flow network. A cut of N is a pair (S, T) such that:
1. s ∈ S and t ∈ T,
2. V = S ∪ T and S ∩ T = ∅.

The capacity of the cut (S, T) is c(S, T) = ∑_{u∈S, v∈T} c(u, v).
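The cut capacity sums only the forward capacities from S to T; edges pointing back from T to S do not count. A one-function Python sketch (illustrative, not from the slides):

```python
def cut_capacity(S, T, c):
    """Capacity of a cut (S, T): the sum of c(u, v) over u in S, v in T.
    c: dict keyed by vertex pairs; absent pairs count as 0."""
    return sum(c.get((u, v), 0) for u in S for v in T)
```

For instance, with edges s→a (3), a→t (2) and t→a (5), the cut ({s, a}, {t}) has capacity 2: the edge t→a points backwards and is ignored.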

Example

[Figure: a cut of capacity 45 in the example network.]

Example

[Figure: a cut of capacity 25 in the example network.]

Cuts and Flows

Lemma 11. Let N = (G = (V, E), c, s, t) be a flow network, f a flow in N, and (S, T) a cut of N. Then |f| = f(S, T).

Proof: we apply Lemma 3:
|f| = f(s, V)
 = f(s, V) + f(S \ {s}, V)   [t ∉ S, so f(S \ {s}, V) = 0]
 = f(S, V)
 = f(S, T) + f(S, S)
 = f(S, T).

Cuts and Flows (cont'd)

Corollary 12. The value of any flow in a network is bounded from above by the capacity of any cut.

Proof: let f be a flow and (S, T) a cut. Then |f| = f(S, T) ≤ c(S, T).

The Max-Flow Min-Cut Theorem

Theorem 13. Let N = (G = (V, E), c, s, t) be a flow network. Then the maximum value of a flow in N is equal to the minimum capacity of a cut in N.

Proof of the Max-Flow Min-Cut Theorem

Let f be a flow of maximum value and (S, T) a cut of minimum capacity in N. We shall prove that |f| = c(S, T).

1. |f| ≤ c(S, T) follows from Corollary 12. So all we have to prove is that there is a cut (S, T) such that c(S, T) ≤ |f|.
2. First observe that f has no augmenting path.
Proof: if P were an augmenting path, then f + f_P would be a flow of larger value (because, by definition of N_f, all edges in N_f have strictly positive residual capacity).
3. Thus there is no path from s to t in N_f. Let S = {v : there is a path from s to v in N_f} and T = V \ S. Then (S, T) is a cut.

Proof of the Max-Flow Min-Cut Theorem (cont'd)

4. By definition of S, and because reachability in graphs is a transitive relation, there cannot be any edge from S to T in N_f. Thus for all u ∈ S, v ∈ T we have c(u, v) - f(u, v) = 0.
5. Thus (by Lemma 11)

c(S, T) = ∑_{u∈S, v∈T} c(u, v) = ∑_{u∈S, v∈T} f(u, v) = f(S, T) = |f|.
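The cut constructed in step 3 of the proof can be computed directly: S is simply whatever a graph search reaches from s in the residual network. A small Python sketch (illustrative names, not from the slides), using BFS over the positive residual edges:

```python
from collections import deque

def min_cut_from_residual(cf, V, s):
    """S = set of vertices reachable from s in the residual network,
    as in the proof; once f is a maximum flow, (S, V \\ S) is a
    minimum cut.  cf: dict of positive residual capacities."""
    S, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v in V:
            if v not in S and cf.get((u, v), 0) > 0:
                S.add(v)
                queue.append(v)
    return S, set(V) - S
```

If the flow is not yet maximum, this search reaches t, which is exactly the test the Ford-Fulkerson loop performs.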

Corollaries

Corollary 14. A flow is maximum if, and only if, it has no augmenting path.
Proof: this follows from the proof of the Max-Flow Min-Cut Theorem.

Corollary 15. If the Ford-Fulkerson algorithm terminates, then it returns a maximum flow.
Proof: the flow returned by Ford-Fulkerson has no augmenting path.

Termination

Let f be a maximum flow in a network N.
- If all capacities are integers, then Ford-Fulkerson stops after at most |f| iterations of the main loop.
- If all capacities are rationals, then Ford-Fulkerson stops after at most q·|f| iterations of the main loop, where q is the least common multiple of the denominators of all the capacities.
- For arbitrary real capacities, it may happen that Ford-Fulkerson does not stop.
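The rational case reduces to the integer case by clearing denominators: multiplying every capacity by q yields an integer network with the same maximum-flow structure. A small Python sketch of computing q and the scaled capacities (an illustrative helper, not from the slides; uses math.lcm, available from Python 3.9):

```python
from fractions import Fraction
from math import lcm

def integerize(c):
    """Scale rational capacities by q = lcm of their denominators,
    so Ford-Fulkerson on the scaled network is guaranteed to stop.
    c: dict mapping edges to exact rationals."""
    caps = {e: Fraction(x) for e, x in c.items()}
    q = lcm(*(x.denominator for x in caps.values()))
    return q, {e: int(x * q) for e, x in caps.items()}
```

A maximum flow of the scaled network, divided by q, is a maximum flow of the original.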

A Nasty Example

[Figure: a network with two intermediate vertices, four edges of capacity 1,000,000 and a middle edge of capacity 1. Alternately augmenting along the two paths that use the middle edge (forwards, then as a back-edge) increases the flow by only 1 per iteration; the residual capacities 999,999, 999,998, ... track the progress, and 2,000,000 iterations are needed.]

The Edmonds-Karp Heuristic

Idea: always choose a shortest augmenting path.

Let n be the number of vertices and m the number of edges. Recall that n ≤ m + 1. A shortest augmenting path can be found by breadth-first search (reading assignment) in time O(n + m) = O(m).

Theorem 16. The Ford-Fulkerson algorithm with the Edmonds-Karp heuristic stops after at most O(nm) iterations of the main loop. Thus the running time is O(nm²).
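The whole algorithm can now be sketched end-to-end: repeatedly find a shortest augmenting path P by BFS and add f_P to f. The following Python sketch (illustrative, not the course's code) keeps f skew-symmetric as in Definition 2, so residual capacities are always c(u, v) - f(u, v):

```python
from collections import deque

def edmonds_karp(V, c, s, t):
    """Ford-Fulkerson with the Edmonds-Karp heuristic: push c_f(P)
    units along a shortest augmenting path P found by BFS.
    V: list of vertices; c: dict of capacities keyed by vertex pairs."""
    f = {}
    flow = lambda u, v: f.get((u, v), 0)
    cap = lambda u, v: c.get((u, v), 0)

    def bfs_path():
        # shortest s-t path in the residual network, or None
        parent, queue = {s: None}, deque([s])
        while queue:
            u = queue.popleft()
            for v in V:
                if v not in parent and cap(u, v) - flow(u, v) > 0:
                    parent[v] = u
                    if v == t:
                        path, x = [], t        # reconstruct edge list
                        while parent[x] is not None:
                            path.append((parent[x], x))
                            x = parent[x]
                        return path[::-1]
                    queue.append(v)
        return None

    value = 0
    while (P := bfs_path()) is not None:
        cfP = min(cap(u, v) - flow(u, v) for u, v in P)  # c_f(P)
        for u, v in P:                                   # f <- f + f_P
            f[(u, v)] = flow(u, v) + cfP
            f[(v, u)] = -f[(u, v)]
        value += cfP
    return value, f
```

Because back-edges are just pairs with negative flow, augmenting along a back-edge automatically "un-ships" earlier flow, exactly as in the worked example below.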

Interesting Example

[Figure: the running example network again.]

We will run Ford-Fulkerson (with the Edmonds-Karp heuristic) on this network. This is interesting because we will see the back-edges being used to undo part of a previous augmenting path.

Interesting Example (cont'd)

[Figure: the network with the first augmenting path marked.]

1st augmenting path: s → r → w → t. Its length is 3 (so we satisfy the Edmonds-Karp rule of taking a shortest possible path). We push a flow equal to the path's minimum capacity along it; this becomes the starting flow.

Interesting Example (cont'd)

[Figure: residual network after adding the first flow along s → r → w → t; the newly created back-edges are shown in red.]

Interesting Example (cont'd)

[Figure: the residual network with the second augmenting path marked.]

There is no longer any augmenting path of length 3, and the only one of length 4 is s → x → y → z → t, whose residual capacity is the minimum over its four edges. We push this extra flow along s → x → y → z → t, bringing the overall flow to 20.

Interesting Example (cont'd)

[Figure: residual network after adding the flow from the second augmenting path s → x → y → z → t; overall flow now 20.]

Interesting Example (cont'd)

[Figure: the residual network with the third augmenting path marked.]

Now there is only one simple augmenting path, s → u → v → w → r → y → z → t, with minimum residual capacity 5. Notice that we use the back-edge w → r in our path. This is essentially re-shipping 5 units from the first flow-path away from r → w → t and along r → y → z → t instead.

Interesting Example (cont'd)

[Figure: the final residual network.]

Residual network after adding the 3rd flow, of value 5; the total flow is now 25. There is no longer any augmenting path in our residual network (the set of vertices reachable from s is {s, u, v, x, w, r}), so the flow is maximum.

Reading and Problems

Reading: [CLRS], Chapter 26. For breadth-first search: [CLRS], Section 22.2.

Problems:
1. Exercise 26.1-5 of [CLRS] (2nd ed); not in [CLRS] (3rd ed). The question is: consider Figure 26.1(b) and find a pair of subsets X, Y ⊆ V such that f(X, Y) = -f(V \ X, Y). After that, find a pair of subsets X, Y ⊆ V for which f(X, Y) ≠ -f(V \ X, Y).
2. Exercise 26.2-2 of [CLRS] (2nd ed); Exercise 26.2-3 of [CLRS] (3rd ed).
3. Prove Lemma 8.
4. Problem 26-4 of [CLRS].