The reference implementation was slightly modified for my implementation.
### Search for a Better Algorithm
Further optimization of the implemented Dinic's algorithm, as well as the search for a better algorithm, is in progress.
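
For orientation, here is a minimal, self-contained sketch of the standard Dinic's algorithm (BFS level graph plus DFS blocking flow). It is only an illustrative baseline, not the implementation in this repository, and its adjacency-list layout is an assumption of the sketch.

```python
# A minimal sketch of the classic Dinic's algorithm: BFS builds a level
# graph over residual edges, then DFS sends a blocking flow along
# level-increasing edges. The adjacency-list layout ([to, capacity,
# index-of-reverse-edge]) is an assumption of this sketch.
from collections import deque

class Dinic:
    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]   # adj[u] = list of [v, cap, rev]

    def add_edge(self, u, v, cap):
        # forward edge plus a zero-capacity reverse edge for the residual graph
        self.adj[u].append([v, cap, len(self.adj[v])])
        self.adj[v].append([u, 0, len(self.adj[u]) - 1])

    def _bfs(self, s, t):
        # label vertices with their BFS distance from s in the residual graph
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in self.adj[u]:
                if cap > 0 and self.level[v] == -1:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] != -1

    def _dfs(self, u, t, pushed):
        # advance only along edges that go one level deeper (blocking flow)
        if u == t:
            return pushed
        while self.it[u] < len(self.adj[u]):
            v, cap, rev = self.adj[u][self.it[u]]
            if cap > 0 and self.level[v] == self.level[u] + 1:
                d = self._dfs(v, t, min(pushed, cap))
                if d > 0:
                    self.adj[u][self.it[u]][1] -= d   # use residual capacity
                    self.adj[v][rev][1] += d          # open the reverse edge
                    return d
            self.it[u] += 1
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):
            self.it = [0] * self.n            # per-phase edge iterators
            while True:
                pushed = self._dfs(s, t, float('inf'))
                if pushed == 0:
                    break
                flow += pushed
        return flow

# Example with made-up capacities: two disjoint paths 0->1->3 and 0->2->3.
g = Dinic(4)
g.add_edge(0, 1, 4)
g.add_edge(0, 2, 3)
g.add_edge(1, 3, 5)
g.add_edge(2, 3, 3)
assert g.max_flow(0, 3) == 7
```

Each phase builds one level graph and finds one blocking flow, and there are at most O(V) phases, which gives the usual O(V^2 E) bound.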
> Ford–Fulkerson algorithm, Edmonds-Karp algorithm and Dinic's algorithm
### Finding a More Efficient Algorithm
Push-Relabel Maximum Flow Algorithm
The push-relabel approach is more efficient than the Ford-Fulkerson algorithm. In this post, Goldberg's "generic" maximum-flow algorithm is discussed, which runs in O(V^2 E) time. This is better than O(E^2 V), the time complexity of the Edmonds-Karp algorithm (a BFS-based implementation of Ford-Fulkerson). There also exists a push-relabel-based algorithm that runs in O(V^3), which is even better than the one discussed here.
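
The sketch below is one way the generic scheme described above can look in code; the dense capacity matrix and the naive scan for an active vertex are simplifying assumptions of the sketch, not part of Goldberg's presentation.

```python
# A sketch of the generic push-relabel scheme: keep per-vertex excess and
# height labels, push excess along admissible residual edges (height drops
# by exactly 1), and relabel a vertex that has excess but no admissible edge.
class PushRelabel:
    def __init__(self, n):
        self.n = n
        self.cap = [[0] * n for _ in range(n)]   # residual capacities

    def add_edge(self, u, v, c):
        self.cap[u][v] += c

    def max_flow(self, s, t):
        n, cap = self.n, self.cap
        height = [0] * n
        excess = [0] * n
        height[s] = n                      # source starts at height |V|
        for v in range(n):                 # saturate every edge leaving s
            if cap[s][v] > 0:
                excess[v] += cap[s][v]
                cap[v][s] += cap[s][v]
                cap[s][v] = 0

        def active_vertex():
            # any vertex other than s, t that still holds excess flow
            for u in range(n):
                if u not in (s, t) and excess[u] > 0:
                    return u
            return -1

        u = active_vertex()
        while u != -1:
            pushed = False
            for v in range(n):
                # admissible edge: residual capacity and height[u] == height[v] + 1
                if cap[u][v] > 0 and height[u] == height[v] + 1:
                    d = min(excess[u], cap[u][v])
                    cap[u][v] -= d
                    cap[v][u] += d
                    excess[u] -= d
                    excess[v] += d
                    pushed = True
                    if excess[u] == 0:
                        break
            if not pushed:
                # relabel: lift u just above its lowest residual neighbour
                height[u] = 1 + min(height[v] for v in range(n) if cap[u][v] > 0)
            u = active_vertex()
        return excess[t]                   # all excess that reached the sink

# Same toy network as before: expected maximum flow is 7.
pr = PushRelabel(4)
pr.add_edge(0, 1, 4)
pr.add_edge(0, 2, 3)
pr.add_edge(1, 3, 5)
pr.add_edge(2, 3, 3)
assert pr.max_flow(0, 3) == 7
```

Choosing the active vertex in FIFO order (the Goldberg-Tarjan rule) is one way to reach the O(V^3) variant mentioned above.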
Similarities with Ford-Fulkerson
Like Ford-Fulkerson, Push-Relabel also works on the residual graph (the residual graph of a flow network indicates additional possible flow; if there is a path from source to sink in the residual graph, then it is possible to add flow).
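
As a small illustration of the residual-graph idea (the dictionary encoding and names here are hypothetical, not taken from the text above): every unit of flow on an edge both reduces that edge's remaining capacity and opens a reverse edge of equal capacity, so the flow can later be pushed back.

```python
# Build residual capacities from capacities and a current flow.
# residual(u, v) = capacity(u, v) - flow(u, v), and the reverse edge
# (v, u) gains capacity flow(u, v) so that flow can be "undone".
def residual_graph(capacity, flow):
    res = {}
    for (u, v), c in capacity.items():
        f = flow.get((u, v), 0)
        res[(u, v)] = res.get((u, v), 0) + (c - f)
        res[(v, u)] = res.get((v, u), 0) + f
    return res

# Example: edge s->a with capacity 10 carrying 4 units of flow leaves
# residual capacities s->a = 6 and a->s = 4.
capacity = {('s', 'a'): 10}
flow = {('s', 'a'): 4}
assert residual_graph(capacity, flow) == {('s', 'a'): 6, ('a', 's'): 4}
```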
Differences with Ford-Fulkerson
The push-relabel algorithm works in a more localized manner. Rather than examining the entire residual network to find an augmenting path, push-relabel algorithms work on one vertex at a time (Source: CLRS Book).
In Ford-Fulkerson, the net difference between total outflow and total inflow for every vertex (except source and sink) is kept at 0. The push-relabel algorithm allows inflow to exceed outflow before the final flow is reached; in the final flow, the net difference is 0 for every vertex except source and sink.
It is also more efficient in terms of time complexity.
Dinic's algorithm can be improved to O(VE log V) with dynamic trees.
There also exists a more efficient approach: James B. Orlin's algorithm combined with the KRT (King, Rao, Tarjan) algorithm brings the running time down to O(VE).
> In computer science, amortized analysis is a method for analyzing a given algorithm's complexity, or how much of a resource, especially time or memory, it takes to execute. The motivation for amortized analysis is that looking at the worst-case run time per operation, rather than per algorithm, can be too pessimistic. - Wikipedia
Amortization
> In computer science, amortised analysis is a method of analyzing the execution cost of algorithms over a sequence of operations. In the context of zoning regulations, amortisation refers to the time period a non-conforming property has to conform to a new zoning classification before the non-conforming use becomes prohibited. - Wikipedia
Amortization
> A technique from financial analysis, but we've appropriated it in computer science as an analysis technique to say, well, let's not worry about every single operation worst case cost, let's just worry about the total operation, the sum of all the operations cost. - https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-design-and-analysis-of-algorithms-spring-2015/lecture-videos/lecture-5-amortization-amortized-analysis
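
As a concrete worked example of these definitions (my own illustration, not taken from the quotes): a dynamic array that doubles its capacity occasionally pays O(n) for a single append, yet any sequence of n appends copies fewer than 2n elements in total, so the amortized cost per append is O(1).

```python
# Worked example of amortized analysis: a dynamic array that doubles its
# capacity. A single append can copy O(n) elements, but n appends copy
# fewer than 2n elements in total, so the amortized cost per append is O(1).
class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity
        self.copies = 0               # total elements copied so far

    def append(self, x):
        if self.size == self.capacity:
            self.capacity *= 2        # occasional expensive resize
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
                self.copies += 1
            self.data = new_data
        self.data[self.size] = x
        self.size += 1

arr = DynamicArray()
n = 1000
for i in range(n):
    arr.append(i)
# Total copies stay below 2n, i.e. O(1) amortized per append.
assert arr.copies < 2 * n
```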
This lecture is about a cool data structure for maintaining rooted trees (potentially very unbalanced) in O(log n) time per operation. The operations include linking two trees together by adding an edge, and cutting an edge to split a tree into two trees, so the data structure is called link-cut trees. This result is our first solving a dynamic graph problem (where in general we can insert and delete edges); next lecture will see other solutions for trees and generalization from trees to undirected graphs. Link-cut trees have specific advantages in that they can represent information over root-to-node paths; for example, they can easily compute the min/max/sum of values stored in nodes or edges along any root-to-node paths. Most importantly, link-cut trees introduce two powerful tree decompositions: preferred-path decomposition (which we already used in Tango trees) and heavy-light decomposition. As we will cover them, link-cut trees also demonstrate how a surprisingly “care-free” use of splay trees is actually still efficient.
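
A splay-tree-based link-cut tree is too long to reproduce here, so the sketch below is only a naive parent-pointer forest that exposes the same interface: link two trees by an edge, cut an edge, and aggregate values along a root-to-node path. Each operation costs O(depth) in this baseline, whereas link-cut trees support all of them in O(log n) amortized via preferred-path decomposition and splay trees; all names in the sketch are illustrative.

```python
# Naive rooted-forest baseline for the link-cut tree interface: link two
# trees by an edge, cut an edge, and aggregate values on a root-to-node path.
# Each operation is O(depth); real link-cut trees reach O(log n) amortized.
class NaiveForest:
    def __init__(self, values):
        self.value = list(values)            # value stored at each node
        self.parent = [None] * len(values)   # None marks a root

    def find_root(self, v):
        while self.parent[v] is not None:
            v = self.parent[v]
        return v

    def link(self, child, parent):
        # attach the tree rooted at `child` below `parent`
        assert self.parent[child] is None, "child must be a root"
        assert self.find_root(parent) != child, "would create a cycle"
        self.parent[child] = parent

    def cut(self, v):
        # remove the edge between v and its parent, making v a root
        self.parent[v] = None

    def path_min(self, v):
        # minimum value on the path from v up to its root
        best = self.value[v]
        while self.parent[v] is not None:
            v = self.parent[v]
            best = min(best, self.value[v])
        return best

f = NaiveForest([5, 3, 8, 1])
f.link(1, 0)                 # node 1 becomes a child of node 0
f.link(2, 1)                 # node 2 becomes a child of node 1
assert f.find_root(2) == 0
assert f.path_min(2) == 3    # min of values on path 2 -> 1 -> 0 is min(8, 3, 5)
f.cut(1)                     # split the tree; node 1 is a root again
assert f.find_root(2) == 1
```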