Combinatorial algorithms

The Mastery of Computer Programming: Primary Algorithms - Sykalo Eugene 2023

Algorithms for Combinatorial Problems

Permutations

Permutations are arrangements of objects in a specific order. There are n! permutations of a set of n distinct objects. In this section, we will discuss algorithms for generating permutations of a given set of objects, as well as algorithms for computing the number of permutations of a set.

Generating Permutations

One algorithm for generating permutations of a set of n objects is the "lexicographic" algorithm. It produces all permutations in lexicographic ("dictionary") order by repeatedly transforming the current permutation into its successor: find the rightmost element that is smaller than the element to its right, swap it with the smallest larger element to its right, and then reverse the suffix that follows.
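
A minimal Python sketch of that successor step (the function name next_permutation is illustrative, not a library routine):

    def next_permutation(a):
        """Transform list a into its lexicographic successor in place.
        Returns False if a is already the last (descending) permutation."""
        # Find the rightmost position i with a[i] < a[i + 1].
        i = len(a) - 2
        while i >= 0 and a[i] >= a[i + 1]:
            i -= 1
        if i < 0:
            return False
        # Find the rightmost element greater than a[i] and swap the two.
        j = len(a) - 1
        while a[j] <= a[i]:
            j -= 1
        a[i], a[j] = a[j], a[i]
        # Reverse the suffix so it becomes the smallest possible arrangement.
        a[i + 1:] = reversed(a[i + 1:])
        return True

Starting from a sorted list and calling next_permutation repeatedly until it returns False visits all n! permutations in dictionary order.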

Another algorithm for generating permutations of a set of n objects is the "recursive" algorithm. This algorithm generates permutations by selecting one element at a time and recursively generating permutations of the remaining elements.
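
A small recursive sketch along these lines (the function name permutations_rec is ours; Python's itertools.permutations provides the same functionality):

    def permutations_rec(items):
        """Yield every permutation of items as a list."""
        if not items:
            yield []
            return
        for i, x in enumerate(items):
            # Fix x in the first position, then permute the remaining elements.
            for rest in permutations_rec(items[:i] + items[i + 1:]):
                yield [x] + rest

For example, list(permutations_rec([1, 2, 3])) produces all 3! = 6 orderings.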

Counting Permutations

The number of permutations of a set of n distinct objects is n!. This follows from the "multiplication principle", which states that the number of ways to perform a sequence of tasks is the product of the number of ways to perform each task: there are n choices for the first position, n - 1 for the second, and so on down to 1 for the last, giving n * (n - 1) * ... * 1 = n! arrangements in total.

In practice this count is computed with the "factorial" function. The factorial of a number n, denoted n!, is the product of all positive integers up to n, so n! is exactly the number of permutations of a set of n distinct objects. For example, a set of 5 objects has 5! = 5 * 4 * 3 * 2 * 1 = 120 permutations.
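
A direct iterative sketch of this computation (Python's standard library also provides math.factorial):

    def factorial(n):
        """Return n! = 1 * 2 * ... * n, the number of permutations of n objects."""
        result = 1
        for k in range(2, n + 1):
            result *= k
        return result

    print(factorial(5))  # 120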

Combinations

Combinations are subsets of a given set of objects. A combination of size k is a selection of k objects from a set of n objects, where the order of the objects does not matter. In this section, we will discuss algorithms for generating combinations of a given set of objects, as well as algorithms for computing the number of combinations of a set.

Generating Combinations

One algorithm for generating the combinations of size k of a set of n objects is the "recursive" algorithm. It either includes or excludes the first element and recursively generates combinations of the remaining elements, taking k - 1 more elements in the first case and k in the second. Another algorithm for generating combinations is the "binary counting" algorithm, which runs a counter through the integers from 0 to 2^n - 1 and treats bit i of the counter as a flag indicating whether the i-th element is included; this enumerates all 2^n subsets, and the combinations of size k are those whose counter has exactly k bits set.
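
A minimal sketch of both ideas (the function names are ours; the standard library's itertools.combinations does the same job as the first):

    def combinations_rec(items, k):
        """Yield every combination of k elements of items, as a list."""
        if k == 0:
            yield []
            return
        if len(items) < k:
            return
        first, rest = items[0], items[1:]
        # Combinations that include the first element...
        for tail in combinations_rec(rest, k - 1):
            yield [first] + tail
        # ...and combinations that exclude it.
        yield from combinations_rec(rest, k)

    def subsets_by_counting(items):
        """Yield all 2**n subsets; bit i of the counter decides whether
        items[i] is included."""
        n = len(items)
        for mask in range(2 ** n):
            yield [items[i] for i in range(n) if (mask >> i) & 1]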

Counting Combinations

The number of combinations of a set of n distinct objects of size k, denoted C(n,k), can be computed using the formula:

C(n,k) = n! / (k! * (n-k)!)

This formula follows from the "multiplication principle": there are n * (n - 1) * ... * (n - k + 1) = n! / (n - k)! ways to pick k objects out of n in order, and every unordered selection of k objects is counted k! times in that total (once for each of its orderings), so dividing by k! gives C(n,k).
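
As a quick check of the formula, a short sketch (the standard library's math.comb computes the same value):

    from math import factorial

    def choose(n, k):
        """Number of k-element subsets of an n-element set."""
        if k < 0 or k > n:
            return 0
        return factorial(n) // (factorial(k) * factorial(n - k))

    print(choose(5, 2))  # 10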

Graph Algorithms

Graph algorithms are used to solve problems that involve graphs. A graph is a collection of vertices (or nodes) and edges that connect pairs of vertices. In this section, we will discuss algorithms for graph traversal, shortest paths, and minimum spanning trees. We will also explore algorithms for solving problems on directed and weighted graphs.

Graph Traversal

Graph traversal algorithms are used to visit all the vertices in a graph. One example of a graph traversal algorithm is the "depth-first search" (DFS) algorithm. This algorithm starts at a given vertex and explores as far as possible along each branch before backtracking. Another example of a graph traversal algorithm is the "breadth-first search" (BFS) algorithm. This algorithm starts at a given vertex and explores all the vertices at distance one before exploring the vertices at distance two, and so on.
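
A compact sketch of both traversals, assuming the graph is given as a dictionary mapping each vertex to a list of its neighbours:

    from collections import deque

    def dfs(graph, start):
        """Return the vertices reachable from start in depth-first order."""
        visited, order, stack = set(), [], [start]
        while stack:
            v = stack.pop()
            if v in visited:
                continue
            visited.add(v)
            order.append(v)
            # Push the neighbours; the one pushed last is explored first.
            stack.extend(graph[v])
        return order

    def bfs(graph, start):
        """Return the vertices reachable from start in breadth-first order."""
        visited, order, queue = {start}, [], deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in graph[v]:
                if w not in visited:
                    visited.add(w)
                    queue.append(w)
        return order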

Shortest Paths

Shortest path algorithms are used to find the shortest path between two vertices in a graph. One example of a shortest path algorithm is Dijkstra's algorithm. This algorithm maintains a set of "tentative" distances from the starting vertex to every other vertex and repeatedly settles the unvisited vertex with the smallest tentative distance, relaxing the edges leaving it, until the destination vertex is reached; it requires all edge weights to be non-negative. Another example is the Bellman-Ford algorithm, which can handle negative edge weights and can also detect negative-weight cycles.
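
A minimal heap-based sketch of Dijkstra's algorithm, assuming the graph is given as a dictionary mapping each vertex to a list of (neighbour, weight) pairs with non-negative weights:

    import heapq

    def dijkstra(graph, source):
        """Return a dict of shortest-path distances from source."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, v = heapq.heappop(heap)
            if d > dist.get(v, float("inf")):
                continue  # stale entry: v was already settled with a shorter path
            for w, weight in graph[v]:
                nd = d + weight
                if nd < dist.get(w, float("inf")):
                    dist[w] = nd
                    heapq.heappush(heap, (nd, w))
        return dist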

Minimum Spanning Trees

A minimum spanning tree (MST) of a connected, weighted graph is a tree that connects all the vertices with the minimum possible total edge weight. One example of an MST algorithm is Kruskal's algorithm. This algorithm starts with each vertex in a separate tree and repeatedly adds the lightest edge that connects two different trees (that is, an edge that does not create a cycle) until all the vertices are in the same tree. Another example of an MST algorithm is Prim's algorithm, which starts from a single vertex and repeatedly adds the cheapest edge joining the current tree to a vertex outside it until all the vertices are included.
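
A sketch of Kruskal's algorithm with a simple union-find structure, assuming the graph is given as a collection of vertices and a list of (weight, u, v) edges:

    def kruskal(vertices, edges):
        """Return the edges of a minimum spanning tree as (weight, u, v) triples."""
        parent = {v: v for v in vertices}

        def find(v):
            # Follow parent links to the representative of v's tree,
            # halving paths along the way to keep the trees shallow.
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        mst = []
        for weight, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv:               # the edge joins two different trees
                parent[ru] = rv
                mst.append((weight, u, v))
        return mst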

Directed and Weighted Graphs

Many real-world problems involve directed and weighted graphs, in which each edge has a direction and a weight (or cost). Shortest path algorithms such as Dijkstra's algorithm and the Bellman-Ford algorithm carry over to directed graphs by following edges only in their given direction. Maximum flow algorithms, such as the Ford-Fulkerson algorithm, find the maximum flow in a network: the greatest amount of goods or information that can be transported from a source vertex to a destination vertex subject to capacity constraints on the edges.

Network Flow Algorithms

Network flow algorithms are used to solve problems that involve the flow of goods or information through a network. In this section, we will discuss algorithms for finding maximum flow and minimum cut in a network.

Maximum Flow

The maximum flow problem is the problem of finding the maximum amount of flow that can be sent from a source vertex s to a sink vertex t in a network. The flow is subject to capacity constraints on the edges, which represent the maximum amount of flow that can be sent along each edge. The maximum flow can be found using the "Ford-Fulkerson" algorithm, which works by iteratively finding an augmenting path in the residual graph and increasing the flow along that path until no more augmenting paths exist.
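
A sketch of the Ford-Fulkerson method that uses breadth-first search to find augmenting paths (the Edmonds-Karp variant), assuming the network is given as an n x n capacity matrix over vertices 0..n-1; it returns the flow value together with the final residual capacities:

    from collections import deque

    def max_flow(capacity, s, t):
        """Return (flow value, residual capacity matrix) for the maximum s-t flow."""
        n = len(capacity)
        # residual[u][v] is how much more flow edge (u, v) can still carry.
        residual = [row[:] for row in capacity]
        flow = 0
        while True:
            # Breadth-first search for an augmenting path in the residual graph.
            parent = [-1] * n
            parent[s] = s
            queue = deque([s])
            while queue and parent[t] == -1:
                u = queue.popleft()
                for v in range(n):
                    if parent[v] == -1 and residual[u][v] > 0:
                        parent[v] = u
                        queue.append(v)
            if parent[t] == -1:
                break  # no augmenting path is left, so the flow is maximum
            # Find the bottleneck capacity along the path, then push that much flow.
            bottleneck = float("inf")
            v = t
            while v != s:
                bottleneck = min(bottleneck, residual[parent[v]][v])
                v = parent[v]
            v = t
            while v != s:
                u = parent[v]
                residual[u][v] -= bottleneck
                residual[v][u] += bottleneck
                v = u
            flow += bottleneck
        return flow, residual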

Minimum Cut

The minimum cut problem is the problem of finding the minimum capacity cut that separates the source vertex s from the sink vertex t in a network. A cut is a partition of the vertices into two disjoint sets, one containing s and the other containing t. The capacity of a cut is the sum of the capacities of the edges that cross the cut from the set containing s to the set containing t. By the "max-flow min-cut" theorem, the maximum flow in a network is equal to the capacity of the minimum cut separating the source vertex from the sink vertex; in practice, a minimum cut can be read off a maximum flow by taking the vertices still reachable from s in the residual graph as one side of the cut.
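
Continuing the max-flow sketch above, and assuming the residual matrix it returns, a minimum cut can be extracted like this (again an illustrative sketch, not a library routine):

    def min_cut_edges(capacity, residual, s):
        """Given the original capacities and the residual capacities left by a
        maximum flow, return the edges of a minimum s-t cut."""
        n = len(capacity)
        # The vertices still reachable from s in the residual graph form one side.
        reachable, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in range(n):
                if v not in reachable and residual[u][v] > 0:
                    reachable.add(v)
                    stack.append(v)
        # The cut edges go from the reachable side to the unreachable side.
        return [(u, v) for u in reachable for v in range(n)
                if v not in reachable and capacity[u][v] > 0]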

The global minimum cut of an undirected weighted graph (the cheapest cut separating the vertices into any two non-empty sets, with no fixed source and sink) can be found with the "Stoer-Wagner" algorithm. Each phase of the algorithm adds vertices one at a time in order of maximum total edge weight to the vertices already added, records the "cut of the phase" that isolates the last vertex added, and then contracts the last two vertices into one; after n - 1 phases only a single vertex remains, and the smallest cut recorded over all phases is a minimum cut of the original graph.

Dynamic Programming

Dynamic programming is a technique used to solve problems by breaking them down into smaller subproblems. The idea is to solve each subproblem only once and store the solution, so that if the same subproblem occurs later in the computation, it can be solved more efficiently by retrieving the stored solution rather than recomputing it.

One example of a combinatorial problem that can be solved using dynamic programming is the "binomial coefficient" problem. The binomial coefficient C(n,k) is the number of ways to select k objects out of a set of n objects, where the order of the objects does not matter. The value of C(n,k) can be computed using the following recurrence relation:

C(n,k) = C(n-1,k-1) + C(n-1,k)

This recurrence relation can be used to compute the value of C(n,k) for any n and k, by starting with the base cases C(n,0) = 1 and C(n,n) = 1 and using the recurrence relation to compute the remaining values.
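
A small dynamic-programming sketch that fills Pascal's triangle row by row using this recurrence (the function name binomial is ours):

    def binomial(n, k):
        """Compute C(n, k) from C(n,k) = C(n-1,k-1) + C(n-1,k)."""
        if k < 0 or k > n:
            return 0
        row = [1]                      # row i holds C(i, 0) .. C(i, i)
        for i in range(1, n + 1):
            new_row = [1]              # C(i, 0) = 1
            for j in range(1, i):
                new_row.append(row[j - 1] + row[j])
            new_row.append(1)          # C(i, i) = 1
            row = new_row
        return row[k]

    print(binomial(5, 2))  # 10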

Another example of a combinatorial problem that can be solved using dynamic programming is the "knapsack" problem. The knapsack problem is the problem of packing a set of items, each with a weight and a value, into a knapsack with a given capacity, in such a way as to maximize the total value of the items packed. In the common 0/1 version, where each item is either packed once or left out, the problem can be solved with a dynamic programming algorithm that computes, for each item considered so far and each possible remaining capacity, the maximum value that can be packed.
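
A sketch of the table-filling solution for the 0/1 version, keeping one row of the table at a time:

    def knapsack(values, weights, capacity):
        """Return the maximum total value packable within the capacity;
        each item may be used at most once."""
        # best[w] = best value achievable with total weight at most w.
        best = [0] * (capacity + 1)
        for value, weight in zip(values, weights):
            # Iterate capacities downwards so each item is counted at most once.
            for w in range(capacity, weight - 1, -1):
                best[w] = max(best[w], best[w - weight] + value)
        return best[capacity]

    print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220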

Dynamic programming can also be used to solve problems on strings, such as the "longest common subsequence" problem. The longest common subsequence of two strings is the longest subsequence that appears in both strings in the same relative order. The longest common subsequence problem can be solved using a dynamic programming algorithm that computes the length of the longest common subsequence of two strings.
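
A sketch of the standard table-based solution; it returns only the length, but the same table also allows the subsequence itself to be recovered by backtracking:

    def lcs_length(a, b):
        """Return the length of the longest common subsequence of strings a and b."""
        m, n = len(a), len(b)
        # table[i][j] = LCS length of the prefixes a[:i] and b[:j].
        table = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if a[i - 1] == b[j - 1]:
                    table[i][j] = table[i - 1][j - 1] + 1
                else:
                    table[i][j] = max(table[i - 1][j], table[i][j - 1])
        return table[m][n]

    print(lcs_length("ABCBDAB", "BDCABA"))  # 4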