In some applications, a graph \(G\) is augmented by associating a weight or cost with each edge; such a graph is called a **weighted graph**. In such cases, instead of being interested in just any spanning tree, we may be interested in a **least cost spanning tree**, that is, a spanning tree such that the sum of the costs of the edges of the tree is as small as possible. For example, this would be the least expensive way to connect a set of towns by a communication network, burying the cable in such a way as to minimize the total cost of laying the cable.

This problem is one that can be solved by a **greedy algorithm**. Roughly speaking, a greedy algorithm is one that makes choices that are optimal in the short run. Typically this strategy does not result in an optimal solution in the long run, but in this case this approach works.

Definition: weighted graph

A weighted graph is a graph \(G\) together with a cost function \(c\colon E(G)\to \mathbb{R}^{>0}\). If \(H\) is a subgraph of \(G\), the cost of \(H\) is \(c(H)=\sum_{e\in E(H)} c(e)\).

The Jarník Algorithm

Given a weighted connected graph \(G\), we construct a minimum cost spanning tree \(T\) as follows. Choose any vertex \(v_0\) in \(G\) and include it in \(T\). If vertices \(S=\{v_0, v_1,\ldots,v_k\}\) have been chosen, choose an edge with one endpoint in \(S\) and one endpoint not in \(S\) and with smallest weight among all such edges. Let \(v_{k+1}\) be the endpoint of this edge not in \(S\), and add it and the associated edge to \(T\). Continue until all vertices of \(G\) are in \(T\).

This algorithm was discovered by Vojtěch Jarník in 1930, and rediscovered independently by Robert C. Prim in 1957 and Edsger Dijkstra in 1959. It is often called *Prim's Algorithm*. The algorithm proceeds by constructing a sequence of trees \(T_1, T_2,\ldots,T_{n-1}\), with \(T_{n-1}\) a spanning tree for \(G\). At each step, the algorithm adds an edge that will make \(c(T_{i+1})\) as small as possible among all trees that consist of \(T_i\) plus one edge. This is the best choice in the short run, but it is not obvious that in the long run, that is, by the time \(T_{n-1}\) is constructed, this will turn out to have been the best choice.
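As an illustration, here is a minimal Python sketch of the algorithm just described (not from the text; the priority queue of candidate edges, the vertex labelling 0, …, n−1, and the edge-list input format are implementation choices of ours):

```python
import heapq

def jarnik_mst(n, edges):
    """Minimum cost spanning tree of a connected weighted graph by the
    Jarník (Prim) algorithm.  Vertices are 0..n-1; edges is a list of
    (u, v, weight) tuples.  Returns the list of tree edges."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))

    in_tree = [False] * n
    in_tree[0] = True                  # start from an arbitrary vertex v0
    heap = list(adj[0])                # candidate edges leaving the tree
    heapq.heapify(heap)
    tree = []
    while heap and len(tree) < n - 1:
        w, u, v = heapq.heappop(heap)  # smallest-weight edge leaving the tree
        if in_tree[v]:
            continue                   # both endpoints already in the tree
        in_tree[v] = True
        tree.append((u, v, w))
        for cand in adj[v]:
            if not in_tree[cand[2]]:
                heapq.heappush(heap, cand)
    return tree
```

On the 4-cycle with weights 1, 2, 1, 2 plus a chord of weight 5, the sketch picks three edges of total weight 4, dropping the chord and one weight-2 cycle edge.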

Theorem 5.6.2

The Jarník Algorithm produces a minimum cost spanning tree.

Proof

Suppose \(G\) is connected on \(n\) vertices. Let \(T\) be the spanning tree produced by the algorithm, and \(T_m\) a minimum cost spanning tree. We prove that \(c(T)=c(T_m)\).

Let \(e_1, e_2,\ldots,e_{n-1}\) be the edges of \(T\) in the order in which they were added to \(T\); one endpoint of \(e_i\) is \(v_i\), the other is in \(\{v_0,\ldots,v_{i-1}\}\). We form a sequence of trees \(T_m=T_0, T_1,\ldots, T_{n-1}=T\) such that for each \(i\), \(c(T_i)=c(T_{i+1})\), and we conclude that \(c(T_m)=c(T)\).

If \(e_1\) is in \(T_0\), let \(T_1=T_0\). Otherwise, add edge \(e_1\) to \(T_0\). This creates a cycle containing \(e_1\) and another edge incident at \(v_0\), say \(f_1\). Remove \(f_1\) to form \(T_1\). Since the algorithm added edge \(e_1\), \(c(e_1)\le c(f_1)\). If \(c(e_1)< c(f_1)\), then \(c(T_1)< c(T_0)=c(T_m)\), a contradiction since no spanning tree costs less than \(T_m\), so \(c(e_1)=c(f_1)\) and \(c(T_1)=c(T_0)\).

Suppose we have constructed tree \(T_i\). If \(e_{i+1}\) is in \(T_i\), let \(T_{i+1}=T_i\). Otherwise, add edge \(e_{i+1}\) to \(T_i\). This creates a cycle, one of whose edges, call it \(f_{i+1}\), is not in \(e_1, e_2,\ldots,e_i\) and has exactly one endpoint in \(\{v_0,\ldots,v_i\}\). Remove \(f_{i+1}\) to create \(T_{i+1}\). Since the algorithm added \(e_{i+1}\), \(c(e_{i+1})\le c(f_{i+1})\). If \(c(e_{i+1})< c(f_{i+1})\), then \(c(T_{i+1})< c(T_i)=c(T_m)\), a contradiction, so \(c(e_{i+1})=c(f_{i+1})\) and \(c(T_{i+1})=c(T_i)\).

\(\square\)

## Minimum spanning tree

A **minimum spanning tree** (**MST**) or **minimum weight spanning tree** is a subset of the edges of a connected, edge-weighted undirected graph that connects all the vertices together, without any cycles and with the minimum possible total edge weight. That is, it is a spanning tree whose sum of edge weights is as small as possible. More generally, any edge-weighted undirected graph (not necessarily connected) has a **minimum spanning forest**, which is a union of the minimum spanning trees for its connected components.

There are many use cases for minimum spanning trees. One example is a telecommunications company trying to lay cable in a new neighborhood. If it is constrained to bury the cable only along certain paths (e.g. roads), then there would be a graph containing the points (e.g. houses) connected by those paths. Some of the paths might be more expensive, because they are longer, or require the cable to be buried deeper; these paths would be represented by edges with larger weights. Currency is an acceptable unit for edge weight – there is no requirement for edge lengths to obey normal rules of geometry such as the triangle inequality. A *spanning tree* for that graph would be a subset of those paths that has no cycles but still connects every house; there might be several spanning trees possible. A *minimum spanning tree* would be one with the lowest total cost, representing the least expensive path for laying the cable.

## Applications

Several pathfinding algorithms, including Dijkstra's algorithm and the A* search algorithm, internally build a spanning tree as an intermediate step in solving the problem.

In order to minimize the cost of power networks, wiring connections, piping, automatic speech recognition, etc., people often use algorithms that gradually build a spanning tree (or many such trees) as intermediate steps in the process of finding the minimum spanning tree. [1]

The Internet and many other telecommunications networks have transmission links that connect nodes together in a mesh topology that includes some loops. In order to avoid bridge loops and routing loops, many routing protocols designed for such networks—including the Spanning Tree Protocol, Open Shortest Path First, Link-state routing protocol, Augmented tree-based routing, etc.—require each router to remember a spanning tree.

A special kind of spanning tree, the Xuong tree, is used in topological graph theory to find graph embeddings with maximum genus. A Xuong tree is a spanning tree such that, in the remaining graph, the number of connected components with an odd number of edges is as small as possible. A Xuong tree and an associated maximum-genus embedding can be found in polynomial time. [2]

A tree is a connected undirected graph with no cycles. It is a spanning tree of a graph *G* if it spans *G* (that is, it includes every vertex of *G*) and is a subgraph of *G* (every edge in the tree belongs to *G*). A spanning tree of a connected graph *G* can also be defined as a maximal set of edges of *G* that contains no cycle, or as a minimal set of edges that connect all vertices.

### Fundamental cycles

Adding just one edge to a spanning tree will create a cycle; such a cycle is called a **fundamental cycle**. There is a distinct fundamental cycle for each edge not in the spanning tree; thus, there is a one-to-one correspondence between fundamental cycles and edges not in the spanning tree. For a connected graph with *V* vertices, any spanning tree will have *V* − 1 edges, and thus a graph of *E* edges and one of its spanning trees will have *E* − *V* + 1 fundamental cycles (the total number of edges minus the number of edges in the spanning tree, i.e., the number of edges not in the spanning tree). For any given spanning tree the set of all *E* − *V* + 1 fundamental cycles forms a cycle basis, a basis for the cycle space. [3]
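The correspondence can be made concrete: the sketch below (our own illustration; the BFS spanning tree and 0-based vertex labels are assumptions) builds a spanning tree and returns one fundamental cycle per non-tree edge, so exactly *E* − *V* + 1 cycles come back.

```python
from collections import deque

def fundamental_cycles(n, edges):
    """Fundamental cycles of a connected graph with respect to a BFS
    spanning tree.  Vertices are 0..n-1; edges is a list of (u, v)
    pairs.  Returns one vertex list per non-tree edge."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    # BFS spanning tree, recorded as parent pointers.
    parent = [None] * n
    seen = [False] * n
    seen[0] = True
    queue = deque([0])
    tree_edges = set()
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                parent[v] = u
                tree_edges.add(frozenset((u, v)))
                queue.append(v)

    def path_to_root(v):
        path = [v]
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        return path

    cycles = []
    for u, v in edges:
        if frozenset((u, v)) not in tree_edges:
            pu, pv = path_to_root(u), path_to_root(v)
            common = set(pu) & set(pv)
            lca = next(x for x in pu if x in common)
            # keep both paths up to (but not including) the LCA
            cu = [x for x in pu if x not in common]
            cv = [x for x in pv if x not in common]
            cycles.append(cu + [lca] + cv[::-1])
    return cycles
```

For the 4-cycle (E = 4, V = 4) there is a single fundamental cycle; for the complete graph on 4 vertices (E = 6) there are 6 − 4 + 1 = 3.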

### Fundamental cutsets

Dual to the notion of a fundamental cycle is the notion of a **fundamental cutset**. By deleting just one edge of the spanning tree, the vertices are partitioned into two disjoint sets. The fundamental cutset is defined as the set of edges that must be removed from the graph *G* to accomplish the same partition. Thus, each spanning tree defines a set of *V* − 1 fundamental cutsets, one for each edge of the spanning tree. [4]

The duality between fundamental cutsets and fundamental cycles is established by noting that cycle edges not in the spanning tree can only appear in the cutsets of the other edges in the cycle and *vice versa*: edges in a cutset can only appear in those cycles containing the edge corresponding to the cutset. This duality can also be expressed using the theory of matroids, according to which a spanning tree is a base of the graphic matroid, a fundamental cycle is the unique circuit within the set formed by adding one element to the base, and fundamental cutsets are defined in the same way from the dual matroid. [5]

### Spanning forests

In graphs that are not connected, there can be no spanning tree, and one must consider **spanning forests** instead. Here there are two competing definitions:

- Some authors consider a spanning forest to be a maximal acyclic subgraph of the given graph, or equivalently a graph consisting of a spanning tree in each connected component of the graph. [6]
- For other authors, a spanning forest is a forest that spans all of the vertices, meaning only that each vertex of the graph is a vertex in the forest. For this definition, even a connected graph may have a disconnected spanning forest, such as the forest in which each vertex forms a single-vertex tree. [7]

To avoid confusion between these two definitions, Gross & Yellen (2005) suggest the term "full spanning forest" for a spanning forest with the same connectivity as the given graph, while Bondy & Murty (2008) instead call this kind of forest a "maximal spanning forest". [8]

The number *t*(*G*) of spanning trees of a connected graph is a well-studied invariant.

### In specific graphs

In some cases, it is easy to calculate *t*(*G*) directly:

- If *G* is itself a tree, then *t*(*G*) = 1.
- When *G* is the cycle graph *C_n* with *n* vertices, then *t*(*G*) = *n*.
- For a complete graph with *n* vertices, Cayley's formula [9] gives the number of spanning trees as *n*^(*n* − 2).
- If *G* is the complete bipartite graph *K_{p,q}*, [10] then *t*(*G*) = *p*^(*q* − 1) *q*^(*p* − 1).
- For the *n*-dimensional hypercube graph *Q_n*, [11] the number of spanning trees is *t*(*G*) = 2^(2^*n* − *n* − 1) ∏_{*k*=2}^{*n*} *k*^(C(*n*, *k*)), where C(*n*, *k*) is the binomial coefficient.

### In arbitrary graphs

More generally, for any graph *G*, the number *t*(*G*) can be calculated in polynomial time as the determinant of a matrix derived from the graph, using Kirchhoff's matrix-tree theorem. [12]

Specifically, to compute *t*(*G*), one constructs the Laplacian matrix of the graph, a square matrix in which the rows and columns are both indexed by the vertices of *G*. The entry in row *i* and column *j* is one of three values:

- The degree of vertex *i*, if *i* = *j*,
- −1, if vertices *i* and *j* are adjacent, or
- 0, if vertices *i* and *j* are different from each other but not adjacent.

The resulting matrix is singular, so its determinant is zero. However, deleting the row and column for an arbitrarily chosen vertex leads to a smaller matrix whose determinant is exactly *t*(*G*).
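As a sketch of this computation (our own illustration, not from the source; it assumes vertices labelled 0..n−1 and uses exact rational arithmetic so the determinant comes out as an integer):

```python
from fractions import Fraction

def spanning_tree_count(n, edges):
    """t(G) via Kirchhoff's matrix-tree theorem: build the Laplacian,
    delete the row and column of one vertex, take the determinant.
    Vertices are 0..n-1; edges is a list of (u, v) pairs."""
    # Laplacian: degree on the diagonal, -1 for each adjacent pair.
    lap = [[0] * n for _ in range(n)]
    for u, v in edges:
        lap[u][u] += 1
        lap[v][v] += 1
        lap[u][v] -= 1
        lap[v][u] -= 1

    # Reduced Laplacian: drop the row and column of vertex 0.
    m = [[Fraction(lap[i][j]) for j in range(1, n)] for i in range(1, n)]

    # Determinant by Gaussian elimination over exact rationals.
    det = Fraction(1)
    size = n - 1
    for col in range(size):
        pivot = next((r for r in range(col, size) if m[r][col] != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = -det
        det *= m[col][col]
        for r in range(col + 1, size):
            factor = m[r][col] / m[col][col]
            for c in range(col, size):
                m[r][c] -= factor * m[col][c]
    return int(det)
```

This reproduces the closed forms above: the 4-cycle has 4 spanning trees, and the complete graph on 4 vertices has 4^(4−2) = 16.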

### Deletion-contraction

If *G* is a graph or multigraph and *e* is an arbitrary edge of *G*, then the number *t*(*G*) of spanning trees of *G* satisfies the *deletion-contraction recurrence* *t*(*G*) = *t*(*G* − *e*) + *t*(*G*/*e*), where *G* − *e* is the multigraph obtained by deleting *e* and *G*/*e* is the contraction of *G* by *e*. [13] The term *t*(*G* − *e*) in this formula counts the spanning trees of *G* that do not use edge *e*, and the term *t*(*G*/*e*) counts the spanning trees of *G* that use *e*.

In this formula, if the given graph *G* is a multigraph, or if a contraction causes two vertices to be connected to each other by multiple edges, then the redundant edges should not be removed, as that would lead to the wrong total. For instance a bond graph connecting two vertices by *k* edges has *k* different spanning trees, each consisting of a single one of these edges.
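A direct, if exponential-time, transcription of the recurrence (our own sketch; the edge-list multigraph representation is an assumption, and self-loops are discarded since no spanning tree uses them):

```python
def tree_count(vertices, edges):
    """Spanning-tree count t(G) of a multigraph by deletion-contraction:
    t(G) = t(G - e) + t(G / e).  `vertices` is a set; `edges` is a list
    of (u, v) pairs in which parallel edges may repeat."""
    # Drop self-loops: spanning trees never use them.
    edges = [(u, v) for u, v in edges if u != v]
    if not edges:
        return 1 if len(vertices) == 1 else 0
    u, v = edges[0]
    deleted = edges[1:]
    # Contract e: merge v into u, keeping parallel edges (they matter!).
    contracted = [(u if a == v else a, u if b == v else b)
                  for a, b in deleted]
    return (tree_count(vertices, deleted) +          # trees avoiding e
            tree_count(vertices - {v}, contracted))  # trees using e
```

The triangle has 3 spanning trees, and the bond graph with three parallel edges has 3, matching the remark above about not removing redundant edges.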

### Tutte polynomial

The Tutte polynomial of a graph can be defined as a sum, over the spanning trees of the graph, of terms computed from the "internal activity" and "external activity" of the tree. Its value at the arguments (1,1) is the number of spanning trees or, in a disconnected graph, the number of maximal spanning forests. [14]

The Tutte polynomial can also be computed using a deletion-contraction recurrence, but its computational complexity is high: for many values of its arguments, computing it exactly is #P-complete, and it is also hard to approximate with a guaranteed approximation ratio. The point (1,1), at which it can be evaluated using Kirchhoff's theorem, is one of the few exceptions. [15]

### Construction

A single spanning tree of a graph can be found in linear time by either depth-first search or breadth-first search. Both of these algorithms explore the given graph, starting from an arbitrary vertex *v*, by looping through the neighbors of the vertices they discover and adding each unexplored neighbor to a data structure to be explored later. They differ in whether this data structure is a stack (in the case of depth-first search) or a queue (in the case of breadth-first search). In either case, one can form a spanning tree by connecting each vertex, other than the root vertex *v*, to the vertex from which it was discovered. This tree is known as a depth-first search tree or a breadth-first search tree according to the graph exploration algorithm used to construct it. [16] Depth-first search trees are a special case of a class of spanning trees called Trémaux trees, named after the 19th-century discoverer of depth-first search. [17]
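The two constructions differ only in the data structure, which the following sketch (ours, not from the source; it assumes an adjacency-list input) makes explicit:

```python
from collections import deque

def search_tree(adj, root=0, depth_first=False):
    """Spanning tree of a connected graph by graph search: each vertex
    other than the root is joined to the vertex from which it was
    discovered.  Popping from the right of the deque makes it a stack
    (DFS tree); popping from the left makes it a queue (BFS tree)."""
    frontier = deque([root])
    discovered = {root}
    tree = []
    while frontier:
        u = frontier.pop() if depth_first else frontier.popleft()
        for v in adj[u]:
            if v not in discovered:
                discovered.add(v)
                tree.append((u, v))   # v was discovered from u
                frontier.append(v)
    return tree
```

Either way the result has exactly one edge per non-root vertex, hence *V* − 1 edges.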

Spanning trees are important in parallel and distributed computing, as a way of maintaining communications between a set of processors; see for instance the Spanning Tree Protocol used by OSI link layer devices or the Shout (protocol) for distributed computing. However, the depth-first and breadth-first methods for constructing spanning trees on sequential computers are not well suited for parallel and distributed computers. [18] Instead, researchers have devised several more specialized algorithms for finding spanning trees in these models of computation. [19]

### Optimization

In certain fields of graph theory it is often useful to find a minimum spanning tree of a weighted graph. Other optimization problems on spanning trees have also been studied, including the maximum spanning tree, the minimum tree that spans at least k vertices, the spanning tree with the fewest edges per vertex, the spanning tree with the largest number of leaves, the spanning tree with the fewest leaves (closely related to the Hamiltonian path problem), the minimum diameter spanning tree, and the minimum dilation spanning tree. [20] [21]

Optimal spanning tree problems have also been studied for finite sets of points in a geometric space such as the Euclidean plane. For such an input, a spanning tree is again a tree that has as its vertices the given points. The quality of the tree is measured in the same way as in a graph, using the Euclidean distance between pairs of points as the weight for each edge. Thus, for instance, a Euclidean minimum spanning tree is the same as a graph minimum spanning tree in a complete graph with Euclidean edge weights. However, it is not necessary to construct this graph in order to solve the optimization problem; the Euclidean minimum spanning tree problem, for instance, can be solved more efficiently in *O*(*n* log *n*) time by constructing the Delaunay triangulation and then applying a linear time planar graph minimum spanning tree algorithm to the resulting triangulation. [20]

### Randomization

A spanning tree chosen randomly from among all the spanning trees with equal probability is called a uniform spanning tree. Wilson's algorithm can be used to generate uniform spanning trees in polynomial time by a process of taking a random walk on the given graph and erasing the cycles created by this walk. [22]
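A compact sketch of Wilson's loop-erased-random-walk idea (our own illustration; the dictionary-of-neighbors input format and the fixed root are assumptions): from each vertex not yet in the tree, walk randomly until the tree is hit, and keep only the last exit taken from each visited vertex, which erases the loops automatically.

```python
import random

def wilson_ust(adj, root=0):
    """Uniform spanning tree by Wilson's algorithm.  `adj` maps each
    vertex to its neighbor list.  Returns parent pointers toward root."""
    in_tree = {root}
    parent = {}
    for start in adj:
        nxt = {}
        u = start
        while u not in in_tree:
            nxt[u] = random.choice(adj[u])  # overwriting = loop erasure
            u = nxt[u]
        u = start
        while u not in in_tree:             # retrace the loop-erased path
            in_tree.add(u)
            parent[u] = nxt[u]
            u = nxt[u]
    return parent
```

The result always has one parent edge per non-root vertex and every vertex's parent chain ends at the root, so it is a spanning tree; the uniformity of the distribution is the content of Wilson's theorem.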

An alternative model for generating spanning trees randomly but not uniformly is the random minimal spanning tree. In this model, the edges of the graph are assigned random weights and then the minimum spanning tree of the weighted graph is constructed. [23]

### Enumeration

Because a graph may have exponentially many spanning trees, it is not possible to list them all in polynomial time. However, algorithms are known for listing all spanning trees in polynomial time per tree. [24]

Every finite connected graph has a spanning tree. However, for infinite connected graphs, the existence of spanning trees is equivalent to the axiom of choice. An infinite graph is connected if each pair of its vertices forms the pair of endpoints of a finite path. As with finite graphs, a tree is a connected graph with no finite cycles, and a spanning tree can be defined either as a maximal acyclic set of edges or as a tree that contains every vertex. [25]

The trees within a graph may be partially ordered by their subgraph relation, and any infinite chain in this partial order has an upper bound (the union of the trees in the chain). Zorn's lemma, one of many equivalent statements to the axiom of choice, requires that a partial order in which all chains are upper bounded have a maximal element in the partial order on the trees of the graph, this maximal element must be a spanning tree. Therefore, if Zorn's lemma is assumed, every infinite connected graph has a spanning tree. [25]

In the other direction, given a family of sets, it is possible to construct an infinite graph such that every spanning tree of the graph corresponds to a choice function of the family of sets. Therefore, if every infinite connected graph has a spanning tree, then the axiom of choice is true. [26]

The idea of a spanning tree can be generalized to directed multigraphs. [27] Given a vertex *v* on a directed multigraph *G*, an *oriented spanning tree* *T* rooted at *v* is an acyclic subgraph of *G* in which every vertex other than *v* has outdegree 1. This definition is only satisfied when the "branches" of *T* point towards *v*.

## Mathematical Statistical Physics

### 2.3 Group structure

Besides its relation to spanning trees, there are some more fascinating properties of the set ℛ. Consider *N* = 2 for the sake of (extreme) simplicity. Define the operation ⊕ on ℛ by

where the ordinary + means point-wise addition. This gives rise to the following table

We recognize here the Cayley table of an abelian group, i.e., (ℛ, ⊕) is an abelian group with neutral element 22. Remark that we can define ⊕ on the whole of Ω, but (Ω, ⊕) is not a group.

We now introduce still another group (which is isomorphic to the preceding one, as we will see later). Let us introduce the addition operator *a_i* : Ω → Ω

for *i* ∈ {1, …, *N*}. In words, *a_i*η is the stable result of an addition at site *i*. Accept (or verify) for the moment that for all *i*, *j* ∈ {1, …, *N*},

Later we will prove this so-called abelian property in full detail and generality. By definition of recurrence, if a configuration η is recurrent then there exist integers *n_i* > 0 such that

The product in (2.3) is well-defined by abelianness. The fact that *n_i* can be chosen strictly positive derives from the fact that in the course of the Markov chain one adds to every site with strictly positive probability. Call *e* = ∏_{*i*=1}^{*N*} *a_i*^{*n_i*} and consider

By definition *A* is not empty (η ∈ *A*), and if *g* = ∏_{*i*=1}^{*N*} *a_i*^{*m_i*} for some integers *m_i* ≥ 0, then we have the implication “ζ ∈ *A* implies *g*ζ ∈ *A*”. Indeed, by abelianness, for ζ ∈ *A*,

Therefore, *A* is a “trapping set” for the Markov chain, i.e., a subset of configurations such that once the Markov chain enters it, it never leaves it. As a consequence *A* ⊇ ℛ, because the Markov chain has only one recurrent class, which contains the maximal configuration. Since by definition we have *A* ⊆ ℛ, *A* = ℛ. Therefore, acting on ℛ, *e* is neutral. Since *n_i* > 0, we can define

From ( 2.4 ) we conclude that

acting on ℛ defines an abelian group.

Of course not all the products of addition operators defining *G* are different. In fact, it is easily seen that the group is finite, and we will show that once again

For that, it is sufficient to show that the group acts transitively and freely on ℛ, i.e., for all η ∈ ℛ the orbit *O*_η = {*g*η : *g* ∈ *G*} = ℛ, and if *g*η = *g*′η for some *g*, *g*′ ∈ *G*, then *g* = *g*′, i.e., *g*ζ = *g*′ζ for all ζ ∈ ℛ. For the first statement, if η ∈ ℛ and *g* ∈ *G*, then *g*η can be reached from η in the Markov chain, hence *g*η ∈ ℛ, and *O*_η is clearly a trapping set for the Markov chain, hence *O*_η ⊇ ℛ. To prove the second statement, consider for *g*η = *g*′η the set

then *A* = ℛ with the same kind of reasoning used in the definition of inverses. Therefore, for all η, the map

is a bijection between *G* and ℛ.

However, there is still another way to see that |*G*| = *N* + 1. This way of reasoning will be useful because in the general case we will not so easily be able to count the recurrent configurations. The equality |*G*| = |ℛ| is however completely general, and that will be useful to obtain |ℛ|. Counting the number of elements of a group can become an easy task if we find a treatable isomorphic group. For this, we have to look for closure relations in *G*. Here is an easy one. Suppose you add two files to some commissioner. Since he has at least one file (to save his face), he will certainly go crazy and give one file to each of his neighbors (modulo the boundary conditions of course). In symbols this means
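These relations can be checked by brute force for the one-dimensional model. The sketch below (our own; it assumes the chain of *N* sites with stable heights 1 and 2 and open boundaries, as in the *N* = 2 example above) implements the addition operators *a_i*, verifies abelianness, and counts the recurrent configurations:

```python
def topple(h):
    """Stabilize a configuration on a chain of N sites with open
    boundaries: a site holding more than 2 grains gives one grain to
    each neighbor (grains pushed past the ends are lost)."""
    h = list(h)
    n = len(h)
    while max(h) > 2:
        i = h.index(max(h))
        h[i] -= 2
        if i > 0:
            h[i - 1] += 1
        if i < n - 1:
            h[i + 1] += 1
    return tuple(h)

def add(i, h):
    """The addition operator a_i: drop one grain at site i, then
    topple until the configuration is stable again."""
    h = list(h)
    h[i] += 1
    return topple(h)

def recurrent(n):
    """The set R: closure of the maximal configuration (2, ..., 2)
    under the addition operators."""
    top = tuple([2] * n)
    seen, frontier = {top}, [top]
    while frontier:
        h = frontier.pop()
        for i in range(n):
            g = add(i, h)
            if g not in seen:
                seen.add(g)
                frontier.append(g)
    return seen
```

For *N* = 2 this yields ℛ = {22, 21, 12}, with |ℛ| = *N* + 1 = 3 as claimed, and the operators commute on every stable configuration.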

Using the toppling matrix Δ introduced in ( 2.1 ), this is summarized as

for all *i* ∈ *V*. Acting on ℛ we can bring the right-hand side to the left, and obtain

for all *i* ∈ *V*. By abelianness, we infer from (2.9) that

for all *n* : *V* → ℤ. We will show now that, conversely, if

for some *m_i* ∈ ℤ, then there exists *n* : *V* → ℤ such that

In words, this closure relation means that the only “trivial additions” on ℛ are (integer column) multiples of the matrix Δ.

where *m* ∈ ℤ^*V*. Write *m* = *m*^+ − *m*^− where *m*^+ and *m*^− are non-negative integer valued. The relation (2.12) applied to a recurrent configuration η yields

In words, addition of *m*^+ or *m*^− to η leads to the same final stable configuration, say ζ. But then there exist *k*^+, *k*^− non-negative integer-valued functions on *V* such that

At this point, we can invoke a well-known theorem of elementary algebra: if you have a group *G*, a group *H*, and a surjective homomorphism

then *G* is isomorphic to the quotient *H*/Ker(Ψ), where Ker(Ψ) is the set of *h* ∈ *H* which are mapped to the neutral element of *G*. In our case, define

with group operation pointwise addition. Next

Then what we just discussed can be summarized in the equality

and hence we have the isomorphism

To see the last equality, note that ℤ^*V* is the |*V*|-dimensional hypercubic lattice, with a volume-one unit cell. Δℤ^*V* is another lattice with the columns of Δ as vectors defining the unit cell. The quotient of these two lattices can geometrically be viewed as the set of non-equivalent points *n* ∈ ℤ^*V* of the unit cell of the lattice Δℤ^*V*. Here *n* and *n*′ are equivalent if there exists *k* ∈ ℤ^*V* such that *n* − *n*′ = Δ*k*.

This number of non-equivalent points is precisely the volume of the unit cell of the lattice Δℤ^*V*, which is det(Δ) (puzzle this out in the case *N* = 2 to be convinced). In general, the equality |ℤ^*V*/*A*ℤ^*V*| = det(*A*) (with *A* a symmetric matrix with integer elements and non-negative determinant) is trivial for a diagonal matrix. Indeed, in that case *A_ij* = *a_ii* δ_ij and hence |ℤ^*V*/*A*ℤ^*V*| = ∏_{*i*=1}^{*n*} *a_ii* = det(*A*). Since by row and column operations (i.e., addition and subtraction of columns, or permutation of columns) one can make every integer-valued matrix diagonal, see e.g. [22], we just have to remark that such operations do not change the determinant of a matrix, and do not change (up to isomorphism) the lattice ℤ^*V*/*A*ℤ^*V*.

Here is still another, geometrical proof. |ℤ^*V*/*A*ℤ^*V*| is the number of non-equivalent points in the unit cell defined by *A* (i.e., the parallelepiped spanned by the rows of *A*). We can cover ℝ^{|*V*|} by disjoint copies of this unit cell. Consider now a large cube *C_n* = [−*n*, *n*]^{|*V*|}. Let *N_n* denote the number of integer points (i.e., points of ℤ^*V*) in the cube, let *x_n* denote the number of unit cells (copies of *A*) in *C_n*, and let *y_n* denote the number of non-equivalent points in one unit cell. Then we have

Dividing these two relations and taking the limit *n* → ∞ gives

## 5.6: Optimal Spanning Trees - Mathematics


Let G be a connected graph. A spanning tree in G is a subgraph of G that includes all the vertices of G and is also a tree. The edges of the trees are called branches.

For example, consider the following graph G

The three spanning trees of G are:

We can find a spanning tree systematically by using either of two methods.

**Cutting-down Method**
- Start by choosing any cycle in G.
- Remove one of the cycle's edges.
- Repeat this procedure until there are no cycles left.

For example, given the graph G

1. We remove the edge ac, which destroys the cycle adca in the above graph, and we get

2. We remove the edge cb, which destroys the cycle adcba in the above graph, and we get

3. We remove the edge ec, which destroys the cycle decd in the above graph, and we thus obtain the following spanning tree.

**Building-up Method**
- Select edges of G one at a time, in such a way that no cycles are created.
- Repeat this procedure until all vertices are included.

For example, for the following graph G

2. Next choose the edge de as follows:

3. After that choose the edge ec as follows:

4. Finally, we choose the edge cb and thus obtain the following spanning tree.

**Theorem** A graph is connected if and only if it has a spanning tree.

**Centers and Bicenters**

It is convenient to start building up the tree at the middle of a tree and move outwards. This was the approach used by Arthur Cayley when he counted the number of chemical molecules by building them step by step. But what do we mean by the "middle" of a tree?

There is a straightforward way to compute centers and bicenters.

- Remove all the vertices of degree 1, together with their incident edges.
- Repeat the process until we obtain either a single vertex (the center) or two vertices joined by an edge (the bicenter).

A tree with a center is called a central tree, and a tree with a bicenter is called a bicentral tree. Note that every tree is either central or bicentral, but not both.
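This leaf-stripping procedure is short enough to state as code (a sketch of ours, assuming an adjacency-list representation of the tree):

```python
def center(adj):
    """Center (one vertex) or bicenter (two adjacent vertices) of a
    tree: repeatedly strip every vertex of degree 1, together with its
    incident edge, until at most two vertices remain."""
    adj = {v: set(ns) for v, ns in adj.items()}   # local mutable copy
    while len(adj) > 2:
        leaves = [v for v, ns in adj.items() if len(ns) == 1]
        for v in leaves:
            for u in adj[v]:
                adj[u].discard(v)
            del adj[v]
    return sorted(adj)
```

A path on five vertices is central (one middle vertex survives); a path on four vertices is bicentral (two adjacent vertices survive).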

For example, given a following tree.

Remove all vertices of degree 1.

Remove all vertices of degree 1.

Therefore, the tree is central with center e.

Another example, given a following tree.

Remove all vertices of degree 1.

Remove all vertices of degree 1.

Therefore, the given tree is bicentral with bicenter cd.

For each vertex *v* of degree 2 or more, count the number of vertices in each of the subtrees emanating from *v*, and let *n_v* be the maximum of these numbers. If the tree has *n* vertices, it can be shown that either there is just one vertex *v* for which *n_v* ≤ (1/2)(*n* − 1) (the centroid, giving a centroid tree), or there are two adjacent vertices *v* and *w* for which *n_v* = *n_w* = (1/2)*n* (the bicentroid, giving a bicentroid tree). It is easy to see that every tree is either centroidal or bicentroidal, but not both. Note that we can think of the centroid or bicentroid as the 'center of gravity' of the tree.

For example, given a following tree

Since *n_c* = 4, *n_e* = 4, *n_f* = 5 and *n_g* = 6, we have a bicentroidal tree with bicentroid *ce*.

Another example, given the following tree.

Since *n_b* = 6, *n_c* = 5, *n_d* = 3 and *n_f* = 5, we have a centroidal tree with centroid *d*.

## Discrete Mathematics

**One-to-one (injective).** f(x) = f(y) implies x = y.

Proof: show that if f(x) = f(y) for arbitrary x, y ∈ A, then x = y.

**Onto (surjective).** A function is onto if and only if for every element b in the codomain there is an element a in the domain where f(a) = b.

Proof: consider an arbitrary element y ∈ B and find an element x ∈ A such that f(x) = y.

**Countable set.** Its elements can be listed so that it takes only a finite time to reach any value in the set.

**Uncountable set.** The cardinal number of the set is larger than the cardinal number of N.
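For finite sets, these two proof templates can be checked by brute force. This is an illustrative sketch with helper names of my own choosing:

```python
def is_injective(f, domain):
    """f(x) = f(y) implies x = y: no two domain elements share an image."""
    images = [f(x) for x in domain]
    return len(images) == len(set(images))

def is_surjective(f, domain, codomain):
    """Every y in the codomain has some x in the domain with f(x) = y."""
    return {f(x) for x in domain} == set(codomain)

# f(x) = x mod 3 on {0,...,5} with codomain {0, 1, 2}:
# onto (every residue is hit) but not one-to-one (0 and 3 collide)
f = lambda x: x % 3
```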

Matrices:

- represent linear transformations
- express which parts of a graph are connected by edges (the adjacency matrix)
- size = rows × columns (an m × n matrix has m rows and n columns)
- when the number of rows (m) equals the number of columns (n), the matrix is square
- every column is an m × 1 matrix; every row is a 1 × n matrix

Matrix multiplication (entry a_{r,c}), with C = AB:

- r1 × c1 = a_{1,1} b_{1,1} + a_{1,2} b_{2,1} + a_{1,3} b_{3,1} = c_{1,1}
- r1 × c2 = a_{1,1} b_{1,2} + a_{1,2} b_{2,2} + a_{1,3} b_{3,2} = c_{1,2}

Modular arithmetic: e.g., 17 ≡ 5 (mod 6) because 17 − 5 = 12, a multiple of 6.

Product rule: a procedure that can be broken down into a sequence of tasks can be counted by multiplying the number of ways to do each task.

Sum rule: if a task can be done in either one of n1 ways or one of n2 ways, then there are n1 + n2 ways to do the task.
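The row-times-column rule above can be stated compactly in Python. This is a generic sketch, not tied to any particular matrices from the notes:

```python
def mat_mul(A, B):
    """Multiply matrices given as lists of rows: entry C[r][c] is the
    dot product of row r of A with column c of B."""
    assert len(A[0]) == len(B), "inner dimensions must match"
    return [[sum(A[r][k] * B[k][c] for k in range(len(B)))
             for c in range(len(B[0]))]
            for r in range(len(A))]
```

For example, multiplying the 1 × 3 row [1, 2, 3] by a 3 × 2 matrix produces a 1 × 2 result, exactly as the c_{1,1} and c_{1,2} formulas above describe.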

In the counting formulas below, n = number of objects and r = the arrangement qualifier (how many objects are selected or arranged).

C(n,r) example: C(25,13) × 2^(25−13) × 3^13 = 25!/(13!(25−13)!) × 2^12 × 3^13 (the coefficient of the x^12 y^13 term in the binomial expansion of (2x + 3y)^25).

Example: counting the distinct arrangements of the word "SUCCESS".

- 3 S's, 2 C's, 1 U and 1 E fill 7 positions. Place the S's first: C(7,3), with (7 − 3) = 4 remaining positions.
- The C's now have 4 positions to work through, so C(4,2), with (4 − 2) = 2 free positions.
- The U has 2 positions to work through, so C(2,1), which leaves (2 − 1) = 1 position free for the E.

Total: C(7,3) × C(4,2) × C(2,1) × C(1,1) = 35 × 6 × 2 × 1 = 420.
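The step-by-step placement multiplies out as follows, using Python's `math.comb`:

```python
from math import comb

# Arrangements of "SUCCESS": place the 3 S's among 7 positions,
# then the 2 C's among the remaining 4, then the U, then the E.
arrangements = comb(7, 3) * comb(4, 2) * comb(2, 1) * comb(1, 1)
# Equivalently, 7! / (3! * 2! * 1! * 1!) — the multinomial coefficient.
```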

## Trees: A Mathematical Tool for All Seasons

I hope to convince you that mathematical trees are no less lovely than their biological counterparts.

### Introduction

Trees have been the inspiration of poets, and no one doubts the beauty of many wood sculptures nor the ubiquitous uses for lumber. The leaves of trees are beautiful and varied. Many find the spectacular fall colors of trees an inspiration and travel great distances to see the fall foliage. No wonder that mathematicians use the suggestive term "*tree*" for the special class of structures sampled below, where two rather different drawings of the same tree structure are shown.

### Basic ideas

A powerful way to represent relationships between objects in visual form is a mathematical structure called a *graph*. One uses dots called *vertices* to represent objects, and line segments which join the dots, called *edges*, to represent some relationship between the objects which the dots represent. For example, a chemist might draw the diagram below to represent a methane molecule:

In a graph, the number of edges that meet a vertex is called the *degree* or *valence* of the vertex. The use of the term valence here reflects the fact that in forming molecules certain atoms "hook up" with a fixed number of other atoms. Hydrogen has a valence of 1 and carbon a valence of 4. In the graph of the methane molecule we see that the hydrogen atoms have valence (degree) 1, while the carbon atom has valence (degree) 4. The vertices of valence 1 in a tree are often known as *leaves* (singular: *leaf*) of the tree.

Here the diagram shows a person's ancestors.

The two graphs we have just drawn are special because they lack *cycles*. A cycle in a graph is a collection of edges which make it possible to start at a vertex and move to other vertices along edges, returning to the start vertex without repeating either edges or vertices (other than the start vertex). The graph below has a variety of cycles: *abda*, *adea*, *adfcba*, and *bdfcb*. The cycles *dbcfd* and *dfcbd* are considered the same as the cycle *bdfcb* because the same edges are used. Can you find a list of all of the cycles in this graph?

All the graphs we have drawn are also special because they have the property that any two vertices of the graph can be joined by a *path*: a collection of edges that does not repeat any edges or vertices and that joins a vertex to a distinct vertex. For example, *aedbcf* is a path while *adeab* is not, because the vertex *a* is repeated. A graph which has the property that any pair of vertices in the graph can be joined by a path is called *connected*. We will define a *tree* to be a connected graph which has no cycles. A pioneer in the study of the mathematical properties of trees was the British mathematician Arthur Cayley (1821-1895).

Trees have a variety of very nice properties:

a. Given two distinct vertices *u* and *v* in a tree, there is a unique path from *u* to *v*. (For technical reasons it is convenient to allow a graph with a single vertex to be a tree.)

b. If we cut (or remove) any edge of a tree, the graph is no longer connected. (Thus, trees are examples of *minimally connected graphs*. Connected graphs which are not trees must have edges which can be removed while still preserving the property that the graph is connected; these are edges that lie on cycles.)

c. If a tree has *n* vertices, then it has (*n* − 1) edges. If we denote the number of edges of a graph G by |E(G)| and the number of vertices of G by |V(G)|, then |V(G)| = |E(G)| + 1. In this form the result can be thought of as a special case of Euler's (polyhedral) formula.

d. If *u* and *v* are two distinct vertices of a tree not already joined by an edge, then adding the edge from *u* to *v* to the tree will create exactly one cycle.

It is these appealing properties of trees that give rise to the many theoretical and applied aspects of graphs which make them so ubiquitous as tools in mathematics and computer science. Trees are widely used as data structures in computer science.

### Minimum cost spanning trees

One example of the power of using trees as a tool appears in a problem which often arises in operations research. Here is one application setting. Consider the graph with weights as pictured below.

Each weight gives the cost of creating a communications link between the vertices at the ends of the edge, so that we can send a message between the endpoints of the edge. (Edges which are omitted above have such high cost that it is prohibitive to create these links.) Our goal is to be able to send a message between any pair of vertices by selecting links in the graph above so that the total cost of putting in the selected links is as small as possible. Note that we are not requiring links between every pair of vertices, because if the link from A to B is inserted and the link between B and C is inserted, then one can send a message from A to C by relaying it via B. (In the interests of simplification we do not consider relay costs.) Note that if the links selected formed a cycle, one could omit the edge of the cycle with the largest weight and still be able to send messages between any pair of vertices which make up the cycle. Thus, the best answer for the network must be a tree.

One is given a graph with weights on the edges, typically positive weights, though negative weights are allowed and can be thought of in applications as subsidies for the use of the edges having negative weight. The goal is to find a *spanning tree* of the graph with the property that the sum of the weights of the edges in the tree is a minimum. A spanning tree is one which includes all of the vertices of the original graph. (The word "spanning" in graph theory is often used for a subgraph of a graph which includes all of the vertices of the original.) Were we to select the blue edges in the diagram below as links, it would be possible to relay messages between any pair of vertices. However, can you see why this is not the cheapest way to create such a network?

Keep reading to see a variety of different elegant methods for choosing the links in a situation such as this which achieve minimum cost.
The mathematical literature typically refers to the problem being described here as the *minimal spanning tree problem* or MST. This name is not ideal because one can conceive of a variety of interpretations for "minimal." Here I will use the more suggestive term "minimum cost spanning tree."

### History of algorithms to find minimum cost spanning trees

The history of the minimum cost spanning tree problem is rather interesting and complex. Until recently, most textbook discussions of this problem made reference to the work of two American mathematicians, then employees at Bell Laboratories, who each developed an algorithm for solving the minimum cost spanning tree problem. They are Joseph B. Kruskal, who published his work in 1956,

(Photo courtesy of Pieter Kroonenberg, University of Leiden.)

and Robert C. Prim who published his work in 1957.

In the textbook and research literature that developed concerning what Kruskal and Prim accomplished, it was largely lost that others had worked on this problem previously. In particular it was mostly overlooked that in the papers of both Prim and Kruskal, reference was made to the work of the Czech mathematician Otakar Borůvka (1899-1995). Borůvka's work dated to 1926, and his algorithm for the minimum cost spanning tree problem is different from that of either Kruskal or Prim! Furthermore, Borůvka's algorithm is very elegant and deserves as much attention as those of Prim and Kruskal. His work was published in Czech (one paper with a German summary). Recent work on the history of the minimum cost spanning tree problem shows that there were a variety of independent discoveries of the algorithms and ideas involved, which is not surprising in light of the theoretical importance of greedy (making a locally best choice) algorithms and the many applied problems that can be attacked using the mathematics involved. For example, Prim's algorithm had actually been independently discovered much earlier by the Czech mathematician Vojtěch Jarník (1897-1970), who made important contributions to mathematics besides his work on trees. Kruskal's and Prim's interest in the minimum cost spanning tree problem stems from problems in pricing "leased-line services" by the Bell System.

The algorithm that Kruskal discovered for finding a minimum cost spanning tree is very appealing, though it does require a proof that it always produces an optimal tree. (The analogous algorithm for the Traveling Salesman Problem does not always yield a global optimum.) Also, at the start, Kruskal's algorithm requires sorting the weights of all of the edges of the graph for which one seeks a minimum cost spanning tree. This requires that computer memory be used to store this information.
Given computer capabilities at the time Kruskal's algorithm did not meet the needs of the Bell System's clients. It occurred to Robert Prim, Kruskal's colleague at Bell Laboratories, that it might be possible to find an alternative algorithm to Kruskal's (even though mathematically Kruskal's work was very appealing), which met the needs of the Bell System's customers. He succeeded in doing this and in noting that his method worked for graphs with ties in cost between edges and with negative weights.

All three of the algorithms, due to Borůvka, Kruskal and Prim, are "greedy" algorithms; that is, they depend on doing something which is locally optimal (best), which "miraculously" turns out to be globally optimal.

Prim's algorithm works by starting at any vertex, picking a cheapest edge at that vertex, contracting that edge to a single new "super-vertex," and repeating the process. Kruskal's algorithm works by adding the edges in order of cheapest weight, subject to the condition that no cycle of edges is created. (One advantage of Prim's algorithm is that no special check to make sure that a cycle is not formed is required.) Borůvka's algorithm (which, in its simplest form, requires that all edges have distinct weights) works by having each vertex grab a cheapest edge. If the resulting structure is not a tree, then the components obtained are shrunk and the process repeated. The details of these algorithms are described below using a simple example.

### Minimum cost spanning tree algorithms

In what follows we will use the following example (Figure 1) to illustrate the way that the various algorithms in which we are interested work. You can think of this graph as giving 6 sites that must be connected by a high transmission electrical system or a cable system of some kind. The sites that have no line segments connecting them are too expensive to connect. Note that the edges all have distinct costs (lengths), which makes the initial exposition a trifle simpler. However, the discussion below can be modified to deal with the situation in which some edges have equal weights. When there are ties for the weights of the edges, the cost associated with a minimum cost spanning tree is the same for all trees which achieve minimum cost. When the edges all have distinct weights there is a unique tree which solves the problem.

There are nine edges in the above graph. If we sort the weights of these edges in increasing order, we get: 4, 5, 9, 10, 11, 12, 13, 20, 24. If we try to get a cheap connecting system by adding edges in order of increasing cost, we would first insert the edges of cost 4 and 5 as shown in the diagram below:

The next cheapest edge has cost 9, but its insertion would create a cycle. Sending electricity from vertex 0 to vertex 2 does not require the link from 0 to 2, because it can instead be sent from 0 to 1 and then from 1 to 2. Hence, we need not add the link from 0 to 2. Thus, the next edges we would add would be the edges 3 to 4 and 4 to 5, because these are the cheapest at the next two decision stages. After adding these links we get the following situation:

Note that at this stage, the edges selected do not form a connected subgraph of the original graph. Thus, Kruskal's algorithm does not form a tree at every stage of the algorithm. However, by the time the algorithm terminates, the edges will form a tree. The next cheapest edge, having cost 12, would also form a cycle with previously chosen edges, so it is not added to the collection of links. However, the edge with cost 13 can be added. We now have a collection of edges which forms no cycle and which includes all the vertices of the original graph. Thus, we have found a collection of links which makes it possible to transmit electricity from any of the locations to any of the others, using relays if necessary. Since we are seeking a tree as the final collection of edges (shown in red in Figure 4), we can use the fact that all trees with the same number of vertices have the same number of edges to determine how many edges must be selected before Kruskal's algorithm, or that of Prim or Borůvka, will terminate. Specifically, we know that for a tree, the number of vertices and edges are related by the equation:

|V(G)| = |E(G)| + 1

So in applying Kruskal's algorithm, when the number of edges we have selected to put into our linking network is one less than the number of vertices to be linked, we know that we have exactly the right number of linking edges!
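The walk-through above can be sketched as a short Python program. The edge list below is my reconstruction of the Figure 1 weights from the numbers quoted in the text (1-2 costs 4, 0-1 costs 5, 0-2 costs 9, 3-4 costs 10, 4-5 costs 11, 3-5 costs 12, 1-3 costs 13, 2-5 costs 20, 0-4 costs 24), so treat it as an illustration rather than the article's actual figure.

```python
def kruskal(n, edges):
    """Kruskal's algorithm: add edges in order of increasing weight,
    skipping any edge that would close a cycle (union-find check)."""
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                # u and v not yet connected: no cycle
            parent[ru] = rv
            tree.append((w, u, v))
        if len(tree) == n - 1:      # a spanning tree has n - 1 edges
            break
    return tree

# Reconstructed weighted graph of Figure 1 (edges as (weight, u, v))
edges = [(5, 0, 1), (4, 1, 2), (9, 0, 2), (10, 3, 4), (11, 4, 5),
         (12, 3, 5), (13, 1, 3), (20, 2, 5), (24, 0, 4)]
mst = kruskal(6, edges)
# Selects the edges of costs 4, 5, 10, 11, 13 — total cost 43
```

Running `kruskal(6, edges)` picks edges 1-2, 0-1, 3-4, 4-5 and 1-3, skipping the cost-9 and cost-12 edges exactly as in the walk-through.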

Prim's algorithm is also a greedy algorithm, in the sense that it repeatedly makes a best choice in a sequence of stages. However, one difference is that Prim's algorithm always results in a tree at each stage of the procedure, producing a spanning tree at the stage where the algorithm terminates. The idea behind Prim's algorithm is to add a cheapest edge which links a new neighboring vertex to the previously selected collection of vertices (a "super-vertex"). We can choose to initiate the algorithm at any vertex of the dot/line diagram. Thus, if we start the procedure at vertex 3, we have three edges to choose from: edge 13 costing 13, edge 34 costing 10, and edge 35 costing 12. Since edge 34 is cheapest, we select this one. At this stage we have the situation shown in Figure 5:

We now think of shrinking the edge from 3 to 4 to a single "super-vertex." This super-vertex has neighboring vertices connected to it by edges. These are the edges 04, 13, 35, and 45 (with costs 24, 13, 12, and 11, respectively). The cheapest of these in cost is the edge 45, so we select this edge next.

At this stage we can think of vertices 3, 4, and 5 as a "super-vertex." The edges joining this super-vertex to its neighbors are 04, 13, and 25. Note that we no longer consider the edge 35 a neighboring edge, because it joins two vertices which are contained "within" the super-vertex. Since the edge 13, coincidentally with cost 13, is the cheapest of these neighboring edges, we next add the edge 13 to the growing collection of links.

We can now continue in this way until our super-vertex consists of all of the vertices of the original graph. Here is the order in which the vertices are added to the super-vertex starting with the initial vertex 3: 4, 5, 1, 2, 0. Thus the edges added are: 34, 45, 13, 12, 10, which gives rise to exactly the same final collection of connecting edges as previously, as shown in red in Figure 4.
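Prim's growing-tree procedure can be sketched with a priority queue of the edges leaving the current super-vertex. The adjacency map below encodes the same reconstructed weights from the text (an assumption about Figure 1); starting at vertex 3 reproduces the order 4, 5, 1, 2, 0.

```python
import heapq

def prim(adj, start):
    """Prim's algorithm: grow a tree from `start`, always adding the
    cheapest edge leaving the set of vertices chosen so far."""
    in_tree = {start}
    tree = []
    heap = [(w, start, v) for v, w in adj[start].items()]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(adj):
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue                      # edge is now internal to the tree
        in_tree.add(v)
        tree.append((w, u, v))
        for nbr, wn in adj[v].items():
            if nbr not in in_tree:
                heapq.heappush(heap, (wn, v, nbr))
    return tree

# Reconstructed weighted graph of Figure 1 as an adjacency map
adj = {0: {1: 5, 2: 9, 4: 24}, 1: {0: 5, 2: 4, 3: 13},
       2: {0: 9, 1: 4, 5: 20}, 3: {1: 13, 4: 10, 5: 12},
       4: {0: 24, 3: 10, 5: 11}, 5: {2: 20, 3: 12, 4: 11}}
mst = prim(adj, 3)
# Vertices join the super-vertex in the order 4, 5, 1, 2, 0
```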

Although Kruskal's and Prim's algorithms are quite well known, until relatively recently the algorithm of Borůvka was less well known, despite the fact that it was discovered earlier and is cited in the papers of both Kruskal and Prim. How does this algorithm work? It is carried out in stages, just as those of Kruskal and Prim are. The method applies to dot/line diagrams with weights where all of the weights are distinct, as happens to be the case in our example. Under this assumption, note that the "grabbing" operation described below cannot generate a collection of grabbed edges which forms a cycle! Thus, the edges selected either form a tree or a collection of trees (i.e. a forest).

One stage of the algorithm consists of each vertex (or, in later stages, super-vertex) "grabbing" the cheapest edge adjacent to it, without regard to edges grabbed by other vertices. Thus, since at vertex 0 the edges are 01, 02, and 04, the cheapest being 01 of weight 5, vertex 0 grabs edge 01. Vertex 1 has as adjacent edges 13, 10, and 12 and, thus, vertex 1 grabs the edge 12. Vertex 2, having adjacent edges 21, 20, and 25, grabs the cheapest edge 21, of weight 4. In a similar way, vertex 3 grabs the edge 34, vertex 4 grabs the edge 43, and vertex 5 grabs the edge 54. At this stage we have the current pattern of "grabbed edges":

If the blue edges in Figure 8 formed a tree, we would be finished. However, since they do not, we form a collection of super-vertices which arise from the sets of vertices currently connected by selected edges. We then carry out another "grabbing" stage of the algorithm. In our current example, there are two super-vertices (one consisting of vertices 0, 1, and 2 and the other consisting of vertices 3, 4, and 5). These super-vertices are linked by edges of weight 13, 20, and 24. If we call the super-vertices A and B respectively, A grabs the edge AB of weight 13, and B grabs the edge BA (which is, of course, the same as AB) of weight 13. This new edge, when changed to blue, together with the blue edges in Figure 8, gives rise to the connecting edges shown in red in Figure 4. Again, we have found the minimum cost spanning tree.
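The grab-and-shrink rounds can be sketched with the same union-find bookkeeping (components play the role of super-vertices). As before, the edge list is my reconstruction of the Figure 1 weights, and the sketch assumes distinct weights, as Borůvka's simplest form requires.

```python
def boruvka(n, edges):
    """Borůvka's algorithm (all weights distinct): each component
    repeatedly grabs its cheapest outgoing edge until one tree remains."""
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree, components = [], n
    while components > 1:
        cheapest = {}                 # component root -> cheapest outgoing edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue              # edge lies inside one super-vertex
            for r in (ru, rv):
                if r not in cheapest or w < cheapest[r][0]:
                    cheapest[r] = (w, u, v)
        for w, u, v in set(cheapest.values()):
            ru, rv = find(u), find(v)
            if ru != rv:              # merge the two components
                parent[ru] = rv
                tree.append((w, u, v))
                components -= 1
    return tree

# Reconstructed weighted graph of Figure 1
edges = [(5, 0, 1), (4, 1, 2), (9, 0, 2), (10, 3, 4), (11, 4, 5),
         (12, 3, 5), (13, 1, 3), (20, 2, 5), (24, 0, 4)]
mst = boruvka(6, edges)
# Round 1 grabs the edges of costs 4, 5, 10, 11; round 2 adds the cost-13 link
```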

Let me briefly comment on the issue of what happens for these algorithms when there are edges with the same weight in the graph for which we are trying to find a minimum cost spanning tree. The graph in Figure 9 shows an especially simple situation, but it will help clarify the situation.

In Figure 10, we have shown three copies of the graph in Figure 9, each with a minimum cost spanning tree of cost 102. This illustrates the fact that a graph which has edges with equal weights can have many minimum cost spanning trees, but one can prove that all of the minimum cost spanning trees have the same cost. Also note that in each case the edge with maximum cost in the original graph can be part of a minimum cost spanning tree. However, in any cycle of a graph for which one seeks a minimum cost spanning tree, if the edges of the cycle have different costs, then the edge of maximum cost in the cycle cannot be part of a minimum cost spanning tree.

I have not indicated proofs for the above algorithms. Obviously, it is important to provide such proofs. What is intriguing is that while the greedy algorithms of Kruskal and Prim are optimal for the problem of finding a minimum cost spanning tree, the analogous greedy algorithms for finding a solution to the Traveling Salesman Problem are not optimal.

The idea behind one proof of Kruskal's algorithm is to assume there is a tree T with a cost less than or equal to that of a tree K that is generated by Kruskal's procedure. Construct the list L of the edges of the tree K in the order in which Kruskal's algorithm inserted them into K. If T is not the same tree as K, find the first edge *e* in list L which is an edge of K but not of T. When *e* is added to T, because of a property of trees mentioned at the start, a unique cycle is formed. This cycle must contain at least one edge *e*' not in K, since K, being a tree, has no cycles. Now form the tree T' which consists of adding *e* to T and removing *e*'. The cost of this new tree T' is the cost of T plus the weight of *e* minus the weight of *e*'. Since T is a cheapest tree and since Kruskal's algorithm chooses edges in order of cheapness subject to not forming a cycle, the weights of *e* and *e*' must be equal. This means that T' and T have the same cost, and T' agrees with K on one more edge than T does. Continuing in this way, we see that we may have different trees that achieve minimum cost (these arise from different ways of breaking ties when Kruskal's algorithm is applied), but none that is less costly than the tree that Kruskal's method produces.

One natural consideration in comparing the algorithms which have been discussed concerns the question of which approach is computationally better. This turns out to be a complicated question which depends on the number of vertices and edges in the weighted graph. Questions arise concerning the computational complexity of these algorithms in terms of the data structures used for representing the graph and its weights, and the nature of the computer (e.g. serial or parallel) that is doing the calculations.

Although the minimum cost spanning tree algorithms were created in part due to applications involving the creation of various kinds of networks (e.g. phone, electrical, cable), many other applications have been developed. One natural extension of the mathematical model we have just been considering is to have not only a weight on the edges of the graph but also a weight on the vertices of the graph. If a vertex is part of a spanning tree where the degree (valence) of the vertex in that tree is more than one, then relays will perhaps be necessary at that vertex. In this model we seek that spanning tree of the original graph such that the sum of the weights on the edges of the spanning tree T together with the sum of the weights at the vertices of T is a minimum. Unlike the initial model where we found a variety of elegant polynomial time algorithms, this probably more realistic problem is not known to have a polynomial time algorithm. In fact, the problem is known to be NP-Complete, which suggests that a "fast" algorithm for solving this problem may not be possible.

Let us conclude with another, perhaps unexpected, extension of this circle of ideas: minimum cost spanning trees for points in the Euclidean plane, where the cost associated with a pair of points is the Euclidean distance between them. For three points forming the vertices of an equilateral triangle with side length 1, if we restrict ourselves to a spanning tree of the graph formed by the vertices and edges of the equilateral triangle, the minimum cost spanning tree will have a cost of 2. However, if we are allowed to add a fourth point P in the interior of the triangle, which when joined to the vertices of the triangle creates three equal 120 degree angles, then the three segments which join P to the vertices of the triangle sum to a length of less than 2! The point P is called a *Steiner point* with respect to the original three. Problems which involve finding minimum cost spanning trees for points in the Euclidean plane when one is allowed to add Steiner points turn out to be both interesting and complex, with many theoretical and applied ramifications. Steiner trees are no less fascinating than minimum cost spanning trees.

### References

Bazlamacci, C. and K. Hindi, Minimum-weight spanning tree algorithms: A survey and empirical study, Computers & Operations Research, 28 (2001) 767-785.

Bern, M. and R. Graham, The shortest network problem, Scientific American, 1989, p. 66-71.

Blum, A. and R. Ravi, S. Vempala, A constant-factor approximation algorithm for the k-MST problem, Proc. 28th Symposium Theory of Computing, 1996.

Borůvka, O., O jistem problemu minimalnim, Praca Moravske Prirodovedecke Spolecnosti, 3 (1926) 37-58 (in Czech).

Cayley, A., A theorem on trees, Quart. J. Math., 23 (1889) 376-378.

Chan, T., Backwards analysis of the Karger-Klein-Tarjan algorithm for minimum spanning trees, Information Processing Letters, 67 (1998) 303-304.

Chazelle, B., A minimum spanning tree algorithm with inverse-Ackermann type complexity, J. ACM, 47 (2000) 1028-1047.

Chazelle, B. and R. Rubinfeld, L. Trevisan, Approximating the minimum cost spanning tree weight in sublinear time, SIAM J. Computing, 34 (2005) 1370-1379.

Cheriton, D. and R. Tarjan, Finding minimum spanning trees, SIAM J. Computing, 5 (1976) 724-742.

Cole, R. and P. Klein, R. Tarjan, A linear-work parallel algorithm for finding minimum spanning trees, Proceedings of SPAA, 1994.

Dijkstra, E., Some theorems on spanning subtrees of a graph, Indag. Math., 22 (1960) 196-199.

Dixon, B. and M. Rauch, R. Tarjan, Verification and sensitivity analysis of minimum spanning trees in linear time, SIAM J. Computing, 21 (1992) 1184-1192.

Du, D.-Z. and F. Hwang, A proof of the Gilbert-Pollak conjecture on the Steiner ratio, Algorithmica, 7 (1992) 121-135.

Dutta, B. and A. Kar, Cost monotonicity, consistency and minimum cost spanning tree games, Games Economic Behavior, 48 (2004) 223-248.

Fredman, M. and R. Tarjan, Fibonacci heaps and their uses in improved network optimization algorithms, J. ACM, 34 (1987) 596-615.

Fredman, M. and D. Willard, Trans-dichotomous algorithms for minimum spanning trees and shortest paths, J. Comput. Syst. Sci., 48 (1994) 424-436.

Gabow, H. and Z. Galil, T. Spencer, R. Tarjan, Efficient algorithms for finding minimum spanning trees in undirected and directed graphs, Combinatorica, 6 (1986) 109-122.

Gilbert, E. and H. Pollak, Steiner minimal trees, SIAM J. Applied Math., 16 (1968) 1-29.

Eppstein, D. and Z. Galil, G. Italiano, Dynamic graph algorithms, in Algorithms and Theory of Computation Handbook, M. Atallah, editor, CRC, Boca Raton, 1999.

Graham, R. and P. Hell, On the history of the minimum spanning tree problem, Ann. Hist. Computing, 7 (1985) 43-57.

Gusfield, D., Algorithms on Strings, Trees, and Sequences - Computer Science and Computational Biology, Cambridge U. Press, New York, 1997.

Hansen, P. and M. Zheng, Shortest shortest path trees of a network, Discrete Applied Math., 65 (1996) 275-284.

Hochbaum, D., Approximation Algorithms for NP-Hard Problems, PWS Publishing, 1997.

Hu, T., Optimum communication spanning trees, SIAM J. Computing, 3 (1974) 188-195.

Hwang, F. and D. Richards, P. Winter, The Steiner tree problem, Ann. Discrete Math. 53 (1992).

Jarník, V., O jistem problemu minimalnim (in Czech), Praca Moravske Prirodovedecke Spolecnosti, 6 (1930) 57-63.

Kar, A., Axiomatization of the Shapley value on minimum cost spanning tree games, Games Economic Behavior, 38 (2002) 265-277.

Karger, D. and P. Klein, R. Tarjan, A randomized linear-time algorithm to find minimum spanning trees, J. ACM, 42 (1995) 321-328.

King, V., A simpler minimum spanning tree verification algorithm, Algorithmica, 18 (1997) 263-270.

Ivanov, A. and A. Tuzhilin, Minimal Networks: The Steiner Problem and its Generalizations, CRC Press, Boca Raton, 1994.

Korte, B. and J. Nesetril, Vojtech Jarník's work in combinatorial optimization, Discrete Mathematics, 235 (2001) 1-17.

Komlos, J., Linear verification for spanning trees, Combinatorica, 5 (1985) 57-65.

Korte, B. and L. Lovasz, R. Schrader, Greedoids, Springer-Verlag, Berlin, 1991.

Kou, L. and G. Markowsky, L. Berman, A fast algorithm for Steiner trees, Acta. Inform., 15 (1981) 141-145.

Kruskal, J., On the shortest spanning subtree of a graph and the traveling salesman problem, Proc. Amer. Math. Soc., 7 (1956) 48-50.

Kruskal, J., A reminiscence about shortest spanning trees, Archivum Mathematicum (BRNO), 33 (1997) 13-14.

Lavallée, I., Un algorithme parallèle efficace pour construire un arbre de poids minimal dans un graphe, RAIRO Rech. Oper., 19 (1985) 57-69.

Leeuwen, J. van, Graph Algorithms, in Handbook of Theoretical Computer Science, J. van Leeuwen, (editor), Volume A, Algorithms and Complexity, MIT Press, Cambridge, 1994.

Levcopoupoulos, C. and A. Lingas, There are planar graphs almost as good as the complete graphs and as cheap as minimum spanning trees, Algorithmica, 8 (1992) 251-256.

Nesetril, J., A few remarks on the history of the MST problem, Archivum Mathematicum (BRNO), 33 (1997) 15-22.

Nesetril, J. and E. Milová, H. Nesetrilova, Otakar Borůvka on minimum spanning tree problem: Translation of both the 1926 papers, comments, history, Discrete Mathematics, 233 (2001) 3-36.

Pettie, S. and V. Ramachandran, An optimal minimum spanning tree algorithm, J. ACM, 49 (2002) 16-34.

Pollak, H., Private communication, 2005.

Prim, R., Shortest connection networks and some generalizations, Bell. System Tech. J., 36 (1957) 1389-1401.

Prim, R. Personal communication, 2005.

Promel, H. and A. Steger, The Steiner Tree Problem, Vieweg, Braunschweig/Wiesbaden, 2002.

Ravi, R. and R. Sundaram, M. Marathe, D. Rosenkrantz, S. Ravi, Spanning trees short or small, SIAM J. Discrete Math., 9 (1996) 178-200.

Rosen, K., (editor), Handbook of Discrete and Combinatorial Mathematics, Chapter 9 (Trees), Chapter 10 (Networks and Flows), CRC, Boca Raton, 2000.

West, D. Introduction to Graph Theory, Second Edition, Prentice-Hall, Upper Saddle River, 2001.

Wu, B.-Y. and K.-M. Chao, Spanning Trees and Optimization Problems, Chapman & Hall/CRC, Boca Raton, 2004.

Xu, Y. and V. Olman, D. Xu, Clustering gene expression data using graph theoretic approach: An application of minimum spanning trees, Bioinformatics, 18 (2002) 536-545.

Yao, A. An O(|E| log log |V|) algorithm for finding minimum spanning trees, Info. Processing Lett., 4 (1975) 21-23.

**NOTE:** Those who can access JSTOR can find some of the papers mentioned above there. For those with access, the American Mathematical Society's MathSciNet can be used to get additional bibliographic information and reviews of some of these materials. Some of the items above can be accessed via the ACM Portal, which also provides bibliographic services.


## 5.6: Optimal Spanning Trees

**What is Minimum Spanning Tree?**

Given a connected, undirected graph, a *spanning tree* of that graph is a subgraph that is a tree and connects all the vertices together. A single graph can have many different spanning trees. A *minimum spanning tree (MST)*, or minimum weight spanning tree, of a weighted, connected, undirected graph is a spanning tree whose weight is less than or equal to the weight of every other spanning tree. The weight of a spanning tree is the sum of the weights of its edges.

*How many edges does a minimum spanning tree have?*

A minimum spanning tree has (V – 1) edges, where V is the number of vertices in the given graph.

*What are the applications of the minimum spanning tree?*

MSTs arise in network design, clustering, and approximation algorithms, among other areas. Below are the steps for finding an MST using Kruskal’s algorithm.

**1.** Sort all the edges in non-decreasing order of their weight.
**2.** Pick the smallest remaining edge. If it does not form a cycle with the spanning tree built so far, include it; otherwise, discard it.
**3.** Repeat step 2 until there are (V – 1) edges in the spanning tree.

Step 2 uses the Union-Find algorithm to detect cycles, so we recommend reading the following posts as prerequisites.

Union-Find Algorithm | Set 1 (Detect Cycle in a Graph)

Union-Find Algorithm | Set 2 (Union By Rank and Path Compression)
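The cycle test in step 2 only needs a disjoint-set structure. Here is a minimal Union-Find sketch with union by rank and path compression (the class name and method names are our own, not taken from the posts above):

```python
# Minimal Union-Find (disjoint-set) with union by rank and path compression,
# as used for cycle detection in Kruskal's algorithm.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: point each visited node at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        # Returns False if a and b are already connected (adding the
        # edge a-b would create a cycle), True otherwise.
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra            # attach the shallower tree under the deeper one
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```

An edge (a, b) is safe to add exactly when `union(a, b)` returns `True`.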

The algorithm is a greedy algorithm: the greedy choice is to pick the smallest-weight edge that does not create a cycle in the MST constructed so far. Let us understand it with an example; consider the input graph below. The graph contains 9 vertices and 14 edges, so the minimum spanning tree formed will have (9 – 1) = 8 edges.

Now pick all edges one by one from the sorted list of edges

**1.** *Pick edge 7-6:* No cycle is formed; include it.
**2.** *Pick edge 8-2:* No cycle is formed; include it.
**3.** *Pick edge 6-5:* No cycle is formed; include it.
**4.** *Pick edge 0-1:* No cycle is formed; include it.
**5.** *Pick edge 2-5:* No cycle is formed; include it.
**6.** *Pick edge 8-6:* Including this edge would create a cycle; discard it.
**7.** *Pick edge 2-3:* No cycle is formed; include it.
**8.** *Pick edge 7-8:* Including this edge would create a cycle; discard it.
**9.** *Pick edge 0-7:* No cycle is formed; include it.
**10.** *Pick edge 1-2:* Including this edge would create a cycle; discard it.
**11.** *Pick edge 3-4:* No cycle is formed; include it.

Since the number of edges included equals (V – 1), the algorithm stops here.
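The three steps above can be sketched compactly in code. This is our own minimal sketch (with an inlined union-find and a small illustrative graph, not the 9-vertex graph from the example):

```python
# Kruskal's algorithm: sort edges, greedily add cycle-free edges,
# stop after V - 1 edges. Edges are (weight, u, v) triples.
def kruskal(n, edges):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):          # step 1: sort by weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # step 2: skip cycle-forming edges
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
        if len(mst) == n - 1:              # step 3: stop at V - 1 edges
            break
    return mst, total

# Small example: 4 vertices, 4 edges.
edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]
mst, total = kruskal(4, edges)
# MST uses edges 1-2 (1), 2-3 (2), 0-2 (3): total weight 6.
```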

## Prim’s Algorithm

Prim’s Algorithm also uses a greedy approach to find the minimum spanning tree, but it grows the tree from a starting vertex. Where Kruskal’s algorithm adds an **edge** at each step, Prim’s adds a **vertex** to the growing spanning tree.

**Algorithm Steps:**

- Maintain two disjoint sets of vertices: one containing the vertices already in the growing spanning tree, and one containing those that are not.
- Select the cheapest edge connecting the growing spanning tree to a vertex outside it, and add that vertex to the tree. This can be done with a priority queue: insert the vertices adjacent to the growing spanning tree, keyed by the weight of the connecting edge.
- Avoid cycles by marking the vertices that have already been selected and inserting only unmarked vertices into the priority queue.

Consider the example below:

In Prim’s Algorithm, we start with an arbitrary vertex (it doesn’t matter which one) and mark it. In each iteration we mark a new vertex adjacent to one already marked. Being greedy, the algorithm selects the cheapest available edge, so we first choose the edge with weight 1. In the next iteration we have three options, edges with weights 2, 3, and 4, so we select the edge with weight 2 and mark the new vertex. Now we again have three options, edges with weights 3, 4, and 5, but we cannot choose the edge with weight 3 as it would create a cycle. So we select the edge with weight 4 and end up with a minimum spanning tree of total cost 7 (= 1 + 2 + 4).

**Time Complexity:**

The time complexity of Prim’s Algorithm with a binary-heap priority queue is $O((V + E)\log V)$: each vertex is extracted from the queue once, each edge causes at most one insertion, and each queue operation takes logarithmic time.
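As an implementation sketch (our own, using lazy deletion from Python's `heapq` rather than a decrease-key operation), with the 4-vertex example graph from the walkthrough above:

```python
import heapq

# Prim's algorithm with a priority queue and lazy deletion:
# stale queue entries for already-visited vertices are skipped on pop.
# `graph` is an adjacency list: graph[u] is a list of (weight, v) pairs.
def prim(graph, start=0):
    n = len(graph)
    visited = [False] * n
    pq = [(0, start)]              # (weight of edge into the tree, vertex)
    total = 0
    while pq:
        w, u = heapq.heappop(pq)
        if visited[u]:
            continue               # stale entry; vertex already in the tree
        visited[u] = True
        total += w
        for wv, v in graph[u]:
            if not visited[v]:
                heapq.heappush(pq, (wv, v))
    return total

# The walkthrough's example: edges 0-1 (1), 1-2 (2), 0-2 (3),
# 0-3 (4), 2-3 (5); the MST has cost 1 + 2 + 4 = 7.
graph = [
    [(1, 1), (3, 2), (4, 3)],
    [(1, 0), (2, 2)],
    [(2, 1), (3, 0), (5, 3)],
    [(4, 0), (5, 2)],
]
```

Note that the weight-3 edge 0-2 is popped but skipped because both endpoints are already in the tree — exactly the cycle check described above.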
