## Amortized Analysis

Amortized analysis is a worst-case analysis of a *sequence* of operations, used to obtain a tighter bound on the overall or average cost per operation than would be obtained by analyzing each operation in the sequence separately. For example, when we considered the union and find operations for the disjoint set data abstraction, we were able to bound the running time of individual operations by *O*(log *n*). However, for a sequence of *n* operations, it is possible to obtain a bound tighter than *O*(*n* log *n*) (although that analysis is more appropriate to 4820 than to this course). Here we will consider a simplified version of the hash table problem above, and show that a sequence of *n* insert operations takes *O*(*n*) time overall.

Three main techniques are used for amortized analysis:

- The aggregate method, where the total running time for a sequence of operations is analyzed.
- The accounting (or banker’s) method, where we impose an extra charge on inexpensive operations and use it to pay for expensive operations later on.
- The potential (or physicist’s) method, in which we derive a *potential function* characterizing the amount of extra work we can do in each step. The potential may increase or decrease with each successive operation, but it can never be negative.

Consider an extensible array that can store an arbitrary number of integers, like an ArrayList or Vector in Java. These are implemented in terms of ordinary (non-extensible) arrays. Each `add` operation inserts a new element after all the elements previously inserted. If there are no empty cells left, a new array of double the size is allocated, and all the data from the old array is copied to the corresponding entries in the new array. For instance, consider the following sequence of insertions, starting with an array of length 1:

```
Insert 11   +--+
            |11|
            +--+

Insert 12   +--+--+
            |11|12|
            +--+--+

Insert 13   +--+--+--+--+
            |11|12|13|  |
            +--+--+--+--+

Insert 14   +--+--+--+--+
            |11|12|13|14|
            +--+--+--+--+

Insert 15   +--+--+--+--+--+--+--+--+
            |11|12|13|14|15|  |  |  |
            +--+--+--+--+--+--+--+--+
```
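The doubling strategy above can be sketched in Java as follows. This is a minimal illustration, not the actual `java.util.ArrayList` implementation; the class and method names are made up for this example.

```java
// Minimal extensible array of ints with capacity doubling (illustrative sketch).
class IntArray {
    private int[] data = new int[1]; // start with capacity 1, as in the diagram
    private int size = 0;

    void add(int x) {
        if (size == data.length) {
            // No empty cells left: allocate a new array of double the size
            // and copy the old contents into the corresponding entries.
            int[] bigger = new int[2 * data.length];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = x;
    }

    int get(int i) { return data[i]; }
    int size() { return size; }
    int capacity() { return data.length; }
}
```

After inserting 11 through 15, the capacity is 8, matching the last diagram above.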

The table is doubled in the second, third, and fifth steps. Since each insertion takes *O*(*n*) time in the worst case, a simple analysis would yield a bound of *O*(*n*^{2}) time for *n* insertions. But the situation is not that bad. Let’s analyze a sequence of *n* operations using the three methods.

### Aggregate Method

Let *c*_{i} be the cost of the *i*-th insertion:

$$c_i = \begin{cases} i & \text{if } i-1 \text{ is a power of } 2 \\ 1 & \text{otherwise} \end{cases}$$

Let’s consider the size of the table *s*_{i} and the cost *c*_{i} for the first few insertions in a sequence:

| *i* | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|-----|---|---|---|---|---|---|---|---|---|----|
| *s*_{i} | 1 | 2 | 4 | 4 | 8 | 8 | 8 | 8 | 16 | 16 |
| *c*_{i} | 1 | 2 | 3 | 1 | 5 | 1 | 1 | 1 | 9 | 1 |
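These costs can be checked by instrumenting the doubling strategy to count array writes: one write for the new element, plus one per element copied on a resize. The class and method names here are illustrative, not part of the notes.

```java
// Illustrative cost counter: computes the cost c_i of each of the first n
// insertions into an extensible array of initial capacity 1.
class InsertionCost {
    static int[] costs(int n) {
        int[] cost = new int[n + 1];
        int capacity = 1, size = 0;
        for (int i = 1; i <= n; i++) {
            int c = 1;               // writing the new element itself
            if (size == capacity) {  // table full: copy all existing elements
                c += size;
                capacity *= 2;
            }
            size++;
            cost[i] = c;
        }
        return cost;
    }
}
```

Running this for *n* = 10 reproduces the row of *c*_{i} values in the table.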

Alternatively, we can see that *c*_{i} = 1 + *d*_{i}, where *d*_{i} is the cost of doubling the table size. That is,

$$d_i = \begin{cases} i - 1 & \text{if } i-1 \text{ is a power of } 2 \\ 0 & \text{otherwise} \end{cases}$$

Then, summing over the entire sequence, all the 1’s sum to *O*(*n*), and all the *d*_{i} also sum to *O*(*n*). That is,

$$\sum_{1 \le i \le n} c_i \;\le\; n + \sum_{0 \le j \le m} 2^j$$

where *m* = ⌊log(*n* − 1)⌋. Both terms on the right-hand side of the inequality are *O*(*n*), so the total running time of *n* insertions is *O*(*n*).
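As a sanity check on this bound, we can sum the per-insertion costs directly: the total is the *n* unit writes plus a geometric sum of copies that stays below 2*n*, so the whole sequence costs less than 3*n*. The class and method names are made up for this check.

```java
// Illustrative check of the aggregate bound: total cost of n insertions
// into an initially empty table of capacity 1.
class AggregateBound {
    static long totalCost(int n) {
        long total = 0;
        int capacity = 1, size = 0;
        for (int i = 1; i <= n; i++) {
            total += 1;              // c_i always includes one write
            if (size == capacity) {  // d_i = size copies on each doubling
                total += size;
                capacity *= 2;
            }
            size++;
        }
        return total;
    }
}
```

For *n* = 10, the total is 10 writes plus 1 + 2 + 4 + 8 = 15 copies, i.e. 25 ≤ 3 · 10, and the same 3*n* bound holds for much larger *n*.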
