# Algorithm Complexity: Algorithms and Data Structures for Beginners

Regardless of whether you are a student or a working programmer, and regardless of your field, knowledge of algorithms and data structures is essential. They are the building blocks of problem solving.

Of course, you have probably used a list or a stack, but do you know how they work under the hood? If not, you cannot be confident you are choosing the right algorithm for the job.

Understanding algorithms and data structures begins with the ability to identify and compare their complexity.

## Asymptotic analysis

When we talk about measuring the complexity of algorithms, we mean analyzing how long an algorithm takes on a very large data set. Such analysis is called asymptotic. How long does it take to process an array of ten elements? A thousand? Ten million? If an algorithm processes a thousand items in five milliseconds, what happens when we pass it a million? Will it run for five minutes or five years? Shouldn't you find that out before your customer does?

Scale decides everything!

### Growth order

The order of growth describes how the complexity of an algorithm increases as the size of the input grows. It is most often written in big-O notation *(from the German "Ordnung", meaning order)*: *O(f(n))*, where *f(n)* is a formula expressing the algorithm's complexity and *n* represents the size of the input. Below is a list of the most common orders of growth, though it is by no means complete.

### Constant – *O(1)*

The growth order *O(1)* means that the computational complexity of the algorithm does not depend on the size of the input. Keep in mind, however, that the 1 in the formula does not mean the algorithm runs in a single operation or takes very little time. It may take a microsecond or it may take a year; the point is that this time does not depend on the size of the input.

```csharp
public int GetCount(int[] items)
{
    return items.Length;
}
```

### Linear – *O(n)*

The growth order *O(n)* means that the complexity of the algorithm grows linearly with the size of the input array. If a linear algorithm processes one element in five milliseconds, we can expect it to process a thousand elements in about five seconds.

Such algorithms are easily recognized by the presence of a loop for each element of the input array.

```csharp
public long GetSum(int[] items)
{
    long sum = 0;
    foreach (int i in items)
    {
        sum += i;
    }
    return sum;
}
```

### Logarithmic – *O(log n)*

The growth order *O(log n)* means that the execution time of the algorithm grows logarithmically with the size of the input array. *(Translator's note: in the analysis of algorithms, the base-2 logarithm is used by default.)* Most algorithms that work by repeatedly dividing the problem in half have logarithmic complexity. The `Contains` method of a *binary search tree* also has growth order *O(log n)*.
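
The classic example of a logarithmic algorithm is binary search over a sorted array. A minimal sketch (written here in Java for illustration, since the article later references `ArrayList.add`):

```java
// Binary search over a sorted array: each step halves the remaining
// search range, so the number of comparisons grows as O(log n).
public class BinarySearchDemo {
    static boolean contains(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
            if (sorted[mid] == target) return true;
            if (sorted[mid] < target) lo = mid + 1; // discard the left half
            else hi = mid - 1;                      // discard the right half
        }
        return false;
    }

    public static void main(String[] args) {
        int[] data = {1, 3, 5, 8, 13, 21};
        System.out.println(contains(data, 8)); // true
        System.out.println(contains(data, 7)); // false
    }
}
```

Doubling the array adds only one extra step, which is exactly what *O(log n)* growth means.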

### Linearithmic – *O(n · log n)*

A linearithmic (or linear-logarithmic) algorithm has growth order *O(n · log n)*. Some divide-and-conquer algorithms fall into this category. In the following parts we will see two such examples: merge sort and quicksort.
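
Until we get to those, sorting itself serves as a preview: in Java, for example, `Arrays.sort` on an object array uses a merge-based algorithm with an *O(n · log n)* worst case.

```java
import java.util.Arrays;

// Sorting is the classic O(n·log n) task: for object arrays,
// Arrays.sort uses a merge-based sort (TimSort) that guarantees
// O(n·log n) comparisons even in the worst case.
public class SortDemo {
    public static void main(String[] args) {
        Integer[] items = {5, 2, 8, 1, 9, 3};
        Arrays.sort(items); // O(n·log n)
        System.out.println(Arrays.toString(items)); // [1, 2, 3, 5, 8, 9]
    }
}
```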

### Quadratic – *O(n²)*

The running time of an algorithm with growth order *O(n²)* depends on the square of the size of the input array. Although this is sometimes unavoidable, quadratic complexity is a reason to reconsider the algorithms or data structures in use. The problem is that they do not scale well. For example, if an array of a thousand elements requires 1,000,000 operations, an array of a million elements will require 1,000,000,000,000 operations. If one operation takes a millisecond, a quadratic algorithm will spend about 32 years processing a million elements. Even an algorithm a hundred times faster would still take about 116 days.
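
Quadratic algorithms are easy to recognize by nested loops over the same input. A hypothetical example (the method name is invented for illustration) that compares every pair of elements:

```java
// A duplicate check that compares every pair of elements: the nested
// loops perform roughly n·(n-1)/2 comparisons, which is O(n²).
public class DuplicateDemo {
    static boolean hasDuplicate(int[] items) {
        for (int i = 0; i < items.length; i++) {
            for (int j = i + 1; j < items.length; j++) {
                if (items[i] == items[j]) return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasDuplicate(new int[]{4, 1, 7, 1})); // true
        System.out.println(hasDuplicate(new int[]{4, 1, 7, 2})); // false
    }
}
```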

We will see an example of an algorithm with quadratic complexity when we study bubble sorting.

### Best, Average, and Worst Cases

What do we mean when we say that the growth order of an algorithm's complexity is *O(n)*? Is it the average case? The worst? Or perhaps the best?

This usually refers to the worst case, except when the worst and average cases differ greatly. For example, we will see algorithms that have *O(1)* growth order on average but periodically degrade to *O(n)* (for example, `ArrayList.add`). In such cases we will say that the algorithm runs in constant time on average, and explain when the complexity increases.
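
To see how an occasional *O(n)* step can still average out to constant time, here is a toy growable array, a simplified sketch of what `ArrayList` does internally (the class and its details are invented for illustration):

```java
import java.util.Arrays;

// A toy growable array: most add() calls are O(1), but when the backing
// array is full it is doubled, which costs O(n) for that one call.
// Averaged over many adds, the cost per add is constant (amortized O(1)).
public class GrowableArray {
    private int[] items = new int[4];
    private int size = 0;

    public void add(int value) {
        if (size == items.length) {
            // The rare O(n) step: copy everything into a larger array.
            items = Arrays.copyOf(items, items.length * 2);
        }
        items[size++] = value; // the common O(1) step
    }

    public int size() { return size; }

    public static void main(String[] args) {
        GrowableArray a = new GrowableArray();
        for (int i = 0; i < 100; i++) a.add(i); // triggers only a handful of resizes
        System.out.println(a.size()); // 100
    }
}
```

Doubling the capacity (rather than growing by one) is what keeps the resizes rare enough for the average cost per `add` to stay constant.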

The most important point is that *O(n)* means the algorithm will require **at most** a number of steps proportional to *n*.

### What do we measure?

When measuring the complexity of algorithms and data structures, we usually talk about two things: the number of operations required to complete the work (computational complexity), and the amount of resources, in particular memory, that the algorithm needs (space complexity).

An algorithm that runs ten times faster, but uses ten times more space, may well be suitable for a server machine with large amounts of memory. But on embedded systems, where the amount of memory is limited, such an algorithm cannot be used.

In these articles we will talk about computational complexity, but when considering sorting algorithms we will also address the issue of resources.

Operations whose number we will measure include:

- comparisons ("greater than", "less than", "equal to");
- assignments;
- memory allocations.

Which operations we count is usually clear from the context.

For example, when describing an algorithm that searches for an element in a data structure, we almost certainly mean comparison operations. Searching is primarily a reading process, so there is no point in counting assignments or memory allocations.
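
As an illustration, here is a linear search instrumented to count exactly that operation (the counter and method names are invented for this sketch):

```java
// A linear search instrumented to count comparisons — the operation
// we typically measure when analyzing search algorithms.
public class SearchCost {
    static int comparisons = 0;

    static int indexOf(int[] items, int target) {
        for (int i = 0; i < items.length; i++) {
            comparisons++; // one comparison per element examined
            if (items[i] == target) return i;
        }
        return -1; // worst case: n comparisons, none matched
    }

    public static void main(String[] args) {
        int[] data = {9, 4, 7, 2, 5};
        System.out.println(indexOf(data, 2));  // 3
        System.out.println(comparisons);       // 4 (worst case would be 5)
    }
}
```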

When we talk about sorting, we may count comparisons as well as swaps and assignments. In such cases we will state explicitly which operations we are considering.

## To be continued

This concludes our introduction to algorithm complexity analysis. Next time we will look at our first data structure: the linked list.