Notes > Models of Computation > Program Complexity
Program complexity is an important subject within computer science. The complexity of a program or algorithm is gauged by considering how long the program takes to execute, the amount of resources it consumes and how easy the code is for others to read and understand.
Even simple problems can be solved using different methods, and this applies to programming in the sense that there are different ways of coding a solution to a problem. For example, some problems can be solved by writing many simple lines of code, but the same result could be achieved by a few lines of code that incorporate a loop. Based upon this general principle, the size of a program does not have a direct bearing on its complexity.
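As a sketch of this idea, here are two assumed ways of summing the numbers 1 to 5: one written out line by line, and one using a loop. Both produce the same result.

```python
# Method 1: many simple lines of code, no loop.
total = 0
total = total + 1
total = total + 2
total = total + 3
total = total + 4
total = total + 5

# Method 2: the same result in fewer lines using a loop.
total_loop = 0
for i in range(1, 6):
    total_loop = total_loop + i

print(total, total_loop)  # both are 15
```

The shorter program is not more complex simply because it is shorter; both perform essentially the same work.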
When using loops, it is not uncommon for novice programmers (and experienced programmers) to code a loop that does not terminate. It is vital to get the stopping condition of the loop correct. Using comparison operators such as "<=", ">" and "=" on the loop variable will ensure that the loop terminates when the specified condition has been met, but the terminating condition must actually be attainable. If, for example, "i" is initially assigned the value 100 and is incremented by 1 each time the loop repeats, and the loop continues while i > 50, then this loop will continue indefinitely because the condition is always true.
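A minimal sketch of an attainable stopping condition: here "i" counts down from 100, so the condition i > 50 eventually becomes false and the loop terminates. (The non-terminating version described above would increment "i" instead, keeping the condition true forever.)

```python
# Attainable stopping condition: i decreases, so i > 50 eventually fails.
i = 100
steps = 0
while i > 50:
    i = i - 1
    steps = steps + 1

# The loop body ran 50 times, then stopped.
print(steps)  # 50
```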
Complexity is also related to sorting and searching. Lists of elements can be searched sequentially or by using a method known as a "binary chop" (see also Binary Search). Searching through a list sequentially is not an efficient method when large lists are involved, as there may be millions of elements to examine, which takes a long time.
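A sequential (linear) search can be sketched as follows; the function name is an assumption for illustration. In the worst case every element must be examined.

```python
def linear_search(items, target):
    """Check each element in turn; return its index, or -1 if absent."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1
```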
A binary chop search can be used provided the list is ordered. The binary chop method examines the middle element of the list and determines whether the value being searched for is less than or greater than that middle element. By doing this, half of the elements in the list (those to the left or the right of the middle element) can be eliminated at each step.
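The binary chop described above can be sketched like this, assuming a list sorted in ascending order. Each comparison against the middle element discards half of the remaining range.

```python
def binary_chop(sorted_items, target):
    """Binary search on an ordered list: halve the search range each step."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1   # target can only be in the right half
        else:
            high = mid - 1  # target can only be in the left half
    return -1
```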
Sorting a list of elements can be done using various methods of varying complexity. A "bubble sort" involves comparing two adjacent values in a list and swapping their positions if they are not in the correct relative order, making repeated passes over the list until it is sorted.
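A minimal bubble sort sketch: adjacent pairs are compared and swapped where necessary, and after each full pass the largest remaining value has "bubbled" to the end.

```python
def bubble_sort(items):
    """Repeatedly compare adjacent pairs, swapping those out of order."""
    items = list(items)  # work on a copy
    n = len(items)
    for i in range(n - 1):
        # After pass i, the last i elements are already in place.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items
```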
To find the largest and smallest values in an unordered list, the slide method can be used. This involves taking the first two numbers in the list and setting the bigger of the two as the maximum value and the smaller as the minimum. Each value following these in the list is then compared to the maximum and minimum, and if it is bigger than the max value or smaller than the min value, it replaces the old max/min value. This is repeated until the end of the list is reached, at which point the maximum and minimum values have been obtained.
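The slide method above can be sketched as follows (the function name is an assumption); a single pass over the list suffices.

```python
def slide_min_max(values):
    """Find the minimum and maximum of an unordered list using the
    slide method: seed with the first two values, then compare each
    remaining value against the current max and min."""
    first, second = values[0], values[1]
    maximum = first if first > second else second
    minimum = second if first > second else first
    for value in values[2:]:
        if value > maximum:
            maximum = value
        elif value < minimum:
            minimum = value
    return minimum, maximum
```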
The performance of a program is measured in milliseconds (i.e. its speed of execution), but complexity is expressed using "Big-O" notation. For example, "O(1)" means "order 1" and describes programs in which statements are simply executed one after the other. These programs/methods are described as "constant time" methods.
Where a program contains a loop over its input, it has the order "O(N)" and is said to run in "linear time". Quadratic time methods involve a loop nested within another loop and are given the notation "O(N^2)". Doubling the input of a method with complexity O(N^2) multiplies the number of steps needed to execute the code to completion by four. See Big-O Notation for more information.
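The growth rates above can be sketched by counting loop-body executions (the function name is an assumption): a single loop runs N times, a nested loop N x N times, so doubling N quadruples the quadratic count.

```python
def count_steps(n):
    """Count loop-body executions for a single loop (O(N)) and a
    nested loop (O(N^2)) on an input of size n."""
    linear = 0
    for _ in range(n):          # O(N): runs n times
        linear += 1
    quadratic = 0
    for _ in range(n):          # O(N^2): inner body runs n * n times
        for _ in range(n):
            quadratic += 1
    return linear, quadratic

# Doubling n doubles the linear count but quadruples the quadratic count.
```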
It is worth noting the difference between iteration and recursion. Iteration is the repetition of a sequence of instructions; an example of this is the "do...while" loop. Recursion is where a procedure or function calls itself.
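The distinction can be sketched with two versions of the factorial function (names assumed for illustration): one iterates with a loop, the other calls itself.

```python
def factorial_iterative(n):
    """Iteration: repeat a sequence of instructions with a loop."""
    result = 1
    while n > 1:       # attainable stopping condition
        result = result * n
        n = n - 1
    return result

def factorial_recursive(n):
    """Recursion: the function calls itself with a smaller argument."""
    if n <= 1:         # base case stops the recursion
        return 1
    return n * factorial_recursive(n - 1)
```

Both return the same answer; the recursive version replaces the explicit loop with a base case and a self-call.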