When you are programming, you face challenges all the time. Solving the problem is just the tip of the iceberg; solving it efficiently is the rest.


**Update**

The graphs are gone from this post. I re-made it with more information and images. Check it out by clicking here: Data Structures and Algorithms (DSA) for Beginners


Why should you care about efficiency?

Solutions to the same problem might take years with one algorithm and just minutes with an efficient one. For instance, if you have an application used by thousands of people over the internet, every fraction of a second counts. Therefore, efficient algorithms are a must.

How do I make my algorithms more efficient?

To improve something, you first need to know its current state. In this case, you need to measure the actual performance of your algorithm in order to improve it. It's very common to use running time analysis to measure the speed of algorithms independently of the hardware used (old PC, supercomputer, it doesn't matter).

Run-time analysis

A common way to analyze algorithms is big-O notation. The good thing about this notation is that it is independent of the computer used to run the algorithm. You know that if you run a program on a very slow computer (e.g., a Pentium I) versus a supercomputer used at NASA, the latter will run it much faster. Big-O notation abstracts away the hardware and focuses on the algorithm itself. The expression in big-O notation gives the relative time needed to run an algorithm as a function of the input size n. Let's clarify this with an example.

Ex.1 - You want to sort an array A of n integers. 

Depending on the algorithm used to do that, you may have:

  • selection sort has a running time of O(n^2)
  • merge sort has a running time of O(n log n)

Right now, it doesn't matter if you are not familiar with these algorithms (we will cover them in the next lessons). The point here is that we have n integers, and big-O notation gives us a mathematical expression that is a function of the input size n. If you plot n^2 and n log n on a graph, you'll see that n^2 grows much faster than n log n. That means that, as the size of the array n increases, the n^2 algorithm will take longer to finish than the n log n one.
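To make this concrete, here is a minimal sketch (in Python, my choice for illustration; it is not part of the original post) that times a quadratic selection sort against the built-in O(n log n) sort for growing input sizes:

```python
import random
import time

def selection_sort(arr):
    """O(n^2): for each position, scan the rest of the array for the minimum."""
    a = list(arr)
    n = len(a)
    for i in range(n):
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
    return a

for n in [1_000, 2_000, 4_000, 8_000]:
    data = [random.randint(0, 1_000_000) for _ in range(n)]

    start = time.perf_counter()
    selection_sort(data)
    quadratic = time.perf_counter() - start

    start = time.perf_counter()
    sorted(data)  # built-in sort, O(n log n)
    linearithmic = time.perf_counter() - start

    print(f"n={n:>6}  selection sort: {quadratic:.3f}s  built-in sort: {linearithmic:.4f}s")
```

Doubling n roughly quadruples the selection sort time, while the O(n log n) sort barely moves.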

Common orders of growth

To give you an idea of the common orders of growth of running-time expressions: the slower a function grows, the better the algorithm. In order from best performance to worst:

1 < log n < n < n log n < n^2 < n^3 < 2^n < n! < ...
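If you want to see this ordering with actual numbers, the following sketch (again Python, just for illustration, not from the original post) evaluates each growth function for a few values of n:

```python
import math

growth = [
    ("1",       lambda n: 1),
    ("log n",   lambda n: math.log2(n)),
    ("n",       lambda n: n),
    ("n log n", lambda n: n * math.log2(n)),
    ("n^2",     lambda n: n ** 2),
    ("n^3",     lambda n: n ** 3),
    ("2^n",     lambda n: 2 ** n),
    ("n!",      lambda n: math.factorial(n)),
]

for n in [10, 20, 30]:
    print(f"n = {n}")
    for name, f in growth:
        print(f"  {name:<8} {f(n):,.0f}")
```

Even at n = 30, the exponential and factorial rows dwarf everything above them.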

Approximate growth rate from code

There is a whole theory and a lot of math behind big-O notation and its related notations. For now, just take a look at typical code and its order of growth.
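As a stand-in for the table of typical code patterns, here are small Python snippets (illustrative sketches, not from the original post) annotated with the order of growth each pattern usually has:

```python
def constant(arr):
    # O(1): a fixed number of operations, regardless of input size
    return arr[0]

def linear(arr):
    # O(n): a single pass over the input
    total = 0
    for x in arr:
        total += x
    return total

def quadratic(arr):
    # O(n^2): a nested loop over all pairs of elements
    pairs = []
    for x in arr:
        for y in arr:
            pairs.append((x, y))
    return pairs

def logarithmic(sorted_arr, target):
    # O(log n): binary search halves the remaining range on each step
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        if sorted_arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```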

Cases (the good, the bad, and the ugly)

Remember that n is the number of elements in the input. All these runtime growth rates are a function of the input elements. There is another important thing to consider about the input elements: their order! The order of the input elements matters, and that's why algorithms are analyzed in three different cases (a small example follows the list below):

  1. Worst-case performance: the input is distributed in the worst possible way for the algorithm.
  2. Average-case performance: an approximation of the most common arrangement of inputs.
  3. Best-case performance: the most favorable distribution of the inputs.
  4. One more: space. This is how much memory the algorithm consumes while it executes.
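A simple way to see these cases is linear search: the same algorithm can finish after one comparison or after n comparisons, depending only on how the input is arranged. This sketch (Python, for illustration; not from the original post) makes the distinction explicit:

```python
def linear_search(arr, target):
    """Return the index of target (or -1) along with the number of comparisons made."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = list(range(1, 101))  # 100 elements

print(linear_search(data, 1))    # best case: target is first, 1 comparison
print(linear_search(data, 50))   # average-ish case: about n/2 comparisons
print(linear_search(data, 999))  # worst case: target absent, n comparisons
```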

If you want more depth on these topics, read:

  • Analysis (pdf) (keynote)
  • Algorithm @ ocw.mit.edu: lectures 1 and 2
  • http://algs4.cs.princeton.edu/home/