How does one measure how fast their code is?
One simple way would be to just use a stopwatch and see how long it took to run the code with the sample input.
It’s a good start. You may already have some expectations about the code’s performance, and this simple test can help validate them. But you will soon find that this method doesn’t really scale (no pun intended, seriously).
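For concreteness, the stopwatch approach might look like this in Python; `process` and the sample input here are hypothetical stand-ins for whatever code you want to measure:

```python
import time

def process(data):
    # Hypothetical stand-in for the code under test.
    return sorted(data)

sample_input = list(range(100_000, 0, -1))

start = time.perf_counter()  # monotonic, high-resolution timer
process(sample_input)
elapsed = time.perf_counter() - start
print(f"Ran in {elapsed:.4f} seconds")
```

`time.perf_counter()` is preferable to `time.time()` here because it is monotonic and has the highest available resolution for measuring short intervals.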
So what are the problems with a simple timer?
The most obvious problem is that this approach is machine dependent and therefore unreliable: the results observed on a developer’s laptop will differ from those on a production server.
Another issue is the size of the input used for the test. Run time may not grow linearly with input size, so a test that finishes quickly on a small sample can give false confidence about how the code behaves on larger inputs.
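You can see this nonlinearity by timing the same function at a few input sizes. The function below is a made-up quadratic example (it visits every pair of elements), so doubling the input size should roughly quadruple the run time, not double it:

```python
import time

def pairwise_sums(values):
    # Quadratic work: every pair of elements is visited.
    total = 0
    for a in values:
        for b in values:
            total += a + b
    return total

for n in (500, 1_000, 2_000):
    data = list(range(n))
    start = time.perf_counter()
    pairwise_sums(data)
    elapsed = time.perf_counter() - start
    print(f"n={n:>5}: {elapsed:.3f}s")
```

A stopwatch test at n=500 alone would tell you nothing about the quadratic blow-up waiting at larger inputs.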
Alright, how about using ‘lines of code’ as a measure of speed?
Yeah, no. This is one of those things that sounds like a great idea at first but falls apart very quickly. The number of statements varies with the programming language used, and a verbose language may complete a set of tasks faster than a more concise one despite using more lines of code. Every programmer also has a different programming style. And not all lines of code are equal: they do not affect run time equally.
If the most obvious approaches don’t reveal the performance characteristics of code, what does? This is where the ‘Bigs’ come into play: Big-O (big oh), Big-Ω (big omega), and Big-Θ (big theta). These will help us analyze the run time of code in a meaningful and consistent way. Watch out for the next post in this series, where we will continue this conversation.