Table of contents
Introduction
I had a hard time understanding O(log n)
I'm a high school dropout.
I don't know shit about logarithms.
So I ended up writing this for my own understanding.
Analogy
Let's say you're trying to guess a number between 1 and 1024, and with each guess, I tell you whether the correct number is higher or lower. If you always guess the middle of the remaining range, the game goes like this:
Guess 1: The range is 1-1024, so you guess 512.
Guess 2: The range is now either 1-511 or 513-1024; you guess its middle.
Guess 3: The range is halved again...
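The guessing game above can be put into code. This is just a quick sketch; `countGuesses` is a made-up name for the demo:

```javascript
// Play the guessing game automatically: always guess the middle of the
// remaining range and count how many guesses it takes to hit `secret`.
function countGuesses(secret, low, high) {
  let guesses = 0;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2); // guess the middle
    guesses++;
    if (mid === secret) return guesses; // got it
    if (mid < secret) low = mid + 1;    // I said "higher"
    else high = mid - 1;                // I said "lower"
  }
  return guesses;
}

console.log(countGuesses(512, 1, 1024)); // the middle itself: 1 guess
console.log(countGuesses(1000, 1, 1024)); // still only a handful of guesses
```

No matter which number between 1 and 1024 you pick, it never takes more than about log2(1024) ≈ 10 guesses (11 in the absolute worst case).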
This is logarithmic time. Each operation cuts the problem size in half.
Logarithmic time complexity, O(log n), means that as the input size (n) grows, the number of operations doesn't grow linearly; it grows on a logarithmic scale.
This is why algorithms with logarithmic time complexity are efficient. They're handy when dealing with large data sets.
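To get a feel for how slowly log n grows, you can just ask how many halvings it takes to shrink n down to 1 (Math.log2 gives the answer directly):

```javascript
// Number of times you can halve n before reaching 1 -- that's log2(n).
for (const n of [1024, 1_000_000, 1_000_000_000]) {
  console.log(`n = ${n} -> about ${Math.ceil(Math.log2(n))} halvings`);
}
// A billion items takes only about 30 steps.
```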
Example in JavaScript
Binary search is the classic example. Let's say you have a sorted array of numbers and you want to check whether a specific number exists in it.
function binarySearch(array, target) {
  let left = 0;
  let right = array.length - 1;

  while (left <= right) {
    // Look at the middle of the remaining range.
    let mid = Math.floor((left + right) / 2);
    if (array[mid] === target) {
      return mid; // found it
    }
    if (array[mid] < target) {
      left = mid + 1; // target is in the upper half
    } else {
      right = mid - 1; // target is in the lower half
    }
  }

  return -1; // not in the array
}
On every iteration of the loop, we halve the problem size, exactly like the guessing game.
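You can watch the halving happen by counting loop iterations. Here's a sketch of a variant (`binarySearchCount` is a made-up name for the demo) that returns the step count alongside the index:

```javascript
// Same binary search, but also counting loop iterations,
// to see O(log n) with your own eyes.
function binarySearchCount(array, target) {
  let left = 0;
  let right = array.length - 1;
  let steps = 0;
  while (left <= right) {
    steps++;
    const mid = Math.floor((left + right) / 2);
    if (array[mid] === target) return { index: mid, steps };
    if (array[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return { index: -1, steps };
}

// A sorted array with a million elements: 0, 1, 2, ..., 999999.
const big = Array.from({ length: 1_000_000 }, (_, i) => i);
console.log(binarySearchCount(big, 999_999).steps); // about 20 steps, not a million
```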
Conclusion
It's actually simpler than you'd expect.
All you need is an analogy.