DEV Community
Jared Nielsen

Posted on • Originally published at jarednielsen.com

What is O(log n)? Learn Big O Logarithmic Time Complexity

Is there a computer science topic more terrifying than Big O notation? Don’t let the name scare you: Big O notation is not a big deal. It’s very easy to understand, and you don’t need to be a math whiz to do so. In this tutorial, you’ll learn the fundamentals of Big O logarithmic time complexity with examples in JavaScript.


This is the fourth in a series on Big O notation. If you want to stay in the loop, sign up for my weekly newsletter, The Solution.


What Problem(s) Does Big O Solve?

  • Big O notation helps us answer the question, “Will it scale?”

  • Big O notation equips us with a shared language for discussing performance with other developers (and mathematicians!).

Quick Refresher

If you’re just joining us, you’ll want to start with the first article in this series, What is Big O Notation?

What is Big O?

Big O notation is a system for measuring the rate of growth of an algorithm. Big O notation mathematically describes the complexity of an algorithm in terms of time and space. We don’t measure the speed of an algorithm in seconds (or minutes!). Instead, we measure the number of operations it takes to complete.
The O is short for “Order of”. So, if we’re discussing an algorithm with O(log n), we say its order, or rate of growth, is “log n”: logarithmic complexity.

How Does Big O Work?

Big O notation measures the worst-case scenario.

Why?

Because we don’t know what we don’t know.

We need to know just how poorly our algorithm will perform so we can evaluate other solutions.

The worst-case scenario is also known as the upper bound. When we say upper bound, we mean the maximum number of operations performed by an algorithm.

Remember this table?

| O            | Complexity  | Rate of growth |
|--------------|-------------|----------------|
| O(1)         | constant    | fast           |
| O(log n)     | logarithmic |                |
| O(n)         | linear      |                |
| O(n * log n) | log linear  |                |
| O(n^2)       | quadratic   |                |
| O(n^3)       | cubic       |                |
| O(2^n)       | exponential |                |
| O(n!)        | factorial   | slow           |

It lists common orders by rate of growth, from fastest to slowest.

We learned O(1), or constant time complexity, in What is Big O?, O(n) in Big O Linear Time Complexity, and O(n^2) in Big O Quadratic Time Complexity.

We previously skipped O(log n), logarithmic complexity, because it's easier to understand after learning O(n^2), quadratic time complexity.

Now it's time!

Math O’Clock 🧮 🕝

What is a logarithm?

Logarithms allow us to reverse engineer a power. (Like Kryptonite!) They are the inverse operation of exponentiation.

We can think of logarithms as the opposite operation of exponentiation. Remember this analogy format from standardized tests?

A is to B as X is to Y.

We can say, “Addition is to subtraction as exponentiation is to logarithm.”

We can also say, “Multiplication is to division as exponentiation is to logarithm.”

With quadratic time complexity, we learned that n * n is n^2.

If the value of n is equal to 2, then n^2 is equal to 4.

It then follows that 2 to the third power, 2^3, is equal to 8.

In “long” form, that looks like:

2 * 2 * 2  = 8

What if we were given this problem?

2^y = 8

How would you describe this problem?

We could say, “To what power do we raise 2 to get a product of 8?”

🧐

We could also say, “How many 2’s do we need to multiply together to get 8?”

The notation for the equation is:

log2(8) = y

Where 2 is the base, 8 is the argument, and y is the exponent, or power.

Why is 8 the argument? (See what I did there?)

Because log2() is a function!

🤯

We could describe the solution as, “The base 2 logarithm of 8 is 3.”
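We can check this in JavaScript with the built-in Math.log2(), which computes the base-2 logarithm directly:

```javascript
// Math.log2() answers "to what power do we raise 2 to get x?"
// Exponentiation (**) reverses it.
console.log(Math.log2(8)); // 3
console.log(2 ** 3);       // 8
```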

What do we mean by base?

“In exponentiation, the base is the number b in an expression of the form b^n.”

The three most common bases in regards to logarithms are:

  • 2
  • 10
  • e

In digital electronics and computer science, we (almost always) use base-2, or the binary numeral system.

In base-2, we count with two symbols: 0 and 1.

Look familiar?

It’s Boolean! 👻

Unless you were born with six fingers on each hand, you count in base-10, or the decimal numeral system.

10^2 = 100

10^3 = 1000

etc.

If you really want to nerd out, you can read up on e, or Euler’s number: the unique number whose natural logarithm is equal to one.

☝️ Logarithms are not square roots.

The square root of 8 is 2.82842712475.

The log2 of 8 is 3.

Small difference here, but what if our argument increased?

The square root of 2048 is 45.2548339959.

The log2 of 2048 is 11.
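A quick comparison in JavaScript (using the built-ins Math.sqrt and Math.log2) makes the gap obvious:

```javascript
// The square root grows much faster than the base-2 logarithm
// as the argument increases.
for (const n of [8, 2048]) {
  console.log(`sqrt(${n}) = ${Math.sqrt(n)}, log2(${n}) = ${Math.log2(n)}`);
}
```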

What if we chart a logarithm?

[Chart: graph of a logarithmic function, rising steeply at first and then flattening out]

Aerodynamic!

If we compare logarithmic time complexity to other time complexities on the ubiquitous Big O cheat sheet, what do we see?

[Chart: Big O cheat sheet comparing common time complexities]

As we can see, logarithmic time complexity is very good!

What if the problem was, “How many 3’s do we need to multiply together to get 8?”

The answer is 1.8927892607

If you imagined calculating that number by hand is a tedious process, you’d be right.

That’s why we invented slide rules and fancy calculators.
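If you do want the exact value without a slide rule, here is a minimal sketch using the change-of-base identity, log_b(x) = ln(x) / ln(b). (logBase is a hypothetical helper, not a JavaScript built-in.)

```javascript
// JavaScript ships Math.log (base e), Math.log2, and Math.log10,
// but no arbitrary-base logarithm. Change of base fills the gap:
// log_b(x) = ln(x) / ln(b)
const logBase = (b, x) => Math.log(x) / Math.log(b);

console.log(logBase(3, 8)); // ≈ 1.8927892607
```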

Lucky for us, we don’t need to calculate the exact logarithm.

With Big O, we abstract away the details.

We’re not concerned with the specific implementation of our algorithm.

We’re interested in the order of our algorithm so we can make comparisons and evaluate alternative solutions.

We consider the base of our log a constant, so we drop it, and simply use the following notation:

O(log n)

O(log n): Binary Search

The classic example used to illustrate O(log n) is binary search. Binary search is an algorithm that finds the position of a target value in a sorted array by dividing the input in half with each iteration.

Let’s say we are given the following array and asked to find the position of the number 512:

```javascript
const powers = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512];
```

First, let’s review the brute force solution to this problem.

```javascript
const bruteSearch = (arr, num) => {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === num) {
      return `Found ${num} at ${i}`;
    }
  }
}
```

Let's map the values of i and arr[i] in a table, just for fun:

| i | arr[i] |
|---|--------|
| 0 | 1      |
| 1 | 2      |
| 2 | 4      |
| 3 | 8      |
| 4 | 16     |
| 5 | 32     |
| 6 | 64     |
| 7 | 128    |
| 8 | 256    |
| 9 | 512    |

What’s the Big O of bruteSearch()?

O(n)

To find num, we need to check every item in our array, so the time complexity is linear.
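One way to see the linear growth is to count the comparisons. This sketch (bruteSearchCount is a hypothetical instrumented variant, not from the original post) shows the worst case touching all ten elements:

```javascript
// Same linear scan as bruteSearch, plus a counter: the worst case
// (the target is the last element) performs one comparison per element.
const bruteSearchCount = (arr, num) => {
  let comparisons = 0;
  for (let i = 0; i < arr.length; i++) {
    comparisons++;
    if (arr[i] === num) {
      return { index: i, comparisons };
    }
  }
  return { index: -1, comparisons };
};

const powers = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512];
console.log(bruteSearchCount(powers, 512)); // { index: 9, comparisons: 10 }
```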

Can we do better?

Definitely.

What’s happening in this function?

```javascript
const powers = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512];

const binarySearch = (arr, num) => {
  let startIndex = 0;
  let endIndex = arr.length - 1;
  while (startIndex <= endIndex) {
    let pivot = Math.floor((startIndex + endIndex) / 2);
    if (arr[pivot] === num) {
      return `Found ${num} at ${pivot}`;
    } else if (arr[pivot] < num) {
      startIndex = pivot + 1;
    } else {
      endIndex = pivot - 1;
    }
  }
  return false;
}
```

In the first iteration of our while loop, we create a pivot at the median of our array. We then check num against the value at the array indexed by pivot.

If arr[pivot] is equal to num, we return.

If arr[pivot] is less than num, we change the value of our startIndex to the value of our pivot plus one.

Why?

If the value of arr[pivot] is less than num, we don’t need to check any of the elements below our pivot in the next iteration.

We just halved our input!

n / 2

If arr[pivot] is greater than num, we change the value of our endIndex to the value of our pivot minus one.

If the value of arr[pivot] is greater than num, we don’t need to check any of the elements above our pivot in the next iteration.

In the next iteration, we create a new pivot at the median between our adjusted startIndex and endIndex, and again check the value of arr[pivot] against the value of num.

In the example above, we again halve our input.

What’s half of half?

A quarter.

n / 4

In the third iteration, we halve the input again:n / 8

In the final iteration, we find our number. There’s also nothing left to halve at that point.

What’s the pattern?

Let’s make a table!

| startIndex | endIndex | pivot | # elements | n     |
|------------|----------|-------|------------|-------|
| 0          | 9        | 4     | 10         | n     |
| 5          | 9        | 7     | 5          | n / 2 |
| 8          | 9        | 8     | 2          | n / 4 |
| 9          | 9        | 9     | 1          | n / 8 |

What do you notice about the sequence in the # elements column?

We divide it, more or less, by two with each iteration, until we reach one.

What do you notice about the sequence in the n column?

2

4

8

Powers of two!

By dividing our input with each iteration, we are inverting an exponent.

📝 When you see this pattern where the input is divided with each iteration, it’s O(log n).
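We can make that pattern visible by counting loop iterations. The sketch below (binarySearchCount is a hypothetical instrumented variant of binarySearch, not from the original post) shows that multiplying the input size by 10 adds only a handful of iterations:

```javascript
// Binary search with an iteration counter. For the worst case
// (searching for the last element), the count grows like log2(n).
const binarySearchCount = (arr, num) => {
  let startIndex = 0;
  let endIndex = arr.length - 1;
  let iterations = 0;
  while (startIndex <= endIndex) {
    iterations++;
    const pivot = Math.floor((startIndex + endIndex) / 2);
    if (arr[pivot] === num) {
      return { index: pivot, iterations };
    } else if (arr[pivot] < num) {
      startIndex = pivot + 1;
    } else {
      endIndex = pivot - 1;
    }
  }
  return { index: -1, iterations };
};

for (const size of [10, 100, 1000, 1000000]) {
  const arr = Array.from({ length: size }, (_, i) => i);
  console.log(size, binarySearchCount(arr, size - 1).iterations);
}
// Iteration counts: 4, 7, 10, 20 -- logarithmic growth.
```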

Big O Logarithmic Time Complexity

Does O(log n) scale?

Definitely.

In this tutorial, you learned the fundamentals of Big O logarithmic time complexity with examples in JavaScript. Stay tuned for part five of this series on Big O notation where we’ll look at O(n log n), or log linear time complexity. If you want to stay in the loop, sign up for my weekly newsletter, The Solution.

Top comments (11)
aRtoo:

Big theta and big Omega are upper bounds and lower bounds, right? But in software engineering, we don't mind the upper or lower bounds; that's why we use big O? At least that's what I remember from the book Cracking the Coding Interview. Not sure though. lols

Jason C. McDonald:

Not quite. Big O is the upper bound (longest possible run time), while big-Ω is the lower bound (shortest possible run time). It's big-Θ that's the combination of the two, although it can only be used if the upper and lower bounds are the same.

While big-O is the most commonly useful, since we most often care about the longest possible running time of a particular scenario (e.g. worst-case), it still has a precise meaning.

When we say "the function is at least O(n)", we're almost always misappropriating big-O (upper bound) to describe big-Ω (lower bound).

What's more, when we say "the function always has a runtime of O(n²)", we really mean "Θ(n²)"; it's not just the upper bound, but both bounds at once.

To put that another way, "lower bound" means "a runtime of at least", and "upper bound" means "a runtime no greater than". This has nothing to do with best-case/worst-case, as we are always speaking about the runtime of the worst case, unless explicitly stated otherwise.

Take a look at my comment on that article I linked to for more details.

🦄N B🛡:

// , If it gets really big, it's big-Θ-Lawd: youtube.com/watch?v=zlplCs00wTE

Jason C. McDonald:

"Your time complexity is so bad, not even the TARDIS can handle it."

Harshit:

Nice explanation.

Can you provide some other common problems which can be solved in O(log n)?

TIA

Rob Conery:

O (log n) is extremely common - and in fact the one that will save you the most time if you've ever dealt with a database slowing down. Just plopping an index on a string value will turn it from an O(n) operation (sequential scan) to an O (log n) operation - divide and conquer. Usual caveats here: databases tend to have some tweaks under the covers, but the typical index is BTREE or the like, which is O (log n).

You can quantify the time savings using the math in this post. On 1000 rows, a sequential scan has to do 1000 comparisons to find the record(s) you want. With an index in place, it does log2(1000) comparisons, which is ~10. That's pretty good, and you can see why an index like that would scale really well.

In terms of algorithms you write - you'd be surprised at the optimizations you can get if you're using something like Redis, which doesn't have indexes (unless you create them). Since it's in memory though, you can scan a list for a given value using binary search - pretty damn useful :).

Omicron:

O(log n) is actually not very common.
You have fast exponentiation if you wanna explore.
At best, you'll encounter O(n*log n) in general for algorithms.

Harshit:

Thanks.

Julian Antonielli:

A simple example is binary searching a number in an ordered array.

Mahesh Sv:

Thanks for sharing such a great concept in an easy way. Kudos, Bro 👏
