I think most of the backlash against Twitter is that it’s a poor format for reading long-form writing, the content is usually light or unsubstantiated (rumors), and it’s often just a redirect to a better primary source.

It is a bit weird to have notations such as O(3) = 3. Big-O notation is about asymptotic behavior, so it only really makes sense with a symbolic n.

You do not mention at all that these bounds only hold up to a constant factor, and only for n greater than some threshold, which for example means that the matrix-multiplication algorithms with the best big-O are not the most efficient ones in practice. I guess this is out of a desire to keep it simple, but I’m afraid it could be misleading to some readers.
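The point about constants and thresholds can be sketched numerically (the cost models and constants below are made up for illustration):

```python
import math

# Hypothetical cost models: f has the better big-O but a large constant,
# g has the worse big-O but a small constant.
def f(n):
    return 100 * n * math.log2(n)  # "O(n log n)" with constant 100

def g(n):
    return n * n                   # "O(n^2)" with constant 1

# Below the threshold, the asymptotically worse g is cheaper;
# only for large enough n does the better big-O win.
print(g(10) < f(10))          # True
print(f(10_000) < g(10_000))  # True
```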

Apart from the fact that your log is now in fact n/10, as iswrong said, you also have a typo in your symbolic notation log_n; it should be $log_10(n)$.

Speaking of which, complexity on problems whose input is an integer can be ambiguous. Strictly speaking, the “size” of the input is the log2 of the integer n, so for example, if you loop n times to print something, some may nitpick and say that you have exponential complexity.
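A quick sketch of that ambiguity (the function name is mine): the loop below is O(n) in the value of n, but the input itself is only k = log2(n) bits, so measured against input size the same loop is O(2^k).

```python
def count_to(n):
    # n iterations: linear in the *value* of n
    return sum(1 for _ in range(n))

n = 1024
k = n.bit_length()  # size of the input in bits (k = 11 here)
print(count_to(n))  # 1024
print(2 ** (k - 1) <= n < 2 ** k)  # True: the value n is exponential in k
```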

Your plots of 2^n and log(n) seem a bit off; are they actual plots or just hand-drawn? Log in particular seems very flat, but maybe it’s just the way the plot is scaled.

Thanks Arthur for the feedback. I mostly kept it as simple as possible for the sake of fitting it in an image and keeping it easy to understand at the same time. I agree with all the points you have mentioned and will address them in the associated article. Thank you!

I like how “the most famous example” is listed twice (O(log n)). Might be good if the code example were actually a binary search.

I disagree with the general advice about combining multiple complexities: I prefer to keep terms separate if they are easily identifiable, distinct qualities of the input. For example, a hash table insertion is O(k), not O(1), because the cost of hashing the input is proportional to the number of bits (k) in the input (and it isn’t constant), even though the lookup itself isn’t proportional to the number of elements in the set. If you don’t do this, you lose the ability to see clearly how an “O(1)” algorithm can be beaten so incredibly easily by an “O(n)” one (or even an “O(n^2)” one) in many circumstances, and that can mean the difference between success and failure where I work.
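A minimal sketch of that effect (the data here is made up): for a tiny collection, a linear scan can compete with a hash lookup, because the “O(1)” lookup still has to hash the whole key (O(k) in the key length), while the scan does only a few cheap comparisons. Timings vary by machine, so the snippet prints them rather than asserting a winner.

```python
import timeit

keys = ["user:%d" % i for i in range(8)]  # a tiny collection
target = "user:7"
as_set = set(keys)
as_list = list(keys)

def hash_lookup():
    return target in as_set   # "O(1)", but hashes the whole key first

def linear_scan():
    return target in as_list  # "O(n)" with n = 8 short comparisons

assert hash_lookup() and linear_scan()  # both are correct
print("set: ", timeit.timeit(hash_lookup, number=100_000))
print("list:", timeit.timeit(linear_scan, number=100_000))
```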

if you’re the author, don’t you have a non-twitter link?

Not at the moment - I just made it and tweeted about it. However, I plan on putting it in the “X in One Picture” series on https://roadmap.sh

I posted it elsewhere, for posterity, or something:

https://i.postimg.cc/k9m1CkpF/bigO.jpg

I also wondered whether tweets are endorsed here…

For the most part, Twitter links do pretty well in Lobster.rs: https://lobste.rs/domain/twitter.com

The logarithmic complexity examples on the right are wrong. If T(10) = 1, T(20) = 2, T(30) = 3, T(40) = 4, etc., the running time grows linearly with the input size: if the input size doubles, the running time doubles. In logarithmic time (say, log base 10 for simplicity), you’d expect T(10) = 1, T(100) = 2, T(1000) = 3, etc.

More generally (and I think this is a difference between big-O in theory and how it is used in practice): big-O provides an upper bound on the growth rate of a function. E.g. an algorithm that runs in linear time is also O(n^2). Of course, proving an O(n) upper bound (if possible) is more useful than proving an O(n^2) upper bound.
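The linear-vs-logarithmic distinction can be checked numerically (a quick sketch; the two model functions are mine): under linear growth, doubling n doubles T(n), while under log base 10 growth, multiplying n by 10 only adds 1.

```python
import math

def t_linear(n):
    return n / 10                   # matches T(10)=1, T(20)=2, T(30)=3, ...

def t_log10(n):
    return round(math.log10(n), 9)  # matches T(10)=1, T(100)=2, T(1000)=3, ...

print([t_linear(n) for n in (10, 20, 30, 40)])  # [1.0, 2.0, 3.0, 4.0]
print([t_log10(n) for n in (10, 100, 1000)])    # [1.0, 2.0, 3.0]
```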

Ah, good catch. I will fix it, thank you!

Why aren’t they in sorted order?

Very nice, I like it. I would have also added O(n log(n)) to the examples.
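If O(n log(n)) were added, merge sort would be the classic example; a minimal sketch:

```python
def merge_sort(xs):
    # log n levels of halving, each level merged in O(n): O(n log n) overall.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```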

The text for log n complexity implies that the example is binary search when it is in fact a very contrived example ;)

Perhaps you could fit a binary search implementation in there or change the surrounding text.
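For reference, a binary search compact enough to fit in the image could look like this (a sketch, not the picture's actual code):

```python
def binary_search(xs, target):
    # Halve the search range each step: O(log n) comparisons on a sorted list.
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```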
