We were studying median finding and sorting. The professor asked if we could come up with an algorithm to determine if an array of n elements had no duplicates using only comparisons in better than O(n log n) time. This is a classic problem in Computer Science, dubbed element uniqueness. Surely, as a professor of theoretical computer science, he knew this was impossible.
I, however, did not know it was impossible. I had even come up with an algorithm that I thought worked. I sent an email to the professor containing my algorithm. I omit the email from the body of this post to save myself embarrassment, but needless to say it was clearly wrong in retrospect. He told me to present the algorithm in class, knowing I was attempting the provably impossible.
So what happens? I get up in the front of class and present my bogus algorithm. The professor asks the students if anyone sees anything wrong with the algorithm and then someone comes and points out how it doesn't actually work. I make an ass out of myself.
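For reference, the O(n log n) comparison-based solution is easy: sort the array, then scan adjacent pairs. A minimal sketch (the function name is my own):

```python
def has_duplicates(a):
    """Standard comparison-based check: sort (O(n log n) comparisons),
    then compare each element to its neighbor (O(n) comparisons).
    Any duplicates must land next to each other after sorting."""
    s = sorted(a)
    return any(s[i] == s[i + 1] for i in range(len(s) - 1))
```

The element uniqueness lower bound says that, using only comparisons, you can't do asymptotically better than this.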
Fast forward to my latest semester at NYU, where I decide to take Fundamental Algorithms. This is basically 344 all over again. Although letting my dating life influence my class selection is probably not something I'll do again (I totally wussed out of taking an advanced algorithms class, which would have probably been challenging and worthwhile but taken enough time to make trying to date n girls simultaneously impossible), it wasn't a total waste of time. After dominating the midterm and spotting a couple of errors in the homework writeup, the professor offers to turn the class into an independent study. For topics to study, I suggest the proof of the element uniqueness lower bound in the comparison model of computation. Alan Siegel sends me this nice proof in email.
The E. U. LB is trivial.
You need to know what it says: given a set of n numbers, a
comparison-based algorithm that can solve EU does enough comparisons
to determine the sorted order of the data.
Proof in a nutshell. An adversary covers up the numbers but
performs all comparisons and tells the truth (which can be checked
at termination time).
The data is distinct, so there are no equalities, but the adversary
is free to change the data as long as all of the tested comparisons
are still as stated.
So the EU program runs, decides it is done, and halts with the claim that
the data elts are distinct. Waaalllll ... let each comparison
be a directed edge between data elements. So the EU program builds a
DAG. If the DAG has a unique topological sort, the comparisons
determine the sort. If not, there are two elements that could be equal.
(Run the old-fashioned ready set---blocked set Top Sort). If there are
ever two elts in the ready set they could be made equal since there is
no path from one to the other. (Technically, reducing $x$ just changes
all of its predecessors in the DAG by the same value.)
Basically the idea is that you are an adversary: you force the algorithm to discover enough information to sort the elements, and then you apply the sorting lower bound to show that it must have made Ω(n log n) comparisons. The proof is nice and elegant.
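The adversary's final check can be sketched in code. This is my own illustration, not Siegel's: run Kahn's ready-set topological sort on the comparison DAG, and if the ready set ever holds two vertices at once, there is no path between them, so the adversary could make those two elements equal and the algorithm's claim of distinctness would be wrong.

```python
from collections import defaultdict

def comparisons_determine_sort(n, edges):
    """Treat each tested comparison as a directed edge (u, v) meaning
    element u < element v. Run Kahn's ready-set topological sort; the
    sort is unique iff the ready set never holds more than one vertex.
    If it ever holds two, those elements are incomparable and the
    adversary can set them equal."""
    succ = defaultdict(list)
    indeg = [0] * n
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    ready = [u for u in range(n) if indeg[u] == 0]
    while ready:
        if len(ready) > 1:
            return False  # two incomparable elements could be made equal
        u = ready.pop()
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return True
```

A chain of comparisons 0 < 1 < 2 pins down the order, so the check passes; comparisons 0 < 2 and 1 < 2 leave 0 and 1 incomparable, so it fails.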
So thank you, Martin Farach-Colton, for letting me make a fool out of myself in front of ~100 people, so now I understand a proof that I'll never let myself forget.