"I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years." — Tony Hoare

I haven't done enough type system analysis to be entirely convinced. I do use sentinels and/or the Null Object pattern whenever I can, but that is rather unrelated to the fundamental question of whether null itself is good or bad.
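For anyone unfamiliar with the Null Object pattern mentioned above, here is a minimal sketch (the `Logger` example and all names in it are mine, purely illustrative): instead of returning null and forcing every caller to check for it, you return a do-nothing implementation of the same interface, so calling a method on the result can never throw a NullPointerException.

```java
interface Logger {
    void log(String message);
}

// Real implementation: actually writes output.
class ConsoleLogger implements Logger {
    public void log(String message) {
        System.out.println(message);
    }
}

// The "null object": satisfies the interface but does nothing.
class NullLogger implements Logger {
    public void log(String message) { /* intentionally empty */ }
}

class LoggerFactory {
    // Never returns null, so callers can invoke log() unconditionally.
    static Logger get(boolean loggingEnabled) {
        return loggingEnabled ? new ConsoleLogger() : new NullLogger();
    }
}

public class NullObjectDemo {
    public static void main(String[] args) {
        Logger logger = LoggerFactory.get(false);
        // Safe even though logging is disabled: no null check needed.
        logger.log("never printed, but no NullPointerException either");
        System.out.println("no crash");
    }
}
```

The trade-off, of course, is that a null object silently swallows the "absent" case, which is exactly the kind of thing the talk argues should be made explicit in the type system.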
Watching the video right now. I can already say that I admire him for bravely admitting his mistakes (though I have yet to be convinced that this was one).
Eh, I'm still not convinced. By the same argument made in the final segment: while null is in many ways unsafe, I still think it's useful enough to keep.