
I have been reading about the (in)convenience of having null instead of (for example) Maybe. After reading this article, I am convinced that it would be much better to use Maybe (or something similar). However, I am surprised to see that all the "well-known" imperative or object-oriented programming languages still use null (which allows unchecked access to types that can represent a 'nothing' value), and that Maybe is mostly used in functional programming languages.

As an example, look at the following C# code:

void doSomething(string username)
{
    // Check that username is not null before using it
    if (username == null)
        throw new ArgumentNullException("username");

    // Do something with username
}

Something smells bad here... Why should we have to check whether the argument is null? Shouldn't we be able to assume that every variable holds a reference to an object? As you can see, the problem is that, by definition, almost every variable can hold a null reference. What if we could decide which variables are "nullable" and which are not? That would save us a lot of effort while debugging and hunting down a NullReferenceException.

Imagine that, by default, no type could contain a null reference. Instead, you would explicitly state that a variable can contain a null reference, and only when you really need it. That is the idea behind Maybe. If you have a function that fails in some cases (for example, division by zero), you could return a Maybe<int>, stating explicitly that the result may be an int, but may also be nothing! This is one reason to prefer Maybe over null. If you are interested in more examples, I suggest reading this article.
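To make this concrete, here is a minimal sketch of what such a type could look like. C# has no built-in Maybe<T>, so the type, the Divide function and all the names below are purely illustrative assumptions, not an existing API:

using System;

// A minimal, hypothetical Maybe<T>; C# has no such type built in.
public struct Maybe<T>
{
    private readonly T value;
    private readonly bool hasValue;

    private Maybe(T value)
    {
        this.value = value;
        this.hasValue = true;
    }

    public static Maybe<T> Just(T value) { return new Maybe<T>(value); }
    public static Maybe<T> Nothing { get { return default(Maybe<T>); } }

    // The only way to reach the value is to handle both cases.
    public TResult Match<TResult>(Func<T, TResult> just, Func<TResult> nothing)
    {
        return hasValue ? just(value) : nothing();
    }
}

public static class Example
{
    // The return type states explicitly that the result may be an int, but also nothing.
    public static Maybe<int> Divide(int dividend, int divisor)
    {
        return divisor == 0 ? Maybe<int>.Nothing : Maybe<int>.Just(dividend / divisor);
    }

    public static void Main()
    {
        string message = Divide(10, 0).Match(
            just: n => "Result: " + n,
            nothing: () => "Division failed");

        Console.WriteLine(message); // prints "Division failed"
    }
}

The important part is not this particular implementation, but that the "might be nothing" case shows up in the signature, so the caller cannot ignore it by accident.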

The fact is that, despite the disadvantages of making most types nullable by default, most OO programming languages actually do it. That is why I wonder:

  • What kind of arguments are there for implementing null in your programming language instead of Maybe? Are there reasons at all, or is it just "historical baggage"?

Please ensure you understand the difference between null and Maybe before answering this question.

aochagavia
  • I suggest reading up about Tony Hoare, in particular his billion dollar mistake. – Oded Dec 15 '13 at 14:38
  • I have, but I suppose that there are also more "rational" arguments, rather than just the fact that "it was so easy to implement". – aochagavia Dec 15 '13 at 14:50
  • Well, that was his reason. And the unfortunate result is that it was a mistake copied to most languages that followed, up to today. There are languages where null or its concept do not exist (IIRC Haskell is one such example). – Oded Dec 15 '13 at 14:53
  • You are right in that Haskell uses Maybe instead of null, but that is a functional language. I wonder if the creators of Java or C# used the null reference because it was easy... it sounds so mediocre... That is why I think that some people have good arguments for it. – aochagavia Dec 15 '13 at 15:00
  • Historical baggage is not something to underestimate. And remember that the OSes that these build on were written in languages that have had nulls in them for a long time. It isn't easy to just drop that. – Oded Dec 15 '13 at 15:01
  • So do you think that they just didn't see it as an issue? That they thought it was simply "normal"? – aochagavia Dec 15 '13 at 15:02
  • Frankly, I don't know when Tony Hoare made that statement; many languages probably predate the press it got and therefore didn't consider it. – Oded Dec 15 '13 at 15:04
  • As null is a value and Maybe a type, you are comparing apples with oranges. Perhaps you should ask why there are languages that allow unchecked access to types that can represent a 'nothing' value (called null or Nothing or whatever). – Bart van Ingen Schenau Dec 15 '13 at 15:18
  • That is actually the question... I am going to edit the title – aochagavia Dec 15 '13 at 15:19
  • I think you should shed some light on the concept of Maybe instead of posting a link. – JeffO Dec 15 '13 at 15:21
  • I doubt that I can explain it better than the author of the linked article, but I will try. – aochagavia Dec 15 '13 at 15:27
  • 1) Please don't significantly change your question after you have already received meaningful answers; your edits partially invalidate the answers that have been provided. 2) I believe you should be checking for an empty string, not a null string, in your example. string and int are primitives, not reference values; you would need to explicitly declare those as nullable in your example. – GlenH7 Dec 15 '13 at 15:50
  • @GlenH7 Strings in C# are reference values (they can be null). I know that int is a primitive value, but it was useful to show the use of Maybe. – aochagavia Dec 15 '13 at 15:55
  • @GlenH7 Java Strings are also reference values, as are Integer (although not int) – Izkata Dec 15 '13 at 15:56
  • Null reference may not be a mistake: https://yinwang0.wordpress.com/2013/06/03/null/ – Daniel Little Feb 18 '14 at 06:28
  • There is also the side point that to work effectively with "null" you need identity checks and branching. To work effectively with Maybe (That is, idiomatically, not just the bare minimum) you need parameterised types, higher order functions, pattern matching, some kind of concept of a monadic functor, so on. Maybe is a much better solution, but it's also a vastly more complex (conceptually) one. – Phoshi Feb 18 '14 at 11:38
  • Why do you assert that very common languages allow unchecked reading of a null? Most don't. Java and C# in particular guarantee that you'll get an exception if you try; the access is formally checked. – Donal Fellows May 03 '14 at 18:01
  • When I say "unchecked" I mean mainly that the type system will not force you to check them explicitly before using them (see the sketch after these comments). – aochagavia May 03 '14 at 20:07
  • Eric Lippert says that the choice of having a null value in C# was in part motivated by interop convenience, and - even with 20/20 hindsight - not a mistake, but a convenience feature. I disagree with this point of view, but I'm nowhere near Lippert in terms of experience, insight and expertise. – Martijn Nov 10 '14 at 11:35