
21 Mar 2010, 00:01
Edward Ocampo-Gooding (2 posts)

This comment is in regard to the article by Paolo Perrotta, and basically asserts that programs shouldn’t use Black Holes, because Ruby lacks the static analysis needed to counter what you lose in debugging techniques when you follow the article’s recommendation.

Having thought just last Friday about making nil eat all methods sent to it (i.e., treating nils as “Black Holes” according to Perrotta’s article), I found this article particularly interesting.

I like how Smalltalk and Objective-C have this behaviour, because it means you can skip writing extra conditional logic to prevent accidentally causing a NoMethodError exception, just as this article mentioned.
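For anyone who hasn’t read the article, here’s a minimal sketch of what the Black Hole technique might look like in Ruby: reopen NilClass so that any unknown message sent to nil simply returns nil again, letting you chain calls without guard clauses. (The `user`/`address`/`city` names below are made up for illustration.)

```ruby
# Reopen NilClass so nil swallows every unknown message and
# returns nil, behaving like a "Black Hole".
class NilClass
  def method_missing(name, *args, &block)
    nil # stay a Black Hole instead of raising NoMethodError
  end

  def respond_to_missing?(name, include_private = false)
    true
  end
end

# With the patch in place, chained calls on nil no longer need
# conditional nil-checks at each step:
user = nil
city = user.address.city # nil all the way down, no NoMethodError
```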

However, after discussing this with some other folks at Shopify (where I work), Tobi Lütke pointed out that NoMethodErrors are an essential part of debugging your Ruby code, which Perrotta acknowledges in his conclusion:

bq. On the other hand, Black Holes can cause their own share of trouble. After all, NoMethodErrors are there for a reason: they help you spot bugs in your code. A Black Hole can mask those bugs and make it more difficult for you to notice when things go wrong.
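To make the bug-masking concrete, here’s a small sketch of the failure mode the article warns about, with invented names: a lookup that unexpectedly returns nil goes completely unnoticed once the Black Hole patch is in place, because the nil just propagates instead of raising at the source of the bug.

```ruby
# The Black Hole patch, as in the article:
class NilClass
  def method_missing(*)
    nil
  end
end

# A stand-in for a lookup that unexpectedly failed:
def find_user(id)
  nil
end

# Without the patch this line would raise NoMethodError right at
# the bug. With it, the nil silently flows onward:
name = find_user(42).name.upcase

# ...and the program keeps running with quietly wrong output:
greeting = "Hello, #{name}!"
```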

Perrotta’s article then asserts that you should give them a shot anyway and decide whether they work for you, because “After all, there is at least one popular language (Objective C) where all null references are Black Holes by default.”

Here’s my point of contention: a language like Objective-C supports this behaviour by default because it makes up for not having NoMethodErrors by having some form of static analysis tell you that there’s a possible bug in your code. Because Ruby doesn’t support that sort of thing yet, finding bugs becomes much more difficult, and that cost isn’t worth the benefits gained from using this technique.

Edward Ocampo-Gooding

PS – Could articles in the future please feature an obvious way to respond to the author, even indirectly?

22 Mar 2010, 13:55
Michael Swaine (69 posts)

Edward, yes we can provide a means of responding to each author, even if indirectly, and I’ll implement it in the April issue. Meanwhile, I’ll ping Paolo about your post here. -Mike

23 Mar 2010, 00:05
Paolo Perrotta (52 posts)

Hello, Edward. Static analysis can only go so far. It can warn you about obvious uninitialized references, but as soon as you add some indirection (for example, you call a method with a null argument), static analysis fails to spot the problem. If it could spot the problem, then the entire Black Hole discussion would be a moot point, because you’d be unlikely to ever call a Black Hole anyway.
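Paolo’s point about indirection can be sketched even in plain Ruby (no static analysis involved): the nil is created in one place, but the failure only surfaces inside another method, far from the actual bug. This is exactly the shape of problem that a checker flagging only obviously uninitialized references would miss.

```ruby
def shout(message)
  message.upcase # the NoMethodError is raised here...
end

broken = nil # ...but the actual bug is here

begin
  shout(broken)
rescue NoMethodError => e
  # At least the exception and its backtrace point back through
  # shout toward the caller; a Black Hole nil would hide even that.
  puts e.message
end
```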

As an example, Java is one of the languages that sports the most robust static-analysis environments, and yet unexpected NullPointerExceptions are the bane of Java programmers. Some people would even contend that null references violate Java’s typing assumptions, because there is basically a special “null reference” type that doesn’t respect the interface it claims to have. (I’m not steeped in Objective-C, so I’ll accept any information on that specific language as a learning opportunity.)

However, even if we can discuss specific points of contention, I do substantially agree with you: I very rarely use Black Holes, and I don’t think they’re a good idea in general. In the article, I tried hard not to be prescriptive, and instead to push the reader into thinking hard about the trade-offs of Black Holes. I’d rather have people make up their own minds about these techniques than just provide a recipe.

24 Mar 2010, 20:20
Edward Ocampo-Gooding (2 posts)

Mike: thanks!

Paolo: Thanks for following up and for writing the article in the first place. It’s a really interesting topic.

I’ll point my coworker, who works in Objective-C/J all day, to this article and thread, and see what he comes up with.
