Ioke mocking, Mocha as exemplar

A little while back I volunteered to work on a mocking framework for Ioke when my friend Carlos suggested it might be helpful. I’m a bit embarrassed that I don’t have more to show for myself after all of my flailing about, but in my defense, it’s been a spare-time sort of a thing. If you’d like to follow my progress, you can do so on my GitHub fork of Ioke.

Building a mocking framework for Ioke has raised a number of interesting questions, which I’ll run through here. If you have any suggestions or questions, I’d love to hear them.

Is full-bore mocking worth it when prototyping makes it so easy to stub?

Ioke is a prototype-based programming language: any object can mimic any other object. You can declare a constant by capitalizing the name of your mimic (Foo = Origin mimic) or an “instance” of that constant by lowercasing the reference (someFoo = Foo mimic). Mimicking an object adopts all of its cells, which is to say, its data and behaviors.
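
To make that concrete, here’s a toy example of my own (the names are invented, but the behavior is just Ioke’s ordinary cell lookup):

Foo = Origin mimic
Foo name = "A name"
someFoo = Foo mimic

someFoo name                  ;; "A name", found on Foo via the mimic chain
someFoo name = "Another name" ;; shadows Foo's cell on someFoo alone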

If you would like to instantiate an object, you need to ensure that the mimic begins life with unique data structures, because if you don’t, it will share the same references as its origin. The convention in Ioke as it currently stands is to declare a create method: someFoo = Foo create. That method will often look something like this:

Foo = Origin mimic do(
  create = method("Creates a new Foo", self with(objects: [], name: "A name"))
)

The advantage of this approach is that stubbing is trivially easy: you can mimic any object, any time, and replacing the behavior of one of its cells is as simple as redeclaring it:

someFoo = Foo create          ;; someFoo objects is now an empty list
someFoo objects = set()       ;; someFoo objects is now a set
someFoo objects = method(...) ;; And so on.

So, why do you need a mocking framework?

Steve Yegge, in his article about what he terms the universal design pattern, praises JavaScript (a similarly prototype-based programming language) for making it so easy to test Java classes. This doesn’t quite get at the whole story; mocking frameworks offer at least three distinct advantages over simply replacing the method call:

  • They can set precise expectations in terms of method arity, arguments, and order of invocation, though I confess that I don’t often use the more advanced features of most mocking frameworks.
  • They can enforce certain rules about the mockability of particular cells; for example, that you don’t set an expectation on a cell that doesn’t actually exist. I wish I used these features much, much more often than I do.
  • They can restore the original cell (method) definition when the test has finished running, which is extremely important if you want to stub out behaviors related to domain models that might be reused in other tests. (There’s a sketch of this bookkeeping just below.)

All of those seem like things worth having.
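
That last point deserves a sketch, because it’s exactly the bookkeeping a framework quietly does for you on every expectation. This is my own illustration, not framework code, and it assumes Ioke’s cell(:name) reflection reads and writes a cell without activating it:

original = someFoo cell(:objects)  ;; grab the current definition without invoking it
someFoo objects = method(set())    ;; stub it out for the duration of the test
;; ... exercise the code under test ...
someFoo cell(:objects) = original  ;; put the original definition back

Forget that last line in a hand-rolled stub and every later test that touches someFoo inherits your fake.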

Should the existence of macros suggest a different API for mocks?

I started out by following a fairly conventional syntax:

foo should receive("bar") with(:qux) andReturn(5)

Ola suggested that I deploy a little macro-fu in support of readability, and that’s the way it stands right now:

foo should receive bar(:qux) andReturn(5)

One could imagine further refinements on the form; if you have any, as always, drop me a line.

This syntax bothers me a bit because it runs so counter to what I’m used to seeing in languages where every argument is eagerly evaluated. In the above example, we expect the cell “bar” to be invoked with an argument of :qux. Now, clearly we don’t actually want to invoke the bar method directly; it doesn’t exist yet, certainly not as a cell on any receiver that makes sense. But what about :qux? What if the above had been written as:

expectedArgument = :qux
foo should receive bar(expectedArgument) andReturn(5)

Clearly we need to evaluate the arguments to that method call but leave the call itself alone. Weird? I don’t know. You tell me.
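
For the curious, the shape of the trick looks roughly like this. It’s a sketch, not my actual implementation; the names are mine, and I’m assuming call ground and Message evaluateOn behave as I remember:

receive = macro(
  expectation = call arguments first  ;; the bar(:qux) message itself, never sent
  cellName = expectation name         ;; :bar, the cell we expect to be invoked
  args = expectation arguments map(a, a evaluateOn(call ground)) ;; [:qux], eagerly evaluated
  ;; ... record cellName and args as an expectation on the receiver ...
)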

What inspiration can we draw from frameworks in non-prototype languages?

I’ve found that I do my best learning through experience, and so didn’t make a serious attempt to understand the internals of other mocking frameworks; having used them extensively, I figured I had a pretty good idea of their capabilities, and so dove in unencumbered by reason, research, or even a decent working knowledge of Ioke. It’s been tremendous fun, and as usual I’ve gone through my share of pain, but one of the nicest parts has been actually spending time inside Mocha itself after it became clear that I needed a bit of guidance.

It turns out that prototyping doesn’t get you all the way there precisely because there are some features of a mocking framework, as suggested above, that require some deeper thinking about expectations. How do you know if one is satisfied? How do you pick from competing calls with different argument expectations? Multiple or infinitely allowed invocations? Sequenced return values? Block yields? And so forth.
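
Mocha answers each of those with a dedicated piece of API, which is part of why I went digging. A few examples from memory (foo and the method names are mine; this runs inside a test case, so double-check against your version):

foo = mock('foo')
foo.expects(:bar).with(:qux).returns(5)       # argument matching picks among competing expectations
foo.expects(:baz).twice.returns(1, 2)         # bounded invocation counts, sequenced return values
foo.stubs(:quux).returns(3)                   # stubs permit any number of invocations
foo.expects(:each).multiple_yields([1], [2])  # block yields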

Although there are pieces of it that feel vaguely stitched-together, as though Mocha and Stubba never quite met at the seams, it’s been fun code to read. Mocha’s a good example of a library that does something complex in a reasonably straightforward manner. The methods are brief and readable, and though there’s a bit more indirection than I’d like (what’s the difference between verified? and satisfied? What does it mean to match? or invoke? How do you pick apart three different mock methods in three different places?) I’d happily recommend it to others as a good example of how to pull off some hairy Ruby functionality without writing a whole lot of hairy Ruby code in the process.

What do I mean by that? Here’s the main expects method that gets mixed into the Object class:

def expects(symbol)
  mockery = Mocha::Mockery.instance                  # the global registry of mocks and stubbed methods
  mockery.on_stubbing(self, symbol)                  # applies the configured checks, e.g. stubbing nonexistent methods
  method = stubba_method.new(stubba_object, symbol)  # wraps the real method so it can be swapped out
  mockery.stubba.stub(method)                        # replaces it, remembering the original for later restoration
  mocha.expects(symbol, caller)                      # registers the expectation on this object's mock proxy
end

Each one of those lines is pretty dense: it requires a lot of backtracking to understand why it’s there and what it does. But there aren’t many of them, and they can all be explored methodically, and understanding one leads to understanding the next. Much of Mocha is like that: moderately sized, neither opaque in its density nor transparent in its verbosity. Nice, I suppose, for my own definition of nice.

I’ve also learned a lot about the framework that I didn’t know before (for example, I didn’t know that it was possible to configure Mocha to warn against or even disallow mocking nonexistent methods) and will bring that knowledge with me to future projects.
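
In the Mocha of this era, that configuration looks roughly like this (again from memory, so verify against your version):

require 'mocha'

# Warn when a test stubs a method the real object doesn't define...
Mocha::Configuration.warn_when(:stubbing_non_existent_method)

# ...or forbid it outright, failing the test instead.
Mocha::Configuration.prevent(:stubbing_non_existent_method)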

But I don’t regret not having read the source first. Trying and failing gave me the context I needed to understand why Mocha made some of the choices it did, even if I don’t agree with all of them, and in the coming days and weeks I’ll be stealing, er, drawing inspiration from most of them. Thanks and kudos, guys.
