Wednesday, December 22, 2004

Taste-Driven Development


Having used Test-Driven Development for several years now, I am as convinced as I have ever been of its benefits in producing high-quality, maintainable code. One thing that bothers me, though, is that TDD itself provides no forces to ensure that the small pieces you build are assembled into the higher-order structures you intended. For example, you can write a clean, concise, fully tested set of classes in which your domain model directly accesses the custom tags of your web app. Of course nobody interested enough in software to read this would ever do such a thing, but the point remains: IDEs will helpfully import any class from anywhere without requiring you to think about whether doing so violates good taste in the form of layering, abstraction or anything else, and with the promiscuous coupling they encourage we seem to be constantly fighting against randomly coupled internal workings in good-sized systems.


So I find I'm constantly looking for ways to take what we do at the class level with TDD and do it at the package/component level; that is, to allow the good taste desired in the design to be formally expressed and tested with every build. Developers who aren't yet up to speed on the complexities of the system they're working on would be protected from violating its conventions, and the emergent design could flourish in a controlled way. On our current project we use a few things to help, such as simian, jdepend, ydoc, lots of checkstyle checks aimed at code quality and the usual test coverage stuff. But a few of us think that what the world really needs is yet another open source java development tool. Oh, no, you say, not another one. Yes, I'm afraid, it's another one. But this one is different - this one uses a business rule engine (drools) to allow you to declare your idea of good taste in a text file and enforce it for your code base. This one even has a user guide. No, seriously. So if design is your bag, have a look at joodi and let me know what you think. It only has a few rules in it at the moment, so it's not overly complicated.


Oh, by the way, if you think the java runtime itself would show good taste, check out what joodi has to say about rt.jar in the sample reports section of the user guide...you might be surprised!


Here's an example rule from the joodi rule file. This one expresses the simple good taste that cyclic package dependence is bad. It's kind of cryptic at first glance, but it basically says that if there exists a namespace (i.e. a java package) that depends on itself (dependence being part of the joodi fact model when your code is analysed), then create a notification including the cause of the problem so that the report provides full traceability for each notification produced. Using joodi is pretty much down to writing rules like this that express what good taste means to you and your project. Welcome to Taste-Driven Development!



<rule name="Cyclic Namespace Dependence Is Bad">
    <parameter identifier="namespace">
        <java:class>Namespace</java:class>
    </parameter>
    <java:condition>
        namespace.dependsOnNamespace(namespace) != null
    </java:condition>
    <java:consequence>
        drools.assertObject(new CyclicNamespaceUseNotification(namespace.dependsOnNamespace(namespace)));
    </java:consequence>
</rule>
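
Just to give a flavour of where you can take it, here's the kind of layering rule you might write next. Fair warning: this one is only a sketch, not a rule that ships with joodi - the getName() accessor and the LayeringViolationNotification class are assumptions about the fact model made for illustration, and the package names are made up; only dependsOnNamespace() appears in the real rule above.

<!-- A sketch of a layering rule. getName() and LayeringViolationNotification
     are assumed here for illustration, not part of joodi today. -->
<rule name="Domain Must Not Depend On Web">
    <parameter identifier="domain">
        <java:class>Namespace</java:class>
    </parameter>
    <parameter identifier="web">
        <java:class>Namespace</java:class>
    </parameter>
    <java:condition>
        domain.getName().startsWith("myapp.domain")
    </java:condition>
    <java:condition>
        web.getName().startsWith("myapp.web")
    </java:condition>
    <java:condition>
        domain.dependsOnNamespace(web) != null
    </java:condition>
    <java:consequence>
        drools.assertObject(new LayeringViolationNotification(domain.dependsOnNamespace(web)));
    </java:consequence>
</rule>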




Friday, December 10, 2004

Agile Requirements

I've been doing the odd dog-and-pony show presenting on the topic of agile development recently, which is always a good way to clarify your thoughts and force a few ideas to gel. What's been on my mind is the requirements-gathering side of an agile project. Having just delivered a good-sized agile project successfully, I'm convinced that the quality of our requirements gathering and the rigour and discipline of our scope management were among the absolute keys.

Sunday, November 21, 2004

Re-routing CVS repositories

I'm putting this here to save myself the hassle of figuring out how to do it on the rare occasions I need to re-route a CVS sandbox to a different repository without releasing and checking out from the new repository. So if anyone else is wondering how to do it (you'll need cygwin on Windows), run this command from the top of your checkout tree:
find . -name Root -exec sed -i.bak -e 's:/old/path:/new/path:' {} \;

You will also need to be careful if you use a Windows CVS server like CVSNT, since using ':' as the sed separator means you will need to escape any colon in your repository path. By the way, this will make a backup of each Root file called Root.bak, just in case you don't trust it (and why should you...:P)

Thursday, November 18, 2004

Star Wars on the (really) small screen

I found another reason to dig my Pocket PC. My usual top reasons are talking books from audible.com and reading electronic books (in really big fonts!) with the Acrobat Reader for Pocket PC.

But having picked up a 1Gb SD card for next to nothing, I thought I'd investigate the supposedly forthcoming phenomenon of pocket movies. Microsoft has been making some noise about their new portable devices that play movies, but you don't need one of those. You just need a copy of this software and you're done. In about the time it takes to watch the film, it's converted to a 250Mb AVI at 320x240 resolution. That's 15 frames per second at a resolution that's higher than your TV's, so it's pretty darn watchable. There is a slight weirdness to the screen when rotated to watch in landscape mode, probably because the RGB subpixels on the LCD are up/down aligned in this mode, but you get used to it.

So having recently picked up the Star Wars box set to relive my childhood like all the other 30-something males on the planet, but not having time to scratch myself like all the other 30-something males on the planet, I can now squeeze this necessary pastime into my morning commute and leave all the iPod weenies wondering why exactly they think they're so cool. Rock on.

[Image: star_wars.JPG]


Why I Hate Domain-Driven Design

I was feeling pretty good about myself, I have to admit. Having just delivered a system that took a year of a dozen developers' lives to complete, with the full XP experience working a treat (on time, on budget, happy customer, etc, etc), I thought I'd finally get my life back and start catching up on all the reading I don't do when it's head down, bum up on a project. High on my list was Domain-Driven Design by Eric Evans, as it had been highly recommended by people I respect a lot. So now I'm depressed. It is such a fine book - no, such a truly mind-expanding, insightful, gee-now-I-feel-stupid kind of book - that I now just want to go back and do the whole damn project again. And this time I'd do it properly. Dammit.

If you'd like to share the joy, as usual my recommendation is to get it on the cheap at Safari.

Thursday, October 7, 2004

Power Pointers

I often get asked how I go about doing a technical presentation, so I thought I'd put down a few pointers for those who would like to do some presenting but think it's all a bit intimidating.

  1. Have something you want to say. This is generally not a problem for most serious geeks, although it's surprisingly common to find people who have incredible experience and insight but think nobody is interested in what they might say. That is almost never the case, since all of us are sponges for good stuff of all kinds.
  2. Care about your audience. It's important to really want to connect with the audience and make them feel important. Even in a stand and deliver situation, I often survey the audience to get a feel for their likes, dislikes and interests. This can help you focus on what they need and want over what you thought they might like.
  3. Don't start in PowerPoint. I never, ever begin my presentations by opening PowerPoint and looking at a blank slide. I always open a Word document instead and switch to Outline View. Then I just start brainstorming, imagining I'm having a one-on-one conversation with someone, explaining the topic at hand. No structure is present yet. It's all about noting down all the things you might want to say first. Then it's a matter of learning to use the outline view's features of indenting, outdenting, dragging and dropping to apply some structure to the thoughts. This is very much like the test, code, refactor cycle in development of code. The test is that you can explain the idea. The code is recording the points you need to do that. The refactoring is cleaning up the structure. And just like coding, it works really well if you treat these as distinct tasks and switch between them cleanly.
  4. Generate the slideshow from the outline. Once you're done in Word, just hit File->Send To->PowerPoint and voila! All your top-level headings become slides, and the subpoints become bullet points underneath them. You can then click Format->Apply Design Template, go looking for a pre-canned slideshow and bingo! Your show is just about done without a single tedious minute in PowerPoint itself. If you have time, add some pictures and animation, but remember they're the icing, not the cake.
  5. Less is More. I much prefer slides with fewer words on them. I do not consider a slide to be of any value without me in front of it to turn it into the presentation experience. Resist the temptation to turn a slideshow into a whitepaper. It's just there to remind you what you wanted to say and help the audience to follow; it's not there to say it for you!
  6. Remember that nobody finds it easy. Public speaking is daunting for everyone, and I do mean everyone. I sweat bullets for days before an important presentation, but once it starts it's almost like it's someone else doing it, not me, and in a way it's easy. If you're serious, look into Toastmasters for some training (I did it 20 years ago and am still grateful) or try your hand at a low-risk venue like a user group or other friendly and supportive environment.
  7. Have fun!

 



Wednesday, September 15, 2004

Smell the Tests!

When you are faced with a team that's new to agile development, it's common to spend a lot of time driving home the need to test everything. Along with refactoring, test-driven (or at least test-conscious) development is the most basic skill the aspiring extreme programmer must acquire. We strive for testability in design and the cheapest, most effective quality we can get. So is there such a thing as bad testing? Can tests hinder more than help? Can tests just be a downright pain in the rear?

The answer is a resounding YES! YES! YES!

Just because you code in an object-oriented language does not guarantee that what you write will contain any object-oriented characteristics whatsoever. Similarly, just because you write JUnit test cases does not guarantee that what you produce will display any of the qualities of an effective and adaptable test suite. Naive application of testing frameworks has the same potential for developers to exercise their bad judgement and bad taste as any other technology. Now, while I'd certainly rather have a project with lots of less-than-ideal tests than no tests at all, we need to learn to 'smell the tests' to really get the most out of our (and our customers') investment in testing.

So here are some of the smells that tests can emit. I'm sure you can think of more, so please drop me a line so I can share your pain...:P

  • No statement of intent. If I have a Widget with a getPrice() method, I often see a testGetPrice() test method, and my first thought is "test getPrice() does what?". Prefer names for tests that express the intention of the test, such as testGetPriceReturnsZeroWhenNotInitialisedExplicitly() (there's a small sketch of this after the list). There are tools that can turn your test names into human-readable documentation, so don't be afraid of full-length sentences as test method names.
  • Irrelevant assertions. It is common (particularly in functional tests) for a test that is concerned with proving a small incremental behaviour to make irrelevant assertions about all sorts of other fields, even on other screens. Prefer semantic generalisations over value-specific assertions. For example, if your test is not concerned with the actual price algorithm, don't assertEquals("46.12", widget.getPrice()) when you can say assertNonZeroMonetaryValue(widget.getPrice()) - even though it's a bit more work at first.
  • Setup leakage. It is extremely common for developers new to testing to struggle to distinguish the concerns of their test from the setup steps required to get to the point where the test can begin. The newly test-infected developer can thus create maintenance burdens and simply waste time reinventing the testing wheel with every test case. Strive for a clear separation between setup and test, and look for reuse in the setup code via fixtures and scenarios.
  • Overtesting. Many developers find it difficult to focus only on what is relevant to the test at hand. It is not only more efficient to assume the rest of the system works when writing a new test, it also improves maintainability, as each test is more isolated and targeted. It is common for developers to tie themselves in knots getting controlled input and output moving between different components when a mock would be faster and more effective. Leave the other tests to do their job, and assume away responsibilities that don't belong to the code under test right now.
  • Multi-layer tests. Unit tests should rarely (if ever) traverse multiple layers in your design. A sure sign that they do is when they run slowly. Unit tests should run at a thousands-per-minute kind of rate, while functional tests tend to be in the 30-50 per minute range, so prefer unit tests to achieve coverage and functional tests to prove the wiring of configured components.
  • In-container unit tests. T'ain't no such thang. Don't make sense. If you think you have some, that's a smell. :)
  • Fractured functional tests. Adding functionality a small bit at a time leads to functional tests with a lot of overlap and there is no force to encourage the consolidation of these tests other than slow build speed. I'm not sure how to fix this one other than slow slog refactoring. We have a lot of this joy ahead on our current project!
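
To make the first and third smells concrete, here's roughly the shape I'm talking about. Widget and its default-price behaviour are invented for the example:

import junit.framework.TestCase;

// Widget is invented for this example: a stub whose price defaults to
// "0.00" until it is initialised explicitly.
class Widget {
    private String price = "0.00";
    void setPrice(String price) { this.price = price; }
    String getPrice() { return price; }
}

// The point is the shape: setup is separated from the test, and the
// test name states the intent rather than just naming the method.
public class WidgetPriceTest extends TestCase {

    private Widget widget;

    // Pure setup: gets us to where the test can begin, and says
    // nothing about what we intend to prove.
    protected void setUp() {
        widget = new Widget();
    }

    public void testGetPriceReturnsZeroWhenNotInitialisedExplicitly() {
        assertEquals("0.00", widget.getPrice());
    }
}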

None of this alters my belief that test-driven development is the best method I know for building quality software. But it does mean that (once again) we need to be careful about silver bullet syndrome when it comes to testing.

 



Sunday, August 8, 2004

Have Hair Dryer, Will Boot

Having moved house recently, I was dismayed when one of my PCs (the one that's built for music and video editing) refused to boot. The primary hard disk was not being recognised by the BIOS. Dammit. Now yes, I have all the important data backed up, but it's pretty tough to back up full quality video footage at the best of times, and this box has had years of tweaking put into it, so rebuilding it from scratch was a rather sphincter-puckering concept. Having a couple of weeks paternity leave to sort it out, I started ringing around the pro data rescue market, and the prices were all in the $2,000 range. Hell, I could just buy a whole new box for that.

So being a software guy, I rang my buddy the hardware guy and told him my story, of how the box had been running happily for 18 months without being turned off, then it was shut down for a couple of weeks during the house move/new baby transition. "Ah-hah", says he, "just get out the hairdryer and heat that drive up for 20 minutes; it'll boot first go". And bugger me if he wasn't right on the money! A quick plug in of a new drive onto which I ghost-ed the dying one, switcheroo and off we go.

Phew!

Monday, July 26, 2004

Who gives a toss about software anyway?

I mean, really, I'm as guilty as the next geekblogger of making software and its myriad cultural spawn the centre of the universe, but when you hold your new baby girl in your arms it really does seem all a little silly. So here's to the arrival of Ella Rose....and yes, I did buy her a domain name before she was born!

Monday, June 28, 2004

IntelliJ as.....Writing Tool!

Trying to collaborate on some technical writing with Simon Harris led us to figure out a way to get the same kind of collaboration we're used to while coding into our writing. This means having the content in a textual format so it can be merged, ruling out MS Word and other binary formats. We've been using a wiki a lot for this recently, but reading Martin Fowler's post about writing using XML made us want to give it a try. Now before you say DocBook, let me just say that that's just a little tooooo heavyweight for our needs. But the idea is right. So we knocked up a little DTD that incorporates a hierarchy of sections, with figures, code listings, cross references, external references, etc, and then used IntelliJ IDEA's DTD-driven XML editing to start writing the content. With the code folding and a few live templates, we pretty quickly had a nice intuitive editing experience that allows the creation of the content to be separated from the marking up thereof. With the addition of a little XSLT that automatically numbers the nested sections, voila! IntelliJ as author's workbench! Cool!
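
In case you're curious, the DTD really is little. Here's a sketch of the general shape - the element names are invented for illustration, not our actual DTD:

<!-- A sketch only; element names invented for illustration. -->
<!ELEMENT book    (section+)>
<!ELEMENT section (title, (para | figure | listing | section)*)>
<!ATTLIST section id ID #IMPLIED>
<!ELEMENT title   (#PCDATA)>
<!ELEMENT para    (#PCDATA | xref | extref)*>
<!ELEMENT listing (#PCDATA)>
<!ELEMENT figure  EMPTY>
<!ATTLIST figure  src CDATA #REQUIRED caption CDATA #REQUIRED>
<!ELEMENT xref    EMPTY>
<!ATTLIST xref    target IDREF #REQUIRED>
<!ELEMENT extref  (#PCDATA)>
<!ATTLIST extref  href CDATA #REQUIRED>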



Thursday, June 10, 2004

When Prevention's Not Better Than Cure

When doing agile software development, our main challenge is having little enough process to go fast, but enough process to avoid crashing. The desire to go fast is all well and good, but we don't want to have our only feedback mechanism on whether we're going *too* fast to be blunt force trauma.

I have no doubt that application of agile methods such as XP without equal application of both the freedoms and the responsibilities they afford will end in tears. This is often the biggest challenge with developers new to XP. They love the freedom to refactor and design on the fly, but their enthusiasm sometimes wanes when confronted with the discipline of maintaining test coverage under schedule pressure.

So how do we know when we're going too fast? One problem with perceiving the team's speed (as opposed to velocity :P) is that different members of the team will have different comfort levels with the shared rate of progress. Several times recently I've found myself responding to feedback from team members who would like a little more process to avoid some issue or another, be it screen rework or the presence of defects during QA. I try to be careful to get enough feedback to find out if what we have is just turbulence or the swelling sound of a wing shearing off the fuselage. Most times it's the former, and in these cases I find there's a recurring theme to my response: prevention isn't always better than cure.

For example, in our current system we have 500+ story cards. If about 10% of these result in a defect (which looks about right at the moment) and each of these costs a day for a pair to fix (no metrics, but I think that's more than the reality), that's 50 x 2 = 100 person-days of rework. If we were to put in place a process to prevent all these defects, it would probably cost the team at least two hours per story (that's only two people talking for an hour) to do the extra analysis and testing for EVERY story card. That's 1,000 hours of extra work (call it 125 person-days), comfortably more than the 100 days of rework we have to do to cover off the defects. And that's on the unrealistic assumption that you actually prevent all the defects! It's just a numbers game: you win if the quality is sufficient that the rework on the defects costs less than the effort to prevent them all, plus the extra effort you put into the majority of cards that would never have had defects in the first place.

So I find I'm spending a bit of time reassuring the passengers that a little turbulence is normal, and preventing it is not only unnecessary, it's counter-productive. At the same time I try to maintain a healthy paranoia about the process. Of course, the danger lies in not being able to tell turbulence from a tailspin, so if anyone has ideas on how to do this, bring it on!

 



Thursday, May 13, 2004

Conversation Is Not Understanding Either!

I have been as guilty as the next geek of rationalising my distaste for the tedium of documentation. Many of my agile friends and colleagues are fond of saying that 'Documentation is Not Understanding' when promoting the 'people over process' bit of the Agile Manifesto. Now while I do indeed hold this truth to be self-evident, a few recent experiences find me not so dismissive of the value of documentation as once I was.

Circumstances led me to leave my role in a nice fun agile project for a few months. When I returned, I struggled to get back up to speed for a couple of weeks - these agile folks will just go and change things while you're not looking :) When trying to reconstruct my mental model of the project, I found the pieces I had to put together out of conversations with analysts, customers and developers didn't always fit together snugly, and I found myself needing some sort of point of reference that was indeed shared understanding to work forward from. Anyway, I haven't really worked out what this means, but I have to conclude that while documentation is not understanding, neither is conversation. Another assumption bites the dust!



Monday, April 12, 2004

The Need for Speed

At the Melbourne eXtreme Programming Enthusiasts Group the other night we had a big rantorama about TDD. One of the points in my presentation was that Kent had said that unit tests had to be fast, and he cited one project in which there were 4,000 tests that ran in about 20 minutes. A few guys from my team were there and guffawed out loud. Our current project (4 months old) has 1,300 unit tests that run in about 10 seconds, so they weren't impressed with Kent's numbers at all. Even allowing for Moore's law over a few years, we'd run about 4,000 tests in 30 seconds, which two speed-doublings ago would take 2 minutes. That's still an order of magnitude faster...so what's the deal?

This led to lots of enthusiastic debate about what the hell a unit test actually is. For what it's worth, here's my simplistic test categorization model:

1. A unit test is a test that requires only the class under test and its immediate dependencies (or mocks thereof). This means no deployment, no packaging, nothing. That's why they're fast. (There's a small example of what I mean after these definitions.)

2. A functional test is a test you can't run until the app is packaged and deployed in some way. So you "in-container unit test" fans aren't doing unit tests by this model. I'm hard on this stuff because anything requiring deployment puts you in a different order of magnitude speed-wise; this doesn't mean these tests aren't valuable, just that I don't count them as _unit_ tests, even if they only test a single unit of functionality.

3. An integration test is a test of your external dependencies, not necessarily requiring your app to be deployed at all, but requiring your external dependencies to be available. This is where databases come in, or other services provided by EJBs or anything else. These tests are also usually way too slow for me to want too many of them, and I want to isolate them from my app as much as possible. This means a separate test harness for the external stuff, leaving my app just unit testing against the assumptions proven by the integration test suite.
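
To illustrate category 1, here's the kind of test I mean. PriceCalculator and TaxRates are invented names, and the mock is hand-rolled to keep the sketch self-contained:

import junit.framework.TestCase;

// Category 1 in action: the class under test plus a mock of its
// immediate dependency. No deployment, no packaging, so it's fast.
public class PriceCalculatorTest extends TestCase {

    // The dependency is an interface, so the test can substitute a mock.
    interface TaxRates {
        double rateFor(String category);
    }

    static class PriceCalculator {
        private final TaxRates rates;
        PriceCalculator(TaxRates rates) { this.rates = rates; }
        double priceWithTax(double net, String category) {
            return net * (1 + rates.rateFor(category));
        }
    }

    public void testPriceWithTaxAppliesTheRateForTheCategory() {
        // A hand-rolled mock standing in for the real rates lookup.
        TaxRates tenPercentForEverything = new TaxRates() {
            public double rateFor(String category) { return 0.10; }
        };
        PriceCalculator calculator = new PriceCalculator(tenPercentForEverything);
        assertEquals(110.0, calculator.priceWithTax(100.0, "books"), 0.001);
    }
}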

The point is that I want to push unit tests as far as I can because they just provide so much more performance than the other types of tests. That's the only reason. I want as few of the others as possible because I spend enough of my life waiting for computers to get done with stuff as it is. Yes, you _can_ unit test custom tags to within an inch of 100% coverage!

So what's everyone else getting speed-wise???



Saturday, March 20, 2004

Become what you despise

A lot of agile/TDD/XP folks like me enjoy flaunting a very public disdain for "whiteboard architects" and their ilk. These are the ones who haven't written a line of code in ten years or more, serve out voluntary life sentences in financial institutions and impose design decisions from on high while ignoring development teams entirely. They never ask for feedback on their architecture from anyone who sits beneath them on the org chart, as what feedback of value could a lowly code-cutter have? They are far too busy seeking feedback from those _above_ them on the org chart to make sure their career ambitions are on track, and that's a full time job.

So how happy was I when Jon broke his arm and I was pulled off my nice little agile dev lead role and into a six-week whiteboard wonderland? Hmmm. In a nutshell, this gig involves designing a technical solution to coordinate half a dozen systems on four different platforms written in four different languages, integrating .NET with J2EE, a rules engine, messaging systems, mainframe yuk galore, blah, blah, snore. The system will be used by thousands of people every day and carry billions of dollars of business every year. Textbook whiteboard architect turf. Oh, and you've got three weeks to get your head around it and come up with a solution, and another couple of weeks to write it up and get it approved by the Big Guys Upstairs. No coding for you! Well, it's been an interesting and enlightening experience - always a good thing. I hope I haven't become what I despise, but perhaps I despise them a little less by understanding them a little more.

Lesson: Same shit, different abstraction

So why is it that we who prefer to spend our time at the implementation coalface smirk at the whiteboard architects? I have noticed during this brief out-of-body experience that one reason is the same as why we smirk at some of our less-than-breathtaking code-cutting colleagues: they can't design to save themselves. Just as many developers solve the wrong problem and can't distinguish between an implementation abstraction and a design abstraction (Dude, don't call it a SQLDataSource if its role is a DomainStore!), the same mistakes happen at the higher level of abstraction where enterprise architects play. Some of them seem compelled to define the implementations of the solution instead of just defining the problem, assigning responsibilities to well-named abstractions (SOA, anyone?) and admitting that in the time available they don't have a hope in hell of telling a development team how they should actually build the thing. I can forgive them their transgressions against developers because it's not just them. We do it too. And so do business people. How often have you been given a so-called 'requirement' that is not a statement of a problem at all but a statement of a dumbass solution? It has surprised me how well the design concepts I'm used to using at the class/interface/component level map to the system/service level.

On this gig I've met one particularly outstanding guy who's an enterprise architect on a parallel project, and I just know he would be a kickass code cutter as well; he must just prefer to get paid properly. So take heart, you too might get lucky and find a whiteboard architect whose pictures deserve to make it off the drawing board!

 



Sunday, March 7, 2004

Getting into the Swing of TDD

Just came off a tough late night/early morning pairing session with Simon Harris. We're working on a Swing app that will let you wander around the duplication in your code base by reading a Simian log file and referring to the original code tree. We're also trying an experiment where we use TDD and blog the experience so that we can publish the gory details of the construction of the app, on the presumption that anyone cares, of course...:P

Last night's session will make for interesting reading, mainly because we're both pretty clueless about Swing. In keeping with Jon Eaves' ideas on spiking and TDD, we mucked around for a while trying to figure out how to use the java.awt.Robot to drive our panels, but we got there in the end. What we are trying to explore is how far we can push TDD, and I, for one, am so sick of web apps that this seemed like a cool idea.

I don't know enough about Swing to understand why a lot of folks seem to hate it these days, but it's always appealed to me as having the potential to facilitate real unit testing. By this I mean that a UI area composed of multiple JPanels should allow each of those panels to be independently designed and tested before the UI is assembled and functional tests are run over it.

The theory goes something like this:

  • you don't have to write tests for one-line delegation methods, in my book. So if the UI is a veneer so thin over a fully tested set of non-Swing stuff, I'm happy to just glue the UI onto the layer below with trivial delegation. This is the interesting part - how thin can the UI layer be? So thin that no testing is required?
  • you use a Mediator to coordinate the interaction between panels (which you of course make an interface so you can easymock it for testing), and each panel implements a Colleague interface (again, this facilitates testing). This means that although there is two-way comms between the mediator and the colleagues, either end of this pipe can be mocked, allowing testing of each bit independently (there's a sketch of this after the list).
  • you then assemble your UI like any other Swing UI and the bits play nice.
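
Here's a sketch of the shape of the thing; the names and methods are invented for illustration, and our real interfaces have a little more to them:

import javax.swing.JPanel;

// A rough sketch of the Mediator/Colleague shape. Both ends are
// interfaces, so either side of the pipe can be mocked when unit
// testing the other.
interface Mediator {
    // Colleagues notify the mediator when something interesting happens...
    void colleagueChanged(Colleague source);
}

interface Colleague {
    // ...and the mediator coordinates by calling back into colleagues.
    void setMediator(Mediator mediator);
    void refresh();
}

// A panel implements Colleague and delegates any real work to the
// fully tested non-Swing layer beneath it.
class FileListPanel extends JPanel implements Colleague {
    private Mediator mediator;

    public void setMediator(Mediator mediator) {
        this.mediator = mediator;
    }

    public void refresh() {
        // repopulate the JList from the underlying (non-Swing) model
    }

    // A user gesture on the panel ends up as a single call outward.
    void selectionChanged() {
        mediator.colleagueChanged(this);
    }
}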

We had previously gotten our mediator TDD-ed into existence and were happy with it. We then came to creating the JPanel that contains a list of files that share some duplicated code. This panel was also TDD-ed into existence, but with all its behaviour in non-Swing methods. All good. Then it came time to (we thought) just add the JList and off we go. It turns out the amount of code required, with event listeners and a private inner ListModel class, left us facing the dreaded NullPointerException at run time. If there is a more glaring signal that your code is crap, I don't know it.

So we came to the conclusion that we would have to actually unit test the UI with events somehow, you know, mouse and keyboard stuff. Yuk. We mucked around for a while trying to figure out how to do this while keeping the test 'headless', but failed dismally. In the end we came to grips with the pixel coordinate stuff required by Robot and plonked our little panel into a JFrame of its own in the unit test and just displayed it and started the Robot clicking on stuff.
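
For the record, the bones of it look something like this. FileListPanel is the invented panel from the sketch above, and the click coordinates are hard-coded placeholders; the real test derives them from the component's actual position and size on screen:

import java.awt.Point;
import java.awt.Robot;
import java.awt.event.InputEvent;
import javax.swing.JFrame;

// The bones of the Robot-driven test: plonk the panel into a JFrame of
// its own, display it, and start the Robot clicking on stuff.
public class RobotSketch {
    public static void main(String[] args) throws Exception {
        JFrame frame = new JFrame("panel under test");
        frame.getContentPane().add(new FileListPanel());
        frame.pack();
        frame.setVisible(true);

        Robot robot = new Robot();
        robot.waitForIdle(); // let Swing finish building and painting

        // Click where the first list entry should be (placeholder coords).
        Point origin = frame.getContentPane().getLocationOnScreen();
        robot.mouseMove(origin.x + 40, origin.y + 10);
        robot.mousePress(InputEvent.BUTTON1_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_MASK);

        // ...then verify against the mock Mediator that the panel
        // reported the selection, and tidy up.
        frame.dispose();
    }
}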

So the upshot was that a single panel pops up on the screen and dances about a bit being unit tested against a mock Mediator that catches and verifies the panel's interactions with the rest of the dialog that isn't even there yet. Long story, I know, but I thought it was kinda fun and my blog's been a bit quiet lately.



Wednesday, February 11, 2004

Just what the world needs

Just what the world needs - another ThoughtWorker bleating on about test-driven development. Sheesh!