Monday, January 29, 2007
Learning Java via TDD: An impressive approach
I have never been a fan of test-driven development. I think the concept is a curiosity that makes you write lots of throw-away code, while keeping lots of code that really doesn't test anything useful. But my most fundamental disagreement with TDD is that it is counter to the way people think. And part of the great joy of programming is building solutions, rather than building problems for the solutions to solve. For this reason, I think that many people who like TDD in the abstract eventually move back to a model of writing the code and immediately writing unit tests to exercise it. (This happens to be my model, but I didn't come to it via TDD. Rather, I adopted it because I became profoundly and deeply convinced of the value of lots of unit tests written immediately after coding.)
I've expressed this view before and would have held on to it unmodified had I not come across a terrific and unique Java tutorial: Agile Java by Jeff Langr. Forget the "agile" attribute. This is a book that teaches Java via TDD. And, it turns out, this is a very interesting way of doing things. How many times have we seen Java tomes start with "Hello World"? This one starts by teaching JUnit. And through JUnit, it teaches TDD, and then Java.
This approach, which is brilliantly original, has very distinct benefits. First, readers become much more attuned to Java's verbose but unrevealing error messages. Rather than looking at a stack dump in disbelief, they're used to jumping in and debugging. Second, of course, they're used to testing and to writing code that is testable. The third and, for me, most important benefit is that this book cannot present Java in the usual way. (Such as: here's an inner class. Here's why you need it. Here's a code snippet. OK, now off to statics....) Rather, each chapter requires writing a mini-app that exercises the topics the author wants to present. So you get to think about OO and Java, rather than merely learning language syntax.
The book also forces the reader to think about challenging testing issues. And I do mean challenging. It is the only text I've ever read anywhere that shows how to run a unit test on a log record written to the console. How would you solve that problem?
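One workable answer--a sketch of my own, not the book's technique, written JUnit 4-style and assuming the log record goes to System.out--is to swap the console stream for one you can capture:

    import static org.junit.Assert.assertTrue;

    import java.io.ByteArrayOutputStream;
    import java.io.PrintStream;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class ConsoleLogTest {
        private final ByteArrayOutputStream captured = new ByteArrayOutputStream();
        private PrintStream originalOut;

        @Before
        public void redirectConsole() {
            // Remember the real console stream, then substitute a capturable one.
            originalOut = System.out;
            System.setOut(new PrintStream(captured));
        }

        @After
        public void restoreConsole() {
            System.setOut(originalOut);
        }

        @Test
        public void logRecordAppearsOnConsole() {
            // Stand-in for whatever code writes the log record to the console.
            System.out.println("WARN: disk space low");
            assertTrue(captured.toString().contains("disk space low"));
        }
    }

The same trick works for System.err, and restoring the original stream afterward keeps the redirection from leaking into other tests.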
The only drawback is that the same topics are discussed in many different places throughout the book (each discussion amplifying earlier points), so you really can't use it as a reference after the fact. This is a small gripe, as there are plenty of Java references online and in print.
I recently wrote a review of Java tutorials (before I knew about this book). Despite the many great titles I discuss in that article, if I were teaching a Java class today, this is definitely the book I would use. Bar none. And I don't even like TDD.
Saturday, January 27, 2007
One activity that is inherently productive: unit testing
In a post today on his blog, Larry O'Brien states: "...let's be clear that of all the things we know about software development, there are only two things that we know to be inherently highly productive: Well-treated talented programmers and iterative development incorporating client feedback."
I find it hard not to add unit testing to this list. Of all the things that have changed in how I program during the last six or seven years, nothing comes close to unit testing in terms of making me more productive. In addition, it has made me a better programmer, because as I write code I am thinking about how to test it. And the changes that result are almost always improvements.
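A toy example of my own (not from Larry's post) of the kind of change I mean: code written with a test in mind tends to separate calculation from I/O, which improves it for every caller, not just the test.

    // Written with a test in mind: the formatting gets its own method
    // that a unit test can assert on directly, rather than being welded
    // to the console.
    public class Invoice {
        private final double total;

        public Invoice(double total) {
            this.total = total;
        }

        // Testable seam: returns the string instead of printing it.
        public String formatTotal() {
            return String.format("Total: %.2f", total);
        }

        // Thin I/O wrapper; nothing here needs a console-capture trick to test.
        public void printTotal() {
            System.out.println(formatTotal());
        }
    }

A test simply asserts that new Invoice(99.5).formatTotal() equals "Total: 99.50"--no console redirection required.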
Wednesday, January 24, 2007
Multicores not as productive as you expected?
For a while, I have been intrigued by how small the performance pop is from multicore processor designs. I've written about this and, finally, I think I can begin to quantify it. I'll use AMD's processors for the sole reason that the company has for years posted a performance rating for each of its processors. (Originally, publishing this data was a move to counter Intel's now-abandoned fascination with high clock speeds.)
This chart shows the performance as published by AMD and the corresponding clock speeds for most of its recent processors. I have broken the figures out for the three branches in the Athlon processor family (which is AMD's desktop line).
There are several interesting aspects to this chart, but the one I want to focus on is the performance of the rightmost entry. The dual-core Athlon 64 X2 with a rating of 5200 has a clock speed of 2600 MHz. Now, notice the Athlon XP with a rating of 2600 (10th entry from the left): it has a clock speed of 2133 MHz.
In theory, since the AMD ratings are linear, a dual-core processor should give you nearly, but not quite, the performance of two single-core chips. So two 2600-rated chips should give you roughly the performance of a 5200-rated dual-core chip. Using the chart, we would expect two 2.133GHz cores to give us the 5200 performance figure. In reality, though, it takes two 2.6GHz cores to do this--far more than we would expect. The gap is actually even wider than that, because the dual-core chips have faster buses and larger caches than the Athlon XP we're comparing them to, so they can make far better use of the processor on each clock cycle.
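To put a number on the shortfall, here's the arithmetic as a small sketch (the ratings and clock speeds come from the chart above; the linearity of the ratings is the assumption just stated):

    // Rough scaling check using AMD's published ratings and clock speeds.
    public class MulticoreScaling {
        public static void main(String[] args) {
            double ratingPerMHz = 2600.0 / 2133.0;           // Athlon XP 2600+ runs at 2133 MHz
            double expectedTotalMHz = 5200.0 / ratingPerMHz; // total clock a 5200 rating should need
            double expectedPerCore = expectedTotalMHz / 2;   // ~2133 MHz per core, if scaling were linear
            double actualPerCore = 2600.0;                   // what the Athlon 64 X2 5200+ actually runs at

            System.out.printf("Expected per-core clock: %.0f MHz%n", expectedPerCore);
            System.out.printf("Actual per-core clock: %.0f MHz (%.0f%% higher)%n",
                    actualPerCore, 100 * (actualPerCore / expectedPerCore - 1));
        }
    }

Each core must run roughly 22 percent faster than linear scaling predicts--and that's before crediting the X2's faster bus and larger caches.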
So, why does it take so much more than twice the clock speed to deliver dual-core performance? The original 2600-rated Athlon XP had a memory controller built into the chip. On the X2 chip, however, the two cores do not have dedicated memory controllers--instead, they share a single on-chip memory controller. This adds overhead. The cores also share interfaces to the rest of the system, and so again must contend for resources to get the attention they need.
Don't be fooled into thinking this is an AMD-specific issue. (As I said earlier, I used AMD only because they are kind enough to publish clock speed and performance data for their chips.) Intel is in exactly the same boat--whatever is shared between cores is plenty expensive. Expect, as time passes, to see chip vendors trying to limit these shared resources.
Tuesday, January 23, 2007
Enter The Komodo Dragon
Komodo 4.0 from ActiveState came out of beta and was released today. Komodo has long been viewed as the premier IDE for scripting languages (Python, Perl, and Tcl especially). This release continues that tradition and adds support for Ruby and Ruby on Rails (RoR). It's the first high-end IDE I'm aware of with intelligent editing and debugging for Ruby and RoR.
Komodo also provides numerous tools for Ajax development, including JavaScript debugging; dedicated editors for XML, HTML, and CSS; an HTTP viewer; and a DOM editor.
If you work in any of the languages Komodo supports, you owe it to yourself to examine it (for free). If you work in any two of them, you probably should just buy it. At $245, it's a steal.
Wednesday, January 17, 2007
C PDF Library
In my current column in SD Times, I discuss the open-source iText library for creating PDF files. iText enables developers to create reports in PDF, HTML, and RTF from any Java application--including servlets. I mentioned in the article that I previously had looked for a PDF library in C and could not find one that was free and open source.
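For those who haven't seen iText, here's a minimal sketch of what PDF generation looks like with the classic com.lowagie.text API of that era (later iText versions renamed the packages, so treat the details as illustrative):

    // Creates a one-paragraph PDF with the classic iText API.
    import java.io.FileOutputStream;

    import com.lowagie.text.Document;
    import com.lowagie.text.Paragraph;
    import com.lowagie.text.pdf.PdfWriter;

    public class HelloPdf {
        public static void main(String[] args) throws Exception {
            Document document = new Document();   // default page size
            PdfWriter.getInstance(document, new FileOutputStream("hello.pdf"));
            document.open();
            document.add(new Paragraph("Hello from iText"));
            document.close();                     // flushes and writes the file
        }
    }

From this skeleton, tables, fonts, and page headers build up to the kind of business reports the column discusses.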
Eagle-eyed reader Reid Thompson kindly sent me a note pointing out the Libharu project. In my quick scan, it's not quite as feature-packed as iText, but it does cover pretty much all the functionality needed for business reports. And it has one advantage over iText that will prove compelling to some developers: Ruby bindings.
Tuesday, January 16, 2007
Want to be a Jolt Award Finalist? Three Steps
I have been a Jolt judge for all 17 years that the award has existed. This position has been a great privilege because during that time span, the award has become the equivalent of the Academy Awards for software-development tools. One reason for this rise to prominence is that vendors sense that the judges put a lot of work and deliberation into their choices. This somewhat understates the work we do, as I'll explain shortly. This post, however, focuses on a common query from vendors whose products did not advance to the final round: what more could we have done to advance? The answer frequently is: plenty.
To give context to the pointers below, it's important to understand a few things about the Jolt judging process:
- For judges, the Jolt season represents a period of intense activity. I expect that in any given Jolt season I will spend more than 100 hours on product selection. That's a lot of time, and it's all volunteered; we receive no payment for it.
- Judge deliberations are secret. We use several mechanisms for sharing our perspectives. Their contents are sealed at the end of the judging cycle. Only discussions relating to procedural matters are retained year to year. So, asking for the judges' rationale for a certain decision will not (or, at least, should not) result in useful information.
- Judges vote in secret. No one but the person tabulating the results at Doctor Dobb's knows which judge voted for what. Frequently, the results are mystifying to me. I don't understand why product X was left out, while the clearly superior product Y was included. There is only one obvious answer: other people evaluate the product differently than I do.
- Judges recuse themselves from any category of products in which they can derive financial benefit or where they work for one of the vendors whose product is nominated.
So, how can a vendor influence a product's fate?
- Have a good product. This more than any other factor will improve your prospects. If your product is nominated year after year, make sure that you have something new to say. We frequently kick out products that are the same as last year's save for a few tweaks. Remember this is an annual award, so greatness must have occurred during the coverage year (generally November to November).
- Be able to articulate why your product is better than others. Judges who have never heard of your product need a reason to vote for it. Give it to them. Many vendors set up portals specifically for Jolt judges. These include movie clips of the product (10-15 minutes), screen shots, and generated reports. This is a superb idea. Judges can go to the site and in 20 minutes figure out whether they see any magic there. If you choose to do this, emphasize how your product is different from, or better than, the others. Don't try to demo every feature. The judges just want to know why they should vote for you. A competitive grid with feature comparisons helps too.
- Follow up with the judges. Two categories I voted in this weekend (when we voted for finalists) had more than 30 products. After I've looked at a large number of websites, the products tend to blur together. Even though I take notes, when I go back and re-read them, it's hard to remember my exact perspective. If I don't know a product beforehand, it's likely to fall into this blurry region unless it has some incredibly good (or bad) feature. You have a PR agency, right? Put them to work. Have them contact me. Send me a press kit in the mail. Some companies used to send 'swag'--an industry term for the inexpensive promotional tchotchkes vendors give out. Better yet, send out a boxed copy of your software. This helps. In a sea of choices, having a name to remember and with which I can associate specific features is a big plus.
Depending on how I split my votes, I can vote for anywhere from four to ten products per category. I almost never vote for as few as four. A problem I have is that once I've voted for the top products, I might have only a few votes left for the remaining 20-30 products. At this point, I need some reason to vote for your product. Just being good is not sufficient. Make its good points memorable and you're likely to get one of those last few votes.
I hope this post answers some preliminary questions.
Friday, January 05, 2007
Correction Re Java Tutorials
Grrr. I intensely dislike being wrong in reviews and recommendations. I put lots of effort into my published writings so that readers can rely with confidence on what I recommend. So, it's quite painful--actually, it's a sense of shame and depression coupled with a lot of pain--to realize that I bollixed a review in print.
Such is the case with my quick guide to Java tutorials that appears in the current column in SD Times. In it, I write the following (after discussing the major books in this market):
Finally, for those who want something more serious but don’t require the omnibus tomes, there’s “The Java Tutorial” 4th Edition, by Zakhour et al. (Addison-Wesley Professional). In 600 pages, it presents all of the language proper, with well-chosen code examples, plus the basics of the major API sets. It’s put out by the same team that developed Sun’s outstanding online Java tutorials, which might be the best tutorials ever developed for any language. Get this book to start with, unless one of the others has a particular feature you feel is critical. Either way, you’ll be treated well.
The problem is that this book is really not very good, and I have to retract my recommendation. It contains some unpardonable errors: its list of Java keywords is incorrect; it refers readers to other books for topics it introduces and never brings to a close, giving only the other book's title--not a chapter or page number, nor even a link to a website that covers the same topic; it makes reference to material that has not yet been presented; and, finally, it flags points as important with no explanation as to why. Don't consider this book; instead, go to Volume 1 of Core Java to get the basics of the language in fewer than 800 pages.
My erroneous recommendation stems from something I failed to consider. The book does have good parts, especially those that cover the latest additions to Java (enums, for example). I looked at these and was impressed. I then made the error of assuming that the rest of the book was as good as the sections I'd read. The trouble is that this book is a gang project, and the quality of the various authors' work fluctuates considerably. I happened to hit the few highlights in my review pass.
Curiously, a few years ago, this problem would not have occurred. Books were primarily written by one or two authors, so quality was consistent throughout--be it good or bad. Any given pair of chapters pretty much reflected the book's overall quality. With gang books, that consistency is much rarer, as I was rudely reminded.