Thursday, September 3, 2009
Writing good tests is hard: test code tends to be opaque, fragile, and hard to maintain even when written by the same developers who write decent product code. Why?
Test code is inherently introspective: it is about the code it is testing. Product code is about a user-domain problem - for example, managing the relationships between a company and its customers. Test code is about the user-domain problem and about the product code. As a concrete example: writing tests often requires exposing (through package sharing or through "test-only" APIs) methods that don't make sense otherwise, in order to give the test code the ability to inspect the internal state of objects in the product code.
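As a minimal sketch of that pattern (the class and method names here are invented for illustration), the product class grows an accessor that exists only for the benefit of its tests:

import java.util.HashMap;
import java.util.Map;

public class SessionCache {
    private final Map<String, Object> entries = new HashMap<String, Object>();

    public void put(String key, Object value) {
        entries.put(key, value);
    }

    // Test-only: package-private so that a test in the same package can verify
    // behavior by inspecting internal state. No product code should call this.
    int entryCountForTesting() {
        return entries.size();
    }
}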
Introspection, or self-reference, increases conceptual complexity. An object that does something is simpler than an object that reasons about an object that does something.
Monday, July 13, 2009
Dogfood
The company where I spent my vacation has a very strong tradition of "eating one's own dogfood," which means using the products they are developing. This is supposed to improve the quality of the product, because one becomes painfully aware of the problems and is motivated to fix them.
My recollection is that when I first heard the phrase, back in the early 1990s, it was "tasting," not "eating," one's own dogfood. There is a significant difference between the two.
The problem with eating only your own dogfood is that you start getting used to the taste of dogfood, and you don't discover that the rest of the world has learned how to cook a decent meal.
Maybe it is better to eat whatever the best food around is, while occasionally being forced to taste one's own dogfood. For example, let development teams use whatever products they want, but have test days where the teams work through predefined customer use cases using their own products.
Tuesday, June 30, 2009
Vacation
I spent the last month on "vacation", working for a different company on a different product in a different programming language. It was actually quite a lot of fun; I'd like to take more such vacations (not least because, unlike the sort where one travels to a different country, I got paid to do it).
I was working primarily in C#, a language I've not spent much time in till now. I was also working on Windows-specific user interface code, rather than the platform-independent systems-level code I normally write.
C# is a weird language. It feels to me like something of a kitchen-sink language: it's got a bit of practically every language idea I've ever heard of. Generics, function delegates, event handlers, closures, ... there are about five times as many keywords as Java has. Java is to C# as Spanish is to English.
There were a few things that I really liked: I did get pretty friendly with closures, for instance, and the language support for iterators is cool. Java still feels more elegant to me, though. There are just too many ways to skin the same cat in C#. And Java's collection and concurrency libraries are still way better.
The one thing I most disliked about C# isn't really a fault of the language as such: the naming conventions. In Java, by convention, we name types with capital letters; methods, variables, and packages with lower-case. This helps disambiguate some syntax that would otherwise be hard for humans to read (even though it doesn't bother the compiler at all). In C#, by convention, namespaces, types, and methods are all capitalized; fields and variables aren't. Properties, which are fields that have built-in getter and setter methods (so they can be accessed by assignment syntax), are capitalized. To my eye, it's hard to read and ugly.
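For illustration, here is the Java side of that convention, with comments noting the C# counterparts (a sketch from memory, not a style-guide citation):

public class CustomerAccount {      // types: capitalized in both Java and C#
    private int balance;            // fields: lower-case in both

    public int getBalance() {       // methods: lower-case in Java, but in C#
        return balance;             // this would be GetBalance() - or a
    }                               // property: public int Balance { get; }
}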
To a compiler, none of these distinctions really make much difference; the difference between the .NET and Java virtual machines is pretty small. Programming languages, as I've said before, are a way for programmers to express ideas: they're for people, not computers. Esthetics matters.
If I were in St. Louis, I'd be attending this talk on the Fan programming language, which claims to be compilable to both virtual machines and to get rid of a lot of the uglifications of both Java and C#.
Wednesday, May 6, 2009
I Don't Quite Get Dependency Injection
Here's how you write a program that prints out "hello world" in ordinary Java:
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello world!");
    }
}
I'm reading Spring In Action, and this excellently written book starts out by writing "hello world" in Spring-enabled Java. It's about ten times as long; I won't reproduce it here. The main benefit is that the "Hello world" string is in an XML file, instead of in a Java file, so that it's easier to change.
Or something. Personally, I'd rather edit Java than XML. In my experience as a developer, it's actually much easier to write cryptic bugs in XML (or more broadly, in configuration code of any sort) than in Java. This is because the rules of Java (or other formal languages, such as C, C++, heck, even Perl) are more standardized, better defined, better testable, and better documented than those of proprietary configuration languages such as Spring. If I change a character string in Java, I can predict quite well what will happen, and I can watch it happen in the debugger. If I change a string in an XML configuration file, I have no way to know who is reading that file or what they are doing with it. It's magic.
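For the curious, here's a minimal sketch of the shape in question - not the book's actual listing, and GreetingService and beans.xml are names I've made up for illustration:

// The greeting lives in beans.xml, in something like:
//   <bean id="greetingService" class="com.example.GreetingServiceImpl">
//     <property name="greeting" value="Hello world!"/>
//   </bean>
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class HelloSpring {
    interface GreetingService {
        void sayGreeting();
    }

    public static void main(String[] args) {
        // Which implementation runs, and what it prints, is decided by the
        // XML file, not by anything visible in this Java code.
        ApplicationContext context = new ClassPathXmlApplicationContext("beans.xml");
        GreetingService service = (GreetingService) context.getBean("greetingService");
        service.sayGreeting();
    }
}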
In the 1970s, when I first learned to program, the field was predominantly procedural: a program was a bunch of instructions to be followed one after another. The instructions might involve reading data from files and acting on those data, but the data did not modify the program itself; in fact we believed at the time that it was very poor form to let a program be self-modifying, because it made it hard to understand how it would behave, so we were very reluctant to consider the data files to be part of the program.
Thirty years later, it feels to me like the software industry has moved very strongly toward configuration-based programming. We want our procedural (Java) code to do less and less, and instead we want to control behavior by way of increasingly complex configuration. A program I'm currently working on has got more configuration files than it does Java files, and they are spread over more directories on the hard drive. The procedural part is all in one language: Java. The configuration part is in EHCache, Hibernate, Spring, Maven, JDBC, and Log4J, each with its own cryptic syntax that is subject to change with every version and that is documented in bits and pieces on web forums and spottily-available books.
As so often, I find myself wondering if the emperor has any clothes. Is it really easier to program this way? Or is it just different?
I wonder whether instead, we should focus on figuring out what is "hard" about procedural programming, and work on making that easier, within the confines of a formally defined, easy to understand and read language. For instance, IDEs could easily help with changing dependencies - in fact, an IDE could look at all the pieces of a program, determine all the external dependencies, and present them as a single view, allowing the programmer to substitute equivalent components.
Monday, April 6, 2009
Improving Maven
I'm still at loggerheads with Maven. There are some specific problems that bother me and that I think could be improved while preserving its basic Maven-ness. Many of the problems center on issues that arise when developing more than one component at a time, i.e., when depending on SNAPSHOT versions. In fact a SNAPSHOT dependency is a fairly good indicator of misery; but some small improvements in Maven could reduce that pain point by a lot.
1. Maven knows that source produces artifacts, but it doesn't know that artifacts come from source. When Maven checks dependencies, it looks in local and remote repositories, but it doesn't know enough to build (or re-build) an artifact from source.
If Maven artifacts (at least locally deployed ones) had a backpointer to the location of the source project that produced them, then it would be possible to check dependencies against source. For instance, if project B depends on A, and I touch project A's code and then rebuild B, project A should also get rebuilt and installed. Similarly, if artifact A came from local source, then it almost certainly should NOT get replaced by an "updated" artifact from a remote repository, even if the remote artifact is newer; rather, local source code should always get honored. Extra points for a "mvn svn:update" command or the like, that would transitively sync the version control system to the latest code in all upstream projects.
2. SNAPSHOTs need to be versioned. When you're collaboratively working on two projects, and an API between them changes, the downstream build is broken until the upstream project gets refreshed. But right now that happens in a nondeterministic, asynchronous way: to Maven, all SNAPSHOTs are identical until, around midnight or so, it decides to refresh. Basing refreshes on an update interval is like filling up your car's gas tank every Friday: it's either too soon or too late. This needs to be deterministic. What I really want from SNAPSHOT is the idea of a fine-grained version number, that I will throw away upon release. It could be as simple as letting me say 1.3.0-SNAPSHOT-002, instead of just 1.3.0-SNAPSHOT.
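Concretely, a downstream POM can only express the coarse-grained form today; the suffixed version in the comment is the wished-for extension, not something Maven currently understands:

<dependency>
  <groupId>com.example</groupId>
  <artifactId>component-a</artifactId>
  <version>1.3.0-SNAPSHOT</version>  <!-- today: all snapshots look alike -->
  <!-- wished for: <version>1.3.0-SNAPSHOT-002</version> -->
</dependency>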
3. Maven assumes that the internet is fast and reliable. It is neither, as anyone who works from coffeeshops and airports knows all too well. When Maven fails to get a network connection, or the network dies midway or times out, it needs to be able to roll back to a known and working state. Among other things, this means that updates need to be atomic across projects, or at least they need to be nondestructive. It also means that basic help should not rely on a network connection. Maven should not attempt to update plug-ins or projects during a 'clean' or 'help' operation.
I've got other problems with Maven - for instance, I think XML is nasty to work with, and I think that "convention over configuration" translates to "doesn't play well with others." Those things are harder to address while still keeping it Maven. But if the above three improvements were made, I think no one who loved Maven would be harmed, and a lot of other folks would be helped.
Thursday, April 2, 2009
Eclipse awards
I spend most of my professional life feeling ignorant of one thing or another, so in the few areas where I do know at least a little bit I try to help out. For that reason I post pretty often to the eclipse.newcomer newsgroup. I'm proud to have been a finalist for the Eclipse Newcomer Evangelist award for the second year in a row.
Wednesday, March 11, 2009
Sharing Data
I recently discovered Joe Armstrong's post "Why OO Sucks". I didn't agree with much of what he said, but I was struck by one of his claims, "functions and data structures belong in totally different worlds." This is of course the antithesis of the OO (object-oriented) programming philosophy, which holds that you should lump data together with the sets of rules and actions that apply to it.
I disagree. Data is only useful if it has integrity, and it only has integrity if there are rules that govern how it is read and changed. "Rules" is just another word for "code", so this argues that code should be tightly associated with data.
An "object" in software jargon is just a way to expose data while still wrapping it in a decent amount of clothing. Within an application, objects make it safer to work with data, because you can ensure that no matter what you do the data is still valid.
The problem with objects is that they're not easily shared between applications. They're very ephemeral; they live only as long as they're contained in a running program, and they're tightly coupled to all the details of that particular program. In Java every object is an instance of a particular class, and every class is associated with the classloader that produced it, and classloaders in turn are associated with a single instance of a single application. If you try to put an object into a different application, it looks around for its classloader and, not finding one it recognizes, gets scared and shy.
The usual solution to this is to convert the object into raw data (often some sort of text representation, like XML) for long enough to transport it to a different application, and then in that application a new object is created by reading in the raw data and associating it with a hopefully compatible class from a hopefully compatible classloader. This is slow, expensive, and inaccurate. For instance, there's no way to guarantee that the classes are truly identical, so if an object moves from application A to B and back again, it might come back in an illegal state. Also, it often requires the programmer to write a lot of code to spell out how to read and write the object.
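In Java-serialization terms the round trip looks something like this (a minimal sketch; the raw data here is bytes rather than XML, but the point is the same):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.ArrayList;

public class RoundTrip {
    public static void main(String[] args) throws Exception {
        ArrayList<String> cart = new ArrayList<String>(); // stands in for a domain object
        cart.add("socks");

        // Application A: the object becomes raw data...
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(cart);
        out.close();

        // ...and application B rebuilds a new object from the raw data,
        // against a hopefully compatible class from its own classloader.
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        Object copy = in.readObject();
        System.out.println(copy);
    }
}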
Terracotta is a way to spread the work of a software application across a large number of computers. We allow objects to move freely from one computer to the next. Under the covers we do still convert the objects to raw data and back, but we do it in a way that is quite efficient and transparent to the application programmer. We're great at moving objects from computer to computer in a single application. Up until the latest release, however, we weren't much good at moving objects between different applications. Now we are.
The basic idea here is that even though the classloaders for two different applications are different, as long as they both contain a definition of the class being shared (and the other classes that it in turn needs to access), that's good enough. The computer has no way of knowing whether that's true; but the programmer does. So, we let the programmer tell Terracotta which applications are allowed to share classes with each other. The configuration feature is called "app-groups", and I'd point to the documentation in this post, but it's not up on the web site quite yet. It's quite simple to use; you just define an app-groups element in the Terracotta configuration file, give it a name, and inside it you list all the applications that you want to be able to share objects with each other.
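Until the documentation is up, here is a rough sketch of what that configuration might look like - the element names below are assumptions based on the description above, not copied from the official schema:

<app-groups>
  <app-group name="store-and-admin">
    <web-application>storefront</web-application>
    <web-application>admin-console</web-application>
  </app-group>
</app-groups>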
A typical use case would be if you've got a user-facing application and also an administrative application. Imagine, for instance, a merchant site that lets users build up a shopping cart. Using Terracotta you might avoid storing that shopping cart in your central database, to reduce database load; instead, you'd keep it as transient data, getting session scalability and server failover from the Terracotta system. But suppose you want to let a sales agent view a customer's shopping cart, to make recommendations or fix problems. How can you share the transient shopping cart data between the customer-facing application and the agent-facing administrative application? One idea is to keep the list of shopping carts as a shared root in both applications, and then place both applications in the same app-group with the Terracotta configuration. No database required; transient data is still transient; no custom serialization code or data format definitions required. Just transparent sharing of objects between two otherwise different applications that both happen to include the same Java class definitions.
There are still some caveats, of course. One ugly one is that the different applications have to be running in different Java virtual machines. That is, you can't have a single application server instance and deploy both applications to it. That's for internal technical reasons that we hope to eliminate in a future release. For now, you'd have to put the sales-agent application on a separate app server instance (although it could be running on the same physical computer). Another caveat is that you can't have multiple overlapping groups (like, A can share with B and B with C but not A with C), and you can't restrict sharing to only certain objects or roots, it's application-wide. Caveats notwithstanding, I think it's a powerful new feature, and it'll be interesting to see what new uses of Terracotta this enables.
Monday, March 9, 2009
Circus Contraption
As long as we're talking about things I'm proud of, let me mention the new show that I'm doing sound for, Circus Contraption. I did the sound system design and installation, and the overall sound design, and I'm sharing the night-to-night mixing duties with two other sound guys. If you happen to be in Seattle on a Friday, Saturday, or Sunday in the next couple months, try to see the show! It's the real thing. The sword swallower actually swallows the sword, it's not just stage magic.
Doing live sound is very, very different than writing software. You do not get a chance to fix bugs, and you cannot take things slowly or stop to have a design discussion. Every night is different: the performers sing and play softer or louder, there are more or fewer people (read: sound-absorbing sacks of water) in the audience, equipment that worked the last night breaks this night, someone trips over a wire or forgets to turn on their microphone. You do what you can and move on. Frankly, I'd probably be a better software developer if I treated software more like live sound.
Wednesday, March 4, 2009
Terracotta
I write software for Terracotta, which is an open-source company. I love working on open source code, in part because what I do is not a secret - I can tell my geeky friends about the cool problems that I wrestle with. (I also work on Eclipse, another open source project.)
Paradoxically, though, it seems like in the open source world it is often very hard to talk about who my customers are. Partly that's because we don't always know - anyone can download the product for free. But also it's because our paying customers don't always want their competition to know how they succeed, and we of course need to honor their confidentiality.
So I'm really pleased that Terracotta has lately been getting some great press about one of our important customers, Sabre Holdings. They're perhaps most commonly known for one facet of their business, Travelocity. Sabre is huge - according to one article, "On any given day, Sabre's servers have to be able to handle up to half a billion transactions a day and a peak volume that can go up to 32,000 transactions per second."
How do they get that kind of volume, and the reliability that has to go with it? Answer: they run their mission-critical, high-volume stuff on Terracotta, my software. Yes, I'm proud :-)
Thursday, February 12, 2009
What should code comments do?
Below I've posted some code I just had to look at. I've got nothing against this code; it's a nice clean class, simple, I'm not aware of any bugs in it.
It's easy to figure out what this code does, just by looking at it. It takes a slash-delimited string ending in "war", like the one in main(), and deletes the third token if it contains only decimal digits.
But WHY? What problem does this class solve? What is Geronimo, and why is the string "war" important?
I can't help but think that someone discovered the need for this code the hard way, after time spent looking at Geronimo code or documentation, talking with peers, perhaps after fixing a bug report from the field. All that information has now been lost.
Perhaps the need for this applied only to a particular version of Geronimo. Perhaps it only turns up in a peculiar use case. Perhaps the original developer's understanding was flawed and this code is never actually needed. There's no way to know, and anyone who encounters this code in the future will have to try to figure out how not to break it. Very likely, it actually does do something important but it's not covered in the test suite, and any breakage will be discovered as a regression in the field, when some user tries to update to the latest product version and their application no longer runs.
It's like a post in the middle of the living room: you figure it's probably supporting some weight above, but how do you know? So you can't remodel the room, because the second floor might collapse. But maybe the builder put it there because they were planning on a hot tub on the floor above, where now you've got a walk-in closet. Now you've got to hire a structural engineer to do the same calculations again, because the original rationale has been lost.
Well-written code shouldn't need to explain what it does. But it should explain why it does it. What other options were considered? In what situations is the code necessary?
public class GeronimoLoaderNaming {
    public static String adjustName(String name) {
        if (name != null && name.endsWith("war")) {
            String[] parts = name.split("/", -1);
            if (parts.length != 4) {
                throw new RuntimeException("unknown format: " + name + ", # parts = " + parts.length);
            }
            if ("war".equals(parts[3]) && parts[2].matches("^\\d+$")) {
                name = name.replaceAll(parts[2], "");
            }
        }
        return name;
    }

    public static void main(String args[]) {
        String name = "Geronimo.default/simplesession/1164587457359/war";
        System.err.println(adjustName(name));
    }
}
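For contrast, here is the kind of comment this post is arguing for. To be clear, the rationale below is invented for illustration - the real reasons are exactly what has been lost:

/**
 * Geronimo embeds what appears to be a deployment timestamp as the third
 * segment of its webapp classloader names, e.g.
 * "Geronimo.default/simplesession/1164587457359/war". We strip it so that
 * the name stays stable across redeploys; without this, <the specific
 * failure we saw, with a link to the bug report> happens in <the use case>.
 * Observed with Geronimo <version>; revisit when upgrading.
 */
public static String adjustName(String name) { ... }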
Friday, January 30, 2009
Maven Maven Maven
My post Maven Continues to Suck drew a number of comments, including a helpful and thoughtful comment from Jason van Zyl, who concluded "I don't think it's so much that Maven continues to suck as much as we need to do more than the free book we already have to get people past these simple setup problems that cause frustration."
I responded briefly in the comments but wanted to expand a bit.
Perhaps I will eventually see the light - I do keep hoping, because there doesn't seem much alternative - but I have not yet. Respectfully, Maven folks, I think it's optimistic to hope that if you "get people past these simple setup problems that cause frustration" you'll be in the clear. I've been working with Maven now for half a year; it continues to be perhaps my primary source of frustration, and to interfere in almost everything I do. I cannot say the same for any of the other tools I use. Neither my IDE (Eclipse), my version control system (SVN), my language (Java) nor its libraries were this problematic this far along.
The difference may in part be in my expectations; but the issues go beyond simple setup problems. To pick two other examples: (1) The Eclipse/Maven integration is rough, causing what should be simple Eclipse operations (saving files, jumping to referenced types, debugging) to be slower and less accurate. (2) No one on our team has yet figured out a way to pipe arbitrary system properties from a Maven command line into a forked app server process in the context of a system test. These may be blamed on third-party software (M2Eclipse, Surefire); or on lack of Maven configuration skillz; but the Maven ecosystem is part of working with Maven, as is the fact that most developers on a team should not be expected to be Maven experts.
It is worth mentioning that invoking "mvn help" at the command line first tries to download stuff from teh internets, and then spits out a cryptic build failure error; mvn --help spits out a command line usage message that says nothing about how to actually get help. Even "mvn clean" tries to first do an update (which is always the wrong thing to do before doing a clean, because you may lose information about what to clean). Have you ever tried to use Maven without an internet connection handy, like on an airplane? Epic fail. I know about mvn -o but have never been able to get it to work, perhaps because the web of dependencies is so fragile and unstable.
By contrast, in our core code base we have a homebrew build script that solves all these problems, is easy for anyone with basic programming knowledge to maintain and modify, and just never seems to get in the way. If Maven requires more domain-specific knowledge, skill, and time to maintain than a hard-coded build script (or Ant script) would, is it buying us anything?
No tool is the right answer for all problems, and most tools are the right answer for some. Almost any tool can be extended, with sufficient skill and time, to do anything. That doesn't make it the right tool.
Moreover, the more smarts that we build into our Maven configuration, the more that we will rely on needing to hire developers with serious Maven chops; I would rather hire developers with serious programming chops. Maven skills do not generalize to other problems; Java, Groovy, Ruby, Perl do.
What I'm saying here is that I think there is a problem in principle with basing a build on a tool that requires deep domain-specific knowledge to use well; that I think Maven is such a tool; that, further, I think even with solid knowledge Maven is based on premises (such as the idea of a SNAPSHOT) that don't model the world well (pre-release output of a CI process is not equivalent to locally-built output of a local change); and that, finally, there is only room in the world for at most one convention-based tool, and we already have more than one.
Put differently, I'm saying that I think even if I knew how to use it well, Maven would not be the right tool.
Javasaurus
Nothing about dinosaurs; apologies to any 6-year-olds I've misled.
There's an interface in the Java libraries called "Runnable", that just packages up the idea of "some code that you might want to run." This is handy when writing algorithms like "do some preparation; then run whatever it is the client wants to run; then do some clean-up." It's a way to hand a series of program steps from one module to another without having to know in advance what those steps are. ("Closures," much debated in Java, are another way of doing this.)
Runnable defines one method, "run". But the "run" method doesn't allow for the possibility of failure. I needed something similar, that was allowed to communicate failure (by throwing an exception). I knew there was something, but what? A search of the likely spots didn't turn up what I was looking for.
It would be really cool if there was a Thesaurus of Java, a tool or a web site that would let me type in "Runnable" and would come back with all the other things that were kind of like "Runnable." In this case the answer, provided by my colleague Hung, is "Callable." Doh.
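A minimal illustration of the difference (these are the real java.lang and java.util.concurrent types, in Java 5 syntax): Runnable.run() returns nothing and declares no checked exceptions, while Callable.call() returns a value and is allowed to fail.

import java.util.concurrent.Callable;

public class RunnableVsCallable {
    public static void main(String[] args) throws Exception {
        Runnable runnable = new Runnable() {
            public void run() {                      // no return value, no checked exceptions
                System.out.println("ran");
            }
        };
        Callable<String> callable = new Callable<String>() {
            public String call() throws Exception {  // may fail, and returns a value
                return "called";
            }
        };
        runnable.run();
        System.out.println(callable.call());
    }
}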
A similar task that comes up for me a lot is that I've got a This, and I know it's possible to convert it to a That, but I'm not sure how. For instance, let's say I've got an Eclipse IFile, and I want to convert it to an ordinary Java File. The mapping isn't perfect, but basically, given an IFile there is a chain of methods you can call that will either get you the corresponding File or tell you it doesn't exist. But what is that chain of methods?
There's a finite number of methods that take an IFile as an argument (or receiver). There's a finite number of methods that produce a File. So there's a finite, although very large, possible graph between them - for instance, you could imagine calling something like IFile.getURL() and then URL.getFileDescriptor() and then FileDescriptor.getFile(). (I just made those names up, that's not the real answer.)
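For the record, the chain that actually works is, as far as I know, short; note that getLocation() returns null for resources that aren't on the local filesystem, which covers the "doesn't exist" case:

import java.io.File;
import org.eclipse.core.resources.IFile;
import org.eclipse.core.runtime.IPath;

public class FileConversions {
    public static File toJavaFile(IFile workspaceFile) {
        IPath location = workspaceFile.getLocation(); // null if not local
        return (location == null) ? null : location.toFile();
    }
}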
Most of the paths through the graph will be wrong, and some will be long and some will be short. But you could use the same sort of semantic analysis tools that are used for natural language translation, feeding off existing code (such as the Eclipse code base, in this example), to inform the graph of common versus uncommon pairings. I'd enter my start and end types, and I'd see a view of the top ten paths through the graph of possible method calls connecting them, perhaps even with links to existing code examples where something like that path had been used before.
I tried Googling for "Java Thesaurus" to see if this existed already, but all that comes up is Java code for writing thesauri.
Thursday, January 8, 2009
Build languages
I mentioned a while ago that I thought the world still needed a good language for build tools. One sign that no such language exists is that there is still a 1:1 correspondence between build tools and their languages. There is no such thing as a portable build description, although some tools have a limited ability to import build scripts meant for other tools.
What makes a build language different than other languages? That's worth a long post. But for now just a couple short thoughts:
Builds are special in that they are almost always slow. Building software is, ironically, one of the hardest things that a computer can do. It's common for a build of even a mid-size project to take several hours, and running the acceptance tests can take most of a day.
Also, builds typically have significant side effects. They modify the world. Running a build may cause a public web site to be updated; it may cause gigabytes of new files to be copied to slave machines around the world; it may send emails to thousands of people; it may cause many other dependent projects to become inoperable.
This means that you can't just tweak and re-run if there's a problem. Debugging an intermittent problem can take weeks, instead of hours, and it's typically a very public process.
Most computer languages have no way to express the idea that some steps are slow, or that some steps have side effects, or even that some steps are dangerous. A good build language would do that intrinsically.
Error handling is a difficult part of any program, but it is critical for build programs, because error conditions are very common and can have damaging long-term side effects. So a good build language would make error handling easier. For instance, it should be easy to associate an action with a set of restrictions, like "execute the update step, unless it would result in deleting any existing files." This sort of thing is not impossible in existing languages, but it requires more code than anyone would actually write (or get right).
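Here's a rough Java sketch of that restriction - all the types are invented - mostly to show how much scaffolding has to exist before the one line of actual intent can be written:

import java.io.File;
import java.util.Set;

public class GuardedBuild {
    // Every action would have to self-report its destructive effects;
    // implementing that honestly is the hard part this sketch glosses over.
    interface BuildAction {
        Set<File> filesItWouldDelete();
        void execute();
    }

    static void runUnlessDeleting(BuildAction update) {
        Set<File> doomed = update.filesItWouldDelete();
        if (!doomed.isEmpty()) {
            throw new IllegalStateException("refusing to run: would delete " + doomed);
        }
        update.execute();
    }
}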
Whereas most computer programs are designed to be run many times without change - for instance, hundreds of thousands of people will post millions of blog entries before the Blogger.com software has an update - build programs have to change much more often per execution. A build script might be run a few times a day, and updated every few days. So the writing and debugging of the build program, as an activity, are nearly as important as the execution.
So, a good build language should be closely integrated with writing and debugging. Modern computer languages are typically compiled rather than interpreted, meaning that you have to finish writing the program (or at least a complete, self-consistent, self-contained subset of it) before you can begin executing it. The opposite extreme of a compiled language is a command shell, which is an interactive environment that lets a user perform arbitrary commands one at a time; a command shell may support scripting but it does not build up a "program" out of the executed commands.
In between these extremes are interpreted languages, in which there is a program but it runs within an "interpreter" that lets the user run one step of a program before the next step has even been written. In an interpreted language, it's possible to write and execute a program simultaneously: when you finish, you've got a program and you've also got its results. A good build language should be interpreted, not compiled. And the interpreter needs to be able to tell the user what the next step would do, before doing it: sort of like print preview. This is an attribute of the build tool, not the build language, but it places restrictions on the build language.
Thursday, January 1, 2009
I Want Pictures
With all our interesting weather of late, I've been following Cliff Mass' weather blog. I've been fascinated and impressed by the great data visualizations that meteorologists have to work with. They have a head start because their data naturally maps onto a two-dimensional plot, but they've managed to add many more dimensions in ways that even a non-meteorologist can quickly comprehend. For instance, consider the first image from his blog post of 1/1/09.
In addition to the two dimensions of space, and the overlay of geopolitical boundaries, this image shows sea-level pressure, temperature, and the vector of wind speed and direction. And it's just plain pretty to look at, too. I'd love to have a job that involved looking at pictures like that all day.
What would the software equivalent be, I wonder? Could I combine profiler data with a dynamic class diagram from UML? What if I overlaid a metric of function complexity on top of that?
The visualization tools for software are pretty weak, when you consider that all the information is already in the computer (we don't need weather satellites to get our data). It might be because software is an abstraction that doesn't easily lend itself to a 2-D layout like weather data does, but I think it might also be that software engineers are by nature less visually oriented. I think I'm more of a visual thinker than most, but not all, of the developers I've worked with. I'm not really comfortable with something until I can draw a picture of it.