Thursday, December 18, 2008
It rarely snows in Seattle, but it is snowing today; we've gotten about 4" in the last few hours. I live at the top of a steep hill so this basically means I can't leave home for the next few days. Fortunately or unfortunately, I work from home. Thus, the view from the Seattle offices of Terracotta:
Sunday, December 7, 2008
Through A Glass, Darkly
Although I get paid to write software, most of my time is spent understanding other people's software. I find that difficult: the available information is usually fragmentary, inconsistent, and more than I can hold in my head at one time anyway. It's like trying to read a mural on the side of a building, through the holes in a construction fence, as I drive by. I get little snapshots of partial information and I have to try to piece them together in my head. Sometimes the big picture I develop is entirely wrong, or missing big chunks.
Example: I've been working with Hibernate for more than a month now, but I still don't really understand exactly how it actually works. I only discovered tonight that it creates proxies to camouflage and control all my objects. This is sort of like wondering why your friends are acting a bit odd and then discovering that everyone on the planet except you has been replaced by space aliens.
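Here's a minimal sketch of what I mean - the Order entity and the surrounding plumbing are made up, but the proxy behavior is Hibernate's lazy loading at work:

import org.hibernate.Session;

// Minimal sketch, not from any real project: assume a mapped entity class
// Order with a Long id, and a Session obtained from somewhere else.
public class ProxyPeek {
    public static void peek(Session session, Long orderId) {
        // load() hands back a lazily initialized proxy rather than the entity
        // itself; the database isn't even touched until a property is read.
        Order order = (Order) session.load(Order.class, orderId);

        // The proxy is a runtime-generated subclass of Order, so it still
        // passes instanceof checks - that's the camouflage.
        System.out.println(order.getClass().getName()); // e.g. Order$$EnhancerByCGLIB$$...
        System.out.println(order instanceof Order);     // true
    }
}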
Java has so dang many layers of obscurity that it is really hard to figure out precisely what code is actually being executed. The application code you write is just the tip of the iceberg. What with Spring, Hibernate, Terracotta, JUnit, the HotSpot compiler, and all the other frameworks and code-enhancement tools we use, the program code itself is almost just a hint to the system. Maybe we're getting closer to the holy grail of being able to tell the computer what we want done, rather than how to do it.
Friday, December 5, 2008
Maven Continues to Suck
Maven is supposed to be a tool to support collaborative software development.
Maven is based on the concept of software modules, each of which is versioned, and which have interdependencies. Say, for instance, that version 1.0.2 of MyThing depends on version 2.4.1 of YourThing. Now, what if I want to make some changes to both YourThing and MyThing? Well, there's a special version called "SNAPSHOT" - as in 2.4.1-SNAPSHOT - that means "the latest and greatest up-to-the-minute build of YourThing version 2.4.1". Setting aside the question of whether a "version of a version" is an inherently flawed concept, this introduces a big problem: it does not distinguish between changes I make and changes somebody else makes.
Case in point: another developer at my company and I are both working on modifications to a module that our other stuff depends on. My modifications won't ever be shared with the rest of the world - they're just for my temporary purposes - but I need them nonetheless. The other developer, however, made some changes and committed them to version control. When I went to do a build of MyThing, Maven checked dependencies and noticed that there was a new SNAPSHOT of YourThing.
In a sane world, a collaborative software tool would notice the conflict, and perhaps notify me: "a newer version of YourThing is available, but it conflicts with your local changes - do you want to upgrade and lose your changes, or keep the stale but familiar stuff you've got?" The default, of course, would be to keep what I've got; after all, if I made some local changes, it was presumably for a reason.
Not Maven, though. Because I said SNAPSHOT (so that I could make changes locally), Maven silently and transparently discards my local version and updates me to somebody else's changes, at a random time of its deciding (typically, in the first build I do after midnight, of any project that happens to depend on YourThing).
Fortunately, the other developer's changes contained a serious bug. I say fortunately because otherwise I might not have noticed that my changes had been silently discarded, and I might have spent a lot of time trying to figure out why my code, which used to work, no longer did.
What kind of collaborative software development tool is it that can't gracefully handle the simple case of two people working on a shared module?
Maven is not really a collaborative software development tool, it seems. Maven is a tool for letting one person develop a single module of code, with external dependencies on an otherwise-static world. That does not describe any software project I have ever worked on.
I keep hoping to see the light, and discover that I'm wrong about Maven. But it keeps on sucking.
Wednesday, December 3, 2008
Code Reviews
A spare set of eyes is always useful, so most software teams have some practice of code reviews. There is huge variation from team to team in how code reviews are actually done, and in my experience how any given team does it is an axiom of the corporate culture; suggesting a different approach is met with the same sort of response as if you'd suggested, say, using an abacus instead of a personal computer. (I haven't tried this at my current employer, Terracotta, yet.)
The range of practices I've been directly involved with includes:
- No review at all. I haven't seen this in a long time.
- Self-initiated. If you want a review, you find a peer and beg some of their time. Surprisingly, this has been my experience working on Eclipse, and at a number of other teams; it may reflect resource constraints or perhaps just the fact that I've been in the industry for a long time and people foolishly think I'm less likely to write bad code.
- Recommended review for major work, especially for new developers. This too is pretty common in my experience, and is the approach taken at Terracotta.
- Mandatory review of all code before it's committed to version control. This approach was taken by a team I worked on in the '90s at Microsoft, and resulted in some of the best code I've ever written or seen. (The project made it through beta but was then axed by management; it conflicted with corporate strategy.)
- Pair programming, two sets of eyeballs at all times. I've only tried this a little bit; it did seem like we wrote pretty good code, but boy was it exhausting.
Along with that range is the question of how, exactly, the code gets reviewed. Approaches I've tried include:
- Over my shoulder. The reviewer looks over my shoulder as I walk through the code on my monitor, explaining it.
- Over the reviewer's shoulder. Same, except that the reviewer is the one driving.
- Independent review, meet with feedback. Each reviewer is given the code to review, and given some time (a day, perhaps) to review and come up with comments; we then meet to collect together the feedback. The meeting is typically moderated by someone other than the developer, and there may also be a designated scribe to collect the comments.
- Collective review. Like "over my shoulder", except that there is more than one reviewer. Having multiple reviewers makes it psychologically harder for any one reviewer to change the flow of the review: if someone wants to go back and look at a previous file, they have to interrupt someone else.
And then, there's the question of media and presentation. If code is being given to reviewers, it may be presented as a set of changes to the existing code, or as a finished document; it may be presented as hard copy or as files. The review may be conducted in person, over a video link (like WebEx or Elluminate), by audio conference, or by email.
The costs and benefits vary a lot depending on approach. I've gotten the best reviews out of the "independent review" approach: it creates a social dynamic where there is some shame for any reviewer who fails to find a particular bug that others do find, so there's incentive for thoroughness, but it also lets each reviewer bring their skills to bear in whatever way and at whatever pace works best for them. The collective feedback (whether in a meeting or in group email) is a good way to learn advanced techniques, subtle errors, and so forth.
But this approach is also expensive - I used to budget half a day to participate in a review like that, between the reviewing and the feedback meeting, and if three or four people have to spend half a day for each developer who wants to check something in, it's easy to spend all your time reviewing. Also, this strategy works poorly for small changes or even large refactorings of existing code, because reviewers tend to get distracted by things that really should be fixed, but that have been that way forever and are not in the scope of the current changes.
The opposite end of the curve is probably the "go borrow someone's eyeballs if you need them" sort of review. This misses a lot of errors, but sometimes you get so deep into a problem that you lose track of the obvious. A second set of eyes can see that you failed to close an open stream, or that you're doing something that a library routine already exists for. A second set of eyes will probably miss subtleties like locking on the wrong object.
Personally, I'm afraid that of all the possibilities, I get the very least out of collective review over video conference. The audio and video quality is generally very poor, so it's hard for me to follow what anyone else is saying; if my attention wanders for a moment, it's socially awkward to ask people to back up, so I tend to lose track of crucial details; and since it's the code owner who drives the review, any assumptions they have about what is and is not important become shared. Often bugs lurk precisely within those assumptions.
Finally there's the question of what sort of comments matter. In the most formal review process I was part of, we prioritized comments exactly the same way we'd have prioritized bugs:
- pri 1 was a bug that would make the code fail significantly or a design flaw that would prevent the code from working with the rest of the product
- pri 2 was a more minor bug or design infelicity, or possible performance improvements
- pri 3 was a bug that wouldn't actually affect the code as it stood but that might cause problems during maintenance or if someone else tried to use it in a different way
- pri 4 was typographical stuff, like use of whitespace, naming of internal variables, and the like.
Using a structure like that, we went through each module of code, collecting and discussing comments in priority order. Pri 1 bugs had to be fixed before the code could be checked in, and generally another review meeting would be scheduled to review the fix, depending on the developer's track record. Pri 2 bugs were usually fixed, depending on the amount of effort required. Pri 3 bugs might be deferred or ignored. Pri 4 comments would generally be relegated to email - that is, the scribe would just collect them from the individual reviewers and tack them onto the followup email without them ever being discussed in person, to save time and avoid religious wars; the developer would be expected to make a personal decision about whether to implement the suggested change. Doing it this way also let us directly compare the relative effectiveness of reviewing versus testing.
I'm against any review process that takes a lot of people's time to do a shallow review. Either review thinly and cheaply, or deeply and expensively, but don't fool yourself into thinking that a lightweight review process is finding all the interesting bugs.
Sunday, November 30, 2008
Imbalanced outputs
Although I earn most of my living writing software, I also work on audio whenever I get the chance. A few days ago one of my clients, a bass player, came to me with a piece of equipment that was causing a buzz.
The equipment in question was powered by an AC wall adapter, and it had a balanced output for sending the signal to the PA system. Balanced signal transmission is used in professional audio in order to reduce induced noise problems: the idea is that the signal is sent along two wires simultaneously, but with opposite polarity. At the receiving end, one voltage is subtracted from the other, eliminating any common noise that might have crept into the cables and leaving only the intended signal. The problem my client had was that whenever he used this output, it caused a terrible buzz in the PA system - rather counterproductive.
It turns out that a lot of what the industry calls "balanced outputs" really aren't. There's a popular belief amongst equipment designers that to make a balanced output, what you need is two signals in opposition - that is, you split the intended signal, send it unaltered onto one wire, and then flip its polarity (that is, multiply the voltage by -1) and send that to the other wire.
This is malarkey, as has been pointed out by luminaries like Douglas Self and Bill Whitlock. Having two voltages in opposition is irrelevant; if that mattered, then when the signal was zero (dead quiet), noise would no longer be eliminated. What actually matters is that the impedance on the two legs is balanced, so that any induced common-mode noise is the same on the two legs.
This is pretty old news, but has been largely ignored in the industry. So I was not surprised to discover, on tracing the circuit in my client's equipment, the following "balanced" output stage (I've eliminated a few unimportant details, like DC blocking capacitors):
It does just what it was meant to - the voltage on pin 3 will always be -1 times the voltage on pin 2. But look at what happens when the AC adapter is plugged into the wall:
There's always a little bit of leakage between the windings in a power transformer, perhaps a few picofarads. 120V from the wall leaks through that capacitance, into the power supply, into the signal ground. From there, I've traced the two main current paths to the output. Notice that one path goes through about 21.8k of resistance before getting to the output, while the other sees only 1k.
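To put rough numbers on it (this is a back-of-the-envelope sketch, not a measurement - the 2 pF leakage figure is an assumption), the arithmetic goes like this:

// Back-of-the-envelope estimate: assume about 2 pF of winding-to-winding
// leakage at 60 Hz, and the two source impedances traced above.
public class HumEstimate {
    public static void main(String[] args) {
        double mains = 120.0;              // volts
        double freq = 60.0;                // Hz
        double cLeak = 2e-12;              // farads (assumed)
        double r1 = 21800.0, r2 = 1000.0;  // ohms, from the traced circuit

        // Leakage current through the transformer's inter-winding capacitance.
        double iLeak = mains * 2 * Math.PI * freq * cLeak;  // ~0.09 microamps

        // The same common-mode current sees very different impedances on the
        // two legs, so part of it shows up as a differential voltage.
        double vDiff = iLeak * (r1 - r2);                   // ~1.9 mV

        System.out.printf("leakage current ~ %.2f uA%n", iLeak * 1e6);
        System.out.printf("differential hum ~ %.1f mV%n", vDiff * 1e3);
    }
}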
This 22:1 imbalance, combined with the tiny leakage through the transformer, was enough to generate a couple of millivolts of differential signal - a lot, in terms of audio signals, which rarely exceed one volt. By replacing this output stage with one that was truly impedance-balanced (based on an SSM2142 chip), I was able to reduce the noise signal by a factor of 30 - enough to get it below the noise floor of the unit.
The cost of the extra components was about $8 retail. But frankly, if cost were an issue, it might have been just as good to get rid of the entire inverting amp stage and simply connect pin 3 to ground through a 1k resistor. The impedances would be balanced. The differential signal voltage would only be half as big as before, but the common-mode noise would be reduced by much more than half, so the signal-to-noise ratio would be better than with the "balanced" output the designer came up with.
Many software errors come from using a common design pattern without understanding the problem it's aimed at. This was the same mistake in hardware.
Sunday, November 16, 2008
Comments
I often hear that comments in source code are A Bad Thing. Comments are evil because they don't accurately describe the code they apply to; because the code gets modified and the comment doesn't; because good code is self-documenting and therefore doesn't need comments; and because you need to read the code anyway to understand what it does and so comments just get in the way.
Horseshit.
This is roughly like saying that synchronization is evil, because it's often done incorrectly, because it gets broken when people update the code without fixing the synchronization, and because good code uses constructs like immutability that don't require synchronization.
If we treated comments as being as important as synchronization, they'd live up to their end of the deal just fine. There is nothing inherent in the idea of a comment that renders it impotent. Think of comments as being like error-handling code or synchronization: bulky, hard to write, even harder to test, but crucial to reliability.
I think the real reason so little code is commented is simply that most code is written in short bursts of effort by highly productive individuals, and while they're writing it, they understand their own assumptions well enough to not need the comments themselves, and they're in too much of a hurry to worry about the next fellow. And because this is what new programmers then see all around them, this is how they in turn learn to program.
If we built buildings this way, instead of having architectural drawings, the carpenters would come in after the foundations were poured, look at where the rebar was sticking out, and frame the walls where it looked like they should probably go. The resulting buildings would be ugly, short, and tend to collapse in a few years. Much like software, in fact.
If I were to design a programming language, any uncommented abstract definition (for instance, a method definition in an interface class) would be a compiler error. Yes, people would work around it by putting in useless comments, but it would be a start.
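Something like this is the minimum I have in mind - a hypothetical sketch, since no compiler I know of enforces it:

// Hypothetical: in the language I'm imagining, stripping the comment block
// off an abstract method like this one would be a compile error. Account is
// just a placeholder type for the example.
public interface AccountStore {
    /**
     * Looks up the account with the given id.
     *
     * @param id must be non-null; ids are never reused after deletion
     * @return the account, or null if no account with that id exists
     */
    Account findAccount(String id);
}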
Tuesday, November 11, 2008
Control Theory
In the last post I mentioned that we programmers don't generally get training in control theory. This is too bad, because I think learning to recognize the behavioral modes of feedback-controlled systems can have a lot of practical benefit for us.
Suppose that you have the two Java classes B and Main shown below. Without running them, can you tell how the program will behave?
If you do run it, you'll see that the output is spread across a range of numbers. Without changing class B, how would you make that range smaller? What properties of the code determine the range?
This program is an analog of an elementary problem in control theory, so basic that even my espresso machine implements a solution to it. But I suspect that most computer programmers, coming upon behavior like this while performance-tuning an application, wouldn't immediately recognize it.
public class B implements Runnable {
    private static final int ELEMENTS = 100;
    private double[] v = new double[ELEMENTS];
    private boolean state;
    private volatile boolean done = false;

    public void run() {
        while (!done) {
            synchronized (this) {
                v[0] = state ? 200.0 : 0.0;
                // propagate gradually through the array
                int i;
                for (i = 1; i < v.length - 1; ++i) {
                    v[i] = (v[i - 1] + v[i] + v[i + 1]) / 3.0;
                }
                v[i] = (v[i - 1] + v[i]) / 2.0;
            }
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {}
        }
    }

    public synchronized int get() {
        return (int) v[v.length - 1];
    }

    public synchronized void set(boolean state) {
        this.state = state;
    }

    public void finish() {
        done = true;
    }
}
public class Main {
    final Object l = new Object();
    final long end = System.currentTimeMillis() + 60000; // 1 minute

    public static void main(String[] args) {
        new Main().run();
    }

    private void run() {
        B b = new B();
        new Thread(b).start();
        while (System.currentTimeMillis() < end) {
            double t = b.get();
            boolean state = t < 50.0;
            System.out.println("t = " + t +
                    " - state is " + (state ? "true" : "false"));
            b.set(t < 50.0);
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {}
        }
        b.finish();
    }
}
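For what it's worth, here is one answer I'd offer (not necessarily the intended one): the swing comes from Main's on/off controller acting on a badly lagged measurement - a change at v[0] takes a long while to diffuse out to the far end of the array - and then sampling that measurement only once a second. The drive levels (200 and 0), the threshold (50), the length of the array, B's update rate, and Main's sampling period all shape the range; the sampling period is the knob that Main owns. A sketch of that change, with the 100 ms figure chosen arbitrarily:

// One possible tweak, not the only answer: same bang-bang rule, shorter
// sampling period, so the controller reacts sooner after the lagged
// reading crosses the threshold and overshoots less in both directions.
public class FasterMain {
    public static void main(String[] args) {
        B b = new B();
        new Thread(b).start();
        long end = System.currentTimeMillis() + 60000;
        while (System.currentTimeMillis() < end) {
            double t = b.get();
            System.out.println("t = " + t);
            b.set(t < 50.0);          // unchanged decision rule
            try {
                Thread.sleep(100);    // was 1000 ms
            } catch (InterruptedException e) {}
        }
        b.finish();
    }
}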
Wednesday, November 5, 2008
Self-reference and Statistical Density
Programming languages are ways of mapping between three things: a set of problems in some problem domain; a set of envisioned solutions to those problems in a computer programmer's mind; and a set of instructions that can be executed by a computer.
Computers are very stupid, and not at all lazy. Programmers are very lazy, and hopefully quite smart. So, the language has a big gap to close.
For example, suppose I have an object that stores some numbers "a", "b", ..., "y", and "z", and I want to provide "fudge" and "muddle" operations on these numbers. I could write functions called fudgeA, fudgeB, muddleA, muddleB, and so on. Boring! Once I've written the code for fudge and muddle for A, I'd really like to just say "and do the same thing for the rest". So I want a language with that sort of expressive power, a language that can contain concepts like "the rest" and "the same thing". A language, that is, that can refer to itself, not only to the entities in a problem domain.
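Here's a sketch of the usual workaround in plain Java - a map of named values, so that "the same thing" becomes a loop - which is exactly the kind of contortion I'd rather the language spare me (the fudge and muddle bodies are placeholders):

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the workaround, not of the language feature I actually want:
// keep the named numbers in a map so "do the same thing to the rest" is a
// loop instead of twenty-six fudgeX/muddleX methods.
public class Numbers {
    private final Map<String, Double> values = new LinkedHashMap<String, Double>();

    public Numbers() {
        for (char c = 'a'; c <= 'z'; c++) {
            values.put(String.valueOf(c), 0.0);
        }
    }

    // Placeholder operations - the point is that each is written once.
    public void fudge(String name)  { values.put(name, values.get(name) + 1.0); }
    public void muddle(String name) { values.put(name, values.get(name) * 0.9); }

    public void fudgeAll() {
        for (String name : values.keySet()) {
            fudge(name);
        }
    }
}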
Taken to the extreme, what this leads to is programs of maximal statistical density: that is, programs that contain no repetition, but lots of self-reference and automatic code generation. There may in fact be many nested layers of this stuff.
As with Zappa's opus of statistical density, The Black Page, the problem in practice is that this becomes devilishly hard to play. Highly self-modifying programs save the original programmer's time, but generally make life for everyone else worse. The goal of making code more concise is seriously flawed. The goal should be to make code more understandable, not more concise.
I say "generally", because there's one case where it works. That's when the self-referential features of the language fit with the way that programmers think about solutions. For instance, "do the same thing to all these numbers" is the way that a programmer would typically describe the solution; to have to spell out each function separately isn't just repetitive, it makes the program code be less like what's in the programmer's brain in the first place.
My point is that I think language designers should learn more about how human minds work, and should prefer constructs that are cognitively native while shying away from constructs that require unusual intelligence and training to master. Computers do not think like people, so programming languages need to.
There is a lot of overlap between computer science and cognitive psychology, but so far as I know this premise has not been applied; quite the opposite, it seems that new programming languages are more often used as a vehicle for showing how smart the programmer is.
Specifically, there are certain constructs that are demonstrably hard for most people to reason about. The ability to reason about recursion, for instance, is often used as an interview test to separate the "real programmers" from the mediocre. Recursion is actually one of the few things that computers innately know how to do, so it's not surprising that it's in computer languages too; but if it makes it so that only really smart people can write good programs, then I think it's something we should be trying to get rid of, not use more of. If a car is hard to steer for people of normal strength, we give it power steering, we don't just assume that Driving is a job open only to the abnormally strong.
Of course, I'm blowing into the wind here, because cognitive psychology has not held up its end of the deal. Language designers can't learn much more about how the mind works, because cognitive scientists haven't figured it out yet either. We know very little about what processing constructs are "native" to the mind.
But as a starting point: three things that seem hard for most people are feedback loops (that is, control theory, home of things like proportional-integral-derivative algorithms); recursion (especially if the rules change depending on the depth); and n-dimensional geometry. Electrical and industrial engineers get advanced training in handling feedback loops; I've never met a computer scientist who has, and I've never met a non-specialist who has. Physicists and mathematicians work with n-dimensional manifolds, but EEs and programmers rarely go beyond three. Programmers get advanced training in recursion, but no one else does. In each case, the fact that training is required and that these are all exclusive fields says something about the cognitive difficulty.
The question is not how we can make programming more efficient for a vanishingly small number of people. The question is how we can make it more efficient for a larger number of people.
Monday, October 27, 2008
Maven Sucks
Tools like Maven make me want to quit the software industry before I die of frustration. I've wasted basically the last two days screwing around with problems related to Maven. In a nutshell, Maven does not provide a consistent or reliable build environment.
Modern software is not written from scratch; it's assembled from myriad components from other vendors, in much the same way that car manufacturers buy their parts (seats, brakes, wiper motors, radios) from other companies. And just like a car seat is itself made of springs and fabric and metal bits that come from yet other manufacturers, each software component may itself be assembled from other components.
Maven is a tool to help manage all the little pieces that fit together into a software product - to manage the fact that my product consists of components X, Y, and Z, which in turn require the presence of P, Q, R, and S, which in turn require A, B, and C; to fetch those components from wherever they come from if they're not already on my computer; and to make sure that the versions of these different components are all compatible.
But it sucks. Maven wants to update things without me asking, so that if I run a test twice in a row I don't necessarily get the same results. Maven wants to download things without me asking, so a task that took 15 seconds the last time might take a few minutes, or fail utterly (if I lose my network connection), leaving me in an unknown and inconsistent state where I can't build at all. And most frustratingly, Maven is itself highly modular, meaning that it's not all downloaded until it's needed, meaning that in the event of network trouble it can again fail utterly. Networks are not yet reliable enough for that to be okay.
Further, Maven seems to do a crappy job of understanding and resolving or reporting version conflicts. I'm extending some test code that's supposed to exercise version 3.2.5 of product HappyWidget. The test code therefore tells Maven that it needs HappyWidget version 3.2.5. But the test code also needs OtherThing version 2.7.1, which needs FussyBit 2.0.1, which needs HappyWidget 3.1.2. Whoops! This is a version conflict. It would have been easy to fix, except that Maven didn't tell me there was a problem; it just silently deployed HappyWidget 3.1.2. So all along this test was actually testing HappyWidget 3.1.2. What's really scary is that when I uninstalled and then reinstalled everything, without explicitly making any changes in my dependencies, the behavior changed - now it's deploying 3.2.5, which is nice for my test but not so nice for the corresponding code that is supposed to test HappyWidget 3.1.2.
The first requirement of a build tool is that it should behave predictably. Maven fails.
It's late and I'm not going to spend more time saying all the hateful things that Maven has done to me in the last two days. I will blog in the future about what I think the solution to this should be.
Thursday, October 23, 2008
Synchronization defines structure
Consider the following code snippets, from some Aspectwerkz code:
public synchronized ClassInfo[] getInterfaces() {
    if (m_interfaces == null) {
        m_interfaces = new ClassInfo[interfaces.length];
        // etc.
    }
    return m_interfaces;
}

/**
 * Returns the super class.
 *
 * @return the super class
 */
public ClassInfo getSuperclass() {
    if (m_superClass == null) {
        m_superClass = m_classInfoRepository.getClassInfo(superclass.getName());
        // etc.
    }
    return m_superClass;
}
Notice that the first method is synchronized, and the second is not. How come? Is this a bug in Aspectwerkz? Both methods seem to require synchronization, because the "check something and then conditionally change it" pattern is otherwise unsafe.
My inclination is to say that it's just a bug. But it might not be; there might be some external reason that the second method does not need to be synchronized here. For instance, it might always be called from within another synchronized block (though the fact that it's got public access scope makes this hard to enforce).
The point here is that synchronization (almost by definition) implies a particular call structure: to correctly synchronize a particular body of data, you need to know how that data will be accessed, by whom, in what possible sequences of events. You can't just put the "synchronized" keyword in front of every method, because over-synchronization leads to deadlock; you can't just synchronize every method that changes state, because you won't get the right visibility guarantees. You have to actually know what the code is doing, to correctly synchronize it.
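Here is the kind of thing I mean by over-synchronization leading to deadlock - a contrived sketch, nothing to do with the Aspectwerkz code above:

// Contrived sketch: every method is synchronized, and two objects call each
// other. If one thread runs a.pingPong(b) while another runs b.pingPong(a),
// each thread holds its own monitor and waits forever for the other's -
// a deadlock produced purely by blanket synchronization.
public class Chatty {
    public synchronized void pingPong(Chatty other) {
        other.reply();
    }

    public synchronized void reply() {
        // nothing interesting; acquiring the monitor is the point
    }

    public static void main(String[] args) {
        final Chatty a = new Chatty();
        final Chatty b = new Chatty();
        new Thread(new Runnable() { public void run() { for (;;) a.pingPong(b); } }).start();
        new Thread(new Runnable() { public void run() { for (;;) b.pingPong(a); } }).start();
        // Sooner or later both threads block inside pingPong, each waiting
        // for the lock the other already holds.
    }
}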
This is a huge problem for two reasons. First, while you're coding, you're changing structure, so it's hard to keep up; thus, synchronization bugs creep in. In the above example, it's possible that the second method was originally private (and always known to be called from within some other synchronized block), and then someone changed it to be public without updating the synchronization. Second, it makes it much harder to change code locally: you have to understand the overall behavior of the code in more detail than would otherwise be needed.
Which brings me to the main point: unlike a lot of code, synchronization is not self-documenting. It is simply too fragile and opaque. I cannot look at the above code and figure out what pattern the developer had in mind, what assumptions s/he was making. When maintaining code, I want to preserve assumptions or else systematically and thoroughly change them. I can't do that if I can't even discern them.
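What I'd settle for is having those assumptions written down next to the data they protect. A sketch of what I mean (the class is invented, and it borrows the ClassInfo type from the snippet above; the @GuardedBy annotation from the Java Concurrency in Practice crowd formalizes the same idea):

// Invented example: the locking rules are stated where a maintainer will
// trip over them, instead of living only in the original author's head.
public class ClassInfoCache {
    /** Lazily built interface list. Guarded by: this. */
    private ClassInfo[] m_interfaces;

    /** Lazily resolved superclass. Guarded by: this. May stay null for Object. */
    private ClassInfo m_superClass;

    // Callers need not hold any lock; both accessors take the monitor
    // themselves, and neither may be called while holding other locks that
    // ClassInfo construction might also want (deadlock risk).
    public synchronized ClassInfo[] getInterfaces() { /* ... */ return m_interfaces; }
    public synchronized ClassInfo getSuperclass()   { /* ... */ return m_superClass; }
}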
As a side note, isn't that Javadoc something special? Really, "returns the superclass" is easy to figure out from the method name. What I need to know from this method's documentation are things like "under what circumstances might it return a null?" and "are there initialization requirements before this method can safely be called?".
Wednesday, October 22, 2008
I Blame My Tools
Computer science was one of the things I studied a long time ago in school. We learned how well-chosen algorithms can operate on well-chosen data structures to achieve powerful results. The work is highly conceptual and rather intuitive - like calculating differential equations, it relies more on a flash of inspiration, of being able to see the problem in the right way, than on methodically "turning the crank". There are no good algorithms to generate good code.
That maybe described computer science, but it describes only a very small part of the day to day work of computer programming. The reality of computer programming, at least for me, is that most of my time is spent wrestling with tools and technologies that don't do what they're supposed to. Metaphorically speaking, I don't get to envision graceful bridges and soaring skyscrapers; instead I futz around with a load of concrete that won't set, my lumber delivery is delayed till next week, and the extension cord doesn't reach from the outlet to my power saw.
I'm trying to do a little bit of programming work today. I use the Eclipse programming editor. Somehow the shortcut I use to start multiple instances of Eclipse on the Mac got turned back into a plain text file - I have no idea how. I got that sorted after half an hour or so. Now I want to build my project but I can't because someone added a dependency on another module of code that I don't have. I downloaded that, and built it, but in so doing I triggered some sort of version check and now it's complaining that my version of Maven, the build tool, is impermissibly out of date. (That Maven versions matter at all is a sign that Maven is trying to do way too much.) So now I need to download and install a new version of Maven. This is what my day has been like, all day long.
I know, it's a poor craftsman who blames his tools. But I blame them anyway.
Follow-up: my command-line tar utility (the Mac equivalent of "unzip") won't recognize the format of the Maven download file. Finder won't let me copy the files to the directory they need to go in; I don't have permissions. On my own machine.
Follow-up #2: I used sudo to let me copy the files. I used diff to see if the settings file had changed. It shows me that every line is different in some mysterious way that is not evident from looking at the files - perhaps the line endings changed between Windows and Unix style? Anyway, ignoring that, I then futzed around trying to change my old symbolic link for Maven to point to the new copy. That took a bunch of googling because I don't know how to create and delete links on Unix. All this is just so that I can run the build tool, to build the project that the changes I'm supposed to be working on will affect. I haven't even begun to actually do the work I'm supposed to be doing. It's quarter till 5.
Friday, October 10, 2008
Synchronization and Relativity
Thinking about state as a way of reasoning about synchronization seems like a good approach. But the problem is, the concerns I have about synchronization are often about execution: will it deadlock? Will there be lock contention? Reasoning purely about state leads me to write programs where the data is always correct but nothing can actually complete. Both state and execution are important, and they're kind of like matter and energy, seemingly unrelated concepts.
Einstein managed to show that energy and matter were actually two sides of the same coin: that a certain amount of matter was, in fact, equivalent to a certain amount of energy, if you chose the right units. Particle physicists took this and ran with it, coming up with the idea of symmetry breaking and explaining what circumstances it takes for the coin to flip. I need someone to do the same for multithreading. I want a theory of multithreading relativity that explains how a given multithreading construct acts on execution and on state, and what the symmetries are.
For instance, if you protect state with a mutex (in Java, a "synchronized()" block), then you can debug deadlocks by asking the system what threads own what locks. But you can also protect state by waiting for a notification event; this is a common way to implement a reader/writer pattern. The result is similar: the state is protected at the expense of some thread being blocked. But when a system deadlocks because every thread is waiting for a notification, there's no way to ask the system which thread was supposed to send it.
You'll never make a program hang by removing a synchronized(), but you might make a program hang by removing a notify(). On the other hand, you'll never make a program corrupt data by removing a notify(), but you might make a program corrupt data by removing a synchronized(). Is this a real symmetry?
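To make that concrete, here's a toy guarded-block class of my own invention - not from any real codebase - where both constructs appear. The synchronized keyword is doing the state-protection half of the job; the wait()/notifyAll() pair is doing the execution half.

// A one-slot mailbox. The comments mark which construct is carrying which load.
class Mailbox {
    private Object item;   // the shared state being protected

    synchronized void put(Object o) throws InterruptedException {
        while (item != null) {   // guard condition, re-checked after every wakeup
            wait();              // execution: park until the state changes
        }
        item = o;                // state: mutated only while holding the lock
        notifyAll();             // execution: wake whoever is waiting on the change
    }

    synchronized Object take() throws InterruptedException {
        while (item == null) {
            wait();
        }
        Object o = item;
        item = null;
        notifyAll();
        return o;
    }
}

Remove the notifyAll() calls from this class and nothing is ever corrupted, but take() can wait forever - which is exactly the hang-versus-corruption trade described above.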
Similarly, it's possible (but maybe not algorithmically possible) to look at code and see at least the hallmarks of deadlockability: for instance, code that takes nested locks out of order. Is there an equivalent analysis for code based on wait() and notify()?
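The hallmark I mean looks like this in miniature (again, a made-up class):

// Two locks acquired in opposite orders by two methods: the visual hallmark of
// deadlockability. If one thread runs aThenB() while another runs bThenA(),
// each can end up holding one lock and waiting forever for the other.
class LockOrdering {
    private final Object lockA = new Object();
    private final Object lockB = new Object();

    void aThenB() {
        synchronized (lockA) {
            synchronized (lockB) {
                // ... work that needs both locks ...
            }
        }
    }

    void bThenA() {
        synchronized (lockB) {
            synchronized (lockA) {   // opposite nesting order
                // ... work that needs both locks ...
            }
        }
    }
}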
In electrical engineering, it's possible to take any circuit based on voltage sources and impedances, and convert it (using the Thevenin and Norton theorems) to a different but equivalent circuit based on current sources and impedances, that will behave just the same to an outside observer. Doing this is often very useful for understanding how a circuit works. Is this possible for synchronization? Given some code implemented with certain synchronization tools, is it always possible to reimplement that code using different synchronization tools, such that the behavior will be the same? What do the rules of that conversion look like, and what will I learn about the underlying synchronization pattern by doing this? Exactly what "behavior" will prove to be invariant?
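For one small data point, here is the mailbox from earlier converted from intrinsic monitors to java.util.concurrent's ReentrantLock and Condition. To a caller it behaves the same way; whether such a conversion always exists, and exactly what it preserves, is the question I'm asking.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// The same one-slot mailbox, re-done with different synchronization tools.
class MailboxConverted {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition changed = lock.newCondition();
    private Object item;

    void put(Object o) throws InterruptedException {
        lock.lock();
        try {
            while (item != null) {
                changed.await();
            }
            item = o;
            changed.signalAll();
        } finally {
            lock.unlock();
        }
    }

    Object take() throws InterruptedException {
        lock.lock();
        try {
            while (item == null) {
                changed.await();
            }
            Object o = item;
            item = null;
            changed.signalAll();
            return o;
        } finally {
            lock.unlock();
        }
    }
}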
Thursday, October 9, 2008
Synchronization is Hard
I read a newspaper story about a neuropsychologist who had a stroke. She recounted trying to call 911, but not being able to figure out which digit was which on the phone, or what the steps were to make a phone call. She knew what the right tool was, but she'd lost the cognitive tools to use it. All the while, being a neuropsychologist, she was aware of what was going on, and even somewhat fascinated by it, but also aware that her life depended on doing a seemingly simple thing that she nonetheless could not quite grok.
Synchronization is like this for me. At least I know I'm not alone - some of the smartest people I know have a hard time thinking about synchronization problems, and my industry is littered with bugs due to incorrect synchronization. But I always feel like there's a right way to reason about these problems; I know it's there, but I don't know what it is, and I can't even quite articulate why I can't think clearly about it. My hope is that one day I'll GET IT and then I won't be able to remember why I couldn't figure it out before.
I do know some wrong ways to think about synchronization, though. Any time I am reasoning about synchronization and I find myself thinking "okay, if two threads come in here at the same time...", I am about to make a mistake or go down a rat hole. This is how books always present the topic, but intuitively I think it's wrong - I don't believe you can think correctly about synchronization by thinking about execution.
Instead, I think it's probably better to reason in terms of state. "What states could this object be in when this variable is evaluated?" "If I modify the state, how will other threads discover the modification?"
Today I spent a couple hours trying, along with some people who are pretty good at these things, to come up with a good pattern for lazy initialization when the initialization routine is not trusted (e.g., when it might try to call back into the object being initialized). The real moral of the experience is twofold: first, we each wrote routines that we thought were good and that were promptly found wrong by the others; second, although I think we did end up with two valid solutions, I'm not sure how to PROVE that they're valid.
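I won't try to reproduce the routines we wrote, but one shape that at least avoids the deadlock half of the problem - a sketch of my own, not what we actually ended up with - is to run the untrusted initializer while holding no locks at all, and then publish its result atomically. A callback can't deadlock on a lock we aren't holding; the price is that the initializer may run more than once, with the losers' results thrown away.

import java.util.concurrent.atomic.AtomicReference;

// Sketch only: run the untrusted initializer with no lock held, then publish the
// result with a compare-and-set. If two threads race, one result wins and the
// other thread's work is discarded. Assumes compute() never returns null and
// that running it more than once is tolerable.
abstract class LazyRef<T> {
    private final AtomicReference<T> ref = new AtomicReference<T>();

    protected abstract T compute();   // the untrusted initialization routine

    public T get() {
        T current = ref.get();
        if (current != null) {
            return current;
        }
        // No lock is held here, so if compute() calls back into get(), the worst
        // case is redundant work rather than a deadlock.
        T candidate = compute();
        ref.compareAndSet(null, candidate);
        return ref.get();
    }
}

Whether this counts as one of the "valid" solutions, and how you would prove it, is of course the part I still don't know how to do.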
Wednesday, October 8, 2008
Hibernate locking
To continue the last topic: I'm working on a program that lets examinations be taken online. It uses Hibernate to get stuff from a database - for example, we get a list of possible answers to an exam question. The list is implemented by Hibernate, so that it doesn't actually read the database until the first time someone tries to access the contents of the list. But that fact is invisible to our code - to us it just looks like an ordinary Java list.
Now, like the ordinary Java collections, the Hibernate-provided collections don't have any built-in synchronization. If you tried to read the list from two threads at a time, it might try to initialize the collection twice, or not at all, or it might just crash.
This is not a problem for most uses of Hibernate. Generally any given Hibernate collection is only accessed by one execution thread, even though there might be zillions of Hibernate collections all referencing the same table in the database.
But in our application, we use the same collection from a lot of threads - perhaps tens of thousands, distributed over a cluster of machines with Terracotta. We never modify it, we just read it. Except for the very first time it's accessed. But there's no way to tell, from the outside, whether it's the first time - so we have to treat it as if it might be, every time.
This pattern does not work well. We could wrap the collections inside "synchronized" blocks, like the java.util.Collections$SynchronizedWhatever classes do; but that means that every time any thread tries to read an entry in the list, it has to wait for every other thread to get out of the way first, just because once upon a time one of those threads did the initialization.
Like I said in the last entry: replacing the implementation without changing the interface is powerful, but it means there's no way to know whether an operation is actually a read or a write. Locking is one reason why a caller cares about implementation.
The solution is to change the Hibernate code so that it does its own locking, using a read/write lock. Within a single method, it can take a read lock to figure out whether initialization is needed; if it is, then it gives up the read lock and takes a write lock to do the initialization. The first time through, a write lock will be taken, but (nearly) every time thereafter, it'll only need a read lock, which means that no thread will ever have to wait. In practice this is very effective. In one performance test we saw a roughly 200x speedup: latencies went from 4 seconds to 20 milliseconds.
The unfortunate part is that the code gets messier. This nice code:
int size() {
    if (!initialized) {
        initialize();
        initialized = true;
    }
    return size;
}
Becomes:
int size() {
    // Fast path: under the read lock, return the size if we're already initialized.
    readLock.lock();
    try {
        if (initialized) {
            return size;
        }
    } finally {
        readLock.unlock();
    }
    // Slow path: take the write lock and check again, since another thread may
    // have done the initialization while we held no lock at all.
    writeLock.lock();
    try {
        if (!initialized) {
            initialize();
            initialized = true;
        }
        return size;
    } finally {
        writeLock.unlock();
    }
}
Amidst all the locking and unlocking and multiple checking, it's hard to see what's actually being done by the method. Which gets back to my first post.
Labels: hibernate, Java, synchronization, Terracotta
Monday, October 6, 2008
The two R's
Computer software consists of a lot of instructions that read and write information to memory. The basic idea hasn't changed since World War II. Imagine a gazillion toggle switches, each of which can be flipped up or down. That's the memory, and then there's a processor that runs instructions, what we call the "code". The instructions are like "go look at switch 3,452,125 and see if it's flipped up. Then go look at switch 35,289 and see if it's flipped up. If they're both flipped up, then go find switch 278,311 and flip it down." And so forth. Some of the switches do things, like turn on a pixel on the computer screen. Others are just there to remember. We have nice tools so that we don't actually have to use numbers for the individual switches when we write the instructions, but under the covers that's exactly what's going on.
I work at a company called Terracotta Technology. We make software that connects many computers together so that they can solve problems bigger than one computer could handle. We make a sort of virtual computer that other people's programs can execute on. We fool the programs that run on Terracotta into thinking they're running on a normal computer. A program thinks it's flipping a switch on its own computer, when actually the switch might be on some other computer.
So, we care a lot about when the programs try to read from memory and when they try to write to it, because we have to intercept all those operations. If all they want to do is read, we don't have to do as much work. Flipping a switch, in our world, that's real work.
This idea of replacing what's under the covers without changing how it looks to the software that's using it is not original - it comes up over and over again in software; in fact, it's probably the single biggest idea the industry has ever had. Hibernate is another product that does something like this. Hibernate takes reads and writes to memory and supports them with reads and writes to a database, which is more reliable, persistent, and searchable. A programmer could just write instructions to talk directly to the database, but Hibernate makes it easier by hiding some of the complexity under the covers.
But the illusion breaks down. The code up above, in the Hibernate locking post, is what happens when you ask Hibernate how many things are in a list. Asking how many things are in a list shouldn't change it, right? That's common sense. But asking Hibernate how many things are in a list might change memory, because it might have to go fetch the information from the database and then save it in memory.
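In code, the common-sense view and the reality look roughly like this (a made-up stand-in of mine, not Hibernate's actual classes):

// Everything here is a made-up stand-in, not Hibernate's real code.
interface Database {
    java.util.List<String> loadRows();
}

class LazyList {
    private final Database db;
    private java.util.List<String> contents;   // stays null until somebody asks a question

    LazyList(Database db) { this.db = db; }

    int size() {
        if (contents == null) {          // to the caller this looks like a pure read...
            contents = db.loadRows();    // ...but the first call is also a write to memory
        }
        return contents.size();
    }
}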
So if you want to figure out whether an operation is a "read" or a "write" or both, you need to know who's responsible for performing it. And that's something that can change on the fly, because we're so good at replacing what's under the covers.
But why do we care about the distinction anyway? Are reading and writing really the only way to compute? Why should this implementational distinction matter to a programmer?
Is this the path to functional programming?
Labels: hibernate, Java, synchronization, Terracotta
Ignorance is the mother of invention
I spend too much of my time trying to figure out what pieces of software do.
In 2008, generations after we started programming, it is still almost always easier to write an okay prototype yourself than to use an existing component that someone else wrote. This is nuts. This is why software engineers like me still get paid as much as we do. I appreciate the pay, but it's holding back the industry.
The reason for this is that with very rare exceptions, we engineers suck at saying what the code we write does, and for that matter we also suck at coming up with code that does things that can be succinctly described. The rare exceptions have names like Josh Bloch.
Good software engineers excel at looking at other engineers' implementations - at the program code that they wrote - and figuring out what it does. (I am not very good at this, and I admire it in my coworkers.) That is because it's the only way to survive in the industry; there is no other way to figure out what a component does.
Imagine that, before going to the loo, you needed to trace the plumbing to make sure that it actually exited to a sewer rather than the drinking fountain. Imagine that before starting a rental car you needed to trace the ignition wiring to make sure that turning the key clockwise wouldn't break the timing belt. This is the state of the software industry.
When one piece of software calls another piece, it needs to give it some data, and then it expects some data back. The important things that it needs to know include: what is the range of data that it can safely pass in? What is the range of data that might be returned? Will anything change as a result of the call? Is it okay to call again, before the first answer comes back? If the rules are broken, how bad are the consequences?
The tools used in the mainstream software industry do not answer ANY of these questions. Instead we have "comments," which are bits of text written by the software engineer, hopefully describing the situation. This does not work, because comments (a) are not required; (b) are written by software engineers, who are often not very good writers; (c) do not have any validation; (d) are often not updated when the code is updated.
To continue with the loo analogy, it's as if the plumber put a sticky note on the toilet. The plumber assumes that you know the things that were obvious to him and focuses on the details you might not know, so the note says something like "sewer line is made of cast iron," rather than what you really need to know, "flushing toilet will cause contents to be safely sent elsewhere." But it doesn't matter, because at some point someone else went into the basement and replaced the cast iron with PVC, without realizing that there was a sticky note on the toilet.
This is the most serious problem the software industry faces. Arguing about whether Java 7 should include closures is irrelevant. We need enforceable, validatable contracts that describe how software components can correctly be used.
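Java doesn't give us real contracts, and I'm not claiming the snippet below is one; but even a crude executable check answers the "what range of data can I safely pass in?" question in a way a comment never will. A sketch of the direction I mean, using made-up names:

// Made-up example. The "contract" here is just a runtime check, but at least it
// executes, so it can't silently drift out of date the way a comment can.
class Thermostat {
    private double targetCelsius;

    /** Comment-only version: "temperature should be reasonable." Nothing enforces it. */
    void setTargetDocumentedOnly(double celsius) {
        targetCelsius = celsius;
    }

    /** Checked version: the allowed input range is stated by code that actually runs. */
    void setTarget(double celsius) {
        if (celsius < 5.0 || celsius > 35.0) {
            throw new IllegalArgumentException("target temperature out of range: " + celsius);
        }
        targetCelsius = celsius;
    }
}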