In December, as Maria was talking about creativity and adjusting your ways of handling thoughts, I was thinking about the X-Men.
In my opinion, the X-Men (as a comic book title) hit its high point in the late seventies and early eighties, when the book was written by Chris Claremont and illustrated by John Byrne and Terry Austin. (I find it a bit strange that some of my co-workers were barely born then.)
One of the things I liked about the book was that it had multiple strong and interesting female characters at a time when comics were mostly catering to teenage fanboys. And I think it's not an accident that the X-Men contained interesting female characters: I've heard it said that in brainstorming sessions, Claremont would frequently ask, "Is there any reason why this new character can't be female?"
What I imagine is that he recognized that comics creators (much like the rest of the world) tend to have a sexist bias, and for him, the question served as a tool to compensate for that blind spot.
There's a similar story that I recall involving Herman Zimmerman, the production designer for the TV series Star Trek: Deep Space Nine. The premise was that the characters lived on a space station designed by an alien species, so Zimmerman wanted the sets to look alien. One of his strategies was to take a design given to him by his illustrators and turn it upside down, figuring that it would be even more aesthetically unusual.
And I've noticed this in the show: the alien devices look oddly top-heavy. It makes me wonder: do we have an unconscious aesthetic bias toward "small on top/big on bottom"? Would I ever have become aware of that bias otherwise?
Both of these "tricks" follow a similar logic: they start with the recognition that people have a bias, or a blind spot, and then they come up with a little strategy to mitigate it.
And I suppose that this is why I was thinking about stories like this during Maria’s creativity presentation. I’m not always fully in control of my thought processes, and I’m not sure what things I do to change my way of thinking to evoke the “rise” or “jump” strategies that she’s talking about. I guess I always go back to the question: what are the tools to effect change? And the stories about Claremont and Zimmerman have always struck me as describing good strategies for dealing with blind spots.
When those blind spots relate to code, sometimes the strategies are also code related. For example, any time I find myself in a development pattern, I try to write a unit test for it. (A development pattern might be as simple as, “every time I create a model object, I have to include it in the hibernate.cfg.xml file” — that’s an easy unit test to write, and without the unit test, sometimes I find myself forgetting to do it).
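As a sketch of that kind of guard-rail test, here is a minimal version with hypothetical names, where hard-coded sets stand in for what a real test would gather by scanning the model package and parsing hibernate.cfg.xml:

```java
import java.util.*;

// Hypothetical consistency check: every model class should appear in the
// Hibernate mapping configuration. The two sets are hard-coded here for
// illustration; a real test would build them from the classpath and from
// the <mapping> entries in hibernate.cfg.xml.
public class MappingCheck {

    // Returns the model classes that have no mapping entry, sorted for
    // readable failure messages.
    static Set<String> unmappedClasses(Set<String> modelClasses,
                                       Set<String> mappedClasses) {
        Set<String> missing = new TreeSet<>(modelClasses);
        missing.removeAll(mappedClasses);
        return missing;
    }

    public static void main(String[] args) {
        Set<String> model = new HashSet<>(
                Arrays.asList("Patient", "Prescription", "Refill"));
        Set<String> mapped = new HashSet<>(
                Arrays.asList("Patient", "Prescription"));
        // A real unit test would fail whenever this set is non-empty.
        System.out.println(unmappedClasses(model, mapped)); // prints [Refill]
    }
}
```

The point of a test like this is that "remember to update the config" stops being something I have to remember and becomes something the build remembers for me.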
I’m also aware of a few “bigger-scale” coding biases that I have. I find writing parsers harder than I feel I should. I’ve written a lot of parsers in my career, and I’ve managed to do the job, but they’re never easy for me, in the way that so many other things are easy. My brain just doesn’t think “parser-y” for some reason. I haven’t had a simple-to-articulate strategy for dealing with the parser blind spot. Instead I’m relying on the whole, “practice, practice, practice” approach. Although maybe a better strategy is to love JavaCC or ANTLR; to date, I haven’t found that love inside me.
In the last several months, I've become aware of a few other development blind spots that I have, and that I think might be true of our development shop in general. One of them emerged when I was working on the big Pharmacy application last year, but I've seen it in some of the other projects I've been working on as well. The blind spot is this: I don't naturally arrive at an effective model for something that goes through many work steps or events.
I mean, I know how to do it in a way that works: you have an object, it has a "state" field, and some other fields aren't filled in until the object reaches a certain state, and so forth. And I see a lot of code like that. It works, but sometimes it isn't pretty, and there are strange artifacts littered throughout the system that seem to exist to divine meaning from which fields happen to be filled in. I think we get to the "make it work" stage without going on to the "make it right, make it fast" stages.
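To make the contrast concrete, here's a minimal sketch (hypothetical names, not from the Pharmacy application) of modeling the work steps as explicit events rather than as a status field plus nullable columns; the current state is derived from the history instead of being stored:

```java
import java.time.Instant;
import java.util.*;

// Hypothetical event-based model: each work step a request goes through is
// recorded as an explicit event, and the "state" falls out of the history
// rather than living in a mutable field alongside half-filled columns.
public class Request {

    enum Step { SUBMITTED, APPROVED, FILLED }

    static final class Event {
        final Step step;
        final Instant at;
        Event(Step step, Instant at) { this.step = step; this.at = at; }
    }

    private final List<Event> history = new ArrayList<>();

    // Recording a step appends to the history; nothing is overwritten,
    // so the full audit trail is preserved for free.
    void record(Step step) { history.add(new Event(step, Instant.now())); }

    // The current state is derived: it's simply the most recent event.
    Step currentStep() {
        return history.isEmpty() ? null
                                 : history.get(history.size() - 1).step;
    }

    public static void main(String[] args) {
        Request r = new Request();
        r.record(Step.SUBMITTED);
        r.record(Step.APPROVED);
        System.out.println(r.currentStep()); // prints APPROVED
    }
}
```

The design choice here is that questions like "when was this approved?" become queries over the event list instead of extra nullable timestamp fields whose presence has to be interpreted.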
My perception is that once we started to model the events, it quickly became clear that they were a key part of the way we thought about the business model, and that we had written a heck of a lot of code without them. Why was it so easy to do that? The only answer that makes sense to me is that it was a blind spot.
But even knowing that the blind spot exists hasn’t yet helped me to figure out the coping strategy. Maybe it’s another one of those “practice, practice, practice” items.
Another blind spot that I've been pondering lately is the "versioned data" blind spot. I've been building simple Create/Read/Update/Delete (CRUD) applications since I was a child programmer, and it's easy to think in terms of updating records in a database. But the idea of data that changes over time seems surprisingly hard to wrap our heads around. Y'know: I change my address, the application keeps track of all the requests I send, and I want to be able to see all of those requests without the historical address changing on any of them. I want the system to recognize that I've had more than one address in my life, and that one of those is the "current" address.
It seems simple to describe, and some good applications, like Amazon.ca, seem to have done a great job implementing something like it. I order stuff; Amazon remembers my current address and gives me simple ways of using other addresses I've told it about, or adding a new one. But when I've been involved in building systems like that, I've often seen people get really confused and ask questions like, "why is there a relationship from request to address, when there's also a relationship from request to requester to address?" Something about that modeling exercise boggles people's minds.
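Here's a minimal sketch of the model in question (hypothetical names, no database): the requester owns a growing list of addresses with one marked current, while each request freezes a reference to whichever address was current when it was created, so updating the requester's address never rewrites history:

```java
import java.util.*;

// Hypothetical versioned-address model: both relationships from the
// confusing question exist on purpose. Request -> Requester -> Address
// answers "where does this person live now?", while the direct
// Request -> Address link answers "where was this request shipped?"
public class VersionedAddressDemo {

    static final class Address {
        final String street;
        Address(String street) { this.street = street; }
    }

    static final class Requester {
        private final List<Address> addresses = new ArrayList<>();
        private Address current;

        // Old addresses are kept; only the "current" pointer moves.
        void addAddress(Address a) { addresses.add(a); current = a; }
        Address currentAddress() { return current; }
        List<Address> addressHistory() { return addresses; }
    }

    static final class Request {
        final Address shipTo; // frozen at creation time, never updated
        Request(Requester r) { this.shipTo = r.currentAddress(); }
    }

    public static void main(String[] args) {
        Requester me = new Requester();
        me.addAddress(new Address("12 Old Street"));
        Request first = new Request(me);

        me.addAddress(new Address("99 New Avenue")); // I move house
        Request second = new Request(me);

        System.out.println(first.shipTo.street);  // prints 12 Old Street
        System.out.println(second.shipTo.street); // prints 99 New Avenue
    }
}
```

The "confusing" extra relationship is exactly what protects the history: the old request keeps pointing at the old address no matter how many times the requester moves.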
And it’s not that I can’t do it; it’s just that it doesn’t seem to come as naturally as so many other development heuristics. And it confuses people, even though it’s a pretty common problem.
I'm not leading up to any grand insight. I haven't solved these blind spots in myself, and I don't yet have strategies for mitigating their effects. But, for me, identifying the problem is an early step.