JAOO 2006 Day 5

Thursday, October 5, 2006
Aarhus, Denmark

Today was a return to tutorials, now that the tracks-and-speakers portion of the conference has finished. It was a bit of a letdown, since both Louis and I got “dud” tutorials for the afternoon, and also because the day lacked the energy that the past three days buzzed with.

That said, the morning tutorial session was quite interesting and well presented: ATAM – The Architecture Tradeoff Analysis Method, by Len Bass (check out the slides at this link for his other presentation, which outline the ATAM quite well). Also, see the SEI site.

The software architecture of a program or computing system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them.

What does that mean? Every system has an architecture, regardless of how explicitly it has been developed, and that architecture can be evaluated according to how the system behaves with respect to non-functional quality attributes like performance, scalability, concurrency handling, availability, flexibility, modifiability, security, and reusability. A monolithic system would be sufficient if the only important thing about software were getting the right answer. The ATAM is based on the premise that quality attributes matter, and therefore so does being able to reason about and analyze the architecture of a system.

The method is used to explicitly identify:

  • what the quality attribute requirements are
  • risks to meeting those requirements that are inherent in the architecture
  • non-risks (i.e., where the architecture adequately addresses the requirements)
  • sensitivity points (components that are crucial to meeting a requirement)
  • trade-offs (how aspects of the architecture meet certain requirements at the expense of others)
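The ATAM itself doesn’t prescribe any notation or tooling for recording these outputs. Just to make the vocabulary concrete, here is a minimal sketch in Java of how an evaluation team might keep a running record; all of the names here are my own invention, not part of the method:

    import java.util.ArrayList;
    import java.util.List;

    // How the evaluation classified one architectural decision or scenario.
    enum Finding { RISK, NON_RISK, SENSITIVITY_POINT, TRADE_OFF }

    // One output of the evaluation: a finding about a decision,
    // tied to the quality attribute it affects.
    class EvaluationOutput {
        final String qualityAttribute; // e.g. "performance", "modifiability"
        final String decision;         // the architectural decision in question
        final Finding finding;

        EvaluationOutput(String qualityAttribute, String decision, Finding finding) {
            this.qualityAttribute = qualityAttribute;
            this.decision = decision;
            this.finding = finding;
        }
    }

    // The running record kept across the evaluation.
    class AtamResults {
        final List<EvaluationOutput> outputs = new ArrayList<EvaluationOutput>();

        void record(String attribute, String decision, Finding finding) {
            outputs.add(new EvaluationOutput(attribute, decision, finding));
        }
    }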

The following summary draws on Len Bass’s slides, “Reflections on a decade of architectural evaluations”:

The method involves a series of interviews: first with a domain expert, to determine what the required attributes are, and second with the architect, to assess how the design of the system addresses those requirements. As usual, concrete use-case scenarios are the key to working out how well the solution fits the bill. Scenarios identified by the domain expert or the evaluator are walked through against the architecture, identifying the design tactics for handling each scenario. Risks are noted wherever the analysis finds scenarios that the architecture does not satisfy with respect to the required quality attributes.
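In the ATAM literature these scenarios take a concrete stimulus/response form (“when a server node fails under peak load, failover completes within 30 seconds”). A rough sketch of that walkthrough step, reusing the Finding enum from the sketch above; again, the names and the simplified classification rule are illustrative, not the method itself:

    import java.util.List;

    // A quality attribute scenario: a concrete stimulus and a measurable response.
    class Scenario {
        final String attribute;       // e.g. "availability"
        final String stimulus;        // e.g. "a server node fails under peak load"
        final String responseMeasure; // e.g. "failover completes within 30 seconds"

        Scenario(String attribute, String stimulus, String responseMeasure) {
            this.attribute = attribute;
            this.stimulus = stimulus;
            this.responseMeasure = responseMeasure;
        }
    }

    class Walkthrough {
        // The architect names the tactics that handle the scenario
        // (e.g. "heartbeat monitoring", "active redundancy"). In this
        // simplified sketch, a scenario with no identified tactics is
        // recorded as a risk; in practice the analysis digs deeper.
        static Finding analyze(Scenario scenario, List<String> tactics) {
            return tactics.isEmpty() ? Finding.RISK : Finding.NON_RISK;
        }
    }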

Then the method goes into a second phase, much like the first, but involving many more stakeholders of the software: developers, testers, security experts, project managers, customers, end users, system administrators, maintainers, etc. This “bottom-up” approach seeks to identify other quality attribute requirements that are important to everyone impacted by the system architecture. The scenarios are walked through in the same way as in the first pass, each time either identifying design tactics that address the scenario or identifying risks to meeting the quality attribute requirements.

Finally, the risks are summarized or distilled into themes, which then feed back into the decision-making process for subsequent planning and development of the system.

This process seems quite heavyweight. Nonetheless, I think we should adopt a lighter, yet still somewhat formal, version of it for our projects. It makes a lot of sense to me that we need to consult with all of the system stakeholders, and primarily the domain expert, to come up with the important quality attribute requirements and the usage scenarios that exercise them. It also makes sense to explicitly review the architecture to make sure the app satisfies these non-functional requirements. That the reviewer comes from outside the project also makes sense to me.

But none of this sounds very agile, does it? Our story approach is usually based strictly on functional requirements that we can show the customer. Things like performance, scalability, and high availability are usually harder to get specified precisely enough by the customer, and, in my experience, we don’t do such a great job of capturing the work to accomplish these sorts of goals in terms of stories.
Perhaps taking an iterative approach, following the same pattern, will work best… periodic mini-reviews like these would help to identify risks and to make plans for addressing them. (Incidentally, these ATAM guys are usually called in when it is too late to prevent the risks from being realized.)

Something else of interest:
In a separate presentation, Len Bass summarized findings from his ATAM experiences. Over 50% of the ATAMs exhibited the following risk themes: performance, requirements uncertainty, unrecognized need, and organizational awareness. Performance is the one I would have expected to be common. It’s very interesting to note that an agile process outright mitigates two of these risks (requirements uncertainty and unrecognized need, i.e., over-engineering). We can also argue that an agile process helps to mitigate the last risk, organizational awareness, which refers to a lack of the coordination, training, tools, etc. required for implementation.

