On the Road to Embedded World 2021: Episode 4
Editor’s Note: In the first blog in this series of five leading up to Embedded World 2021, Episode 1, Randall presented an overview of what Embedded World is. In Episode 2, he brushed up on his C programming language knowledge. Episode 3 focused on how using Object-Oriented Programming can reduce complexity. This installment, Episode 4, shows how the fundamental measure of good design is its ability to be reconfigured as requirements change without having to reimplement the building blocks. In the final blog, Episode 5, the ever-expanding space required by operating systems is questioned and system decomposition is touched upon prior to Randall’s keynote presentation at Embedded World 2021.
So far, I’ve covered how software evolved into what it is today - object-oriented - and how object-oriented software has an analogy in electronics. Thinking with an object-oriented frame of mind provides a way to create surrogate models of real-world things (i.e.: objects). I also talked about the attributes of object-oriented programming (OOP) and how to assess the quality of such models, but I haven’t talked about how to establish those models. That comes down to how a system is decomposed into useful modules and building blocks - an area of design methodology that was written about in the early 1970s and whose lessons are resurfacing today.
The most popular way to partition a system is surely by function: itemize all the functions the system needs as building blocks and assign developers to implement those blocks. It turns out this is not the best way to design a system; it has many pitfalls that result in inflexibility and extra work for developers. A system designed by functional decomposition works, but there is a better way. True, the alternative is harder to start and trickier to get one’s head around. It takes more time up front, but afterward development goes more smoothly and produces a system that more easily accommodates change.
David L. Parnas, source: Wikipedia (https://en.wikipedia.org/wiki/David_Parnas)
David Parnas, then at Carnegie-Mellon University, wrote several papers on system design and modularity in 1971. These papers helped establish the notions of coupling and cohesion that I mentioned in my earlier blog. One of them, “Information Distribution Aspects of Design Methodology,” written in February 1971, is a short paper that nevertheless describes very thoroughly what a design methodology is. He said, “Progress in a design is marked by decisions which eliminate some possibilities for system structure.” Having eliminated possibilities, he says, a rationale for subsequent decisions is established, and he provides examples that support his claim. Each of these methodologies creates convergence toward a solution.
He identified three approaches:
1. Obtaining ‘good’ external considerations
2. Reducing the time interval between initiation and completion of a project
3. Obtaining an easily changed system
Approach 1 relates to a “top-down” approach. Approach 2 results in modularization choices being made that may not consider the full impact on the final usability of the product, but it gets developers developing sooner. Approach 3 identifies the factors that are least likely to change, with the result that the most general information is used first.
He argues that a top-down approach can result in systems that are harder to change later simply because the decision criteria favor external factors over those related to change. He closes this section of the paper by saying that the order of decisions made in these approaches is inconsistent, which makes it impossible to satisfy all of them simultaneously. This position is picked up in his next paper, which I will present shortly, but first I want to conclude my comments on this first paper of his.
I find his descriptions of good programmers interesting. He says, “A good programmer makes use of the usable information given him!” He continues by saying that a good programmer will be clever and may use information that is not documented, and that doing so means a change in one block requires changes in others. This is the idea of connascence that I mentioned in my last post, and it is inconsistent with approach 3. The upshot of this first paper is that information hiding is good - system designers should share design information on a “need to know” basis, as Parnas put it.
Parnas went on to write a subsequent paper, “On the Criteria To Be Used in Decomposing Systems into Modules,” in August 1971. This paper received a great deal of attention and is enjoying a bit of a renaissance. It is in this paper that he makes the case for designing systems based on the likelihood of change - i.e.: approach 3 from his earlier paper.
His objective was to improve system flexibility and comprehensibility while reducing total development time, and he illustrates success with examples. Modularizing code on the basis of information hiding is modularizing a system on the basis of change management. Changes are isolated to fewer modules, which makes system changes and reconfigurations easier when requirements change. Whether functional or change-based decomposition is used, a system can work, but by ensuring high cohesion through keeping information hidden, a system is made more flexible, more comprehensible, and quicker to develop.
Consider the case where an item with a well-known structure is stored in memory. If a system is decomposed functionally, any or all modules might operate on it, and if the structure is changed, every module accessing it must change. However, if that structure is managed by a module responsible for providing access to it, the structure can be changed without having to change every module that needs access. The point is that, in this methodology, the designer identifies from the start what is likely to change. Each methodology produces the same functionality; it is the modularity that differs. Parnas believes that modularity based on change reveals design decisions better than modularity based on function, and thereby makes a system more readily comprehensible.
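To make this concrete, here is a minimal C sketch of the idea (the module and function names are hypothetical, not taken from Parnas’ paper). Only record_store.c knows how the records are laid out in memory; every other module goes through the two access functions, so a change to the layout touches exactly one file.

```c
/* record_store.h - the only information other modules ever see.
 * The layout of a record is hidden; callers use these access
 * functions and nothing else. */
#ifndef RECORD_STORE_H
#define RECORD_STORE_H

#include <stdbool.h>
#include <stdint.h>

bool record_store_put(uint16_t id, int32_t value);
bool record_store_get(uint16_t id, int32_t *value_out);

#endif /* RECORD_STORE_H */

/* record_store.c - the only file that knows the storage layout.
 * If the layout changes (say, from this fixed array to a hash table,
 * or the value becomes a scaled fixed-point field), only this file
 * needs to change. */
#include "record_store.h"

#define MAX_RECORDS 32u

typedef struct {
    uint16_t id;
    int32_t  value;
    bool     in_use;
} record_t;

static record_t s_records[MAX_RECORDS];

bool record_store_put(uint16_t id, int32_t value)
{
    for (unsigned i = 0; i < MAX_RECORDS; ++i) {   /* update if already present */
        if (s_records[i].in_use && s_records[i].id == id) {
            s_records[i].value = value;
            return true;
        }
    }
    for (unsigned i = 0; i < MAX_RECORDS; ++i) {   /* otherwise take a free slot */
        if (!s_records[i].in_use) {
            s_records[i] = (record_t){ .id = id, .value = value, .in_use = true };
            return true;
        }
    }
    return false;                                  /* store is full */
}

bool record_store_get(uint16_t id, int32_t *value_out)
{
    for (unsigned i = 0; i < MAX_RECORDS; ++i) {
        if (s_records[i].in_use && s_records[i].id == id) {
            *value_out = s_records[i].value;
            return true;
        }
    }
    return false;                                  /* not found */
}
```

A functionally decomposed system, by contrast, would let any module index into the array directly, and a change to the record layout would ripple through all of them.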
Modularizing based on information hiding is not without its hazards. Parnas points out that systems designed this way can be inefficient, but he argues it becomes more important to hide information as a system grows. I will discuss hazards in my last post next month.
Most fundamentally, to improve the reuse of functionality, the interface to that functionality must reveal as little information as possible. As Einstein is often quoted as saying, “Everything should be made as simple as possible, but no simpler.” In our case, we need to replace “everything” with “the interface.” This task is well suited to the embedded engineer, who is trained in many techniques and can work at any level of a design. The engineer’s designs should be as simple to use as possible if they are to be used by the largest number of users (i.e.: customers).
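As one illustration of an interface that reveals as little as possible, here is a small C sketch using the common opaque-pointer idiom (the logger module and its functions are hypothetical examples, not a specific library). Callers hold only a pointer to an incomplete type, so nothing about buffer sizes, transports, or formatting can leak into their code.

```c
/* logger.h - a deliberately minimal interface.
 * 'struct logger' is an incomplete (opaque) type here; its definition
 * lives only in logger.c, so callers cannot depend on its internals. */
#ifndef LOGGER_H
#define LOGGER_H

typedef struct logger logger_t;

logger_t *logger_open(const char *name);
void      logger_write(logger_t *log, const char *message);
void      logger_close(logger_t *log);

#endif /* LOGGER_H */
```

This is the header-level equivalent of Parnas’ “need to know” rule: if a detail is not in the interface, no caller can come to depend on it, and the implementer remains free to change it.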
Source: https://rightingsoftware.org/
As I mentioned above, Parnas’ work is enjoying a bit of a renaissance. I invite anyone wanting to learn more to read Parnas’ papers along with a book titled “Righting Software” by Juval Löwy.
Löwy credits Parnas for his approach, which is to architect a system by decomposing it according to the volatility of potential changes and encapsulating those potential changes within building blocks. The required behavior of the system is then implemented as an interaction between those building blocks. A successful design is one with the smallest set of building blocks that satisfies all use cases. This is easily said; a sketch of the idea follows.
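Here is a small, hypothetical C sketch of what this can look like (these component interfaces are my own illustration, not an API from Löwy’s book). Each prototype hides one area of likely change, and the required behavior is simply their composition.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Component interfaces - each encapsulates one volatile decision.
 * In a real project each would live in its own header and hide its
 * own implementation details. */
int32_t sensor_read(void);                          /* how we measure may change      */
bool    storage_save(const char *key,
                     const void *data, size_t len); /* where we keep data may change  */
void    alert_notify(const char *message);          /* how we notify users may change */

#define ALERT_THRESHOLD 100

/* The required behavior: an interaction between the building blocks.
 * Swapping the sensor, the storage medium, or the alert channel
 * changes one component, not this use case. */
void monitor_run_once(void)
{
    int32_t reading = sensor_read();
    (void)storage_save("last_reading", &reading, sizeof reading);
    if (reading > ALERT_THRESHOLD) {
        alert_notify("reading out of range");
    }
}
```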
Löwy says you arrive at this set by knowing the core use-cases. He adds that there are hardly ever many of these - perhaps one to three - with the result that there are generally fewer than a dozen resulting components. The number of possible compositions and interactions of a dozen elements is enormous; the orderings alone, 12!, come to nearly 480 million. I think of the variety of songs that can be made from the 12 notes on a musical instrument: there is the sequence of the notes and their timing, so I think he is right.
He goes on to say that a human from 200,000 years ago did not have a different use-case than we have today: that use-case is to survive. Yet the architecture that enabled survival through hunting and gathering is the same architecture (i.e.: uses the same components) that enables a software engineer to make a living today. He argues that an elephant and a mouse have the same architecture; it is their detailed design that differs. It’s a compelling argument.
So, the fundamental measure of good design is its ability to be reconfigured as requirements change without having to reimplement the building blocks. Successful decomposition satisfies all requirements: past, future, known and unknown. This is heavy stuff and worth studying.
In my last post, as I mentioned, I will describe a problem that we have created as we have enabled more people to use the functionality we have developed, but I will argue that it is the cost we pay for broader markets.

Have questions or comments? Continue the conversation on TechForum, DigiKey's online community and technical resource.
Visit TechForum