ACCU2016: Talk on Software Architecture Design 6: Organising Principles

I now want to try and explain the idea of an Organising Principle more clearly.

As an introduction, Organising Principles have a number of characteristics. They:

  • Embody a Living Wholeness.
  • Have a high degree of Ambiguity.
  • Are never Static.
  • Lie Behind the Parts.


Embedded within a set of requirements is something that we need to find and embody in our implementation. As with high level schematics, there is a high degree of ambiguity surrounding this ‘something’.

During the talk we have moved from the high level schematics, where I can hand wave to my heart’s content about structure and architecture, down to diving into the depths of the actual implementation.

You may remember Andrei Alexandrescu’s keynote where he was talking about the details of a sort algorithm – first thing in the morning. Difficult after the late night before!

When trying to understand what he was talking about, we needed to think at multiple levels:

  • Listen to the words he was saying.
  • Read through the code examples in his slides.
  • Imagine the dynamic behaviour of these code extracts.

In order to comprehend how this algorithm works dynamically, the listener needs to use their imagination at the same time as trying to understand what Andrei is saying. During this particular talk you could palpably feel the intense concentration in the whole audience.

During deep technical discussions listeners will work at these two levels of understanding:

  • The static code structure.
  • The dynamic behaviour of the static construct.

In addition one will only be looking at a single possible instantiation of a solution. There could be alternative ways of doing the same thing.

The term I like to use here is that during this process we are trying to ‘Livingly Think’ the Organising Principle.

An Organising Principle is NOT a static thing. It cannot be written down on a piece of paper. Although it informs the implementation it cannot be put into the code. If you fix it: You Haven’t Got It. Remember that phrase because getting it is the real challenge. It lives behind the parts of any concrete implementation.

I am explicitly using this separation of Whole and Parts, drawing on the discipline known as Mereology. How can we understand the world in terms of wholeness and yet also see how the parts work therein?

Experienced people will have a sense of the whole and yet will be able to see where the likely risk points are going to be. They simultaneously see the whole picture and also know the essence of what needs to happen in the parts. This is what you pay for when you employ an expert. In my design example, for instance, an expert will know to ask about your allocation strategies and if they see some allocations going out of order they will point you at where you need to change the system.

Requirements implicitly contain Organising Principles.

They implicitly, not explicitly, embody dynamic specifications of what needs to happen. Much of my job when I talk to any customer is to try and understand:

  • What they want to do.
  • What I think can be built.
  • How I can bring those together.

I need to understand what the core dynamic principles are that are embedded in their requirements. By definition, if you are doing a useful piece of software the customer won’t know what they actually NEED. They can talk about what they WANT, but that is based in their present experience. The conversation between the user and the developer will be generating new knowledge about future experience.

[Software development involves moving knowledge from the Complex to the Complicated domain as Cynefin defines it. This means we are dealing with emergent knowledge.]

Architecture references the Organising Principle.

If we take the point that an Organising Principle is not a fixed, discrete idea then we can see that it is much like comparing a film of an activity with its reality. The film is just a set of discrete frames, but the reality is continuous.

With software, the different architectures and implementations are different views of a specific Organising Principle. This is very hard to get our heads around and needs a far more mobile thinking than we normally use. This is the core of why good software development is so difficult.

Code is the precipitate of the Organising Principle.

If we manage to perceive this dynamic principle and reference it by designing a software architecture that we then embody in a specific code implementation, we are embarking on a process of precipitation. If we DON’T do this then – indeed – we only have a “Hopeful Arrangement of Bytes” as this talk title suggests.

Of course this may be a valid thing to do sometimes as a piece of ‘software research’, just to see how the code works out. This is especially relevant if there is a blank sheet, or ‘green field’ site, where you just have to start somewhere. In this case you need to look at the parts of the requirements that represent the riskiest aspects and just implement some proof of concept solution to check your understanding.

This is where it may not be a good idea for a novice or journeyman programmer to follow the example of an experienced one.

Be warned: it is possible that a master programmer may be able to work at the keyboard and seemingly design as they go, creating from internalized, mature ideas. This is something that is definitely NOT recommended for inexperienced developers, who will need to spend much more time maturing and externalizing their ideas before those ideas become part of their ‘cognitive muscle memory’.

In the next and final post of the transcript of this talk I will deal with how to improve our perception of Organising Principles…


ACCU2016: Talk on Software Architecture Design 5: Active Design Ideas

In the last post I highlighted some specific design problems and associated solutions. Now I want to look at these solutions a little more deeply.

To refresh our memory the solutions were as follows:

  1. Separating Mutex Concerns.
  2. Sequential Resource Allocation.
  3. Global Command Identification.

I want to characterise these differently because these names sound a little like pattern titles. Although we as a software community have had success using the idea of patterns I think we have fixed the concept rather more than Christopher Alexander may have intended.

I want to rename the solutions as shown below in order to expressly highlight their dynamic behavioural aspect:

  1. Access Separation.
  2. Sequential Allocation.
  3. Operation Filtering.

You might have noticed that in the third example the original concept of “Global Command Identification” represents just one possible way to implement the dynamic issue of filtering operations. This is something it has in common with much of the published design pattern work, where specific example solutions are given. To me design patterns represent a more fixed idea that is closer to the actual implementation.

Others may come up with a better renaming, but I am just trying to get to a more mobile and dynamic definition of the solutions. Looking at the issues in this light starts to get to the core of the issue of why it is so hard to develop an architectural awareness.

If you can truly understand, or ‘grok’, the core concept of this characterisation, regardless of the actual words, you will see that they do not really represent design patterns – not in the way we have them at the moment.

This is where there is a difference between the architecture of buildings – where design patterns originated – and the architecture of software. Although both deal with the design of fixed constructs, whether it be the building or the code, the programmer has to worry far more about the dynamic behaviour of the fixed construct (their code). Yes – a building architect does have to worry about the dynamic behaviour of people inhabiting their design, but software is an innately active artefact.

Let me recap the debugging and design fixing process in terms of the following actions that are carried out in order:

1: Delicately Empirically Collect the Data.
Here we have to be very aware of the boundaries of our knowledge and collect information in a way that does not disturb the phenomenon we are looking at. Awareness of our own thinking process is vital here.

2: Imagine into the Problem Behaviour.
We have to imagine ourselves into the current behaviour that the system is exhibiting. (This is the hard bit when you are under pressure, and is what requires a strong focus in order to understand what the existing design is doing.)

3: Imagine into the Required Behaviour.
We need to imagine into what the required behaviour of the system NEEDS to be and it is here that we start to meet the ‘gap’ between problem and solution. It may indeed only need a one line fix, but quite likely there is a deeper design problem. Again here is a point where our self-awareness is important. Do we have the discipline to make ourselves stop and think more carefully and widely about the presenting problem?

4: THE GAP. Cognitively Feeling for the best Solution Concept.
In this stage there is a very fine “Cognitive Feeling” in action to decide what is a good fit to the problem. For the experienced programmer this is more than just a question of “Does this solution fit the requirement?”

There is the consideration of whether the proposed solution idea is going to be a sustainable fix during the future lifetime of the project.

This question is much like asking myself if I will still find this painting beautiful in 10 years’ time.

[Image: YachtClubSmall]

There is a current widely held belief that the best procedure for coming up with a design solution is to produce many possible alternatives and evaluate them in order to choose the best one. In practice I have found that this very rarely – if ever – happens.

I usually arrive at a single design solution by trying out the multiplicity of possible solutions while in the ‘gap’ where I am considering various alternatives – imagining each of them in operation, possibly ‘drawing’ the thoughts out on a whiteboard as I think.

In this part of the process the more experienced programmer will slow things down to the extent of even putting in a provisional simple solution that gives them some breathing, or thinking, space. This is the idea of provisionality mentioned by Marian Petre, because this mode of design thinking requires time and reduced pressure.

It is amazing how often this happens in the shower!

Of course this is predicated on the fact that I have done the required detailed groundwork, but as I mentioned in the poem, our logical thinking can only take us to the boundary of what we know. Trying to push to go faster results in inadequate and buggy designs that are based on immature thinking.

This is the central conundrum of software development. The more we dive down into detailed analysis, the more we encounter these ‘softer’, heuristic elements.

5: Implementation.
Finally we get to the implementation. As you will have seen it is far too easy to jump into “premature implementation”. It is hard, if not impossible, to teach people just how small a part the coding is of the whole process. It needs to be experienced. Until you have seen how a good design triggers an amazing collapse in code complexity, the importance of taking the time to search for that great design is not an obvious conclusion. This is a fundamental eye of the needle that all programmers need to go through.

This is the main reason I like programming:

I get less code.
I get something I can reason about.
I get something that does the job!

Beautiful!

In the next post I am going to show how the dynamic design solution ideas and the human analysis process link to what I will call the “Organising Principle”, a term I have borrowed from Rudolf Steiner’s lexicon.


ACCU2016: Talk on Software Architecture Design 4: A Design Example

[The following transcript is more for the techies of my readership. For those of a less technical inclination, feel free to wait for the next post on “Active Design Ideas” which I have separated out due to the length of this post.]

I want to underpin the philosophical aspect of this discussion by using an example software architecture and considering some design problems that I have experienced with multi-threaded video player pipelines. The issues I highlight could apply to many video player designs.

The following image is a highly simplified top-level schematic, the original being just an A4 pdf captured from a whiteboard, a tool that I find much better for working on designs than using any computer-based UML drawing tool. The gross motor movement of hand drawing ‘in the large’ seems to help the thinking process.

[Image: Player schematic]

There are three basic commands for controlling any video player that has random access along a video timeline:

  • Show a frame
  • Play
  • Stop

In this example there is a main controller thread that handles the commands and controls the whole pipeline. I am going to conveniently ignore the hard problem of actually reading anything off a disk fast enough to keep a high resolution, high frame-rate player fed with data!

The first operation the pipeline has to perform is to render the display frames in parallel. The results of these parallel operations, since they will likely be produced out of order, need to be made into an ordered image stream that can then be buffered ahead to cope with any operating system latencies. The buffered images are then transferred into an output video card, which has only a relatively small amount of video frame storage. This of course needs to be modeled in the software so that (a) you know when the card is full; and (b) you know when to switch the right frame to the output without producing nasty image tearing artefacts.
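
To make the reordering idea concrete, here is a minimal C++ sketch (not the actual system code) of a stage that accepts frames from parallel render workers in whatever order they finish and releases them strictly in frame-number order. The Frame and ReorderStage names are purely illustrative assumptions.

```cpp
#include <condition_variable>
#include <cstdint>
#include <map>
#include <mutex>

struct Frame {
    int64_t number;   // position on the video timeline
    // ... pixel data omitted in this sketch
};

class ReorderStage {
public:
    explicit ReorderStage(int64_t firstFrame) : next_(firstFrame) {}

    // Called by any render worker when its frame is finished.
    void push(Frame f) {
        std::lock_guard<std::mutex> lock(m_);
        const int64_t n = f.number;
        pending_.emplace(n, std::move(f));
        cv_.notify_all();
    }

    // Called by the buffering stage; blocks until the next frame
    // in sequence is available, so downstream order is always correct.
    Frame popNextInOrder() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return pending_.count(next_) > 0; });
        Frame f = std::move(pending_.at(next_));
        pending_.erase(next_);
        ++next_;
        return f;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::map<int64_t, Frame> pending_;  // completed but not yet released
    int64_t next_;                      // next frame number to release
};
```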

These are all standard elements you will get with many video player designs, but I want to highlight three design issues that I experienced in order to get an understanding of what I will later term an “Organising Principle”.

First there was slow operation resulting in non real-time playout. Second, occasionally you would get hanging playout or stuttering frames. Third, you could very occasionally get frame jitter on stopping.

Slow operation
Given what I said about Goethe and his concept of Delicate Empiricism, the very first thing to do was to reproduce the problem and collect data, i.e. measure the phenomenon WITHOUT jumping to conclusions. In this case it required the development of logging instrumentation software within the system – implemented in a way that did not disturb the real-time operation.

With this problem I initially found that the image processing threads appeared to be taking too long, although they were doing their job in time once they had their data. So something was slowing them down BEFORE they could get to start their processing.

The processing relied on fairly large processing control structures that were built from some controlling metadata. This build process could take some time so these structures were cached with their access keyed by that metadata, which was a much smaller structure. Accessing this cache would occasionally take a long time and would give slow operation, seemingly of the image processing threads. This cache had only one mutex in its original design and this mutex was locked both for accessing the cache key and for building the data structure item. Thus when thread A was reading the cache to get at an already built data item, it would occasionally block behind thread B which was building a new data item. The single mutex was getting locked for too long while thread B built the new item and put it into the cache.

So now I knew exactly where the problem was. Notice the difference between the original assumption of the problem being with the image processing, rather than with the cache access.

It would have been all too easy to jump to an erroneous conclusion, especially prevalent in the Journeyman phase, and change what was thought to be the problem. Although such a change would not actually fix the real issue, it could have changed the behaviour and timing so that the problem may not present itself, thus looking like it was fixed. It would likely resurface months later – a costly and damaging process for any business.

The solution here was to have finer-grained mutexes: one for the key access into the cache and a separate one for accessing each data item, which was then lazily built on first access.
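
As a rough illustration of that fix, here is a hedged C++ sketch of such a cache: a short-lived mutex guards only the key lookup, while each entry is built lazily under its own per-entry synchronisation, so a reader of one item no longer blocks behind the builder of another. The type names (ControlStructureCache, ControlStructure) are my own placeholders, not the original code.

```cpp
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

struct ControlStructure {
    // ... large processing control data, expensive to build
};

class ControlStructureCache {
public:
    // Returns the entry for the given metadata key, building it lazily
    // on first access without holding the cache-wide mutex.
    std::shared_ptr<ControlStructure> get(const std::string& metadataKey) {
        std::shared_ptr<Entry> entry;
        {
            // Fast, fine-grained lock: only guards the key -> entry map.
            std::lock_guard<std::mutex> lock(mapMutex_);
            auto& slot = entries_[metadataKey];
            if (!slot) slot = std::make_shared<Entry>();
            entry = slot;
        }
        // The slow build happens outside the map lock, guarded per entry,
        // so thread A reading one item never waits for thread B building
        // a different item.
        std::call_once(entry->built, [&] {
            entry->value = buildFromMetadata(metadataKey);
        });
        return entry->value;
    }

private:
    struct Entry {
        std::once_flag built;
        std::shared_ptr<ControlStructure> value;
    };

    static std::shared_ptr<ControlStructure>
    buildFromMetadata(const std::string& /*metadataKey*/) {
        return std::make_shared<ControlStructure>();  // placeholder build
    }

    std::mutex mapMutex_;
    std::unordered_map<std::string, std::shared_ptr<Entry>> entries_;
};
```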

Hanging Playout or Stuttering Frames
The second bug was that the playout would either hang or stutter. This is a great example because it illustrates a principle that we need to learn when dealing with any streamed playout system.

The measurement technique in this case was extremely ‘old school’, simply printing data to a log output file. Of course only a few characters were output per frame, because at 60fps (a typical modern frame-rate) you only have 16ms per frame.

In this case the streaming at the output end of the pipeline was happening out of order, a bad fault for a video playout design. Depending upon how the implementation was done, it would either cause the whole player to hang or produce a stuttered playout. Finding the cause of this took a lot of analysis of the output logs and many changes to what was being logged. This was an example of needing to be clear about the limits of one’s knowledge and of properly identifying the data that next needed to be collected.

I found that there was an extra ‘hidden’ thread added within the output card handling layer in order to hand off some other output processing that was required. However it turned out that there was no enforcement of frame streaming order. This meant that the (relatively) small amount of memory in the output card could get fully allocated, and this would give rise to a gap in the output frame ordering. The output control stage was unable to fill the gap in the frame sequence with the correct frame, because there was no room in the output card for that frame. This would usually result in the playout hanging.


This is why, with a streaming pipeline, where you always have limited resources at some level, allocation of those resources MUST be done in streaming order. This is a dynamic principle that can take a lot of hard won experience to learn.
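
To illustrate the principle, the following C++ sketch (an assumption-laden simplification, not the real output card code) only hands out one of the card’s limited frame slots when the requesting frame is the next one in streaming order, so a later frame can never starve an earlier one of memory.

```cpp
#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <mutex>

class OutputCardSlots {
public:
    OutputCardSlots(std::size_t slotCount, int64_t firstFrame)
        : freeSlots_(slotCount), nextFrame_(firstFrame) {}

    // Blocks until (a) this frame is the next one in streaming order
    // AND (b) a slot is free. Out-of-order callers simply wait their turn.
    void acquireFor(int64_t frameNumber) {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&] {
            return frameNumber == nextFrame_ && freeSlots_ > 0;
        });
        --freeSlots_;
        ++nextFrame_;
        cv_.notify_all();  // wake the thread holding the next frame
    }

    // Called once the frame has been switched to the display output.
    void release() {
        std::lock_guard<std::mutex> lock(m_);
        ++freeSlots_;
        cv_.notify_all();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::size_t freeSlots_;
    int64_t nextFrame_;  // next frame number allowed to allocate
};
```

Note that adding more slots to this sketch would not change its correctness; it is the in-order acquisition, not the amount of resource, that prevents the deadlock described above.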

The usual Journeyman approach to such a problem is just to add more memory, i.e. more resource! This will hide the problem because though processing will still be done out of order, the spare capacity has been increased and it will not go wrong until you next modify the system to use more resource. At this point the following statement is usually made:

“But this has been working ok for years!”

The instructions I give to less experienced programmers when they are trying to debug such problems will usually include the following:

“Do not change any of the existing functionality.
Disturb the system as little as possible.
Keep the bug reproducible so you
can measure what is happening.
Then you will truly know when you have fixed the fault.”

Frame Jitter on Stop
The third fault case was an issue of frame jitter when stopping playout. The problem was that although the various buffers would get cleared, there could still be some frames ‘in flight’ in the handover threads. This is a classic multi-threading problem and one that needs careful thought.

In this case when it came time to show the frame at the current position, an existing playout had to be stopped and the correct frame would need to be processed for output. This correct frame for the current position would make its way through to the end of the pipeline, but could get queued behind a remnant frame from the original stopped playout. This remnant frame would most likely have been ahead of the stop position because of the pre-buffering that needed to take place. Then when it came time to re-enable the output frame viewing in order to show the correct frame, both frames would get displayed, with the playout remnant one being shown first. This manifested on the output as a frame jitter.

One likely fix from an inexperienced programmer would be to make the system sit around waiting for seconds while the buffers were cleared, and possibly cleared again just in case! (The truly awful “sleep” fix.) This is one of those cases where, again due to a lack of deep analysis, a defensive programming strategy is used to try and force a fix of what initially seems to be the problem. It is quite likely that this may SEEM to fix the problem, and this is especially likely if the developer is under heavy time pressure.

The final solution to this particular problem was to use the concept of uniquely identified commands, i.e. ‘command ids’. Thus each command from the controlling thread, whether it was a play request or a show frame request, would get a unique id. This id was then tagged on to each frame as it was passed through the pipeline. By using a low-level globally accessible (within the player) ‘valid command id set’ the various parts of the pipeline could decide, by looking at the tagged command id, if they had a valid frame that could be allowed through or quietly ignored.

When stopping the playout all that had to be done was to clear the buffers and remove the relevant id from the ‘valid command id set’, allowing any pesky remaining ‘in flight’ frames to be ignored since they carried an invalid command id. This changed the stop behaviour from being an occasional yet persistent bug into a completely reliable operation, without the need for ‘sleep’ calls anywhere.
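
A minimal sketch of how such a ‘valid command id set’ might look in C++ is shown below; the class and method names are illustrative assumptions rather than the original implementation.

```cpp
#include <cstdint>
#include <mutex>
#include <unordered_set>

class ValidCommandIds {
public:
    // Issue a new id for a play or show-frame command and mark it valid.
    uint64_t issue() {
        std::lock_guard<std::mutex> lock(m_);
        uint64_t id = nextId_++;
        valid_.insert(id);
        return id;
    }

    // Called on stop: any 'in flight' frames tagged with this id will
    // now be quietly ignored by the rest of the pipeline.
    void revoke(uint64_t id) {
        std::lock_guard<std::mutex> lock(m_);
        valid_.erase(id);
    }

    // Pipeline stages call this before passing a frame on.
    bool isValid(uint64_t id) const {
        std::lock_guard<std::mutex> lock(m_);
        return valid_.count(id) > 0;
    }

private:
    mutable std::mutex m_;
    uint64_t nextId_ = 1;
    std::unordered_set<uint64_t> valid_;
};

// Usage in an output stage (sketch):
//   if (!validIds.isValid(frame.commandId)) return;  // drop remnant frame
//   display(frame);
```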

In the next post I will recap the above process of finding and fixing the problems from a human development perspective.


ACCU2016: Talk on Software Architecture Design 3: The Issue of Doubt

The next issue I want to explore is the central one of how we handle doubt. This is crucial in understanding how we come to know anything and so is pivotal in developing our own perception.

If we accept that there are problems with truly knowing reality, there are a number of ways of handling the situation.

One way to handle doubt is to decide that we cannot truly know anything and so should not even try. Just RUN AWAY! as the Knights of Arthur did from the rabbit in Monty Python’s Holy Grail! In this case we just fall back to perceiving the whole of the world without any analysis. This wish can particularly happen with the New Age movement where some people want to escape away from the world into a cloud of unknowing. But humans will always analyze their surroundings to some extent so this technique is rarely followed in reality.

A second way to handle doubt is by deciding that if we cannot truly know the world, all we need do is take a quick look at it and then come up with some hypothesized model. We then leave it to experimentation to tell us if we are right or wrong. In some areas of research, particularly anything to do with living processes, this means we will be disturbing the very processes we are looking at. It is like using the proverbial sledgehammer to crack the proverbial nut, and we know how that turns out for the nut in such a case! Scientific research can implicitly fall into this mode of operation, letting our thinking run ahead of the phenomena – a trap that Goethe highlighted.

I think that neither of these two modes of handling doubt really comes to grips with improving how we generate knowledge about the world.


A more fruitful way is to learn to live with the gap in our knowledge, i.e. stay with the not knowing. This is a distinctly uncomfortable option and is very difficult in the world of commercial software development where we may have time and/or financial pressures. Despite not knowing the best way to solve a problem, we need to use the simplest implementation we can and realize that it is a provisional state, allowing us to return to it when our thinking has matured and we can see our way to a better solution. As Dr. Marian Petre mentioned in her keynote, this handling of ‘provisionality’ is a capacity that distinguishes the expert programmers from their less successful counterparts.

The Disciplined Self

As I mentioned at the end of the section on the path of the programmer, this requires the development of a strong sense of self. It requires that we maintain a disciplined practice when external pressures dictate that an immediate solution is required. We need to learn to hold a calm centre in the face of not knowing the right solution to a problem.

If we do not develop the ability to hold this centre we will not be able to think straight – a necessary function for our survival, commercial or otherwise.


ACCU2016: Talk on Software Architecture Design 2: The Historical Context

[This is the second part of the transcript of my talk at ACCU2016 entitled: “Software Architecture: Living Structure, Art, or Just Hopeful Arrangements of Bytes“]

An enlightening aspect that surprisingly pertains to the issue of software design is the philosophical history that has led us to our current technological society.

[Image: Batalla de Rocroi]
Looking back we can see some origins in the Thirty Years War that took place between 1618 and 1648. Some commentators have drawn parallels with the impact of WW1 and WW2 between 1914 and 1945, saying that they could also be seen as a thirty year war. (See the book “Cosmopolis” by Stephen Toulmin.) The Thirty Years War was a terrible war over much of Europe that resulted in the death of a third of the German population. It was a religious war between Protestants and Catholics, i.e. one religion – two factions – and it raised serious concerns about the subjectivity of religious faith and the human condition. It was this that brought the quest for certainty to a head. The underlying question was: How can we be certain of what is happening in the world around us? And for the faithful in the 1600s, how can we be certain of God’s plan?

Descartes

It was during this time that René Descartes produced his “Discourse on Method” in 1637. He was the father of analytic geometry and of course coined the famous phrase “I think, therefore I am”. But this was predicated on the fact that we first doubt, so the more correct phrase should be “I doubt, I think, therefore I am”. He concluded that, because of our subjectivity, we cannot trust our senses and what they are telling us about the world, so he returned to the point of doubting. Since there was doubt, there must be a being that is doubting. This being, this ‘I’, that is doubting is thinking about that doubt – therefore I am thinking. Since I am thinking, I must exist in order to do that thinking.

Because the church was looking for certainty and because Descartes was able to couch his thought in terms that they could accept, this provided the foundation for the Scientific Revolution. This was followed by the Industrial Revolution which has led us to our current modern technological society. It is interesting to consider the fact that all that we take for granted today represents the end of 300 years or so of work based on Descartes’ philosophical premise: “I doubt, I think, therefore I am” where the aim was to try and eradicate subjectivity.

It is ironic that, although the aim was to be objective, his Cartesian coordinate system can be considered to be based on the structure of the human being! I stand up, and my head could be considered as the Y axis. I stretch my arms out to the side, there you have the X axis. I walk forward and there you have the Z axis.

This points to the difficulties that are implicit in the struggle to eradicate subjectivity – an objective (pun intended!) which I do not consider possible.

Kant

I usually refrain from mentioning Immanuel Kant since I am not a Kantian scholar, but his thinking has formed much of the basis of modern thought. He produced the “Critique of Pure Reason” in 1781, and there is one quote I wish to highlight here from his considerable body of work. He said that “the world in itself is unknowable”, and this strengthened Descartes’ approach of not trusting our senses. It has given our modern scientific and technological society the excuse to allow our thinking to run ahead of the phenomena of the world.

This activity may sound familiar if you think back to the Path of the Programmer. It is a characteristic of the Journeyman phase.

With regard to my previous workshop on imagination, an area dealing with educating our subjectivity, it is interesting to see that one commentator, Mark Johnson, has noted that Kant had difficulty with imagination – Johnson states that he was “not able to find a unified theory of imagination in Kant’s writings” (The Body in the Mind p166).

Goethe

The third person I want to mention, and the one I feel most drawn to, is Goethe. It was Goethe who raised the warning flag to say that there was a problem with the underlying philosophy and practice of the scientific method. He pointed out that there was too much over-hypothesizing and that the thinking was going ahead of the phenomena of the world. Observation was not being given enough time.

This should ring alarm bells for any programmer because it is exactly what happens when someone takes an undisciplined approach to debugging.

Goethe, however, was particularly interested in understanding the growth of plant life. He wrote the Metamorphosis of Plants in 1788 and identified two very important activities. The first one is Delicate Empiricism (or “Zarte Empirie” in German), i.e. carefully collecting the data, carefully observing the world without overly disturbing its processes.

The second activity, which is what gave me the impetus to give my previous workshop on imagination in 2014, is Exact Sensorial Imagination. This is NOT fantasy, but exact, grounded imagination congruent with the observed phenomena. Goethe was trying to understand how plants grew and how their forms changed during growth.

For me this links to how software projects grow over time, as if they have a life of their own. A programmer needs to have a grasp of how the current software forms may change over time within such a context if they are to minimise future bugs.

The key difference between Descartes and Goethe is that Descartes was trying to eradicate subjectivity whereas Goethe was wanting to educate subjectivity.

The next important phase in philosophical thought is the advent of phenomenology in the 1900s. The realization that the process of coming to know something is crucial to, and as important as, the conclusion. Goethe is not considered a phenomenologist as he focused on specific phenomena rather than the philosophy behind what he was doing, but he definitely prefigured some of their ideas and so could be called a proto-phenomenologist.

We need to understand that phenomenology is a sea-change in philosophical thought. Here we are, living in a modern technological society based on 300 years of progress initiated by Descartes and his subject/object duality, and now the underlying foundational thinking has changed significantly.

The discipline of software development is at the forefront of trying to understand what this change of thinking means in practice, though it may not have realised it. We need to understand how we develop our ideas and we need to understand our own cognitive biases, the subject of Dr. Marian Petre’s keynote “Balancing Bias in Software Development“. The point here is that we can do a certain amount in teams, but there is also some personal work to do in understanding our own learning processes.

There is a wonderful quote by Jenny Quillien who has written a summary of Christopher Alexander’s Nature of Order books. She says in a preface:

“Wisdom tells us not to remain wedded to the products of thought but to court the process.”
(Jenny Quillien Delight’s Muse)

I think this is a lovely way of putting it. The process needs courting, it has to be done carefully as with Goethe’s Delicate Empiricism.

For those who wish to understand Goethe’s work and the philosophical issues around phenomenology, a primary source is Henri Bortoft. His writing is very understandable, particularly his book “Taking Appearance Seriously” and he draws on the work of Gadamer, one of the more recent phenomenologists.


ACCU2016: Talk on Software Architecture Design 1: The Path of the Programmer

[Following on from my introductory poem, this is the first of a series of posts providing a transcript of my talk at ACCU2016 entitled: “Software Architecture: Living Structure, Art, or Just Hopeful Arrangements of Bytes“. I have modified it to make it read better, cutting out the usual Ums and Errs!]

Introduction
The impetus for this talk came out of a chat I had with a friend, where I was ranting – as I can do – about code, and then realized that of course it is easy to rant about other people’s code. This prompted me to look back at my own experience. I started coding for a living back in 1980 – a fact that doesn’t bear thinking about! – and have spent most of my career implementing high data rate video editing systems. Until recently I worked in a company that does TV and film effects and editing systems, working on a large C++ system of more than 10MLOC. I have now moved into the CAE sector.

This is quite a ‘soft’ talk and I will be following on from some points in the keynote (Balancing Bias in Software Development) given by Dr. Marian Petre, although I will drop into some more grounded issues around video player pipeline design and some of the design issues that I have come across.

As I mentioned, I had a sense of frustration with the quality of what was getting produced in a commercial context, and frustration in terms of finding people who could make that switch from doing the actual coding and implementation to taking a more structural view. But though I started coding in 1980, it was not until 1995 that I can say I was actually happy with what I was producing. That is quite a sobering thought. OK, maybe I have the excuse that I did not really get into Object Orientation until 1985/6, and the Dreyfus brothers say it takes 10 years to become an expert in a domain, but even so…

I therefore want to delve into my own experience and try to understand why this takes so long. This is an issue not so much about teamwork, but about what we could possibly do individually, drawn from my own experience as a practitioner working with large codebases.

In terms of my inspirations with regard to software architecture, Christopher Alexander of course is one, and there is one from left field. I got involved in starting a Steiner school for my children back in the 1990s and Steiner’s epistemology, drawn from a foundation coming from Goethe, is actually quite relevant.


I will recap some of the points from my talk at ACCU2013 about “Software and Phenomenology”, and my workshop in ACCU2014 about “Imagination in Software Development”, but will be taking a slightly different slant on that content.

The Path of the Programmer

I want to start with some reflections on the path of the programmer as I have come to see it, borrowing an idea from Zen about the three phases on the path to enlightenment.

There is the initial NOVICE phase where you are still learning about the tools you have at your disposal.


A lot of your thinking is going to be Rule Based since you are learning the steps you need to take to do the job. The complexity of your thought is generally going to be less than the problem complexity you are dealing with when you get into ‘live’ industrial work, and hence you are producing brittle code, and/or it is not doing all that is needed. Here you are aware of your own limits because you know you do not know things, but you are unaware of your own process. I am not here talking about team development process, I am talking about your own personal learning process.

This level is thus characterized by an undisciplined, partial self-awareness. You have some awareness of your own limits, but the lack of knowledge about your own learning process means that what awareness you have is undisciplined.

The next phase is what I call the dangerous phase, the JOURNEYMAN phase. It was about 1984 when I was in this phase.


Here you have a better knowledge of tools, having learnt about many of the programming libraries available to you. But the trap here is that the Journeyman is so very enamoured of those tools, and this conforms to the upward spike in the confidence curve that Dr. Marian Petre talked about this morning (The Dunning-Kruger effect).

Here the problem is that you can get into Abstract thinking and this can lead you to having an overly complex view of the solution. Your thinking here is more complex than the problem warrants. It is quite possible that up to 80% of the code will never be used. Therefore you are unaware of your own thinking limits and this can lead to an experience of total panic, especially if you are working on larger systems. [About a quarter of the listeners raised their hand when I asked if anyone had ever experienced this] This conforms to the downward spike that occurs after the upward spike on the confidence curve.

One anecdote I have is the story of one rather over-confident colleague who was given responsibility for a project. The evening before the client was due to turn up for a demo he was still coding away. When I came into work the next morning there was a note on his desk saying ‘I RESIGN’. He had been working through the night and didn’t manage to get to any solution. Of course the contract was lost.

This highlighted the total lack of awareness about his own limits. In this phase I too remember having an arrogant positivity – “it’s just software” – with the accompanying assumption that anything is possible. I had an undisciplined lack of self-awareness. Some people can stay in this phase for a long time, indeed for their whole career, and it is characterized by an insistence on designing and coding to the limit of the complexity of their thinking. This means, by definition, that they will have big problems during debugging because more complex thinking is needed to debug a system than was used in its creation.

We have gone here from one undisciplined state of partial self-awareness to another undisciplined state of no self-awareness. Of course this could be seen to be a bit of a caricature but you know if you hit that panic feeling – you are in this phase.

The next phase is the MASTER phase. In the past I have hesitated to call it the Master phase, referring to it instead as the Grumpy Old Programmer phase!


Here we have a good knowledge of tools, but the difference now is that you will be using Context Based thinking. You are looking at the problem you have in front of you and fitting the tools to that problem. There is a strong link here with a practice when flying aircraft where you need to read from the ground to the map, not the other way around. You must do it correctly because there have been a number of accidents where the pilots read from the map to the ground, thus misidentifying their location.

It is the same with problem-solving. Focus on the problem, use the appropriate tools as you need them. It is interesting what Dr. Marian Petre said about how experts can seem as though they are novices – which is exactly what I feel like. Sometimes I look at my code and think “that doesn’t really look that complicated”. You bring out the ‘big guns’ when you need them, hopefully abstracted down under a good interface, but you know you need to keep the complexity down because there will be a lot of maintenance in the future, where you or others will have to reason about the code.

In this phase the software complexity is of the order of the problem complexity, perhaps a bit more because you will need some ‘slack’ within the solution. At a personal level the major point here is that you are aware of your own limits, because in the previous phase you reached that panicked state.

One of the big things I have learnt through my career is the need to develop an inner strength and ability to handle this stressed state. For example there will be a bug. The client may panic. This is to be expected. The salesman may panic. Still possibly to be expected. As a developer if your manager panics too, you have a problem, because the buck will stop with you. Can you discipline your own thinking and your own practice so that you can calmly deal with the issue, regardless of how others are handling the situation? This is the struggle you can get in a commercial coding environment.

Implicit in this description is that you have developed a disciplined personal practice.

So in summary:

Novice

  • Rule-based thinking
  • Undisciplined
  • Some self-awareness.

Journeyman

  • Abstract thinking
  • Undisciplined
  • No (or very little) self-awareness.

Master

  • Contextual thinking
  • Disciplined
  • Deep self-awareness.


STUDY DIARIES: Background on Texts

This is a catch-up post to bring you up to date with some of the study history from the last couple of decades. Actually it is not a long list since we truly have taken our time!

The first thing to say is that we initially wanted to focus on Steiner’s philosophical written work. This was driven by a wish to start right at the beginning and, for my part, NOT wanting to go through his lectures. He took a massive amount of care with his written work, feeling that he had a deep responsibility to his readers. Obviously lectures would not be able to have that same depth of care since they were more of a living experience.

[Background Point: Steiner initially did not want his lectures written down at all since he maintained that the lectures were delivered for the particular audience. However events somewhat overtook him and some of his critics started misquoting what he had said. Thus he felt the need to have the lectures transcribed by a stenographer. I believe there are about 6000 lectures available now.]

However, when some lectures have piqued my interest I have actually found it good to listen to readings of them, since it gives you that auditory experience which, I think, works well with the content. You do sometimes need to remember that he was talking at a different time. See Dale Brunsvold’s site where he has produced audio recordings of his readings – which has been a great resource for allowing me to listen to lectures in the car. A real labour of love I think, and deserving of a small donation towards hosting costs if you do end up using his site a lot.

However, back to the study list so far:

1: Truth & Knowledge : 1892
The very beginning. Steiner’s epistemological doctoral dissertation and a prelude to the Philosophy of Freedom.

2: Boundaries of Natural Science : 1920
A lecture series with an exploration of how Goethean Science and the Philosophy of Freedom can help us go beyond the limits of natural science to provide a healthy foundation for social science.

3: Anthroposophy Science : 1921
A lecture series that somewhat covered our favourite subject of technology and its relationship to the development of consciousness.

4: Philosophy of Freedom : 1894
Also known as the Philosophy of Spiritual Activity, to highlight the fact that freedom is never a finished thing but requires constant activity. This is the main foundational book for all he subsequently worked on. In hindsight we should perhaps have studied this earlier.

5: Study of Man : 1919
This is a series of lectures given to teachers of the first Steiner School in Stuttgart.

In future posts I will attempt to summarize some of the earlier study texts but some of the content is now lost in the mists of time.

The main thing that I remember is that in the early stages of our study it was too easy to just skip through the texts without really grasping them. Thus it was that we really, really slowed it down and would not move forward if we did not think we had ‘got it’.

There was many a wrinkled brow, coupled with a feeling of “I am just not getting this”, which brings me to an important question that we and many other people have about Steiner’s output:

“For goodness sake, why is it so DIFFICULT to work with?”

From where I stand now, I can say that there is a very, very good reason for this.

Steiner is not giving us finished answers. His whole raison d’etre is to help us come to a different way of perceiving the world, something that our education beats out of us. In what he writes, he is trying to give you indications about different ways of seeing things, AS WELL AS doing it in a way that challenges you to try and develop this different way of perception as you study.

Remember I talked about congruence in a previous post between his content and method? But oh dear me, it really can make it hard going at times. The way I have come to see this now is that we are literally creating extra organs of perception in our thinking, and this is a long process. It is as if we have been given all the physical senses at birth, but now we need to take our own development in hand, creating new dynamic senses in our thinking.

So no quick fixes here.

[Definition: Anthroposophy was the term Steiner coined for his approach. Literally “wisdom of the human being”. Indicating that Steiner felt that it was important to understand and develop ourselves, embracing this task consciously, since we represent a strongly co-creative force in the world.]

Homage to a Gentle Seer

After many years of feeling the loss of my father, it’s only recently that I have reflected more deeply on why I find myself missing him so much. He died back in 2001, the day before 9/11, almost as though he knew there would be work to do!

I can get caught unawares and then end up crying like one of the Lost Boys from Neverland. The most recent episode being when I was watching the latest Cinderella film directed by Kenneth Branagh. Despite the Disney heritage it was a surprisingly good production and the scene that caught me unawares was the one where the old king, played by Derek Jacobi, was dying and telling his son to marry for love. Yeah – ok – clichéd or what? Also Mr Jacobi is someone who strongly resembles my father in his later years so it was probably no surprise that this scene reduced me to tears.

But I wanted to understand just where this sense of loss was really coming from.

I remember my mother commenting about Dad and saying how he was a dreamer. He would see things differently and was a very gentle person who loved children very much, seeing something great in them – a facet he inherited from his mother who was always looking after the waifs and strays of the neighbourhood. I particularly remember how he could just sit with my daughter and just ‘be’. He had a way of sitting back in his mind’s eye, holding back his preconceptions and waiting to see what the world was really showing him.

It was when I saw his nature in this light – that it hit me between the eyes. He was one of those rare folk who could SEE. I don’t mean visions & things, I mean he could see BEHIND what the world was presenting and give it a bit of deeper experiential thought.

In this time of surfaces and quick fixes it is something I truly miss.

It is worth recounting one of his family anecdotes, a great story from his younger years. He was not well educated, having left school at an early age and from there becoming a consummate dancer and small-time actor – though you couldn’t tell from a performance of his in one of the Ealing comedies!

He once told me about a time when he was listening to friends from work as they were discussing something about current affairs. His friends were all highly educated and of an intellectual turn of mind and until this point he had always been shy of joining in, feeling that his lack of education held him back. However, the more he listened to the conversation, the more he realized that his friends didn’t have a clue about the issues, despite their education, and that in many cases he could see things more clearly than they could.

It was obviously one of those great epiphanies for him and thereafter he stopped holding himself back, and allowed his more experiential take on the world to blossom into a truly foundational wisdom.

There are times when I could do with hearing some of his wisdom.

But what it has given me is a deep thankfulness for his beautiful parenting, his wise words and his insight that great education does not necessarily lead to great wisdom.

It is worth noting that although he might like this post, he would feel deeply embarrassed on the outside and most likely crack an awful ‘Daddy’ joke to defuse the feeling!

But what the hell.
Here’s to you Dad, a gentle teacher and seer.

PAINTING: Observation and Painting Dancers

I recently discovered a lovely painting of Tango dancers by Pat Murray which prompted me to get in touch with her for some tips about painting dancers. The things she said – see below – reminded me why I love painting so much – when I get the time – and how it all hooks into a phenomenological observation process. I love how she notes that it does not matter about the accuracy of fast sketches because so much still goes into your subconscious. I have definitely experienced this with my ‘Rose’ painting and the one of my daughter, ‘Princess’.

Fascinating – a true life’s work.

Unfortunately the original of this watercolour has been sold but you can get hold of prints from her website.


‘Afternoon Milonga’ : Watercolour by Pat Murray

She was very helpful and provided these thoughts about the above painting:

“I also provide the portrait version of ‘Afternoon Milonga’ as a print now as a friend who dances Tango said that British Tango dancers would relate to the church hall image as so much Tango is danced in places like that in this country.

This painting was inspired by watching a friend dance at an afternoon Milonga in a church hall in Sneinton, Nottingham. Try as I might, I could not locate the exact place but tried to recreate the atmosphere that I’d remembered. Late afternoon light was filtering down from a high window and the music was beautiful and melancholic. I’m sure you know what I mean.

I actually did some lightning sketches, a bit ropey but that act of observation is crucial as so much goes into your subconscious mind. Also it helped me to identify the exact moves that float my boat. In my case they are the more subtle ones.

I did three quick tonal sketches from freeze framing youtube footage. You will see these provide me with reference for where the light and shade fall. I think watching carefully and really fast drawing is the trick. Even if what you do is wildly inaccurate, you will capture some of the movement.”

I suspect that the hall in question in Sneinton is the Hermitage Community Centre because I think I can see the big window from the painting in the header photo from their site.

Pat very kindly sent me the following digital copies of her sketches.

Tango Sketch 1

Tango Sketch 2

Tango Sketch 3

Many thanks Pat!

Post-ACCU2014 Thoughts

My thinking has been working overtime since I attended and presented at the ACCU2014 conference in Bristol.

[The delay in producing another post has been due to a lot of rather extensive personal development that has been occurring for me. Add to this some rather surreal experiences with dance – clubbing in Liverpool being one particular – and you might understand the delay. But that will be the subject of a separate post on dancing – I promise!]

But back to thoughts subsequent to my attendance at ACCU2014…

The Myth of Certification

The Bronze Badge. Small but beautiful.
One experience that really got me thinking was a pre-conference talk by Bob Martin reflecting on the path the Agile software development movement has taken since its beginnings. He mentioned an early quote from Kent Beck that Agile was meant to “heal the split between programmers and management”, and that one of the important guiding principles was transparency about the technical process.

But then there was a move to introduce a certification for what are called ‘SCRUM Masters’, key personnel – though not project managers – in an Agile software development approach. The problem is that it is just too simplistic to think that getting a ‘certified’ person involved to ‘manage’ things will sort everything out. This is never how things happen in practice, and despite early successes Bob observed that Agile has subsequently not lived up to its original expectations.

The transparency that the Agile founders were after has once again been lost. I consider that this happened because the crutch of certification has fostered inappropriately simplistic thinking for a domain that is inherently complex.

My inner response to this was: Well what do you expect?

I very much appreciate and value the principles of Agile, but there is a personal dimension here that we cannot get away from. If the individuals concerned do not change their ideas, and hence their behaviour, then how can we expect collective practices to improve? As I experienced when giving my recent workshop, it is so easy to fall prey to the fascination of the technological details and the seeming certainty of defined processes and certified qualifications.

I remember a conversation with my friend and co-researcher Paul in the early days of embarking upon this research into the personal area of software development. We wanted to identify the essential vision of what we were doing. The idea of maybe producing a training course with certification came up. I immediately balked at the thought of certification because I felt that an anonymising label or certificate would not help. But I could not at the time express why. However it seems that Bob’s experience bears this out and this leaves us with the difficult question:
How do we move any technical discipline forward and encourage personal development in sync with technical competence?

The Need for Dynamic Balance

K13 being winch launched, shown here having just left the ground.
This was another insight as to why I enjoy ACCU conferences so much. There is always the possibility of attending workshops about the technical details of software development and new language features on the one hand, along with other workshops that focus on the more ‘fluffy’ human side of the domain.

I live in two worlds:

  1. When programming I need to be thoroughly grounded and critically attend to detail.
  2. I am also drawn to the philosophy (can’t you tell?) and the processes of our inner life.

Perhaps the latter is to be expected after 30 years of seeing gadgets come and go and the same old messes happen. This perspective gives me a more timeless way of looking at the domain. Today’s gadget becomes tomorrow’s dinosaur – I have some of them in my garage – and you can start to see the ephemeral nature of our technology.

This is what is behind the ancient observation that the external world is Maya. For me the true reality is the path we tread as humans developing ourselves.

Also we need to embrace BOTH worlds, the inner and the outer, in order to keep balance. Indeed Balance is a watchword of mine, but I see it as being a dynamic thing. Life means movement. We cannot fall into the stasis of staying at one point between the worlds, we need to move between them and then they will cross-fertilise in a way that takes you from the parts to the whole.

In our current culture technical work is primarily seen in terms of managing details and staying grounded. But as any of my writings will testify, there is devilry lurking in those details that cannot be handled by a purely technical approach.

Teacher As Master

So John - Do I have to wear the silly hat? Well Bill, only if you want to be a REAL glider pilot.
Another epiphany that I experienced at the conference was a deeper insight into the popular misconception that teachers are not competent practitioners. There is the saying that “Those that can – Do. Those that can’t – Teach”. So there I was in a workshop wondering if that meant that, because I was teaching programming, I was automatically not as good at programming. But then a participant highlighted the fact that this is not so in traditional martial arts disciplines.

Indeed – teaching was seen as a step on the path to becoming a master.

We – hopefully – develop competence which over time tends to become implicit knowledge, but to develop further we need to start teaching. This will force us to make our knowledge explicit and give us many more connections of insight, indeed helping us to see the essential aspects of what we already know. There may be a transitional time where our competence might suffer – a well known phase in learning to teach gliding – as well as being a normal learning process whenever we take our learning to a higher level.

So I think the saying needs changing:
Those that can Do. Those that are masters – Teach.