ACCU2016: Talk on Software Architecture Design 7: Perceiving Organising Principles

Perceiving Organising Principles requires us to develop a living and mobile thinking perception.

Unfortunately, as programmers, we are at a disadvantage.

We work in a domain where much of our thinking must be fixed into a rule-based structure in order to imagine how a computer program will function. This can have the unwanted side effect of making it difficult to think in a mobile, living way. Multi-threaded programming demands this mobile thinking, which is why it is so difficult.

At a personal level, if we want to develop this other way of seeing, we need to engage in activities that foster that mobile mode of cognition. Perceiving livingness, almost by definition, requires that we handle ambiguity. This is what is required when we are working in the ‘gap’, or whenever we are dealing with human situations.

Logical thinking can cope with known and static issues, but as programmers we need to be very aware of the boundaries of our knowledge, more so than the lay person due to the inherent fixity of the domain of computer programming.

Alexander Revisited
At this point it is useful to look at some of Christopher Alexander’s ideas about the perception of beauty and links to what I have been saying about the idea of Cognitive Feeling.

Alexander started by defining a Pattern Language to help foster good architectural design – what he called Living Structure. This metamorphosed into his masterwork, The Nature of Order, where he tried to gain a better understanding of why we find certain structures beautiful.

In The Nature of Order, Volume 1, Chapter 5, he identified the following 15 properties of Living Structure:

  • Levels Of Scale
  • Strong Centres
  • Boundaries
  • Alternating Repetitions
  • Positive Space
  • Good Shape
  • Local Symmetries
  • Deep Interlock And Ambiguity
  • Contrast
  • Gradients
  • Roughness
  • Echoes
  • The Void
  • Simplicity And Inner Calm
  • Not-Separateness

If you just look at this as a list of items, it can be difficult to see how they may be useful in design beyond serving as heuristic guidelines. Useful as those are, if we look at them in the light of the dynamic concept of the Organising Principle, they make a lot more sense.

A major point is Alexander’s use of the word: Living. As I point out, this implies ambiguity. These 15 Properties can therefore be seen instead as Organising Principles, and when we try to ‘bring them down’ into a more fixed understanding we will only be seeing one way of looking at each one.

Perceiving the Organising Principle as a Disciplined Artistic Process
In order to develop a mobile dynamic cognition that can better perceive Organising Principles, my thesis is that we need to take up some artistic pursuit in a disciplined and self-aware way. Do whatever appeals to you. For me I find painting and dance work well.

Let’s look at how the practice of these pursuits parallels software development, or indeed any technical effort.

The following image is a watercolour painting of my daughter.

Princess

Freehand painting based on photo of my daughter.

This was one of my first forays into the world of painting and like the good novice artist I was, I decided to draw the picture first, using a photograph as a reference.

It took me 3 hours!

The first effort took 2 hours. The next took 1 hour and the last two each took half an hour, with the final result intended as the basis for the final painting. Being the worried novice that I was I decided to perform a ‘colour check’ painting freehand before doing the final version. In the end this became the final painting I have included here as I found that when I tried to paint into the final drawing it did not have the same life as the freehand painting.

This is an example of the difference between the ‘master’ freehand approach as compared to the ‘journeyman’ drawn approach. Of course I do not consider myself to be a master painter, but this example illustrates the self-developmental dynamic inherent in the artistic process.

We can also see here the need to do the foundational, ‘analytic’ work, in this case the drawing; followed by the ‘gap’ of putting the drawing away and using the freehand skill to come up with the ‘solution idea’.

The following is a painting by Jim Spencer; for me it is an example of how less is more, and it illustrates how such minimalism is an essential aspect of any mastery. In this case Jim began learning art just after the Second World War. (Also see my post Minimalist Mastery)

RedSkyAtNightSmall

The third example of an artistic pursuit is that of dance, in this case Argentine Tango. This particular dance is a form strongly founded on becoming far more conscious of what is a primary human activity: walking. (See my post on Dance as True Movement)

Here there is a need for structure, and for a mobile process of interpretation and improvisation, both founded on a disciplined form of the dance. It can take years to learn how to ‘walk’ again, but if followed in a disciplined manner it can lead to sublime experiences of ‘Living Structure’ as the ‘team’ of two people dance from a common centre of balance.

In conclusion I hope you have been able to see the implicit link between Art and Technology and the value of balancing ourselves up as human beings.

Thank you for your attention.

In response to my statement about dancing John Lakos (author of Large Scale C++ Software Design) asked for some tango teaching at the end of the talk! The picture was taken by Mogens Hansen.

CharlesAndJohnTangoSmall


ACCU2016: Talk on Software Architecture Design 5: Active Design Ideas

In the last post I highlighted some specific design problems and associated solutions. Now I want to look at these solutions a little more deeply.

To refresh our memory the solutions were as follows:

  1. Separating Mutex Concerns.
  2. Sequential Resource Allocation.
  3. Global Command Identification.

I want to characterise these differently, because these names sound a little like pattern titles. Although we as a software community have had success using the idea of patterns, I think we have fixed the concept rather more than Christopher Alexander may have intended.

I want to rename the solutions as shown below in order to expressly highlight their dynamic behavioural aspect:

  1. Access Separation.
  2. Sequential Allocation.
  3. Operation Filtering.

You might have noticed that in the third example the original concept of “Global Command Identification” represents just one possible way to implement the dynamic issue of filtering operations. This is something it has in common with much of the published design pattern work, where specific example solutions are given. To me, design patterns represent a more fixed idea that is closer to the actual implementation.

Others may come up with a better renaming, but I am just trying to reach a more mobile and dynamic definition of the solutions. Looking at the issues in this light starts to get to the core of why it is so hard to develop an architectural awareness.

If you can truly understand, or ‘grok’, the core concept of this characterisation, regardless of the actual words, you will see that they do not really represent design patterns – not in the way we have them at the moment.

This is where there is a difference between the architecture of buildings – where design patterns originated – and the architecture of software. Although both deal with the design of fixed constructs, whether it be the building or the code, the programmer has to worry far more about the dynamic behaviour of the fixed construct (their code). Yes – a building architect does have to worry about the dynamic behaviour of people inhabiting their design, but software is an innately active artefact.

Let me recap the debugging and design fixing process in terms of the following actions that are carried out in order:

1: Delicately Empirically Collect the Data.
Here we have to be very aware of the boundaries of our knowledge and collect information in a way that does not disturb the phenomenon we are looking at. Awareness of our own thinking process is vital here.

2: Imagine into the Problem Behaviour.
We have to imagine ourselves into the current behaviour that the system is exhibiting. (This is the hard part when you are under pressure, and it requires a strong focus to understand what the existing design is doing.)

3: Imagine into the Required Behaviour.
We need to imagine what the behaviour of the system NEEDS to be, and it is here that we start to meet the ‘gap’ between problem and solution. It may indeed only need a one-line fix, but quite likely there is a deeper design problem. Again, this is a point where our self-awareness is important. Do we have the discipline to make ourselves stop and think more carefully and widely about the presenting problem?

4: THE GAP. Cognitively Feeling for the best Solution Concept.
In this stage there is a very fine “Cognitive Feeling” in action to decide what is a good fit to the problem. For the experienced programmer this is more than just a question of “Does this solution fit the requirement?”

There is the consideration of whether the proposed solution idea is going to be a sustainable fix during the future lifetime of the project.

This question is much like asking myself if I will still find this painting beautiful in 10 years’ time.

YachtClubSmall

There is a current widely held belief that the best procedure for coming up with a design solution is to produce many possible alternatives and evaluate them in order to choose the best one. In practice I have found that this very rarely – if ever – happens.

I usually arrive at a single design solution by trying out the multiplicity of possible solutions while in the ‘gap’ where I am considering various alternatives – imagining each of them in operation, possibly ‘drawing’ the thoughts out on a whiteboard as I think.

In this part of the process the more experienced programmer will slow things down to the extent of even putting in a provisional simple solution that gives them some breathing, or thinking, space. This is the idea of provisionality mentioned by Marian Petre, because this mode of design thinking requires time and reduced pressure.

It is amazing how often this happens in the shower!

Of course this is predicated on the fact that I have done the required detailed groundwork, but as I mentioned in the poem, our logical thinking can only take us to the boundary of what we know. Trying to push to go faster results in inadequate and buggy designs that are based on immature thinking.

This is the central conundrum of software development. The more we dive down into detailed analysis, the more we encounter these ‘softer’, heuristic elements.

5: Implementation.
Finally we get to the implementation. As you will have seen it is far too easy to jump into “premature implementation”. It is hard, if not impossible, to teach people just how small a part the coding is of the whole process. It needs to be experienced. Until you have seen how a good design triggers an amazing collapse in code complexity, the importance of taking the time to search for that great design is not an obvious conclusion. This is a fundamental eye of the needle that all programmers need to go through.

This is the main reason I like programming:

I get less code.
I get something I can reason about.
I get something that does the job!

Beautiful!

In the next post I am going to show how the dynamic design solution ideas and the human analysis process link to what I will call the “Organising Principle”, a term I have borrowed from Rudolf Steiner’s lexicon.


ACCU2016: Talk on Software Architecture Design 4: A Design Example

[The following transcript is more for the techies of my readership. For those of a less technical inclination, feel free to wait for the next post on “Active Design Ideas” which I have separated out due to the length of this post.]

I want to underpin the philosophical aspect of this discussion by using an example software architecture and considering some design problems that I have experienced with multi-threaded video player pipelines. The issues I highlight could apply to many video player designs.

The following image is a highly simplified top-level schematic, the original being just an A4 PDF captured from a whiteboard – a tool I find much better for working on designs than any computer-based UML drawing tool. The gross motor movement of drawing by hand ‘in the large’ seems to help the thinking process.

Player

There are three basic commands for controlling any video player that has random access along a video timeline:

  • Show a frame
  • Play
  • Stop

In this example there is a main controller thread that handles the commands and controls the whole pipeline. I am going to conveniently ignore the hard problem of actually reading anything off a disk fast enough to keep a high-resolution, high frame-rate player fed with data!

The first operation for the pipeline is to render the display frames in a parallel manner. The results of these parallel operations, since they will likely be produced out of order, need to be made into an ordered image stream that can then be buffered ahead to cope with any operating system latencies. The buffered images are then transferred to an output video card, which has only a relatively small amount of video frame storage. This storage needs to be modelled in the software so that (a) you know when the card is full; and (b) you know when to switch the right frame to the output without producing nasty image-tearing artefacts.
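The re-ordering stage described here can be sketched as a small reorder buffer. This is an illustrative sketch only, assuming integer frame numbers and string image payloads; a real pipeline would use its own frame types:

```cpp
#include <map>
#include <optional>
#include <string>

// Hypothetical sketch of the re-ordering stage: parallel render threads
// deliver frames out of order; we release them strictly in sequence.
class ReorderBuffer {
public:
    // A render thread delivers a finished frame, possibly out of order.
    void push(int frameNo, std::string image) {
        pending_[frameNo] = std::move(image);
    }
    // Returns the next frame in sequence, if it has arrived yet.
    std::optional<std::string> pop() {
        auto it = pending_.find(next_);
        if (it == pending_.end()) return std::nullopt;  // gap: must wait
        std::string img = std::move(it->second);
        pending_.erase(it);
        ++next_;
        return img;
    }
private:
    std::map<int, std::string> pending_;
    int next_ = 0;  // next frame number owed to the buffering stage
};
```

The downstream buffering stage simply polls `pop()` and stalls on a gap, so frames can never leave this stage out of order.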

These are all standard elements you will get with many video player designs, but I want to highlight three design issues that I experienced in order to get an understanding of what I will later term an “Organising Principle”.

First, there was slow operation, resulting in non-real-time playout. Second, you would occasionally get hanging playout or stuttering frames. Third, you could very occasionally get frame jitter on stopping.

Slow operation
Given what I said about Goethe and his concept of Delicate Empiricism, the very first thing to do was to reproduce the problem and collect data, i.e. measure the phenomenon WITHOUT jumping to conclusions. In this case it required the development of logging instrumentation software within the system – implemented in a way that did not disturb the real-time operation.

With this problem I initially found that the image processing threads were taking too long, even though they were doing their job in time once they had their data. So something was slowing them down BEFORE they could start their processing.

The processing relied on fairly large processing control structures that were built from some controlling metadata. This build process could take some time so these structures were cached with their access keyed by that metadata, which was a much smaller structure. Accessing this cache would occasionally take a long time and would give slow operation, seemingly of the image processing threads. This cache had only one mutex in its original design and this mutex was locked both for accessing the cache key and for building the data structure item. Thus when thread A was reading the cache to get at an already built data item, it would occasionally block behind thread B which was building a new data item. The single mutex was getting locked for too long while thread B built the new item and put it into the cache.

So now I knew exactly where the problem was. Notice the difference between the original assumption of the problem being with the image processing, rather than with the cache access.

It would have been all too easy to jump to an erroneous conclusion, especially prevalent in the Journeyman phase, and change what was thought to be the problem. Although such a change would not actually fix the real issue, it could have changed the behaviour and timing so that the problem may not present itself, thus looking like it was fixed. It would likely resurface months later – a costly and damaging process for any business.

In this case the solution was to have finer-grained mutexes: one for the key access into the cache and a separate one for accessing each data item, which was then lazily built on first access.
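A minimal sketch of this fix, with illustrative names and types (the real cache held large processing control structures, not strings): the cache mutex is held only briefly for the key lookup, while each entry carries its own mutex guarding its lazy build, so a reader of an already-built item never blocks behind a slow build of a different item.

```cpp
#include <functional>
#include <map>
#include <memory>
#include <mutex>
#include <string>

// Illustrative sketch of the finer-grained locking fix.
class StructureCache {
public:
    using Builder = std::function<std::string(const std::string&)>;
    explicit StructureCache(Builder build) : build_(std::move(build)) {}

    const std::string& get(const std::string& metadataKey) {
        std::shared_ptr<Entry> entry;
        {
            // Cache mutex: held only for the key access, never for a build.
            std::lock_guard<std::mutex> lock(cacheMutex_);
            auto& slot = entries_[metadataKey];
            if (!slot) slot = std::make_shared<Entry>();
            entry = slot;
        }
        // Per-item mutex: a slow build blocks only threads wanting this item.
        std::lock_guard<std::mutex> lock(entry->mutex);
        if (!entry->built) {
            entry->data = build_(metadataKey);  // lazy build on first access
            entry->built = true;
        }
        return entry->data;
    }
private:
    struct Entry {
        std::mutex mutex;
        bool built = false;
        std::string data;
    };
    std::mutex cacheMutex_;
    std::map<std::string, std::shared_ptr<Entry>> entries_;
    Builder build_;
};
```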

Hanging Playout or Stuttering Frames
The second bug was that the playout would either hang or stutter. This is a great example because it illustrates a principle that we need to learn when dealing with any streamed playout system.

The measurement technique in this case was extremely ‘old school’, simply printing data to a log output file. Of course only a few characters were output per frame, because at 60fps (a typical modern frame-rate) you only have 16ms per frame.
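As an illustration of the idea (not the original instrumentation), per-frame events can be appended to a preallocated in-memory buffer and only formatted and written out after the run, keeping file I/O out of the 16ms frame budget:

```cpp
#include <array>
#include <cstdio>
#include <string>

// Hypothetical sketch: record a couple of bytes per frame into a
// preallocated ring buffer; dump to text only after playout stops.
class FrameLog {
public:
    void record(int frameNo, char event) {
        entries_[head_ % entries_.size()] = {frameNo, event};
        ++head_;
    }
    std::string dump() const {
        std::string out;
        std::size_t count = head_ < entries_.size() ? head_ : entries_.size();
        for (std::size_t i = head_ - count; i < head_; ++i) {
            const Entry& e = entries_[i % entries_.size()];
            char buf[32];
            std::snprintf(buf, sizeof buf, "%d%c ", e.frame, e.event);
            out += buf;  // formatting happens here, after the real-time run
        }
        return out;
    }
private:
    struct Entry { int frame; char event; };
    std::array<Entry, 1024> entries_{};
    std::size_t head_ = 0;
};
```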

In this case the streaming at the output end of the pipeline was happening out of order, a bad fault for a video playout design. Depending upon how the implementation was done, it would either cause the whole player to hang or produce a stuttered playout. Finding the cause took a lot of analysis of the output logs and many changes to what was being logged: an example of needing to be clear about the limits of one’s knowledge and of properly identifying the data that next needed to be collected.

I found that there was an extra ‘hidden’ thread added within the output card handling layer in order to hand off some other output processing that was required. However, it turned out that there was no enforcement of frame streaming order. This meant that the (relatively) small amount of memory in the output card would get fully allocated, and this would give rise to a gap in the output frame ordering. The output control stage was unable to fill the gap in the frame sequence with the correct frame, because there was no room in the output card for that frame. This would usually result in the playout hanging.

MindTheGapCropped

This is why, with a streaming pipeline, where you always have limited resources at some level, allocation of those resources MUST be done in streaming order. This is a dynamic principle that can take a lot of hard won experience to learn.
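The principle can be sketched as an allocator that grants the scarce resource (hypothetically, output-card frame slots) strictly in stream order, so a later frame can never occupy a slot that an earlier frame still needs:

```cpp
#include <optional>

// Illustrative sketch: slots in the output card are granted only to the
// next frame in sequence, enforcing allocation in streaming order.
class OrderedSlotAllocator {
public:
    explicit OrderedSlotAllocator(int slots) : freeSlots_(slots) {}

    // Grant a slot only if this is the next frame AND a slot is free.
    std::optional<int> acquire(int frameNo) {
        if (frameNo != nextFrame_ || freeSlots_ == 0) return std::nullopt;
        ++nextFrame_;
        --freeSlots_;
        return frameNo;
    }

    // Called once a frame has been displayed and leaves the card.
    void release() { ++freeSlots_; }

private:
    int nextFrame_ = 0;  // the only frame currently allowed a slot
    int freeSlots_;
};
```

An out-of-order thread asking for a slot simply gets refused and must wait, so the card can never fill up around a gap in the sequence.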

The usual Journeyman approach to such a problem is just to add more memory, i.e. more resource! This hides the problem because, though processing will still be done out of order, the spare capacity has been increased and it will not go wrong until you next modify the system to use more resource. At that point the following statement is usually made:

“But this has been working ok for years!”

The instructions I give to less experienced programmers when they are trying to debug such problems will usually include the following:

“Do not change any of the existing functionality.
Disturb the system as little as possible.
Keep the bug reproducible so you
can measure what is happening.
Then you will truly know when you have fixed the fault.”

Frame Jitter on Stop
The third fault case was an issue of frame jitter when stopping playout. The problem was that although the various buffers would get cleared, there could still be some frames ‘in flight’ in the handover threads. This is a classic multi-threading problem and one that needs careful thought.

In this case when it came time to show the frame at the current position, an existing playout had to be stopped and the correct frame would need to be processed for output. This correct frame for the current position would make its way through to the end of the pipeline, but could get queued behind a remnant frame from the original stopped playout. This remnant frame would most likely have been ahead of the stop position because of the pre-buffering that needed to take place. Then when it came time to re-enable the output frame viewing in order to show the correct frame, both frames would get displayed, with the playout remnant one being shown first. This manifested on the output as a frame jitter.

One likely fix from an inexperienced programmer would be to make the system sit around waiting for seconds while the buffers were cleared, and possibly cleared again, just in case! (The truly awful “sleep” fix.) This is one of those cases where, again due to lack of deep analysis, a defensive programming strategy is used to try to force a fix of what initially seems to be the problem. It is quite likely that this may SEEM to fix the problem, and it is likely to happen if the developer is under heavy time pressure.

The final solution to this particular problem was to use the concept of uniquely identified commands, i.e. ‘command ids’. Thus each command from the controlling thread, whether it was a play request or a show frame request, would get a unique id. This id was then tagged on to each frame as it was passed through the pipeline. By using a low-level globally accessible (within the player) ‘valid command id set’ the various parts of the pipeline could decide, by looking at the tagged command id, if they had a valid frame that could be allowed through or quietly ignored.

When stopping the playout all that had to be done was to clear the buffers, remove the relevant id from the ‘valid command id set’ and this would allow any pesky remaining ‘in flight’ frames to be ignored since they had an invalid command id. This changed the stop behaviour from being an occasional, yet persistent bug, into a completely reliable operation and without the need for ‘sleep’ calls anywhere.
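A minimal sketch of such a ‘valid command id set’, with illustrative names; in the real player each frame would also carry the id of the command that produced it, and each pipeline stage would consult the set before passing the frame on:

```cpp
#include <mutex>
#include <set>

// Illustrative sketch: a small thread-safe set of currently valid
// command ids, globally accessible within the player.
class ValidCommandIds {
public:
    void add(long id) {
        std::lock_guard<std::mutex> lock(mutex_);
        ids_.insert(id);
    }
    // Called on stop: frames still 'in flight' with this id become stale.
    void remove(long id) {
        std::lock_guard<std::mutex> lock(mutex_);
        ids_.erase(id);
    }
    // A pipeline stage checks a frame's tagged id before letting it through.
    bool isValid(long id) const {
        std::lock_guard<std::mutex> lock(mutex_);
        return ids_.count(id) != 0;
    }
private:
    mutable std::mutex mutex_;
    std::set<long> ids_;
};
```

On stop, the buffers are cleared and the id is removed from the set; any remnant frame arriving later carries a now-invalid id and is quietly ignored, with no ‘sleep’ calls needed anywhere.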

In the next post I will recap the above process of finding and fixing the problems from a human development perspective.


ACCU2016: Talk on Software Architecture Design 1: The Path of the Programmer

[Following on from my introductory poem, this is the first of a series of posts providing a transcript of my talk at ACCU2016 entitled: “Software Architecture: Living Structure, Art, or Just Hopeful Arrangements of Bytes“. I have modified it to make it read better, cutting out the usual Ums and Errs!]

Introduction
The impetus for this talk came out of a chat I had with a friend, where I was ranting – as I can do – about code, and then realized that of course it is easy to rant about other people’s code. This prompted me to look back at my own experience. I started coding for a living back in 1980 – a fact that doesn’t bear thinking about! – and have spent most of my career implementing high data rate video editing systems. Until recently I worked in a company that does TV and film effects and editing systems, working on a large C++ system of more than 10MLOC. I have now moved into the CAE sector.

This is quite a ‘soft’ talk and I will be following on from some points in the keynote (Balancing Bias in Software Development) given by Dr. Marian Petre, although I will drop into some more grounded issues around video player pipeline design and some of the design issues that I have come across.

As I mentioned, I had a sense of frustration with the quality of what was getting produced in a commercial context, and frustration in terms of finding people who could make that switch from doing the actual coding and implementation to taking a more structural view. But though I started coding in 1980, it was not until 1995 that I can say I was actually happy with what I was producing. That is quite a sobering thought. OK, maybe I have the excuse that I did not really get into Object Orientation until 1985/6, and the Dreyfus brothers say it takes 10 years to become an expert in a domain, but even so…

I therefore want to delve into my own experience and try to understand why this takes so long. This is an issue not so much about teamwork as about what we could possibly do individually, drawn from my own experience as a practitioner with large codebases.

In terms of my inspirations with regard to software architecture, Christopher Alexander of course is one, and there is one from left field. I got involved in starting a Steiner school for my children back in the 1990s and Steiner’s epistemology, drawn from a foundation coming from Goethe, is actually quite relevant.

ProgrammersPath

I will recap some of the points from my talk at ACCU2013 about “Software and Phenomenology”, and my workshop in ACCU2014 about “Imagination in Software Development”, but will be taking a slightly different slant on that content.

The Path of the Programmer

I want to start with some reflections on the path of the programmer as I have come to see it, borrowing an idea from Zen about the three phases on the path to enlightenment.

There is the initial NOVICE phase where you are still learning about the tools you have at your disposal.

Primary

A lot of your thinking is going to be Rule Based since you are learning the steps you need to take to do the job. The complexity of your thought is generally going to be less than the problem complexity you are dealing with when you get into ‘live’ industrial work, and hence you are producing brittle code, and/or it is not doing all that is needed. Here you are aware of your own limits because you know you do not know things, but you are unaware of your own process. I am not here talking about team development process, I am talking about your own personal learning process.

This level is thus characterized by an undisciplined, partial self-awareness. You have only limited insight into your own limits, and the lack of knowledge about your learning process means what awareness you have is undisciplined.

The next phase is what I call the dangerous phase, the JOURNEYMAN phase. It was about 1984 when I was in this phase.

Discus

Here you have a better knowledge of tools, having learnt about many of the programming libraries available to you. But the trap here is that the Journeyman is so very enamoured of those tools, and this conforms to the upward spike in the confidence curve that Dr. Marian Petre talked about this morning (The Dunning-Kruger effect).

Here the problem is that you can get into Abstract thinking and this can lead you to having an overly complex view of the solution. Your thinking here is more complex than the problem warrants. It is quite possible that up to 80% of the code will never be used. Therefore you are unaware of your own thinking limits and this can lead to an experience of total panic, especially if you are working on larger systems. [About a quarter of the listeners raised their hand when I asked if anyone had ever experienced this] This conforms to the downward spike that occurs after the upward spike on the confidence curve.

One anecdote I have is the story of one rather over-confident colleague who was given responsibility for a project. The evening before the client was due to turn up for a demo he was still coding away. When I came into work the next morning there was a note on his desk saying ‘I RESIGN’. He had been working through the night and didn’t manage to get to any solution. Of course the contract was lost.

This highlighted the total lack of awareness about his own limits. In this phase I too remember having an arrogant positivity – “it’s just software” – with the accompanying assumption that anything is possible. I had an undisciplined lack of self-awareness. Some people can stay in this phase for a long time, indeed their whole career, and it is characterized by an insistence on designing and coding to the limit of the complexity of their thinking. This means, by definition, that they will have big problems during debugging, because more complex thinking is needed to debug a system than was used in its creation.

We have gone here from one undisciplined state of partial self-awareness to another undisciplined state of no self-awareness. Of course this could be seen to be a bit of a caricature but you know if you hit that panic feeling – you are in this phase.

The next phase is the MASTER phase. In the past I have hesitated to call it the Master phase, referring to it instead as the Grumpy Old Programmer phase!

PetrelLand_4x3_2kcropped

Here we have a good knowledge of tools, but what is different is that you will be using Context-Based thinking. You are looking at the problem you have in front of you and fitting the tools to that problem. There is a strong link here with a practice when flying aircraft: you need to read from the ground to the map, not the other way around. You must do it correctly, because there have been a number of accidents where the pilots read from the map to the ground and thus misidentified their location.

It is the same with problem-solving. Focus on the problem, use the appropriate tools as you need them. It is interesting what Dr. Marian Petre said about how experts can seem as though they are novices – which is exactly what I feel like. Sometimes I look at my code and think “that doesn’t really look that complicated”. You bring out the ‘big guns’ when you need them, hopefully abstracted down under a good interface, but you know you need to keep the complexity down because there will be a lot of maintenance in the future, where you or others will have to reason about the code.

In this phase the software complexity is of the order of the problem complexity, perhaps a bit more because you will need some ‘slack’ within the solution. At a personal level the major point here is that you are aware of your own limits, because in the previous phase you reached that panicked state.

One of the big things I have learnt through my career is the need to develop an inner strength and ability to handle this stressed state. For example there will be a bug. The client may panic. This is to be expected. The salesman may panic. Still possibly to be expected. As a developer if your manager panics too, you have a problem, because the buck will stop with you. Can you discipline your own thinking and your own practice so that you can calmly deal with the issue, regardless of how others are handling the situation? This is the struggle you can get in a commercial coding environment.

Implicit in this description is that you have developed a disciplined personal practice.

So in summary:

Novice

  • Rule-based thinking
  • Undisciplined
  • Some self-awareness.

Journeyman

  • Abstract thinking
  • Undisciplined
  • No (or very little) self-awareness.

Master

  • Contextual thinking
  • Disciplined
  • Deep self-awareness.


ACCU2015: Day1: Bits & Pieces

So it is rather late at the moment. Just had a great evening at a local Tango class with some great live music. Absolutely & totally sublime and completely different to the Nerdfest!

However I did retire to the bar on getting back from the dancing – it is important to network – to find that it was still buzzing. Had a great chat with a guy who works in CFD (Computational Fluid Dynamics), even though it was midnight! Amazing how you can talk about multi-processing late at night after a beer. Or have I been doing this job too long?

As for the day’s conference proceedings: because Jim Coplien is recovering – thankfully – from a serious illness, the first keynote today was given by Pete Goodliffe on “Becoming a Better Programmer”. It was OK but offered not too many new points for me. Just a light presentation. The main points I took away were:

  • Money incentives work against producing better software. They are different targets. (A subject close to my heart)
  • The 4 levels of the competence hierarchy (often attributed to Maslow, and not to be confused with his hierarchy of needs): Unconscious Incompetence (Dangerous!), Conscious Incompetence, Conscious Competence & Unconscious Competence.
  • The Dunning-Kruger effect: the cognitive bias of experts under-estimating their ability while novices over-estimate theirs.
  • Of course there was also the obligatory mention of the Dreyfus model of Skill Acquisition which has a good breakdown of the learning levels as well as mentioning that it takes 10000 hours (about 10 years) to become an expert in something.

Next up I went to a talk by Seb Rose, a TDD advocate, called “Less is More”, which made the case for letting early tests have less ‘fidelity’ to your final test intentions so that you can converge faster on the final implementation. This one left me wondering about the whole stepwise approach to knowledge generation implicit in TDD. Sometimes it does not happen in this left-brain way of small steps; sometimes there are massive jumps as we tear down old structures and remake them. Also, whatever happened to the differentiation between growth & development? Subsequent conversation with other participants showed that I was not alone in thinking about this, although, of course, TDD does have its place.

One great point that Seb raised was the importance of saying “I don’t know”, which can be a difficult thing for anyone who is supposed to be considered competent. We need to assume we are initially ignorant and be happy in accepting it.

He cast aspersions on Planning Poker saying that in his experience it has always been wildly inaccurate. For light relief he showed an image of these cards from LunarLogic.
LunarLogic No Bullshit Cards
The main point I took away from this talk was just how much software development is a knowledge generation process (i.e. epistemic) and how we need to be clear about the next smallest question we need answering.

After lunch I split my attendance between a talk called “Talking to the Suits” and a more techie C++ one about the issues in converting the Total War game from Windows to OS X. The latter was mainly about the differences between the Microsoft and Clang C++ compilers: the Microsoft compiler is much more accepting, which might not help, since it won’t detect errors in template code if the template is never instantiated.

I will only mention the “Suits” talk because there should be a video of the Windows to OS X one.

I came away from the “Suits” talk with some great clichés, e.g. “Quality is not job number 1” and “Business loves legacy”. The basic point here is that we have to be more explicit about the quantitative costs/gains when making a case for any technical work. We cannot assume that business leaders will understand the ins and outs of things like IDEs, Technical Debt, etc. A good call to make sure we communicate more effectively about such things. A useful idea here was the “Problem : Solution : Results” guideline when presenting information. For example – Problem: “Even simple changes take a lot (quantify) of time”; Solution: “If we improve the design of this part of the system”; Results: “we will save weeks of effort (quantify) when we add new workflows”.

That is probably enough for now since I really need my beauty(!) sleep. I have also put my name forward to give a Lightning Talk on “My Thinking is NOT for Sale”. Oh dear. I need to sort out the slides for that now!

Till later…

My Thinking is NOT for Sale

It’s 2 o’clock in the morning and I am finding I cannot sleep. A thought so off the wall has been gripping my mind for a while now, and I am finding it more and more relevant to what I have seen happen during my career as a programmer.

The title is worth restating:

My Thinking is NOT for Sale

This is not so much a shouted response to all those times that good technical effort has been driven carelessly under the steamroller of prevailing economic needs – usually those of the money-swallowing monsters that are most companies – as it is a statement of an underlying truth, if only I can express it well enough and in shorter sentences. So here goes…

If you pay for software you will not get what you need. In fact you CANNOT buy software because it is not a finished product. The current economic model we have just does not fit and I believe this is why there is so much trouble in this area.

What is important about good software development?

Over my 30-odd years of work the primary creative and energizing point has been the interaction between the developer and the actual user as a system comes into being. The best of it has been the conversation between the two as they navigate the area of the user’s needs. If the developer is skilled, both technically and personally, they help facilitate both parties in mapping an unknown area, probably only vaguely expressed in the “wants” that the user can currently identify.

This is a conversation of human discovery in thinking.

It is priceless.

It is a gift.

It is a Free process. Capital F.

It cannot be bought.
It cannot be sold.
It is NOT a product.

It only makes sense if the effort is freely given by the developer. The inner costs of doing this are so high that it requires a high level of motivation that can ONLY be internal. To try and shoehorn it into our current ways of thinking about money devalues the process and I think this is what is underlying the problems I have seen happen many times.

The kicker here is that it is likely that it can only be funded by gift money. That means that there can be NO LINK between the funding and the final “product”. I use quotes because that word is a misnomer of what is actually going on.

Unrealistic?

Just go and read a book called Turing’s Cathedral by George Dyson and you will see how the Princeton Institute for Advanced Study was funded by donation. This was where John von Neumann worked and developed the architecture that underlies modern computers.

The picture of how the whole current edifice of modern computing was birthed from gift money just blows me away. I find my thinking so bound up in the capitalist model that to separate the resource – i.e. the money to give time for people to think – from the product of that thinking in such a way shows up the illusion of the current funding models for such work.

Is that enough to allow you to see it? Truly?
If you can then maybe you might understand why I am having trouble sleeping because in my tossing and turning my feelings tell me it could change everything…

Or maybe this is all just a dream and I shall be sensible when I wake up.
Hmmmm.

Post-ACCU2014 Thoughts

My thinking has been working overtime since I attended and presented at the ACCU2014 conference in Bristol.

[The delay in producing another post has been due to a lot of rather extensive personal development that has been occurring for me. Add to this some rather surreal experiences with dance – clubbing in Liverpool being one particular – and you might understand the delay. But that will be the subject of a separate post on dancing – I promise!]

But back to thoughts subsequent to my attendance at ACCU2014…

The Myth of Certification

The Bronze Badge. Small but beautiful.
One experience that really got me thinking was a pre-conference talk by Bob Martin reflecting on the path the Agile software development movement has taken since its beginnings. He mentioned an early quote from Kent Beck that Agile was meant to “heal the split between programmers and management”, and that one of the important guiding principles was transparency about the technical process.

But then there was a move to introduce a certification for what are called ‘SCRUM Masters’ – key personnel, though not project managers, in an Agile software development approach. The problem is that it is just too simplistic to think that getting a ‘certified’ person involved to ‘manage’ things will sort everything out. This is never how things happen in practice, and despite early successes Bob observed that Agile has subsequently not lived up to its original expectations.

The transparency that the Agile founders were after has once again been lost. I consider that this happened because the crutch of certification has fostered inappropriately simplistic thinking for a domain that is inherently complex.

My inner response to this was: Well what do you expect?

I very much appreciate and value the principles of Agile, but there is a personal dimension here that we cannot get away from. If the individuals concerned do not change their ideas, and hence their behaviour, then how can we expect collective practices to improve? As I experienced when giving my recent workshop, it is so easy to fall prey to the fascination of the technological details and the seeming certainty of defined processes and certified qualifications.

I remember a conversation with my friend and co-researcher Paul in the early days of embarking upon this research into the personal area of software development. We wanted to identify the essential vision of what we were doing. The idea of maybe producing a training course with certification came up. I immediately balked at the thought of certification because I felt that an anonymising label or certificate would not help. But I could not at the time express why. However it seems that Bob’s experience bears this out and this leaves us with the difficult question:
How do we move any technical discipline forward and encourage personal development in sync with technical competence?

The Need for Dynamic Balance

K13 being winch launched, shown here having just left the ground.
This was another insight as to why I enjoy ACCU conferences so much. There is always the possibility of attending workshops about the technical details of software development and new language features on the one hand, along with other workshops that focus on the more ‘fluffy’ human side of the domain.

I live in two worlds:

  1. When programming I need to be thoroughly grounded and critically attend to detail.
  2. I am also drawn to the philosophy (can’t you tell?) and the processes of our inner life.

Perhaps the latter is to be expected after 30 years of seeing gadgets come and go and the same old messes happen. This perspective gives me a more timeless way of looking at the domain. Today’s gadget becomes tomorrow’s dinosaur – I have some of them in my garage – and you can start to see the ephemeral nature of our technology.

This is what is behind the ancient observation that the external world is Maya. For me the true reality is the path we tread as humans developing ourselves.

Also we need to embrace BOTH worlds, the inner and the outer, in order to keep balance. Indeed Balance is a watchword of mine, but I see it as a dynamic thing. Life means movement. We cannot fall into the stasis of staying at one point between the worlds; we need to move between them, and then they will cross-fertilise in a way that takes us from the parts to the whole.

In our current culture technical work is primarily seen in terms of managing details and staying grounded. But as any of my writings will testify, there is devilry lurking in those details that cannot be handled by a purely technical approach.

Teacher As Master

So John - Do I have to wear the silly hat? Well Bill, only if you want to be a REAL glider pilot.
Another epiphany I experienced at the conference was a deeper insight into the popular misconception that teachers are not competent practitioners. There is the saying that “Those that can – Do. Those that can’t – Teach”. So there I was in a workshop, wondering whether, because I was teaching programming, I was automatically not as good at programming. But then a participant highlighted that this is not so in traditional martial arts disciplines.

Indeed – teaching was seen as a step on the path to becoming a master.

We – hopefully – develop competence which over time tends to become implicit knowledge, but to develop further we need to start teaching. This forces us to make our knowledge explicit and gives us many more connections of insight, helping us to see the essential aspects of what we already know. There may be a transitional time when our competence suffers – a well-known phase in learning to teach gliding – and this is a normal part of taking our learning to a higher level.

So I think the saying needs changing:
Those that can Do. Those that are masters – Teach.

Phenomenal Software: The Internal Dimension: Part 2b: Patterns & Livingness

In this post I am going to review Alexander’s three aspects of patterns mentioned before, namely:

  • The Moral Component
  • Coherent Designs
  • Generative Process

I will show how they link to the following ideas:

  • Freedom
  • Cognitive Feeling
  • Livingness

The Moral Component & Freedom

The moral aspect of patterns can be approached from any of a number of ‘paths up the mountain’. Certainly Alexander was concerned about whether buildings were ‘nurturing’ for us to live in, and so was thinking about more than utility. With computer systems and applications it is easier to think that this utilitarian aspect is all that exists. But there is an environmental part – an inner environment of thought, or ‘theory’ as Naur would say, whether we be users or developers.

If we think about how tools extend our own faculties, indeed our own being, the importance of the quality of this inner environment takes on a new meaning. The nature of the tool will affect how we form our ideas, which in turn will influence the form of our externally made world. Thus Alexander’s use of the word ‘nurturing’ and its applicability to software is not so out of place as it initially seems.

We can relate the ideas of utility, environment and hence morality by considering the concept of freedom – but defined in terms relevant to computer use. A computer system or application is a tool to get a particular task done. Good tools are ‘transparent’, meaning that you do not notice them when performing a particular task – they ‘disappear’ from your consciousness and leave you ‘free’ to focus upon the task in hand. It is in these terms that we can speak about freedom when using computers.

If you experience this ‘transparency’ when using a computer, I would consider that the software you are using contains this moral component that Alexander has defined. To paraphrase his words from the ‘Mirror of Self’ question:

“‘Moral’ Software gives you the freedom to develop a better picture of the whole of yourself, with all your hopes, fears, weaknesses, glory, absurdity, and which – as far as possible – includes everything that you could ever hope to be.”

What higher statement of purpose could we have for the programs we write? The current prevalent economic vision of the software industry pales into insignificance against such a statement.

We should not forget that this freedom to develop a ‘better picture of the whole of ourselves’ can be experienced by both users and developers. Indeed it is a central tenet of my whole ‘Phenomenal Software’ series that good software developers are implicitly on a path of self development, whether they are conscious of it or not.

Coherent Design & Cognitive Feeling

In talking about coherent design we need to remember that Alexander is dealing with the external world of objects and a software designer/developer is dealing with non-physical artefacts – the building architect works in an external world, the software architect works in an internal world – though no less real in its effects.

If we consider programming as an ‘internal art’ we can see how it can be difficult to communicate effectively about the ideas that underpin our design and coding. Peter Naur wrote about the need to maintain a theory alive in the minds of the programmers if a system was to be properly extended or maintained. He also noted that the theoretical element could not be communicated accurately via written documentation or even the code itself – it needed human interaction with people holding the living theory of the software.

Reflecting on my own career I have come to realize that it is difficult to identify an abstract form of coherence or goodness for software separate from the context in which it is to be used. For instance some code that I had found to be elegant in the early days of computing, say using little memory and having few instructions, would not be a good solution to the same problem in a modern context. So here we can see the integration required between form and function; solution and problem context. They need to be in harmony: coherent form in design will have the moral component in its function and will mean that the theories and meaning formed by the developer or user will make sense and meet the ‘Mirror of the Self’ needs.

Most novices will work from a set of rules, one example being ‘Make it Work, Make it Right, Make it Fast’ – in that order. This is a valid heuristic, useful for stopping programmers optimizing the code too early. However a rule-based approach has the danger of separating the stages into individual parts, which is not the best way to proceed in one’s thinking. This is the same tension as that between the TDD (Test Driven Development) folks and the design-up-front folks – a classic example of the need to work from an integrated view of the whole and the parts, i.e. respectively: making it right and making it work; design-driven and test-driven – which, in practice, are done together.

So over my career I have developed a feeling for good design in the crucible of solving real-world problems. In actuality I cannot make it ‘Work’ until I have a sense of what is ‘Right’, even to a small degree. You can perhaps see that I have a personal preference towards the design view, though during my work I can easily fall into the trap of hitting the keyboard too early, something I have worked vigorously at controlling! As I gained experience I started to get this sense of the best way to structure the software, and in some cases – such as perhaps designing a media player – I might have a feeling for what is ‘Fast’ at an early stage, but this needs to be kept strongly in check against reality. Optimisation should be based upon measurement and human beings can be worse than random at predicting what needs optimising.

This sense for a good or coherent design is what I have called a ‘cognitive feeling’ in an earlier post; it is a very fine and delicate sensation indeed – it is not strong emotion. I liken its development over the years of my career to the creation of a new sense organ, cognitive in its nature. It can be difficult to explain to less experienced practitioners because the sense is likely to have been developed implicitly over the years. However it matches closely the feelings evinced by Alexander’s ‘Mirror of the Self’ test, so that when talking to more experienced developers it is usually not hard to reach a commonality of judgement.

This means that in order to create coherent designs we will need to develop this extra sense of a fine cognitive feeling. A quote from Alexander serves to give an idea of this feeling sense, and though dealing with external geometric entities, the same comments relate to software design when imagining how the structures will function:

“A pulsating, fluid, but nonetheless definite entity swims in your mind’s eye. It is a geometrical image, it is far more than the knowledge of the problem; it is the knowledge of the problem, coupled with the knowledge of the kinds of geometrics which will solve the problem, and coupled with the feeling which is created by that kind of geometry solving that problem.” The Timeless Way of Building, Chapter 9.

Generative Process & Living Structure

In Alexander’s talk at the OOPSLA’96 conference in San Jose, he seemed somewhat bemused by the software domain’s use of patterns. On reading Alexander’s Nature of Order series we can perhaps see why. Some of the central ideas are those of ‘living structure’ and ‘structure preserving transformations’ which result in a ‘generative process’. How could these relate to software?

It is easier to understand the concept of structure preserving transformations when looking at how living things grow. As they grow and develop they need to continue living – we cannot just take them apart, do some modifications, and then re-assemble them! Every step of growth cannot disturb their livingness – thus EVERY change must preserve their living structure. The world of living things has no choice but to use a generative process if it is to stay alive.

At first glance this does not relate at all to the built world. When fixing my car in my younger days, there were times when bits of gearbox and engine were all over the floor! If the car had been a living being it would have been dead; since it was not, I was of course able to re-assemble it and make it work. Small software systems are similar. However, if you have ever worked on a sizable legacy system you will know that you need to spend a LOT of effort on NOT breaking the system. Any changes you make need to be closer to structure preserving, and any bad structures will need major surgery to improve. In reality you will not even try unless it is economically viable. Once you have bad structure, or use a ‘structure destroying transformation’, it is extremely difficult if not impossible to remedy:

“Good transformations do not cause any upheaval. So to get a good project, we merely have to make a sequence of structure-preserving transformations. When we do so, a good design evolves smoothly, almost automatically.
However, even a single bad transformation can upset the smooth unfolding. If we make one transformation which destroys structure, in the middle of a sequence of good ones, things become ugly very quickly;”
Nature of Order, Book 2, p61. See also chapter 4.

I am not sure about the use of the word ‘merely’ in the above, since it understates the difficulty of identifying good transformations.

Also if we accept Naur’s Theory Building view and the idea of human mental schemas, this idea of a generative process makes more sense, since there is the living theory held by the programmers. If we then go further and connect to the phenomenological ideas of how we create meaning when we develop theories we can see that there is a justification for finding a livingness within the programming activity. Bortoft talks about the link between understanding and meaning which relates well to Naur’s ideas of theory building when understanding software. It also gives another dimension to the idea of livingness:

“understanding is the ‘concretion of meaning itself’, so that meaning comes into being in understanding.” Henri Bortoft in Taking Appearance Seriously p108

Just one final thought about the idea of livingness. Some might think that a running program would have a livingness, especially if it was a big system. I am not so sure and consider that it is WE who provide the livingness in the software domain. It is WE who create; experience design pain; judge. The computers are running a network of finalized thought constructs which is a different process to the thinking we do when defining those thought constructs. For me this perception of livingness in Alexander’s work and its relation to software is an ongoing work-in-progress.

I want to thank Jim Coplien for his help in pointing me at various ideas of Alexander that mesh with my work for this post.

In the next post I shall conclude this series of ‘Phenomenal Software’ by returning to the way philosophy has progressed forward from the Cartesian Subject/Object view. This will mean dealing with the thorny subject of subjectivity and of course you will have to decide if you can trust my judgements!

Thanks for reading.

Phenomenal Software: The Internal Dimension: Part 2a: Patterns & The Mirror of the Self.

When I started out programming the prevalent idea, which I shared at the time with many others, was that an artistic view was not going to be any part of the work. However, after a number of years in the business I began to come across moments of wonder when either I saw a great piece of coding or, very occasionally, managed to create something myself that hit the ‘sweet spot’. It was not until I happened upon Christopher Alexander’s work on patterns that I began to understand some of what was happening during these moments.

My introduction to the patterns movement came through reading the book Design Patterns, written by the “Gang of Four” – Gamma, Helm, Johnson & Vlissides – which has become a standard reference text. In trying to better understand the patterns vision I read some of the work of Richard Gabriel, who has some interesting ideas about the relationship between art and software. He has even come up with the idea of a Master of Fine Arts in Software.

In Alexander’s earlier architectural patterns book he defines a library of external geometric entities to be used as design guidelines for buildings – for example, an alcove for chats that is separated off from a corridor. It is in his later masterwork, The Nature of Order, that he describes his underlying ideas about ‘living structure’ and his thoughts about the perception of ‘goodness’ in design.

Alexander does not shy away from the moral dimension of his work. In a keynote speech he gave to the OOPSLA’96 conference in San Jose he stated that:

“One of the things we looked for was a profound impact on human life. We were able to judge patterns, and tried to judge them, according to the extent that when present in the environment we were confident that they really do make people more whole in themselves.” OOPSLA’96 keynote.

And later in the same talk:

“The pattern language that we began creating in the 1970s had other essential features. First, it has a moral component. Second, it has the aim of creating coherence, morphological coherence in the things which are made with it. And third, it is generative: it allows people to create coherence, morally sound objects, and encourages and enables this process because of its emphasis on the coherence of the created whole.” OOPSLA’96 keynote.

But how can we judge what is coherent? To understand Alexander’s approach we have to read the first book of ‘The Nature of Order’ series where he describes the ‘The Mirror of the Self’ test.

The Mirror of the Self

To develop this judgement of coherent living structure, Alexander identifies what he calls the ‘Mirror of the Self’ test. He highlights a difference between what he calls ‘apparent liking’ and ‘true liking’. For example, when deciding which of two objects is liked best, rather than accepting a quick ‘apparently liked’ judgement he asks for a ‘truly liked’ judgement:

“…which of the two objects seems like a better picture of all of you, the whole of you: a picture which shows you as you are, with all your hopes, fears, weaknesses, glory and absurdity, and which – as far as possible – includes everything that you could ever hope to be. In other words, which comes closer to being a true picture of you in all your weakness and humanity;…” Nature of Order: Book 1. p317.

Using this idea he has found that it is possible to reach a high level of agreement (80–90%) between people when using their judgement to identify living structure in objects. So it seems that how we phrase the question is all-important.

A Reappraisal of the Software Patterns Movement

So far the software patterns movement has tried to abstract out particular solution patterns to be used as guidelines when designing software structures. Despite the best intentions it has degenerated into being a set of document templates, rather than embodying the wider view of Alexander’s work. Once again we have become hooked on a results-oriented view of the world as if we can only feel comfortable with this approach in such a technical domain.

Erich Gamma, one of the co-authors of the Design Patterns book, said that referring to patterns is most useful when we already have a specific design ‘pain’ rather than trying to force patterns onto a particular project from the outset. This points to the fact that we cannot get away from being conscious of how we develop our judgement. How do we even identify that we have a design ’pain’ if not through discerning human judgement and a sense of rightness?

Along with other commentators like Jim Coplien, I consider that Alexander’s vision of patterns (the drive towards living structure and the big question of making human life more whole) has not been truly realized within the software discipline. We need to revisit the Alexandrian roots of the patterns movement and understand how these roots relate to software development.

In Alexander’s OOPSLA’96 talk he identified 3 key points in his vision for the patterns work: a moral component; coherent designs; generative process. Although there has been some discussion in the software community about Alexander’s later work, it is fair to say that it has been difficult to take these ideas further in the domain. However I have found that by connecting the ideas with those prompted by reading Bortoft and early Steiner we can get a bit more clarification which I will report on in my next post.

Thanks for reading so far and I wish you all the very best for 2014…

Phenomenal Software: The Need for Exact Imagination

This is a crucial aspect of being a software developer – the ability to hold an exact imagination of the processes occurring within a system. Fundamentally whenever we have to debug or design a system we need to try and ‘run’ the system or subsystem within our own thinking, if only in part.

It is one thing to appreciate how the concept of ‘imagination’ can be used to describe this process, but it is another to truly understand the deeper aspects of the faculties we need to develop in order to perform this activity. In my experience most software folk have either developed some of these faculties unconsciously at school and/or university, usually by studying maths, or pick it up as they go along during their career – both frighteningly haphazard processes. We may need to include art training for the technical professions.

The Process of Goethean Science

My view is that the insights of Goethean scientific perception can help here, though there are some important differences due to dealing with dead machines instead of a living natural world.

First let’s review current ideas about the steps used in Goethean scientific perception (see Wahl and Brook):
Created while on a painting course by Claire Warner.

  • Exact Sense Perception.
  • Exact Sensorial Imagination.
  • Beholding the Phenomenon.
  • Being One with the Object.

Usually these steps relate to the perception of natural phenomena rather than to software where we are dealing with human created entities running within a computer, but they are still relevant.

In computing the stage of Exact Sense Perception relates to either developing requirements (when designing) or observing the behaviour of the system (when debugging). I am not going to deal with this step in this post so that I can concentrate on the imagination stage. I will assume that either we have the requirements for a new piece of software, or we have followed the ideas in my previous post when investigating the behaviour of a faulty system that needs debugging.

In imagining these human-created entities, or thought structures, we first need to duplicate them in our thinking. However, there is a state beforehand that is worth mentioning, since in the steps above it is taken as a given. This is the step of ‘Developing a Focused Attention’, which is a foundation for all that follows, as I shall describe later.

Another point we need to deal with here is that when we work with these thought structures in our heads we have moved away from the sensible world and are using what Rudolf Steiner called sense-free thinking. Certainly we may aid our imagination by naming the structures after physical items, e.g. a pipeline, but we have also created new ideas, e.g. a FIFO [first in first out pipeline], that are not physically based at all. The entities we are dealing with are not sense-perceptible and are not physical items and so it is not accurate to call it Sensorial – a better name being Exact Sense-Free Imagination.

Thus I see the steps as follows:

  • Developing Focused Attention
  • Exact Sense Perception (not dealt with in this post)
  • Exact Sense-Free Imagination
  • Beholding the Phenomenon
  • Being One with the Object

Developing Focused Attention

In the past this would not have needed mentioning, but given modern issues of reduced attention spans it needs to be brought to awareness. I consider this the most important faculty I have developed over the 30 years of my career in a technical domain, and it is very relevant to life in general. When a customer finds a software problem they can become quite vocal, upset and angry – an appropriate response if they have paid for the system.

This heated response and emotionality can ripple through the vendor company as the customer interacts with its various levels of sales, support and management. If the emotionality persists all the way to the level of the programmers, it is going to be impossible to cleanly fix the problem and will result in a lot of costly, ineffectual ‘thrashing’. This is because in order to properly imagine the system in their thinking, the programmer must hold a focused and clear attention. In effect they have to push the worries of the day-to-day economic world away while they calmly and quietly identify and then fix the fault, possibly educating the various stakeholders about what they are doing in order to gain a little quiet space and time.

Although the customer may be getting ‘appropriately’ angry, ideally this attitude of calmness needs to be developed by all members of any technologically dependent society. Otherwise we will experience a degradation in the quality of our lives as we persist in holding expectations that are out of touch with reality. We need to become more aware of the implicit issues of technological use, since technology amplifies our intent, including our mistakes. This is where an aggressively economic stance can adversely affect our lives and thinking.

This stage of focused attention requires that we develop our ‘Will’ in the realm of our thinking. We will not make progress by letting our attention wander and our thoughts flit around like moths near a flame. This use of willpower is one of the reasons that the practice of software development is so draining and I liken this to creating and holding a quiet, almost physical, space in my mind in which to run the system in my thinking. (Those of a more spiritual/religious nature might see this as a sacred space, like that of a church, grail, or sanctuary.)

Exact Sense-Free Imagination

The current prevailing view is that when we imagine a system in our thinking we are using a visual metaphor. This idea was furthered by the move from procedural programming to object-oriented programming back in the 80s [See footnote 1], and consolidated by the discipline's adoption of the idea of patterns put forward by Christopher Alexander [See footnote 2]. Architectural patterns for buildings are indeed visual entities, but when it comes to imagining the interactions of software structures it is more complicated. (You might say the same about building architecture, but that is another discussion.)

There are two main aspects to what we have to imagine: first, the STATIC structures; and secondly, the DYNAMIC operations that occur between those structures.

Imagining the static element is when we build the system, usually only partially, in our thoughts first. Here ‘Thought’ is the noun use of the word, and is as close as we come to a visual representation of the code since we usually create a structure of ‘Thoughts’ out of the data structures or objects (if we are using object-oriented programming).

This means that we are imagining sense-free thought structures in the quiet space we have created with our focused attention.

Next is the harder aspect of imagining the dynamic operation of the system. The computer will perform operations exactly in line with the software we are about to write (design) or that we have already written (debugging). When designing we need to imagine if our proposed structure is going to give the required result for the specification drawn up for the system. When debugging we need to imagine what the system is doing and why it is not performing as we expect given our knowledge of the code.

In both these scenarios, as we think through either the static structures or the activity of the system, we need to move our imagination forward incrementally, in steps, checking that it remains congruent with the code or the proposed coding ideas. It is the unforgiving operation of the computer that makes this an ‘Exact’, non-fantasy imagination.
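The two aspects can be sketched in a few lines of Python (a toy example; the producer/consumer names are purely illustrative). The class definitions are the static structure we hold in our thinking; advancing the loop one iteration at a time, checking our expectation after each step, is the dynamic operation we must imagine exactly:

```python
# Static structure: two stages that will interact.
class Producer:
    def __init__(self):
        self.n = 0
    def step(self):
        self.n += 1          # emits 1, 2, 3, ...
        return self.n

class Consumer:
    def __init__(self):
        self.total = 0
    def step(self, value):
        self.total += value  # accumulates what it receives

# Dynamic operation: advance the system one explicit step at a time,
# comparing the state after each step against what we imagined.
producer, consumer = Producer(), Consumer()
for expected_total in (1, 3, 6):
    consumer.step(producer.step())
    assert consumer.total == expected_total
```

The assertions play the role of the ‘exactness’ check: at every step the simulation in our head must agree with what the machine actually does.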

A frequent error here is to ‘run ahead’ of the simulation in our heads, missing vital steps – so we need to start small, usually using paper and diagrams to help us along. Interestingly, I have found a printing whiteboard invaluable: the gross motor movement of drawing a structure diagram at large scale helps to improve the visualisation of the thought processes and imaginations (schemata, as Johnson calls them in The Body in the Mind).

However, the fact that we perform this imagining of the dynamic system state, along with my own experience of the process (many programmers will ‘see’ the code in their mind's eye), makes me sceptical that we are dealing with a purely image-based, visual domain here. This is a work-in-progress for me.

Note how this need to NOT run ahead of the simulation echoes the idea of Delicate Empiricism – which leads us neatly onto the next stage of ‘Beholding the Phenomenon’.

Beholding the Phenomenon

This is where we actively perceive the behaviour of our (hopefully exact) imaginations, which necessitates constantly switching between the roles of imaginer/creator and perceiver. I find that nowadays I do this without noticing the switching, but it takes a significant amount of energy, as this is another activity that requires a lot of willpower.

I am having to use my Will to:

  • Maintain focused attention.
  • Create thought structures.
  • Move my imagination through time as I simulate their interactions.
  • Behold what is happening and compare with the requirements.

The best way I can describe the feeling is that of using a lot of attention just to hold the structures and keep the dynamics ‘alive’ while waiting for the perception to catch up. This is why I consider ‘beholding’ a good word for it: we need to balance letting the imagination ‘live’ with keeping it ‘Exact’.

There is also a very delicate, sensitive ‘cognitive feeling’ going on here when designing and comparing to the requirements as I assess if I am creating the right structures. I will return to this idea in a later post as it relates to Christopher Alexander’s work.

Hopefully this makes it easier to understand why programmers frequently get that far-away look in their eyes. Given the complexity of what is happening, is it any wonder that bugs occur in our creations?

Being One with the Object

Although this stage is difficult to reconcile with perceiving the wholeness of a natural organism, we can make the link by thinking of it as the creation of knowledge, understanding and meaning. This is the ‘Aha!’ experience, and it is just as relevant in computing as in traditional science, Goethean or otherwise.


This is the moment when we bring life to the whole enterprise, using our uniquely human faculties.

In a computing context this means that either we have truly understood the detailed elements of the problem and have identified the structures we will need (design), or we have experienced the blinding clarity of seeing exactly where the problem lies (grokking it) and know what we need to do to fix it (debugging).

Health Warnings

When working with computers we need to realize how they can fossilize our thinking. Because we constrain our inner process to stay in step with the machine, we can delude ourselves into thinking that we are just machines. Indeed, we may even let our judgement become far too rule-based – the essence of computer operation.

We need to hold onto the idea of a ‘Living Thinking’ (as Steiner would call it) and I find that the phenomenological ideas of Goethe and those that followed can help us in keeping this uniquely human perspective when dealing with the mechanized world.

Next…

Next time I shall go more into the ideas of patterns, Alexander’s ‘apparent liking’ and ‘true liking’, and the idea of how we use a very fine ‘cognitive feeling’ to judge the rightness of a design.

Footnotes
[1] This was a major change in the way humans thought about programming computers. Initial techniques involved stringing together sequences of machine instructions into procedures that manipulated data, hence the term procedural programming. However, it was then decided that it would be better to give the data structures primacy and attach the procedures to the data. Thus software development became based upon designing structures of objects (more accurately: instantiations of abstract data types), i.e. data structures with ‘attached’ procedures. Thus was born the idea that you could visually represent software structures, which would make them much easier to imagine.
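The shift can be illustrated with a toy example (Python; the account names are hypothetical, chosen only for contrast). In the procedural style the data is passed to free-standing procedures; in the object-oriented style the procedures are attached to the data:

```python
# Procedural style: data and procedures are separate.
def make_account():
    return {"balance": 0}

def deposit(account, amount):
    account["balance"] += amount

acct = make_account()
deposit(acct, 10)
assert acct["balance"] == 10

# Object-oriented style: the procedure is attached to the data structure,
# giving the data primacy as described above.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

obj = Account()
obj.deposit(10)
assert obj.balance == 10
```

Both versions compute the same thing; what changed was where the procedure ‘lives’, and hence how readily the structure lends itself to a visual, box-and-arrow representation.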
[2] Christopher Alexander is a mathematician turned architect. The software discipline has taken his idea of pattern languages and used it to provide design patterns for software structures. His magnum opus is the four-book sequence called ‘The Nature of Order’.