ACCU2016: Talk on Software Architecture Design 6: Organising Principles

I now want to try and explain the idea of an Organising Principle more clearly.

As an introduction, Organising Principles have a number of characteristics. They:

  • Embody a Living Wholeness.
  • Have a high degree of Ambiguity.
  • Are never Static.
  • Lie Behind the Parts.


Embedded within a set of requirements is something that we need to find and embody in our implementation. As with high level schematics, there is a high degree of ambiguity surrounding this ‘something’.

During the talk we have moved from the high-level schematics, where I can hand-wave to my heart’s content about structure and architecture, down into the depths of the actual implementation.

You may remember Andrei Alexandrescu’s keynote where he was talking about the details of a sort algorithm – first thing in the morning. Difficult after a late night before!

When trying to understand what he was talking about, we need to think at multiple levels:

  • Listen to the words he was saying.
  • Read through the code examples in his slides.
  • Imagine the dynamic behaviour of these code extracts.

In order to comprehend how this algorithm works dynamically, the listener needs to use their imagination at the same time as trying to understand what Andrei is saying. During this particular talk you could palpably feel the intense concentration in the whole audience.

During deep technical discussions listeners will work at these two levels of understanding:

  • The static code structure.
  • The dynamic behaviour of the static construct.

In addition one will only be looking at a single possible instantiation of a solution. There could be alternative ways of doing the same thing.

The term I like to use here is that during this process we are trying to ‘Livingly Think’ the Organising Principle.

An Organising Principle is NOT a static thing. It cannot be written down on a piece of paper. Although it informs the implementation it cannot be put into the code. If you fix it: You Haven’t Got It. Remember that phrase because getting it is the real challenge. It lives behind the parts of any concrete implementation.

I am explicitly using this separation of Whole and Parts, a discipline called Mereology. How can we understand the world in terms of wholeness and yet also see how the parts work therein?

Experienced people will have a sense of the whole and yet will be able to see where the likely risk points are going to be. They simultaneously see the whole picture and also know the essence of what needs to happen in the parts. This is what you pay for when you employ an expert. In my design example, for instance, an expert will know to ask about your allocation strategies and if they see some allocations going out of order they will point you at where you need to change the system.
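To make the allocation example slightly more concrete, here is a minimal, hypothetical sketch – my own illustration, not code from the talk – of the kind of check an expert might point you towards: a buffer pool for a video pipeline that refuses out-of-order frame requests, so the ordering bug surfaces at the point of allocation rather than later as a mysterious stall. The class and method names are invented for the purpose.

```cpp
// Hypothetical guard for a video-player pipeline: frames must be
// requested in strictly ascending display order. An out-of-order
// request is refused immediately, making the ordering bug visible
// at its source instead of downstream.
class SequentialBufferPool {
public:
    // Returns true if the request respects the sequential ordering.
    bool allocate(long frame) {
        if (frame <= lastFrame_) {
            return false;  // out of order: refuse and flag
        }
        lastFrame_ = frame;
        return true;
    }

private:
    long lastFrame_ = -1;  // last frame successfully allocated
};
```

A real system would log or assert on the refusal; the only point here is that the ordering constraint is checked where it is created.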

Requirements implicitly contain Organising Principles.

They implicitly, not explicitly, embody dynamic specifications of what needs to happen. Much of my job when I talk to any customer is to try and understand:

  • What they want to do.
  • What I think can be built.
  • How I can bring those together.

I need to understand what the core dynamic principles are that are embedded in their requirements. By definition, if you are building a useful piece of software the customer won’t know what they actually NEED. They can talk about what they WANT, but that is based on their present experience. The conversation between the user and the developer will generate new knowledge about future experience.

[Software development involves moving knowledge from the Complex to the Complicated domain as Cynefin defines it. This means we are dealing with emergent knowledge.]

Architecture references the Organising Principle.

If we take the point that an Organising Principle is not a fixed, discrete idea then we can see that it is much like comparing a film of an activity with its reality. The film is just a set of discrete frames, but the reality is continuous.

With software, the different architectures and implementations are different views of a specific Organising Principle. This is very hard to get our heads around and needs far more mobile thinking than we normally use. This is the core of why good software development is so difficult.

Code is the precipitate of the Organising Principle.

If we manage to perceive this dynamic principle, referencing it by designing a software architecture that we embody into a specific code implementation, we are embarking on a process of precipitation. If we DON’T do this then – indeed – we only have a “Hopeful Arrangement of Bytes” as this talk title suggests.

Of course this may be a valid thing to do sometimes as a piece of ‘software research’, just to see how the code works out. This is especially relevant if there is a blank sheet, or ‘green field’ site, where you just have to start somewhere. In this case you need to look at the parts of the requirements that represent the riskiest aspects and implement some proof-of-concept solution to check your understanding.

This is where it may not be a good idea for a novice or journeyman programmer to follow the example of an experienced one.

Be warned: it is possible that a master programmer may be able to work at the keyboard and seemingly design as they go, creating from internalized, mature ideas. This is definitely NOT recommended for inexperienced developers, who will need to spend much more time maturing and externalizing their ideas before those ideas become part of their ‘cognitive muscle memory’.

In the next and final post of the transcript of this talk I will deal with how to improve our perception of Organising Principles…


ACCU2016: Talk on Software Architecture Design 5: Active Design Ideas

In the last post I highlighted some specific design problems and associated solutions. Now I want to look at these solutions a little more deeply.

To refresh our memory the solutions were as follows:

  1. Separating Mutex Concerns.
  2. Sequential Resource Allocation.
  3. Global Command Identification.

I want to characterise these differently because these names sound a little like pattern titles. Although we as a software community have had success using the idea of patterns, I think we have fixed the concept rather more than Christopher Alexander may have intended.

I want to rename the solutions as shown below in order to expressly highlight their dynamic behavioural aspect:

  1. Access Separation.
  2. Sequential Allocation.
  3. Operation Filtering.
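As a purely illustrative sketch – my own, not from the talk – the first of these, ‘Access Separation’, might be rendered in C++ as giving each independent concern its own mutex, so that contention on one concern never blocks the other. The class is hypothetical; the point is the separation of access, not the specific implementation.

```cpp
#include <cstddef>
#include <mutex>
#include <string>
#include <vector>

// Hypothetical example of Access Separation: two independent concerns
// (frame storage and logging) each guarded by their own mutex, rather
// than one lock serialising all access to the object.
class FrameStore {
public:
    void addFrame(int frame) {
        std::lock_guard<std::mutex> lock(framesMutex_);
        frames_.push_back(frame);
    }

    std::size_t frameCount() {
        std::lock_guard<std::mutex> lock(framesMutex_);
        return frames_.size();
    }

    // Logging never contends with frame access.
    void log(const std::string& message) {
        std::lock_guard<std::mutex> lock(logMutex_);
        messages_.push_back(message);
    }

    std::size_t logCount() {
        std::lock_guard<std::mutex> lock(logMutex_);
        return messages_.size();
    }

private:
    std::mutex framesMutex_;  // guards frames_ only
    std::mutex logMutex_;     // guards messages_ only
    std::vector<int> frames_;
    std::vector<std::string> messages_;
};
```

The dynamic principle is the independence of the two access paths; this one static arrangement is just one possible precipitation of it.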

You might have noticed that in the third example the original concept of “Global Command Identification” represents just one possible way to implement the dynamic issue of filtering operations. This is something it has in common with much of the published design pattern work, where specific example solutions are mentioned. To me design patterns represent a more fixed idea that is closer to the actual implementation.

Others may come up with a better renaming, but I am just trying to get to a more mobile and dynamic definition of the solutions. Looking at the issues in this light starts to get to the core of the issue of why it is so hard to develop an architectural awareness.

If you can truly understand, or ‘grok’, the core concept of this characterisation, regardless of the actual words, you will see that they do not really represent design patterns – not in the way we have them at the moment.

This is where there is a difference between the architecture of buildings – where design patterns originated – and the architecture of software. Although both deal with the design of fixed constructs, whether it be the building or the code, the programmer has to worry far more about the dynamic behaviour of the fixed construct (their code). Yes – a building architect does have to worry about the dynamic behaviour of people inhabiting their design, but software is an innately active artefact.

Let me recap the debugging and design fixing process in terms of the following actions that are carried out in order:

1: Delicately Empirically Collect the Data.
Here we have to be very aware of the boundaries of our knowledge and collect information in a way that does not disturb the phenomenon we are looking at. Awareness of our own thinking process is vital here.

2: Imagine into the Problem Behaviour.
We have to imagine ourselves into the current behaviour that the system is exhibiting. (This is the hard bit when you are under pressure, and it is what requires a strong focus in order to understand what the existing design is doing.)

3: Imagine into the Required Behaviour.
We need to imagine into what the required behaviour of the system NEEDS to be, and it is here that we start to meet the ‘gap’ between problem and solution. It may indeed only need a one-line fix, but quite likely there is a deeper design problem. Again, here is a point where our self-awareness is important. Do we have the discipline to make ourselves stop and think more carefully and widely about the presenting problem?

4: THE GAP. Cognitively Feeling for the best Solution Concept.
In this stage there is a very fine “Cognitive Feeling” in action to decide what is a good fit to the problem. For the experienced programmer this is more than just a question of “Does this solution fit the requirement?”

There is the consideration of whether the proposed solution idea is going to be a sustainable fix during the future lifetime of the project.

This question is much like asking myself if I will still find this painting beautiful in 10 years’ time.


There is a widely held belief that the best procedure for coming up with a design solution is to produce many possible alternatives and evaluate them in order to choose the best one. In practice I have found that this very rarely – if ever – happens.

I usually arrive at a single design solution by trying out the multiplicity of possible solutions while in the ‘gap’ where I am considering various alternatives – imagining each of them in operation, possibly ‘drawing’ the thoughts out on a whiteboard as I think.

In this part of the process the more experienced programmer will slow things down to the extent of even putting in a provisional simple solution that gives them some breathing, or thinking, space. This is the idea of provisionality mentioned by Marian Petre, because this mode of design thinking requires time and reduced pressure.

It is amazing how often this happens in the shower!

Of course this is predicated on the fact that I have done the required detailed groundwork, but as I mentioned in the poem, our logical thinking can only take us to the boundary of what we know. Trying to push to go faster results in inadequate and buggy designs that are based on immature thinking.

This is the central conundrum of software development. The more we dive down into detailed analysis, the more we encounter these ‘softer’, heuristic elements.

5: Implementation.
Finally we get to the implementation. As you will have seen it is far too easy to jump into “premature implementation”. It is hard, if not impossible, to teach people just how small a part the coding is of the whole process. It needs to be experienced. Until you have seen how a good design triggers an amazing collapse in code complexity, the importance of taking the time to search for that great design is not an obvious conclusion. This is a fundamental eye of the needle that all programmers need to go through.

This is the main reason I like programming:

I get less code.
I get something I can reason about.
I get something that does the job!

Beautiful!

In the next post I am going to show how the dynamic design solution ideas and the human analysis process link to what I will call the “Organising Principle”, a term I have borrowed from Rudolf Steiner’s lexicon.


ACCU2016: Talk on Software Architecture Design 1: The Path of the Programmer

[Following on from my introductory poem, this is the first of a series of posts providing a transcript of my talk at ACCU2016 entitled: “Software Architecture: Living Structure, Art, or Just Hopeful Arrangements of Bytes”. I have modified it to make it read better, cutting out the usual Ums and Errs!]

Introduction
The impetus for this talk came out of a chat I had with a friend, where I was ranting – as I can do – about code, and then realized that of course it is easy to rant about other people’s code. This prompted me to look back at my own experience. I started coding for a living back in 1980 – a fact that doesn’t bear thinking about! – and have spent most of my career implementing high data rate video editing systems. Until recently I worked in a company that does TV and film effects and editing systems, working on a large C++ system of more than 10MLOC. I have now moved into the CAE sector.

This is quite a ‘soft’ talk and I will be following on from some points in the keynote (Balancing Bias in Software Development) given by Dr. Marian Petre, although I will drop into some more grounded issues around video player pipeline design and some of the design issues that I have come across.

As I mentioned, I had a sense of frustration with the quality of what was getting produced in a commercial context, and frustration in terms of finding people who could make that switch from doing the actual coding and implementation to taking a more structural view. But though I started coding in 1980, it was not until 1995 that I can say I was actually happy with what I was producing. That is quite a sobering thought. OK, maybe I have the excuse that I did not really get into Object Orientation until 1985/6, and the Dreyfus brothers say it takes 10 years to become an expert in a domain, but even so…

I therefore want to delve into my own experience and try to understand why this takes so long. This is an issue not so much about teamwork, but about what we could possibly do individually, drawn from my own experience as a practitioner working with large codebases.

In terms of my inspirations with regard to software architecture, Christopher Alexander of course is one, and there is one from left field. I got involved in starting a Steiner school for my children back in the 1990s and Steiner’s epistemology, drawn from a foundation coming from Goethe, is actually quite relevant.


I will recap some of the points from my talk at ACCU2013 about “Software and Phenomenology”, and my workshop in ACCU2014 about “Imagination in Software Development”, but will be taking a slightly different slant on that content.

The Path of the Programmer

I want to start with some reflections on the path of the programmer as I have come to see it, borrowing an idea from Zen about the three phases on the path to enlightenment.

There is the initial NOVICE phase where you are still learning about the tools you have at your disposal.


A lot of your thinking is going to be Rule Based since you are learning the steps you need to take to do the job. The complexity of your thought is generally going to be less than the problem complexity you are dealing with when you get into ‘live’ industrial work, and hence you are producing brittle code, and/or it is not doing all that is needed. Here you are aware of your own limits because you know you do not know things, but you are unaware of your own process. I am not here talking about team development process, I am talking about your own personal learning process.

This level is thus characterized by an undisciplined self-awareness: you have some awareness of your own limits, but the lack of knowledge about your own learning process means that what awareness you have is undisciplined.

The next phase is what I call the dangerous phase, the JOURNEYMAN phase. It was about 1984 when I was in this phase.


Here you have a better knowledge of tools, having learnt about many of the programming libraries available to you. But the trap here is that the Journeyman is so very enamoured of those tools, and this conforms to the upward spike in the confidence curve that Dr. Marian Petre talked about this morning (The Dunning-Kruger effect).

Here the problem is that you can get into Abstract thinking, and this can lead you to having an overly complex view of the solution. Your thinking here is more complex than the problem warrants. It is quite possible that up to 80% of the code will never be used. You are therefore unaware of your own thinking limits, and this can lead to an experience of total panic, especially if you are working on larger systems. [About a quarter of the listeners raised their hand when I asked if anyone had ever experienced this.] This conforms to the downward spike that occurs after the upward spike on the confidence curve.

One anecdote I have is the story of one rather over-confident colleague who was given responsibility for a project. The evening before the client was due to turn up for a demo he was still coding away. When I came into work the next morning there was a note on his desk saying ‘I RESIGN’. He had been working through the night and didn’t manage to get to any solution. Of course the contract was lost.

This highlighted his total lack of awareness about his own limits. In this phase I too remember having an arrogant positivity – “it’s just software” – with the accompanying assumption that anything is possible. I had an undisciplined lack of self-awareness. Some people can stay in this phase for a long time, indeed their whole career, and it is characterized by an insistence on designing and coding to the limit of the complexity of their thinking. This means, by definition, that they will have big problems during debugging, because more complex thinking is needed to debug a system than was used in its creation.

We have gone here from one undisciplined state of partial self-awareness to another undisciplined state of no self-awareness. Of course this could be seen to be a bit of a caricature but you know if you hit that panic feeling – you are in this phase.

The next phase is the MASTER phase. In the past I have hesitated to call it the Master phase, referring to it instead as the Grumpy Old Programmer phase!


Here we have a good knowledge of tools, but the difference is that you will be using Context-Based thinking. You are looking at the problem you have in front of you and fitting the tools to that problem. There is a strong link here with a practice when flying aircraft: you need to read from the ground to the map, not the other way around. You must do it correctly, because there have been a number of accidents where the pilots read from the map to the ground and so misidentified their location.

It is the same with problem-solving. Focus on the problem, use the appropriate tools as you need them. It is interesting what Dr. Marian Petre said about how experts can seem as though they are novices – which is exactly what I feel like. Sometimes I look at my code and think “that doesn’t really look that complicated”. You bring out the ‘big guns’ when you need them, hopefully abstracted down under a good interface, but you know you need to keep the complexity down because there will be a lot of maintenance in the future, where you or others will have to reason about the code.

In this phase the software complexity is of the order of the problem complexity, perhaps a bit more because you will need some ‘slack’ within the solution. At a personal level the major point here is that you are aware of your own limits, because in the previous phase you reached that panicked state.

One of the big things I have learnt through my career is the need to develop an inner strength and ability to handle this stressed state. For example there will be a bug. The client may panic. This is to be expected. The salesman may panic. Still possibly to be expected. As a developer if your manager panics too, you have a problem, because the buck will stop with you. Can you discipline your own thinking and your own practice so that you can calmly deal with the issue, regardless of how others are handling the situation? This is the struggle you can get in a commercial coding environment.

Implicit in this description is that you have developed a disciplined personal practice.

So in summary:

Novice

  • Rule-based thinking
  • Undisciplined
  • Some self-awareness.

Journeyman

  • Abstract thinking
  • Undisciplined
  • No (or very little) self-awareness.

Master

  • Contextual thinking
  • Disciplined
  • Deep self-awareness.


ACCU2015: Day1: Bits & Pieces

So it is rather late at the moment. Just had a great evening at a local Tango class with some great live music. Absolutely & totally sublime and completely different to the Nerdfest!

However I did retire to the bar on getting back from the dancing – it is important to network – to find that it was still buzzing. Had a great chat with a guy who works in CFD (Computational Fluid Dynamics), even though it was midnight! Amazing how you can talk about multi-processing late at night after a beer. Or have I been doing this job too long?

As for the day’s conference proceedings: because Jim Coplien is recovering – thankfully – from a serious illness, the first keynote today was by Pete Goodliffe on “Becoming a Better Programmer”. It was OK, but there were not too many new points for me. Just a light presentation. The main points I took away were:

  • Money incentives work against producing better software. They are different targets. (A subject close to my heart)
  • The 4 levels of Maslow Competence Hierarchy (not the needs one): Unconscious Incompetence (Dangerous!), Conscious Incompetence, Unconscious Competence & Conscious Competence.
  • The Dunning-Kruger effect and the cognitive bias of experts under-estimating their ability while novices over-estimate theirs.
  • Of course there was also the obligatory mention of the Dreyfus model of Skill Acquisition which has a good breakdown of the learning levels as well as mentioning that it takes 10000 hours (about 10 years) to become an expert in something.

Next up I went to a talk by Seb Rose, a TDD advocate, on “Less is More” making the case for adjusting the progression of test changes to make them have less ‘fidelity’ to your final test intentions during development to allow you to converge faster to the final implementation. This one left me wondering about this whole stepwise approach to knowledge generation implicit in TDD. Sometimes it does not happen in this left-brain way of small steps. Sometimes there are massive jumps as we tear down old structures and remake them. Also whatever happened to the differentiation between growth & development? Subsequent conversation with other participants showed that I was not alone in thinking about this although, of course, TDD does have its place.

One great point that Seb raised was the importance of saying “I don’t know”, which can be a difficult thing for anyone who is supposed to be considered competent. We need to assume we are initially ignorant and be happy in accepting it.

He cast aspersions on Planning Poker saying that in his experience it has always been wildly inaccurate. For light relief he showed an image of these cards from LunarLogic.
The main point I took away from this talk was just how much software development is a knowledge generation process (i.e. epistemic) and how we need to be clear about the next smallest question we need answering.

After lunch I split my attendance between a talk called “Talking to the Suits” and a more techie C++ one about the issues in converting the Total War game from Windows to OS/X, which was mainly about the differences between the Microsoft and Clang C++ compilers (the Microsoft compiler is much more accepting – which might not help, since it won’t detect errors in template code if the template is not instantiated).

I will only mention the “Suits” talk because there should be a video of the Windows to OS/X one.

I came away from the “Suits” talk with some great cliches, e.g. Quality is not job number 1; Business loves legacy. The basic point here is that we have to be more explicit about the quantitative costs/gains when making a case for any technical work. We cannot assume that business leaders will be able to understand the ins and outs of things like IDEs, Technical Debt, etc. A good call to make sure we communicate more effectively about such things. A good idea here was the “Problem : Solution : Results” guideline when presenting information. For example: Problem: “Even simple changes take a lot (quantify) of time”; Solution: “If we improve the design of this part of the system”; Results: “we will save weeks of effort (quantify) when we add new workflows”.

That is probably enough for now since I really need my beauty(!) sleep. I have also put my name forward to give a Lightning Talk on “My Thinking is NOT for Sale”. Oh dear. I need to sort out the slides for that now!

Till later…

ACCU2015: Nerdfest Supreme

Well, another ACCU conference is here and I have just arrived at the hotel today and am settling in, meeting some friends and listening to a talk by one of them, Michael Feathers of ObjectMentor.

I am not presenting this year because I do not have my thoughts straight about some of my latest thinking – namely “My Thinking is NOT for Sale” – especially how I can even get our current business culture to start thinking about knowledge work in such a way. Although there are some signs.

One of the interesting things about these conferences is hearing some of the more vocal and opinionated people hold forth. Initially I start off feeling fairly intimidated, but then inevitably I find something to comment about and we end up having great conversations. In truth there is a nice balance between the more vocal folk and the timid ones. But here good logical thinking always gets respect, regardless of how quiet or noisy you are – unless of course the participants have stayed too long and late in the bar! Although, thinking about it, John Lakos IS a force to be reckoned with!

Another person I really wanted to talk to was Jim Coplien. He was scheduled to deliver a keynote, but unfortunately I was told he will not be attending due to illness. I was hoping to hear more about his DCI work (see also his blog post and this video).

But back to Michael’s talk, which was called “Organizational Machinery around Software”. He was arguing for making the code architecture primary and structuring teams around that architecture. Basically saying to flip Conway’s Law and use it as a lever to get better results rather than having it inadvertently mess up your design because you did not structure your teams in the right way. The basic concept is simple. The implementation and convincing of management may be another thing entirely but it is an interesting view of basing your team structure architecturally rather than perhaps by market segment, or in some other way.

One of the things he said was that we could take lessons from how the military manage personnel rotating through their teams (or crews, I guess), and that business could do the same. As you might guess if you read my blog, I find such a thought unsettling: there is validity in what he says because business is usually run on military lines, whereas I consider that there is a difficult tension between trying to write quality software and its usual economic context. More thought required…

I am looking forward to tomorrow, although I shall be breaking away from the conference in the evening to partake of some Argentinian Tango at some local classes! I might see if I can get some of the techies to come along – could be interesting… This will of course mean that any blog post may be delayed.

Here are some of the notes I made about the talk:

  • Being too conservative with code mods (i.e. not refactoring) will cause a very fast deterioration of the codebase.
  • Interface cruft. Easier to change code either side of an API, rather than mature the API.
  • Legacy comment.
  • Interesting research from Robert Smallshire in the audience: developer half-life is 3.5 years; code half-life is 35 years. (See this presentation)
  • We should be able to visualise a system architecture for ANYONE in the organisation to understand. Not just the techies.

Until next time…

ACCU2014 Workshop : Imagination in Software Development

A week ago on Saturday 12th April I facilitated a workshop at ACCU2014 on Imagination in Software Development which I am pleased to say – thanks to the participants – went very well.

Before the workshop I thought I had bitten off more than I could chew, having read through a lot of Iain McGilchrist’s book “The Master and His Emissary” and realising that using analytical thinking for such an exercise is very difficult. However, thanks to my long-suffering team at work giving me the chance to do a dry run, I was able to get feedback about what did and did not work, and so ended up making some rather last-minute changes. The final workshop format ended up being completely different to the dry run.

Before moving on to the exercises I gave a half-hour talk about the links between phenomenology, software, and brain hemisphere function, most of which in hindsight could have been left until after the exercises. My main objective, however, was to raise self-awareness about the participants’ internal imaginative processes.

I thought it would be good to highlight some of the primary ideas that came from the exercises, both in terms of the workshop’s preparation and its execution.

The need to get away from the software domain

The exercises in the workshop involved:

  • Listening to a story excerpt from a book.
  • Watching a film clip of the same excerpt.
  • Performing a software design exercise individually.

Each exercise was followed by discussions in pairs. It became abundantly clear that if you give a bunch of programmers a technical exercise, it behaves like a strong gravitational field for any ideas, and it is very difficult to get them to focus on process instead of content. Indeed, during the workshop I had to interrupt the pair-based discussions to make sure they were talking about their own inner processes instead of the results of the design exercise I had given them! Reading a story and watching a film clip first made it easier to highlight this as a learning point, since it was much easier to focus on internal process for the story and film clip.

Individual working instead of in small groups

The trial run with my team at work used small 3-4 person groups. I found that the team dynamics completely overshadowed their individual awareness. I therefore changed the format to make the core design exercise an individual process, followed by discussions in pairs. This had the desired effect of bringing their internal processes into sharper focus. The more you know about an area the more difficult it can be to “go meta” about it.

Some great insights from the participants

STORY
When listening to the story 3 processes were identified which occurred in parallel:

  • Visual – Picturing.
  • Emotional.
  • Logical – Probing.

FILM

  • The film was much more emotionally powerful, to the point of feeling manipulative.
  • But it was felt to be ‘weaker’ due to the imagery being concrete.

DESIGN

  • When performing the design exercise the ideas were experienced as a story, but as a sequential process rather than a parallel one.
  • The logical analysis required thoughts to be made explicit by writing them down otherwise it was hard to hold them in awareness.
  • There was a more conscious awareness of past experience affecting current ideas.
  • The initial analysis was wide-ranging followed by focussing down to the core ideas.

So if any of the participants make it to this page – I would like to say a great big thank you for getting involved.

Slide set follows:

Phenomenal Software: The Internal Dimension: Part 1: Theory Building

Introduction

It is a while since I last posted because I was hoping to produce a single, concise post dealing with how a phenomenological approach to software relates to the Patterns and Living Structure work of Christopher Alexander. So much for hopes. As I started (re-)reading around the subject it opened up before me, as one might expect. So I am breaking it down into smaller sections and giving it the subtitle “The Internal Dimension”.

In these “Internal Dimension” posts I am going to deal with the issue of meaning and structure in software, starting with the seminal 1985 paper by Peter Naur and moving on to the patterns work of Christopher Alexander. I will be informing this with ideas from an essay by Hans-Georg Gadamer with the great title ‘The Relevance of the Beautiful’, and with more recent writing by Wyssusek, who notes that many of these ideas are relevant to users, not just to application developers.

The Internal Dimension Part 1: Theory Building & The Generation of Meaning

Back in 1985 Peter Naur, one of the co-creators of the ALGOL 60 programming language, wrote an essay entitled “Programming as Theory Building”. It has become a seminal paper, highlighting as it did that programming is about more than just producing the program and its accompanying documentation.

He identified that when handing a piece of software over to other people to maintain and/or extend, it was not enough to supply just the source code and a full set of documentation. You also needed to allow access to the original authors of the program, because it was they who held the live ‘Theory’ of the program and could ensure that future work maintained a consistent architecture.

“A main claim of the Theory Building View of programming is that an essential part of any program, the theory of it, is something that could not conceivably be expressed, but is inextricably bound to human beings.”

Indeed the “conceivable expression” will be the code itself plus any documentation. But these are not enough for a working understanding of the system.

Anyone who has tried to understand other people’s programs – something I seem to have been doing for most of my career – will relate to Naur’s thesis. We cannot look on the ‘Theory’ as an abstract thing, and it cannot be put down as a set of rules – by definition any such rules are already within the software. This fallacy of the ‘abstract theory’ also highlights the problem of devising a method for building theories. Naur seems to be very much in tune with the phenomenological idea of the whole:

“In building the theory there can be no particular sequence of actions, for the reason that a theory held by a person has no inherent division into parts and no inherent ordering. Rather, the person possessing a theory will be able to produce presentations of various sorts on the basis of it, in response to questions or demands.”

We can now see that the theory does not so much represent an abstract piece of knowledge to be put forth, but rather a new skill of the person – an ability to respond appropriately to the demands of unknown situations. It is here that we have the link to meaning – the realm of hermeneutics.

Understanding a piece of software is about trying to grasp what the original programmer meant when s/he created the various data structures and functions of the system. It is at this point that the phenomenological approach to the generation of meaning changes the whole view of programming as theory building. The meaning is a live thing which is “inextricably bound to human beings”, and on a working system the team of programmers is continually creating and re-creating a shared meaning about it. As Wyssusek noted, “if this practice is interrupted the system ‘dies’.” Naur’s original words describing this phenomenon were:

“…one might extend the notion of program building by notions of program life, death, and revival. The building of the program is the same as the building of the theory of it by and in the team of programmers. During the program life a programmer team possessing its theory remains in active control of the program, and in particular retains control over all modifications. The death of a program happens when the programmer team possessing its theory is dissolved. A dead program may continue to be used for execution in a computer and to produce useful results. The actual state of death becomes visible when demands for modifications of the program cannot be intelligently answered. Revival of a program is the rebuilding of its theory by a new programmer team.”

This is an important point because it requires developers and their management to give credence to the living, internal dimension of programming. As long as the domain fails to adequately grasp this dimension and how it can be informed by a phenomenological approach (see Simon & Maria Robinson’s great ideas in ‘Holonomics’), there will continue to be embarrassing and expensive project failures.

References

  1. Alexander. “A Pattern Language.” Oxford University Press, 1977.
  2. Gadamer. “The Relevance of the Beautiful and Other Essays.” Cambridge University Press, 1986.
  3. Naur. “Programming as Theory Building.” 1985.
  4. Wyssusek. “A Philosophical Re-Appraisal of Peter Naur’s Notion of ‘Programming as Theory Building’.” Proceedings of ECIS 2007.

In the next post I will describe how I see the links with Christopher Alexander’s patterns work.
Until then…

Phenomenal Software Development : Key Observations

The Importance of Energy

A fairly obvious observation but one which needs stating as it is easily forgotten when we acclimatize ourselves to the job of programming.

It takes a significant amount of mental energy to do this job.

When programming we will be making decisions minute by minute, if not second by second, and this requires a large expenditure of mental effort. It means that when I get home after a day at work I just do not want to make decisions. I am all “decided out”. This is why I will rely on my partner to decide what to eat for dinner!

This obviously affects the rate of progress on a project, and any experienced technical lead will realise that the level of energy in a team is a significant concern. Yet it usually falls below the radar in the heat of trying to hit deadlines.

Non-Linear Thinking

We need to notice the difference between what I call linear and non-linear thinking.

A faculty I see time and again that highlights the difference between programmers and non-programmers – even though we may not consciously recognize it – is the realisation that we need to do as little as possible to create a solution that has as much impact as possible. Call it laziness if you will, though I prefer to call it non-linearity.

A good programmer will try to minimise the amount of code they create, or will lay firm foundations to minimise the amount of code they have to create later. While this is being done there may not be much outwardly visible progress – perhaps none, if it is all happening in thought. This, along with the fact that programming is not a numbers game (more people does not mean faster progress), can be a big cause of strife between techies and managers.

See Robert Glass’s book Facts and Fallacies of Software Engineering, fact number 13: “There is a disconnect between software management and their programmers”, which includes an interesting story of how differently the two disciplines view projects.

The Foundation in Play

Many early programmers were amateur electronics or radio enthusiasts who then turned professional. I remember going to a radio club meeting where one of the members had built a Z80-based system that had 64KB of RAM! A full complement of memory in those days and we all duly drooled. But at the time there was no software to fill it with that could make any use of that much memory.

We were enthusiasts and at the beginning of the PC revolution many programs were games. As the field progressed there was a transition from programming as play to programming as an economic activity, though the games drive has remained strong.

So here we are now with a very lucrative IT industry built on this energy, and we forget the foundation in play at our peril. Play is a fundamental human activity, and recent research by Sergio Pellis on self-directed play confirms – as if we needed to be told – that it fosters resilience and resourcefulness, faculties that any company would want in its employees.

I believe that the next two observations, Boundary Crossing and the Inner World, are the most important I have to offer.

Boundary Crossing

What do I mean by boundary crossing? Let us look at the characteristics of technology.

  • Tools or technology are amplifiers.
    We use technology to amplify or extend our capabilities. Cars amplify our movement and aeroplanes allow us to fly. But an amplifier has no knowledge of good or bad and will also amplify and extend our mistakes. During the Industrial Revolution we created devices to amplify or enhance our physical capabilities, but with the development of information technology we have created devices to amplify our thought.
  • Technology use has transitioned from External to Internal problem-solving.
    As the nature of our tools has changed, the effects of our use of them have moved from being externally visible to others to being internal and hence invisible to others – the effects take place in our heads. This is a Significant Shift (which is worth the capitalization) and has many side-effects. My thesis is that it has largely gone unnoticed within IT, especially given our primary focus on the physical gadgets rather than the psychological effects. I take you back to Dijkstra’s comment that:

    “…their influence will be but a ripple on the surface of our culture, compared with the much more profound influence they will have in their capacity of intellectual challenge without precedent in the cultural history of mankind.”

  • The Technological Paradox.
    Tool usage requires us to be more awake, since it amplifies our actions and thoughts. The paradox of technology use is that we tend to go to sleep as we let technology take over our faculties. (Just look at how people drive – but don’t get me started on that.) This is why an incremental process works so well, and why Agile techniques have become so popular.

This transition is part of what led me to post about software development requiring an evolution of consciousness and I believe we need an updated vision of the discipline in order for there to be constructive progress.

The Inner World

What I find fascinating about the process of developing software is that despite its heritage in a rational scientific process, we have been led into this inner world and, no matter how hard we try, we will not escape it. For me it is a source of livingness in the job.

Programming involves the creation of a completely human-made world where there is little feedback and few checks against an external reality. Indeed, we have had to dope silicon away from its natural state to make machines that can be controlled completely by what are effectively our thoughts.

It can be helpful to identify what happens internally when we write software and I believe a simple view goes as follows:

  • First we engage in thinking about the problem, and carry on thinking until we come up with some possible solution. Notice the verb form here: thinkING.
  • With this thinking we create mental models or more finalised thought constructs that define our solution to the problem.
  • It is these mental models or thought constructs that we can transcode into software.

Thus the computer is running a network of these thought constructs. It is worth noting that this is NOT the same as the process we use when thinkING about the problem and its possible solution. This is why I am highly sceptical about the promise of so-called artificial intelligence.

As they [try to] run these networks of thought constructs, computers mirror back the quality of our thinking as programmers. Thus any self-respecting programmer will need to become self-aware about their own thought processes and keep learning in this area.

Initiation in Thinking

I believe that this whole domain can be seen as an initiation in thinking which, in former times, would have been the province of mystical and religious orders. Current ideas of religion are not appropriate to this domain, although there are some similarities if you think in terms of ritual – a whole other touchy subject to be dealt with later. This move into the inner world is why a developer’s attitude is as important as their technical ability – if not more so.

Next time I will take a trip into the philosophical side of things and link up to some of the great thinkers of history.

Phenomenal Software Development

Over the next few posts I am going to cover the ground of Software and Phenomenology that I dealt with in my recent talks at ACCU2013, ACCU Bristol and ACCU Oxford.

Why Explore Phenomenology?

As we have progressed through the industrial revolution into our current wide-ranging use of information technology, there has been a big change in the form of the tools that we use. The massive impact of this transition from external physical tools to internal virtual tools has largely been experienced unconsciously.

Edsger Dijkstra, back in 1972, was a notable exception when he gave a talk saying:

“Automatic computers have now been with us for a quarter of a century. They have had a great impact on our society in their capacity of tools, but in that capacity their influence will be but a ripple on the surface of our culture, compared with the much more profound influence they will have in their capacity of intellectual challenge without precedent in the cultural history of mankind.”

Our society is currently based heavily upon an underlying Cartesian dualistic worldview. Along with this orientation we tend to focus primarily on results, and though this has been necessary, it has some significant negative consequences. I believe that with the move to virtual tools, cracks are beginning to show in the Cartesian worldview and its appropriateness for modern times. As computing has progressed, there has been a parallel questioning of just what it is to be human.

I consider that phenomenology – regardless of whether you can pronounce it or not! – can lead us to a more integrated worldview and I believe the industry needs this more balanced, more human, view if it is to constructively progress.

Overview

I will be starting by providing an overview of my own background. This is important so that you can get a sense of the experiences and thinking that have shaped my conclusions. Only then can you be free to decide what you want to take and what you want to leave.

Then I give some key observations I have made through my career, particularly the one about what I call ‘Boundary Crossing’, followed by a short overview of some philosophical ideas. Please note that I am not an academic philosopher. Two philosophers I highlight in particular are Descartes and Goethe, as they represent two realms of thought whose impact on software development I consider significant. Notable issues here are: Knowledge Generation, Imagination and the Patterns movement.

I then have some conclusions about how we might progress into the future – both with technology development and technology use.

A Programmer’s Background : Novice – The Early Years

I started out being interested in electronics at 17, back in 1974. I was a shy young adolescent nerd who found comfort in the inner world of thought, and I was not good at dealing with members of the opposite sex – which I believe could be quite a common phenomenon among younger software developers.

I then gained entry to Southampton University to study electronic engineering, gaining my degree in 1979. Even at this stage I realized that I wanted to move from hardware development to software development, although I had only an unconscious sense of the physical-to-virtual transition involved.

My early programming was a hobby: writing games in BASIC on computers I had built from kits. There was also an initial foray into an IT records management application, which I messed up completely.

Then came a job in the field of TV and film editing systems, where I definitely felt I was working with “cool” tech. It was a time of being enticed by the faery glamour of the technological toys.

A Programmer’s Background : Journeyman – The Dangerous Years

It was the next phase of my career that I call the dangerous time. A time characterized by the following traits:

  • Wanting to play with more complex and generic structures. (Many of which did not actually get used!)
  • A focus on the tools rather than the problem.
  • The creation of unnecessarily complex systems, letting the internal idea overshadow the external problem context.
  • An arrogance about what could be achieved – soon followed by absolute sheer panic as the system got away from me.
  • No realization that the complexity of thought required to debug a system is higher than that required to originally design and code the system.

This phase of a career can last for a long time and highlights the fact that the programmer needs to become more self-aware in order to progress from this stage. In fact some people never do.

This can be a real problem when recruiting experienced programmers. When interviewing I separate the time into two sections. Initially I ensure that the interviewee has the required level of technical competence, and once I feel they are more settled I move on to see just how self-aware they are.

One question I use here is ‘So tell me about some mistakes?‘ There are two primary indicators that I look for in any response. The first is the pained facial expression as they recall past mistakes they have made in their career, and how they have improved in the light of those experiences. The second is the use of the word ‘I’.

‘I’ is an important word for me to hear as it indicates an ownership and awareness of the fact that they make mistakes without externalising or projecting it onto other people or the company. This is important because it will show the degree of openness that the interviewee has to seeing their own mistakes, learning from them, and taking feedback. A programmer who cannot take feedback is not someone I would recruit.

A Programmer’s Background : Grumpy Old Programmer

This ‘Old Grump’ phase is possibly a new one that developers go through before reaching Master level. I hesitate to describe myself as a Master, but I am definitely at the Old Grump stage! Traits I have experienced here are:

  • Awareness of the limitations of one’s own thinking, after realizing again and again just how many times one has been wrong in the past. This is particularly easy to notice when debugging.
  • Realization that maintenance is a priority, leading to a drive to make any solution as simple, clear and minimalist as possible. Naturally the complexity of the solution will need to match, if not exceed, the complexity of the problem. Once one has experienced the ease with which it is possible to make mistakes, it is always worth spending more time making solutions that are as simple as possible yet do the job. An Appropriate Minimalism.
  • Code ends up looking like novice code, reaching for complexity and ‘big guns’ only when required.
  • A wish to find the true essence of a problem, but when implementing using balanced judgement to choose between perfection and pragmatism.
  • Most people think that because you are more experienced you are able to do more complex work. The paradox is that the reason you do better is that you drop back to a much simpler way of seeing the problem, without layering complexity upon complexity. (This strongly correlates with the phenomenological approach.)

Next I will be talking about some of the observations I have made throughout my career.
Until next time…