My Thinking is NOT for Sale

It’s 2 o’clock in the morning and I am finding I cannot sleep. A thought so off the wall has been gripping my mind for a while now, and I am finding it more and more relevant to what I have seen happen during my career as a programmer.

The title is worth restating:

My Thinking is NOT for Sale

This is not so much a shouted response to all those times that good technical effort has been driven carelessly under the steamroller of prevailing economic needs – usually those of the money-swallowing monsters that most companies are – as it is a statement of an underlying truth, if only I can express it well enough and in shorter sentences. So here goes…

If you pay for software you will not get what you need. In fact you CANNOT buy software because it is not a finished product. The current economic model we have just does not fit and I believe this is why there is so much trouble in this area.

What is important about good software development?

Over my 30-odd years of work the primary creative and energizing point has been the interaction between the developer and the actual user as a system comes into being. The best of it has been the conversation between the two as they navigate the territory of the user’s needs. If the developer is skilled, both technically and personally, they facilitate both parties in mapping an unknown area, probably only vaguely expressed in the “wants” that the user can currently identify.

This is a conversation of human discovery in thinking.

It is priceless.

It is a gift.

It is a Free process. Capital F.

It cannot be bought.
It cannot be sold.
It is NOT a product.

It only makes sense if the effort is freely given by the developer. The inner costs of doing this are so high that it requires a high level of motivation that can ONLY be internal. To try to shoehorn it into our current ways of thinking about money devalues the process, and I think this is what underlies the problems I have seen happen many times.

The kicker here is that it is likely that it can only be funded by gift money. That means that there can be NO LINK between the funding and the final “product”. I use quotes because that word is a misnomer of what is actually going on.

Unrealistic?

Just go and read a book called Turing’s Cathedral by George Dyson and you will see how the Institute for Advanced Study in Princeton was funded by donation. This was where John von Neumann worked and developed the architecture that underlies modern computers.

The picture of how the whole current edifice of modern computing was birthed from gift money just blows me away. I find my thinking so bound up in the capitalist model that separating the resource – i.e. the money that gives people time to think – from the product of that thinking in this way shows up the illusion of our current funding models for such work.

Is that enough to allow you to see it? Truly?
If you can then maybe you might understand why I am having trouble sleeping because in my tossing and turning my feelings tell me it could change everything…

Or maybe this is all just a dream and I shall be sensible when I wake up.
Hmmmm.

Post-ACCU2014 Thoughts

My thinking has been working overtime since I attended and presented at the ACCU2014 conference in Bristol.

[The delay in producing another post has been due to a lot of rather extensive personal development that has been occurring for me. Add to this some rather surreal experiences with dance – clubbing in Liverpool being one particular – and you might understand the delay. But that will be the subject of a separate post on dancing – I promise!]

But back to thoughts subsequent to my attendance at ACCU2014…

The Myth of Certification

The Bronze Badge. Small but beautiful.
One experience that really got me thinking was a pre-conference talk by Bob Martin reflecting on the path the Agile software development movement has taken since its beginnings. He mentioned an early quote from Kent Beck that Agile was meant to “heal the split between programmers and management”, and that one of the important guiding principles was transparency about the technical process.

But then there was a move to introduce a certification for what are called ‘Scrum Masters’ – key personnel, though not project managers, in an Agile software development approach. The problem is that it is just too simplistic to think that getting a ‘certified’ person involved to ‘manage’ things will sort everything out. This is never how things happen in practice, and despite early successes Bob observed that Agile has subsequently not lived up to its original expectations.

The transparency that the Agile founders were after has once again been lost. I consider that this happened because the crutch of certification has fostered inappropriately simplistic thinking for a domain that is inherently complex.

My inner response to this was: Well what do you expect?

I very much appreciate and value the principles of Agile, but there is a personal dimension here that we cannot get away from. If the individuals concerned do not change their ideas, and hence their behaviour, then how can we expect collective practices to improve? As I experienced when giving my recent workshop, it is so easy to fall prey to the fascination of the technological details and the seeming certainty of defined processes and certified qualifications.

I remember a conversation with my friend and co-researcher Paul in the early days of embarking upon this research into the personal side of software development. We wanted to identify the essential vision of what we were doing, and the idea of producing a training course with certification came up. I immediately balked at the thought of certification because I felt that an anonymising label or certificate would not help, though I could not at the time express why. Bob’s experience seems to bear this out, and it leaves us with the difficult question:
How do we move any technical discipline forward and encourage personal development in sync with technical competence?

The Need for Dynamic Balance

K13 being winch launched, shown here having just left the ground.
This was another insight as to why I enjoy ACCU conferences so much. There is always the possibility of attending workshops about the technical details of software development and new language features on the one hand, along with other workshops that focus on the more ‘fluffy’ human side of the domain.

I live in two worlds:

  1. When programming I need to be thoroughly grounded and critically attend to detail.
  2. I am also drawn to the philosophy (can’t you tell?) and the processes of our inner life.

Perhaps the latter is to be expected after 30 years of seeing gadgets come and go and the same old messes happen. This perspective gives me a more timeless way of looking at the domain. Today’s gadget becomes tomorrow’s dinosaur – I have some of them in my garage – and you can start to see the ephemeral nature of our technology.

This is what is behind the ancient observation that the external world is Maya. For me the true reality is the path we tread as humans developing ourselves.

Also we need to embrace BOTH worlds, the inner and the outer, in order to keep balance. Indeed Balance is a watchword of mine, but I see it as being a dynamic thing. Life means movement. We cannot fall into the stasis of staying at one point between the worlds, we need to move between them and then they will cross-fertilise in a way that takes you from the parts to the whole.

In our current culture technical work is primarily seen in terms of managing details and staying grounded. But as any of my writings will testify, there is devilry lurking in those details that cannot be handled by a purely technical approach.

Teacher As Master

So John - Do I have to wear the silly hat? Well Bill, only if you want to be a REAL glider pilot.
Another epiphany that I experienced at the conference was a deeper insight into the popular misconception that teachers are not competent practitioners. There is the saying that “Those that can – Do. Those that can’t – Teach”. So there I was in a workshop, wondering: because I was teaching programming, was I automatically not as good at it? But then a participant highlighted the fact that this was not so in traditional martial arts disciplines.

Indeed – teaching was seen as a step on the path to becoming a master.

We – hopefully – develop competence which over time tends to become implicit knowledge, but to develop further we need to start teaching. This forces us to make our knowledge explicit and gives us many more connections of insight, indeed helping us to see the essential aspects of what we already know. There may be a transitional time where our competence suffers – a well-known phase in learning to teach gliding – but this is a normal part of the learning process whenever we take our learning to a higher level.

So I think the saying needs changing:
Those that can Do. Those that are masters – Teach.

ACCU2014 Workshop : Imagination in Software Development

A week ago on Saturday 12th April I facilitated a workshop at ACCU2014 on Imagination in Software Development which I am pleased to say – thanks to the participants – went very well.

Before the workshop I thought I had bitten off more than I could chew, having read through a lot of Iain McGilchrist’s book “The Master and His Emissary” and realised that using analytical thinking for such an exercise is very difficult. However, thanks to my long-suffering team at work giving me the chance to do a dry run, I was able to get feedback about what did and did not work, and so ended up making some rather last-minute changes. The final workshop format was completely different from the dry run.

Before moving on to the exercises I gave a half-hour talk about the links between phenomenology, software, and brain hemisphere function, most of which in hindsight could have been left until after the exercises. My main objective, however, was to raise self-awareness about the participants’ internal imaginative processes.

I thought it would be good to highlight some of the primary ideas that came from the exercises, both in terms of the workshop’s preparation and its execution.

The need to get away from the software domain

The exercises in the workshop involved:

  • Listening to a story excerpt from a book.
  • Watching a film clip of the same excerpt.
  • Performing a software design exercise individually.

Each exercise was followed by discussion in pairs. It became abundantly clear that if you give a bunch of programmers a technical exercise, it behaves like a strong gravitational field for any ideas, and it is very difficult to get them to focus on process instead of content. Indeed, during the workshop I had to interrupt the pair discussions to make sure participants were talking about their own inner processes rather than the results of the design exercise I had given them! Having them listen to a story and watch a film clip first made this easier to highlight as a learning point, since it was much easier to focus on internal process for those two exercises.

Individual working instead of in small groups

The trial run with my team at work used small groups of 3-4 people. I found that the team dynamics completely overshadowed individual awareness, so I changed the format to make the core design exercise an individual process, followed by discussion in pairs. This had the desired effect of bringing internal processes into sharper focus. The more you know about an area, the more difficult it can be to “go meta” about it.

Some great insights from the participants

STORY
When listening to the story 3 processes were identified which occurred in parallel:

  • Visual – Picturing.
  • Emotional.
  • Logical – Probing.

FILM

  • The film was much more emotionally powerful, to the point of feeling manipulative.
  • But it was felt to be ‘weaker’ due to the imagery being concrete.

DESIGN

  • When performing the design exercise the ideas were experienced as a story, but as a sequential process rather than a parallel one.
  • The logical analysis required thoughts to be made explicit by writing them down otherwise it was hard to hold them in awareness.
  • There was a more conscious awareness of past experience affecting current ideas.
  • The initial analysis was wide-ranging followed by focussing down to the core ideas.

So if any of the participants make it to this page – I would like to say a great big thank you for getting involved.

Slide set follows:

Phenomenal Software: The Internal Dimension: Part 2b: Patterns & Livingness

In this post I am going to review Alexander’s three aspects of patterns mentioned before, namely:

  • The Moral Component
  • Coherent Designs
  • Generative Process

I will show how they link to the following ideas:

  • Freedom
  • Cognitive Feeling
  • Livingness

The Moral Component & Freedom

The moral aspect of patterns can be approached from any of a number of ‘paths up the mountain’. Certainly Alexander was concerned about whether buildings were ‘nurturing’ for us to live in, and so was thinking about more than utility. With computer systems and applications it is easier to think that this utilitarian aspect is all that exists. But there is an environmental part – an inner environment of thought, or ‘theory’ as Naur would say, whether we be users or developers.

If we think about how tools extend our own faculties, indeed our own being, the importance of the quality of this inner environment takes on a new meaning. The nature of the tool will affect how we form our ideas, which in turn will influence the form of our externally made world. Thus Alexander’s use of the word ‘nurturing’ and its applicability to software is not so out of place as it initially seems.

We can relate the ideas of utility, environment and hence morality by considering the concept of freedom – but defined in terms relevant to computer use. A computer system or application is a tool to get a particular task done. Good tools are ‘transparent’, meaning that you do not notice them when performing a particular task – they ‘disappear’ from your consciousness and leave you ‘free’ to focus upon the task in hand. It is in these terms that we can speak about freedom when using computers.

If you experience this ‘transparency’ when using a computer, I would consider that the software you are using contains this moral component that Alexander has defined. To paraphrase his words from the ‘Mirror of Self’ question:

“‘Moral’ Software gives you the freedom to develop a better picture of the whole of yourself, with all your hopes, fears, weaknesses, glory, absurdity, and which – as far as possible – includes everything that you could ever hope to be.”

What higher statement of purpose could we have for the programs we write? The current prevalent economic vision of the software industry pales into insignificance against such a statement.

We should not forget that this freedom to develop a ‘better picture of the whole of ourselves’ can be experienced by both users and developers. Indeed it is a central tenet of my whole ‘Phenomenal Software’ series that good software developers are implicitly on a path of self development, whether they are conscious of it or not.

Coherent Design & Cognitive Feeling

In talking about coherent design we need to remember that Alexander is dealing with the external world of objects and a software designer/developer is dealing with non-physical artefacts – the building architect works in an external world, the software architect works in an internal world – though no less real in its effects.

If we consider programming as an ‘internal art’ we can see how it can be difficult to communicate effectively about the ideas that underpin our design and coding. Peter Naur wrote about the need to maintain a theory alive in the minds of the programmers if a system was to be properly extended or maintained. He also noted that the theoretical element could not be communicated accurately via written documentation or even the code itself – it needed human interaction with people holding the living theory of the software.

Reflecting on my own career I have come to realize that it is difficult to identify an abstract form of coherence or goodness for software separate from the context in which it is to be used. For instance some code that I had found to be elegant in the early days of computing, say using little memory and having few instructions, would not be a good solution to the same problem in a modern context. So here we can see the integration required between form and function; solution and problem context. They need to be in harmony: coherent form in design will have the moral component in its function and will mean that the theories and meaning formed by the developer or user will make sense and meet the ‘Mirror of the Self’ needs.

Most novices will work from a set of rules, one such example being ‘Make it Work, Make it Right, Make it Fast’, in that order. This is a valid heuristic, useful for stopping programmers optimizing code too early. However a rule-based approach has the danger of separating the stages into disconnected parts – which is not the best way to proceed in one’s thinking. This is the same tension as that between the TDD (Test Driven Development) folks and the design-up-front folks – a classic example of the need to work from an integrated view of the whole and the parts, i.e. respectively making it right and making it work, design-driven and test-driven. In practice they are done together.

So over my career I have developed a feeling for good design in the crucible of solving real-world problems. In actuality I cannot make it ‘Work’ until I have a sense, even to a small degree, of what is ‘Right’. You can perhaps see that I have a personal preference for the design view, though during my work I can easily fall into the trap of hitting the keyboard too early – something I have worked vigorously at controlling! As I gained experience I started to get this sense of the best way to structure the software, and in some cases – such as perhaps designing a media player – I might have a feeling for what is ‘Fast’ at an early stage, but this needs to be kept strongly in check against reality: optimisation should be based upon measurement, and human beings can be worse than random at predicting what needs optimising.
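
The measurement point can be made concrete with a small Python sketch (the two join functions are invented examples, not taken from any real project): before ‘making it fast’, time the candidate implementations rather than trusting intuition about which one is slow.

```python
import timeit

def join_with_plus(words):
    # Intuition often flags this kind of string building as "slow".
    result = ""
    for w in words:
        result = result + w
    return result

def join_builtin(words):
    # The alternative candidate: the built-in join.
    return "".join(words)

words = ["x"] * 1000

# Measure both; optimise only what the numbers say needs it.
t_plus = timeit.timeit(lambda: join_with_plus(words), number=200)
t_join = timeit.timeit(lambda: join_builtin(words), number=200)
print(f"+= loop: {t_plus:.4f}s, str.join: {t_join:.4f}s")
```

Whichever way the numbers fall on a given machine, the discipline is the same: the profiler or timer, not the gut, decides where optimisation effort goes.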

This sense for a good or coherent design is what I have called a ‘cognitive feeling’ in an earlier post, which is a very fine and delicate sensation indeed – it is not strong emotion. Over the years of my career I liken its development to the creation of a new sense organ, cognitive in its nature. It can be difficult to explain to less experienced practitioners due to the fact that the sense is likely to have been implicitly developed over the years. However it matches closely to the feelings that are evinced by Alexander’s ‘Mirror of the Self’ test so that frequently when talking to more experienced developers it will not be hard to get to a commonality in judgement.

This means that in order to create coherent designs we will need to develop this extra sense of a fine cognitive feeling. A quote from Alexander serves to give an idea of this feeling sense, and though dealing with external geometric entities, the same comments relate to software design when imagining how the structures will function:

“A pulsating, fluid, but nonetheless definite entity swims in your mind’s eye. It is a geometrical image, it is far more than the knowledge of the problem; it is the knowledge of the problem, coupled with the knowledge of the kinds of geometrics which will solve the problem, and coupled with the feeling which is created by that kind of geometry solving that problem.” A Timeless Way of Building, Chapter 9.

Generative Process & Living Structure

In Alexander’s talk at the OOPSLA’96 conference in San Jose, he seemed somewhat bemused by the software domain’s use of patterns. On reading Alexander’s Nature of Order series we can perhaps see why. Some of the central ideas are those of ‘living structure’ and ‘structure preserving transformations’ which result in a ‘generative process’. How could these relate to software?

It is easier to understand the concept of structure preserving transformations by looking at how living things grow. As they grow and develop they must continue living – we cannot just take them apart, do some modifications, and then re-assemble them! No step of growth may disturb their livingness – thus EVERY change must preserve their living structure. The world of living things has no choice but to use a generative process if it is to stay alive.

At first glance this does not relate at all to the built world. When fixing my car in my younger days, there were times when bits of gearbox and engine were all over the floor! If the car had been a living being it would have been dead, but since it was not I was of course able to re-assemble it and make it work. Small software systems are similar. However, if you have ever worked on a sizable legacy system you will know that you need to spend a LOT of effort on NOT breaking the system. Any changes you make need to be closer to structure preserving, and any bad structures will need major surgery to improve. In reality you will not even try if it is not economically viable. Once you have bad structure, or use a ‘structure destroying transformation’, it is extremely difficult, if not impossible, to remedy:

“Good transformations do not cause any upheaval. So to get a good project, we merely have to make a sequence of structure-preserving transformations. When we do so, a good design evolves smoothly, almost automatically.
However, even a single bad transformation can upset the smooth unfolding. If we make one transformation which destroys structure, in the middle of a sequence of good ones, things become ugly very quickly;”
Nature of Order Book 2 p61. See also chapter 4.
I am not sure about the use of the word ‘merely’ in the above, since it understates the difficulty of identifying good transformations.
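
A structure-preserving transformation in software terms can be sketched as a refactoring guarded by a behaviour-pinning test. All the names here are invented for illustration: a tangled ‘legacy’ function is transformed step by step – an inner rule is extracted and named – while the test stays green throughout, so the system keeps ‘living’ rather than being taken apart and reassembled.

```python
def invoice_total_v1(items):
    # Original tangled version: pricing and discount logic interleaved.
    total = 0.0
    for name, qty, price in items:
        line = qty * price
        if qty >= 10:
            line = line * 0.9  # bulk discount buried in the loop
        total += line
    return round(total, 2)

def line_total(qty, price):
    # Extracted step: the same discount rule, now named and testable.
    line = qty * price
    return line * 0.9 if qty >= 10 else line

def invoice_total_v2(items):
    # Transformed version: structure improved, behaviour preserved.
    return round(sum(line_total(q, p) for _, q, p in items), 2)

# The behaviour-pinning test stays green across the transformation.
items = [("widget", 12, 2.5), ("gadget", 3, 10.0)]
assert invoice_total_v1(items) == invoice_total_v2(items) == 57.0
```

The hard part, as the quote above hints, is not making such a transformation mechanically but judging which ones preserve structure in the first place.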

Also if we accept Naur’s Theory Building view and the idea of human mental schemas, this idea of a generative process makes more sense, since there is the living theory held by the programmers. If we then go further and connect to the phenomenological ideas of how we create meaning when we develop theories we can see that there is a justification for finding a livingness within the programming activity. Bortoft talks about the link between understanding and meaning which relates well to Naur’s ideas of theory building when understanding software. It also gives another dimension to the idea of livingness:

“understanding is the ‘concretion of meaning itself’, so that meaning comes into being in understanding.” Henri Bortoft in Taking Appearance Seriously p108

Just one final thought about the idea of livingness. Some might think that a running program would have a livingness, especially if it was a big system. I am not so sure and consider that it is WE who provide the livingness in the software domain. It is WE who create; experience design pain; judge. The computers are running a network of finalized thought constructs which is a different process to the thinking we do when defining those thought constructs. For me this perception of livingness in Alexander’s work and its relation to software is an ongoing work-in-progress.

I want to thank Jim Coplien for his help in pointing me at various ideas of Alexander that mesh with my work for this post.

In the next post I shall conclude this series of ‘Phenomenal Software’ by returning to the way philosophy has progressed forward from the Cartesian Subject/Object view. This will mean dealing with the thorny subject of subjectivity and of course you will have to decide if you can trust my judgements!

Thanks for reading.

Phenomenal Software: The Internal Dimension: Part 2a: Patterns & The Mirror of the Self.

When I started out programming the prevalent idea, which I shared at the time with many others, was that an artistic view was not going to be any part of the work. However, after a number of years in the business I began to come across moments of wonder when either I saw a great piece of coding or, very occasionally, managed to create something myself that hit the ‘sweet spot’. It was not until I happened upon Christopher Alexander’s work on patterns that I began to understand some of what was happening during these moments.

My introduction to the patterns movement occurred when reading the book Design Patterns written by the “Gang of Four”: Gamma, Helm, Johnson & Vlissides, this becoming a standard reference text. In trying to better understand the patterns vision I read some of Richard Gabriel who has some interesting ideas about the relationship between art and software. He has even come up with the idea of a Masters in Fine Arts in Software.

In Alexander’s earlier architectural patterns book he defines a library of external geometric entities to be used as design guidelines for buildings, for example: an alcove for chats that is separated off from a corridor. It is in his later masterwork: The Nature of Order that he describes his underlying ideas about ‘living structure’ and his thoughts about the perception of ‘goodness’ in design.

Alexander does not shy away from the moral dimension of his work. In a keynote speech he gave to the OOPSLA’96 conference in San Jose he stated that:

“One of the things we looked for was a profound impact on human life. We were able to judge patterns, and tried to judge them, according to the extent that when present in the environment we were confident that they really do make people more whole in themselves.” OOPSLA’96 keynote.

And later in the same talk:

“The pattern language that we began creating in the 1970s had other essential features. First, it has a moral component. Second, it has the aim of creating coherence, morphological coherence in the things which are made with it. And third, it is generative: it allows people to create coherence, morally sound objects, and encourages and enables this process because of its emphasis on the coherence of the created whole.” OOPSLA’96 keynote.

But how can we judge what is coherent? To understand Alexander’s approach we have to read the first book of ‘The Nature of Order’ series where he describes the ‘The Mirror of the Self’ test.

The Mirror of the Self

To develop this judgement of coherent living structure, Alexander identifies what he calls the ‘Mirror of the Self’ test. He highlights that there is a difference between what he calls ‘apparent liking’ and ‘true liking’. For example, when deciding which of two objects are liked the best, rather than accepting a quick ‘apparently liked’ judgement he asks for a ‘truly liked’ judgement:

“…which of the two objects seems like a better picture of all of you, the whole of you: a picture which shows you as you are, with all your hopes, fears, weaknesses, glory and absurdity, and which – as far as possible – includes everything that you could ever hope to be. In other words, which comes closer to being a true picture of you in all your weakness and humanity;…” Nature of Order: Book 1. p317.

Using this idea he has found that it is possible to reach a high level of agreement (80-90%) between people using their judgement to identify living structure in objects. So it seems that how we phrase the question is all-important.

A Reappraisal of the Software Patterns Movement

So far the software patterns movement has tried to abstract out particular solution patterns to be used as guidelines when designing software structures. Despite the best intentions it has degenerated into being a set of document templates, rather than embodying the wider view of Alexander’s work. Once again we have become hooked on a results-oriented view of the world as if we can only feel comfortable with this approach in such a technical domain.

Erich Gamma, one of the co-authors of the Design Patterns book, said that referring to patterns is most useful when we already have a specific design ‘pain’ rather than trying to force patterns onto a particular project from the outset. This points to the fact that we cannot get away from being conscious of how we develop our judgement. How do we even identify that we have a design ’pain’ if not through discerning human judgement and a sense of rightness?

Along with other commentators like Jim Coplien, I consider that Alexander’s vision of patterns (the drive towards living structure and the big question of making human life more whole) has not been truly realized within the software discipline. We need to revisit the Alexandrian roots of the patterns movement and understand how these roots relate to software development.

In Alexander’s OOPSLA’96 talk he identified 3 key points in his vision for the patterns work: a moral component; coherent designs; generative process. Although there has been some discussion in the software community about Alexander’s later work, it is fair to say that it has been difficult to take these ideas further in the domain. However I have found that by connecting the ideas with those prompted by reading Bortoft and early Steiner we can get a bit more clarification which I will report on in my next post.

Thanks for reading so far and I wish you all the very best for 2014…

Phenomenal Software: The Internal Dimension: Part 1: Theory Building

Introduction

It is a while since I last posted because I was hoping to produce a concise single post to deal with the issues of how a phenomenological approach to software relates to the issues of Patterns and Living Structure that Christopher Alexander has worked on. So much for hopes. As I started (re-)reading more around the subject, it opened up before me, as one might expect I guess. So I am breaking it down into smaller sections and giving it the subtitle “The Internal Dimension”.

In these “Internal Dimension” posts I am going to deal with the issue of meaning and structure in software, starting with the seminal paper by Peter Naur in 1985 and moving on to the patterns work of Christopher Alexander. I will be informing it with the ideas from an essay by Hans-Georg Gadamer with the great title ‘The Relevance of the Beautiful’ and more recent writing by Wyssusek. Wyssusek also notes how many of these ideas are relevant to users, rather than just the application developers.

The Internal Dimension Part 1: Theory Building & The Generation of Meaning

Back in 1985 Peter Naur, one of the co-creators of the ALGOL60 programming language, wrote an essay entitled “Programming as Theory Building”. This has become a seminal paper highlighting, as it did, that programming was more than just producing the program and its accompanying documentation.

He identified that when handing over a piece of software to other people to maintain and/or extend, it was not enough to just supply the source code and a full set of documentation. You needed to allow access to the original authors of the program because it was they who held the live ‘Theory’ of the program and could ensure that future work maintained a consistent program architecture.

“A main claim of the Theory Building View of programming is that an essential part of any program, the theory of it, is something that could not conceivably be expressed, but is inextricably bound to human beings.”

Indeed the “conceivable expression” will be the code itself plus any documentation. But these are not enough for a working understanding of the system.

Anyone who has tried to understand other people’s programs – something I seem to have been doing for most of my career – will relate to Naur’s thesis. We cannot look on the ‘Theory’ as an abstract thing, and it cannot be written down as a set of rules – by definition the rules are actually within the software. This fallacy of the ‘abstract theory’ also highlights a problem in devising a method for building theories. Naur seems to be very much in tune with the phenomenological idea of the whole:

“In building the theory there can be no particular sequence of actions, for the reason that a theory held by a person has no inherent division into parts and no inherent ordering. Rather, the person possessing a theory will be able to produce presentations of various sorts on the basis of it, in response to questions or demands.”

We can now see that the theory does not so much represent an abstract piece of knowledge to be put forth, but rather a new skill of the person – an ability to respond appropriately to the demands of unknown situations. It is here that we have the link to meaning – the realm of hermeneutics.

Understanding a piece of software is about trying to grasp what the original programmer meant when s/he created the various data structures and functions of the system. It is at this point that the phenomenological approach to the generation of meaning changes the whole view of programming as theory building. The meaning is a live thing which is “inextricably bound to human beings”, and in a working system the team of programmers is continually creating and re-creating a shared meaning about it. As Wyssusek noted, “if this practice is interrupted the system ‘dies’.” Naur’s original words describing this phenomenon were:

“…one might extend the notion of program building by notions of program life, death, and revival. The building of the program is the same as the building of the theory of it by and in the team of programmers. During the program life a programmer team possessing its theory remains in active control of the program, and in particular retains control over all modifications. The death of a program happens when the programmer team possessing its theory is dissolved. A dead program may continue to be used for execution in a computer and to produce useful results. The actual state of death becomes visible when demands for modifications of the program cannot be intelligently answered. Revival of a program is the rebuilding of its theory by a new programmer team.”

This is an important point to understand because it requires that developers and their management give credence to the living internal dimension of programming. While the domain fails to adequately grasp this dimension and how it can be informed by a phenomenological approach (see Simon & Maria Robinson’s great ideas of ‘Holonomics’) there will continue to be embarrassing and expensive project failures.

References

  1. Alexander, C. “A Pattern Language.” Oxford University Press, 1977.
  2. Gadamer, H-G. “The Relevance of the Beautiful and Other Essays.” Cambridge University Press, 1986.
  3. Naur, P. “Programming as Theory Building.” 1985.
  4. Wyssusek, B. “A Philosophical Re-Appraisal of Peter Naur’s Notion of ‘Programming as Theory Building’.” Proceedings of ECIS 2007.

In the next post I will describe how I see the links with Christopher Alexander’s patterns work.
Until then…

Phenomenal Software: The Need for Exact Imagination

This is a crucial aspect of being a software developer – the ability to hold an exact imagination of the processes occurring within a system. Fundamentally, whenever we have to debug or design a system we need to try and ‘run’ the system or subsystem within our own thinking, if only in part.

It is one thing to appreciate how the concept of ‘imagination’ can be used to describe this process, but it is another to truly understand the deeper aspects of the faculties we need to develop in order to perform this activity. In my experience most software folk have either developed some of these faculties unconsciously at school and/or university, usually by studying maths, or have picked them up as they went along during their career – both frighteningly haphazard processes. We may need to include art training for the technical professions.

The Process of Goethean Science

My view is that the insights of Goethean scientific perception can help here, though there are some important differences due to dealing with dead machines instead of a living natural world.

First let’s review current ideas about the steps that are used in Goethean scientific perception (see Wahl and Brook):

  • Exact Sense Perception.
  • Exact Sensorial Imagination.
  • Beholding the Phenomenon.
  • Being One with the Object.

Usually these steps relate to the perception of natural phenomena rather than to software, where we are dealing with human-created entities running within a computer, but they are still relevant.

In computing the stage of Exact Sense Perception relates to either developing requirements (when designing) or observing the behaviour of the system (when debugging). I am not going to deal with this step in this post so that I can concentrate on the imagination stage. I will assume that either we have the requirements for a new piece of software, or we have followed the ideas in my previous post when investigating the behaviour of a faulty system that needs debugging.

In imagining these human-created entities, or thought structures, we first need to duplicate them in our thinking. However, there is a stage beforehand that is worth mentioning, since the list above takes it as a given. This is the step of ‘Developing a Focused Attention’, which is a foundation for all that follows, as I shall describe later.

Another point we need to deal with here is that when we work with these thought structures in our heads we have moved away from the sensible world and are using what Rudolf Steiner called sense-free thinking. Certainly we may aid our imagination by naming the structures after physical items, e.g. a pipeline, but we have also created new ideas, e.g. a FIFO [first-in, first-out pipeline], that are not physically based at all. The entities we are dealing with are not sense-perceptible physical items, so it is not accurate to call the imagination ‘Sensorial’ – a better name is Exact Sense-Free Imagination.
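To make the ‘sense-free’ point concrete, a FIFO can be stated completely in thought, with no physical referent at all. Here is a minimal sketch in Python (my own illustrative code, not taken from any particular system):

```python
from collections import deque

# A FIFO is a pure thought structure: 'first in, first out' is a rule,
# not a physical object, even though we picture it as a pipeline.
class Fifo:
    def __init__(self):
        self._items = deque()

    def put(self, item):
        self._items.append(item)      # new items join the back

    def get(self):
        return self._items.popleft()  # the oldest item leaves the front

f = Fifo()
f.put("a")
f.put("b")
print(f.get(), f.get())  # items emerge in arrival order: a b
```

Nothing here corresponds to anything we could point at in the world; the ‘pipeline’ is entirely a structure held in thinking.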

Thus I see the steps as follows:

  • Developing Focused Attention
  • Exact Sense Perception (not dealt with in this post)
  • Exact Sense-Free Imagination
  • Beholding the Phenomenon
  • Being One with the Object

Developing Focused Attention

In the past this would not have needed mentioning, but given modern issues of reduced attention spans it needs to be brought to awareness. I consider this to be the most important faculty I have developed over the 30 years of my career in a technical domain, and it is very relevant to life in general. When a customer finds a software problem they can become quite vocal, upset and angry – an appropriate response if they have paid for the system.

This heated response and emotionality can ripple through the vendor company as the customer interacts with its various levels of sales, support and management. If the emotionality persists all the way to the level of the programmers, it is going to be impossible to cleanly fix the problem, and the result will be a lot of costly, ineffectual ‘thrashing’. This is because in order to properly imagine the system in their thinking, the programmer must hold a focused and clear attention. In effect they have to push the worries of the day-to-day economic world away while they calmly and quietly identify and then fix the fault, possibly educating the various stakeholders about what they are doing in order to gain a little quiet space and time.

Although the customer may be getting ‘appropriately’ angry, ideally this attitude of calmness needs to be developed by all members of any technologically dependent society. Otherwise we will experience a degradation in the quality of our lives as we persist in holding expectations that are out of touch with reality. We need to become more aware of the implicit issues of technological use, since technology amplifies our intent, including our mistakes. This is where an aggressively economic stance can adversely affect our lives and thinking.

This stage of focused attention requires that we develop our ‘Will’ in the realm of our thinking. We will not make progress by letting our attention wander and our thoughts flit around like moths near a flame. This use of willpower is one of the reasons that the practice of software development is so draining and I liken this to creating and holding a quiet, almost physical, space in my mind in which to run the system in my thinking. (Those of a more spiritual/religious nature might see this as a sacred space, like that of a church, grail, or sanctuary.)

Exact Sense-Free Imagination

The current prevailing view is that when we are imagining a system in our thinking we are using a visual metaphor. This idea was furthered by the move from procedural programming to object-oriented programming back in the 80s [see footnote 1], and has been consolidated by the discipline making use of the idea of patterns put forward by Christopher Alexander [see footnote 2]. The architectural patterns for buildings are indeed visual entities, but when it comes to imagining the interactions of software structures it is more complicated. (You might actually say the same about building architecture, but that is another discussion.)

There are two main aspects to what we have to imagine: first there are STATIC structures, and secondly there are DYNAMIC operations that occur between these structures.

Imagining the static element is when we build the system, usually only partially, in our thoughts first. Here ‘thought’ is used as a noun, and this is as close as we come to a visual representation of the code, since we usually create a structure of thoughts out of the data structures or objects (if we are using object-oriented programming).

This means that we are imagining sense-free thought structures in the quiet space we have created with our focused attention.

Next is the harder aspect of imagining the dynamic operation of the system. The computer will perform operations exactly in line with the software we are about to write (design) or that we have already written (debugging). When designing we need to imagine if our proposed structure is going to give the required result for the specification drawn up for the system. When debugging we need to imagine what the system is doing and why it is not performing as we expect given our knowledge of the code.

In both these scenarios, as we think through either the static structures or the activity of the system, we need to incrementally move our imagination forward in steps, checking at each step that it is congruent with the code or the proposed coding ideas. Because the computer will execute the code exactly, this process must be an ‘Exact’, non-fantasy imagination.
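As an illustration of what ‘moving forward in steps’ means in practice, here is a tiny function of my own invention, together with the kind of hand trace I have in mind – each line of the trace is checked against the code before moving on:

```python
def running_max(values):
    """Return the largest value seen so far at each position."""
    result = []
    current = None
    for v in values:
        if current is None or v > current:
            current = v
        result.append(current)
    return result

# Tracing by hand, one step at a time, for values = [3, 1, 4]:
#   v=3: current is None -> current becomes 3, result [3]
#   v=1: 1 > 3 is False, current stays 3,    result [3, 3]
#   v=4: 4 > 3 is True  -> current becomes 4, result [3, 3, 4]
print(running_max([3, 1, 4]))  # [3, 3, 4]
```

The discipline lies in refusing to skip a step of the trace, however ‘obvious’ it feels; the moment we summarise rather than simulate, the imagination stops being exact.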

A frequent error here is to ‘run ahead’ of the simulation in our heads, missing out vital steps – so we need to start small and will usually use paper and diagrams to help us along. Interestingly, I have found a printing whiteboard to be invaluable: the gross motor movement of drawing a structure diagram in the large helps to improve the visualisation of the thought processes and imaginations (schemata, as Johnson calls them in The Body in the Mind).

However, the fact that we perform this imagining of the dynamic system state, along with my own experience of the process (many programmers will ‘see’ the code in their mind’s eye), makes me sceptical that we are dealing with a purely image-based, visual domain here. This is a work-in-progress for me at the moment.

Note how this need to NOT run ahead of the simulation echoes the idea of Delicate Empiricism – which leads us neatly onto the next stage of ‘Beholding the Phenomenon’.

Beholding the Phenomenon

This is where we actively perceive the behaviour of our (hopefully exact) imaginations, and it necessitates constantly switching between the roles of imaginer/creator and perceiver. I find that nowadays I do this without noticing the switching, but it takes a significant amount of energy, as this is another activity that requires a lot of willpower.

I am having to use my Will to:

  • Maintain focused attention.
  • Create thought structures.
  • Move my imagination through time as I simulate their interactions.
  • Behold what is happening and compare with the requirements.

The best way I can describe the feeling here is that of using a lot of attention to just hold the structures and keep the dynamics ‘alive’ and wait for the perception to catch up. This is why I consider that the word ‘beholding’ is a good way to describe it because we need to balance letting the imagination ‘live’ along with keeping it ‘Exact’.

There is also a very delicate, sensitive ‘cognitive feeling’ going on here when designing and comparing to the requirements as I assess if I am creating the right structures. I will return to this idea in a later post as it relates to Christopher Alexander’s work.

Hopefully this makes it easier to understand why programmers frequently get that far away look in their eyes. Given the complexity of what is happening is it any wonder that bugs occur in our creations?

Being One with the Object

Although this stage is difficult to reconcile with the perception of the wholeness of a natural organism, we can make the link by thinking of it as the creation of knowledge, understanding and meaning. This is the ‘Aha!’ experience, and it is just as relevant in computing as in traditional science, Goethean or otherwise.


This is the moment when we bring life to the whole enterprise, using our uniquely human faculties.

In a computing context this means that either we have truly understood the detailed elements of the problem and have identified the structures we will need (design), or we have experienced the blinding clarity of seeing exactly where the problem lies (grokking it) and know what we need to do to fix it (debugging).

Health Warnings

When working with computers we need to realize how they can fossilize our thinking. Because we constrain our inner process to be in step with the machine, we can delude ourselves into thinking that we are just machines. Indeed we may even change our judgement to be far too rule-based – the essence of computer operation.

We need to hold onto the idea of a ‘Living Thinking’ (as Steiner would call it) and I find that the phenomenological ideas of Goethe and those that followed can help us in keeping this uniquely human perspective when dealing with the mechanized world.

Next…

Next time I shall go more into the ideas of patterns, Alexander’s ‘apparent liking’ and ‘true liking’, and the idea of how we use a very fine ‘cognitive feeling’ to judge the rightness of a design.

Footnotes
[1] This was a major change in the way humans thought about programming computers. Initial techniques involved stringing together sequences of machine instructions into procedures that manipulated data, hence the term procedural programming. However, it was then decided that it would be better to give the data structures primacy and attach the procedures to the data. Thus software development became based upon designing structures of objects (or more accurately: instantiations of abstract data types), i.e. data structures with ‘attached’ procedures. Thus was born the idea that you could visually represent software structures, which would make it all much easier to imagine.
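The shift described in footnote 1 can be sketched with a toy example of my own (not from any real system): the same behaviour expressed first as a free-standing procedure over data, then with the procedure ‘attached’ to the data:

```python
# Procedural style: a free-standing procedure manipulates a plain data structure.
def deposit(account, amount):
    account["balance"] += amount

# Object-oriented style: the data structure has primacy and the
# procedure is 'attached' to it as a method.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

acct = {"balance": 0}
deposit(acct, 50)

obj = Account()
obj.deposit(50)
print(acct["balance"], obj.balance)  # 50 50
```

Both versions compute the same thing; what changed was which element – the procedure or the data – we treat as the organising centre of our imagination.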
[2] Christopher Alexander is a mathematician turned architect. The software discipline has used his idea of patterns to provide design patterns for software structures. His magnum opus is the four-book sequence called ‘The Nature of Order’.

Phenomenal Software Development : Delicate Empiricism in Software Development

I am going to start this section with a discussion about doubt, or unknowing. This was not in the original talk since it would not fit within the time allocated, but the response I received when I included it in a subsequent version of the talk makes me realise that it is a key idea we need to consider.

The Place of Doubt or Unknowing

If we take the Cartesian view that we cannot really know the ‘things in themselves’, then in terms of creating new knowledge we have two possibilities:

  • Run Away
    We can decide that since we cannot get to know the Things, then we should not even try. This is the path chosen by those that wish to shun everything technological and go back to the old pre-modern ways. But as Ken Wilber points out in The Marriage of Sense & Soul (p44), not many folk today would want to go back there if they really knew what it was like.
  • Over-Hypothesize
    Here we decide that we can be ‘Aggressively Empirical’. Don’t waste too much time with observation, just come up with hypotheses and test these against reality. This approach has no consideration for the embeddedness of the perceiver within the system being perceived, the classic dualistic bind, and can lead to an experiment reinforcing your expectations about a phenomenon.

Note that both of these approaches attempt to push the unknowing away.

Unknowing does not feel comfortable so a typical human reaction is to move away from the discomfort. But in this area it is a mistake. So what can we do?

  • We need to develop ourselves and stay with the unknowing. This is the Delicately Empirical approach. We should consciously stay with the doubt and keep observing, hold back assumptions until our ideas and thinking mature in line with our observations.

It is this latter approach that requires a stronger sense of self in order to stay longer with the unknowing and the ability to hold this state is a core skill when working in a technical environment.

Knowledge or Preconceptions?

The first point at which I got an inkling that the field of software development could benefit from the insights of Delicate Empiricism was during a workshop at the OT2001 conference when I attended a session called “Tracer Bullets” hosted by Paul Simmons and Tom Ayerst. They initially showed a short film about the difference between the Russian and American approaches during the space race of the 1960s. The Russians were much more trial-and-error based whereas the Americans had a very strong quality assurance programme. [I notice there was a re-imagining of the session at SPA2013]

The workshop part of the session involved splitting up into small teams of about 4 or 5 people. Each team had a range of materials they could use to try and identify what item was hidden within a tall kitchen bin placed on a stool so you could not look down into it. Each team had to create tools to find out what was at the bottom of the bin. There was a limitation in that each team only had 8 tokens, each of which allowed them to go up to the bin for 1 minute. So they only had 8 minutes of ‘research’ time at the bin.

Since I was already familiar with Goethe’s approach of Delicate Empiricism, I argued in my team that we should use the tokens one at a time and only move forward slowly. Luckily the other folks agreed with the approach and we set off. What surprised me, and provided the impetus for this research into phenomenology, was seeing the approaches taken by the other teams. Whereas our team simply went up with some long balsa wood ‘pokers’ and spent our first minute just stabbing around to get an initial sense of what was at the bottom of the bin, I saw other teams creating elaborate measuring tools which already pre-supposed certain attributes of what they expected to find. A classic case of over-hypothesizing.

Before using each token our team decided to identify what it was that we wanted to learn next, i.e. being clear about the boundary of our knowledge, and then devised a tool to move that boundary forward. Once we got the extra data, we would reflect on it and adjust the model of what we thought was at the bottom of the bin. Then we would discuss what we wanted to know next, making sure that our hypothesizing did not rush too far ahead of our current knowledge.

My observation of this contrast between our approach and that of the other teams gave me the impetus to delve further into this subject.

The primary question to ask here is just how aware we are about our boundaries of knowledge, and do we know how best we should proceed when at those boundaries? These questions are absolutely crucial when it comes to debugging a wayward software application.

From the workshop I identified the following steps in the process:

  1. Ask: What do we already know?
    We need to be clear about the knowledge we already have and need to consciously check the limits of that knowledge. This knowledge then forms the foundation for expanding what we know. In certain cases we might decide that we need to move into learning about a completely different area.
  2. Ask: What raw data do we need next?
    What is the next piece of raw data we need to help us expand our knowledge? Note that I differentiate here between raw data and knowledge. The raw data we collect relates to the percept; it will need our reflection to find the concept that fits the data. Done properly, this enables us to create a mental model that truly fits the phenomenon and is not abstract.
  3. Ask: What tool do we need to get this information without overly disturbing the phenomenon?
    Here we need to focus on creating the best tool for the job of finding the next piece of information we need, but without disturbing the system. This latter consideration is particularly important when debugging real-time multi-threaded systems. I find I frequently fall back into the ‘old school’ tradition of printing out the data if I can because usually a debugger is too invasive. (see ‘Staying Free’ below)
  4. Research: Use the tool to run the experiment and collect the raw data.
    So at last we get to do what feels like ‘real’ research. In actuality the ‘real’ research starts with our thinking. Frequently programmers love to get to the keyboard too early because they (and their managers) mistake typing for software development. It feels safer because one feels like one is making demonstrable progress. But this is an illusion and is at the core of Robert Glass’ fact about the ‘disconnect’ between management and programmers. In arguing this I have found David Bohm’s insight that the experiment is purely an external manifestation of the thought concept useful.
  5. Reflect: Expand my knowledge by reflecting on the collected data.
    This is where we need to look at the data we have collected (the percept) and start developing concepts that match it. By using a disciplined imagination we can check our concepts against the data and when we match them (the Aha! experience) we have expanded our knowledge. So now we can go back to stage 1.
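For a concrete, if simplified, flavour of this loop, consider bisecting over version numbers to find the first ‘bad’ one: each experiment is chosen at the boundary of current knowledge, and the model is updated by reflection before the next question is asked. (The scenario and code are my own sketch, not part of the workshop.)

```python
def first_bad(is_bad, lo, hi):
    """Find the first 'bad' version, one small deliberate experiment
    at a time, each chosen at the boundary of current knowledge."""
    # Invariant (what we already know): version lo is good, version hi is bad.
    while hi - lo > 1:
        mid = (lo + hi) // 2   # steps 2-3: the next datum we need, and the tool
        if is_bad(mid):        # step 4: run the experiment, collect the percept
            hi = mid           # step 5: reflect, and shrink the unknown region
        else:
            lo = mid
    return hi

# Suppose (hypothetically) versions 13 onwards are broken:
print(first_bad(lambda v: v >= 13, 0, 100))  # 13
```

Notice that no experiment here pre-supposes where the fault lies; each one simply asks the smallest question the current boundary of knowledge allows.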

Staying Free – Knowing Your Technology

As mentioned above, programmers will likely use debuggers to help them investigate a problem. This can work well when you are trying to find a fault that is relatively easy to pin down. However, a debugger has to modify the running code in order to function. For instance, it can only allow the program to stop at user-specified points by modifying the running software, and it may completely change the memory footprint by putting guard areas around the ‘real’ areas of data in order to catch ‘illegal’ memory accesses outside of what was expected.

Now that systems and debuggers have become more complicated, it is not a given that a programmer will know what the system is doing under the hood to provide this helpful functionality. This is where we touch on the nature of freedom: if you do not know what the system is doing, you cannot be free, i.e. you cannot know exactly what is going on, and so your assumptions can be faulty. I will come back to this concept in a later post since it relates to ANY technological use and should start some alarm bells ringing.

But back to the developer. As you can imagine, if you are working on a system that has real-time constraints (I work on video systems which need to play out faultlessly for hours) it just is not feasible to use a debugger to help find some of the nastier problems. Hence my comments above about the ‘old school’ techniques, i.e. printing out the data. With older systems it was not even possible to do this without disturbing the real-time operation, because the printing process was too slow. Nowadays you can generally get away with it, but there will still be the occasional problem where printing out a single number changes the timing of the system so that the fault no longer manifests. Thus you need to know your system, and then know the best ‘instrument’ – i.e. code modification – to collect the data you need to find the fault.
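One way to keep the ‘instrument’ gentle is to record cheap, unformatted events in memory while the system runs and defer all the formatting until afterwards. A minimal sketch in Python (the class name and API are my own invention, not any particular library):

```python
from collections import deque
import time

# An in-memory trace: appending a small tuple disturbs the timing of a
# running system far less than formatting and printing inline would.
class TraceBuffer:
    def __init__(self, capacity=1024):
        self._events = deque(maxlen=capacity)  # old events fall off the front

    def record(self, tag, value):
        # Cheap: one tuple append, no formatting, no I/O.
        self._events.append((time.monotonic(), tag, value))

    def dump(self):
        # The expensive formatting happens only after the real-time run.
        return [f"{t:.6f} {tag}={value!r}" for t, tag, value in self._events]

trace = TraceBuffer(capacity=4)
for frame in range(6):
    trace.record("frame", frame)
print(trace.dump())  # only the most recent 4 events survive
```

The bounded deque also means the instrument cannot grow without limit during a long run, which matters when a system must play out for hours.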

Computer as Thought Mirror

Pity the novice or journeyman programmer who has not yet realised that his or her process of debugging is flawed. All too frequently I see less experienced programmers (a) jumping to an early conclusion about the cause of a fault and then (b) modifying the code to ‘fix’ the problem based on this conclusion. Jumping to the wrong conclusion too early happens time and time again, and you will see it happening many times a day in most software companies.

It can cost companies a lot of money, because they may ship this software proudly announcing the fix of a specific bug, only to find that it has re-appeared in a slightly different guise. What has usually happened is that the timing of the system was changed so the original fault became hidden, but the actual cause, i.e. the bug, was still there. I have found from bitter experience that it can take many, many years – a decade is usual – for a programmer to sufficiently hone their debugging skills to the point that they do not fall into the trap of making false assumptions. It truly requires a great deal of willpower and self-awareness. Of course, better processes can help, but they will still rely upon discerning human judgement.

The developer is having to learn the boundaries of their knowledge the hard way and the computer is mirroring back the quality of their thought process. Hence my calling it a ‘Thought Mirror’. Generally the novice starts out having great faith in their ability to understand the effects of their software creation, possibly being surprised that it is going wrong. If they are truly growing through the course of their career this will change to assuming they are wrong and possibly not even trusting that the code is correct even though the system is functioning perfectly. You will also find that many seasoned programmers will not buy the latest and greatest software releases – unless they really really have to!

I hope that gives you some sense of how convinced I am that anyone good at debugging will be using the technique of Goethe’s Delicate Empiricism without even realizing it.

Have we unconsciously created a technology that pushes us to the limits of knowledge just so we can come to know ourselves better?

To paraphrase Jeff Carreira, in software development ‘philosophy is not a luxury’.

Next time I shall take a deeper look at the issue of disciplined imagination, Exact Sensorial Imagination to be precise, and how it relates to software development.

Phenomenal Software Development : Philosophical Interlude

Please note that I am not an academic philosopher (as evidenced by my use of the word ‘Dudes’ in the style of Bill & Ted’s Excellent Adventure!) In preparing this material I have drawn heavily on Henri Bortoft’s excellent précis of philosophical history in his book “Taking Appearance Seriously” which he also relates to current phenomenological thinking.

Philosopher Dudes from History

  • Bacon (1561-1626)
    This is the man who used binary notation to ‘encrypt’ messages transferred around his network of contacts. He concluded that mathematics was the path to certainty and believed in the mastery of Man over Nature, both conclusions that Goethe subsequently disagreed with.
  • Descartes (1596-1650)
    One of the thinkers that has had the most impact on the current time. The founder of the subject/object dualistic way of thinking about things. I shall talk more about Descartes below.
  • Newton (1642-1727)
    The man who gave us those equations of motion I remember having to learn by rote at school. He used a disciplined imagination to propose mathematical models of reality. This then allowed scientists to solve problems within the mathematical realm and subsequently translate the solutions into a physical situation. This link between mathematics and reality is now something we take for granted and is a powerful tool without which we would not have the world we know today.
  • Goethe (1749-1832)
    Although perhaps known more for his artistic endeavours, Goethe was a natural scientist as well. Even back in the 1800s he was raising a warning flag about the scientific method and the problem of over-hypothesising and imposing these hypotheses upon reality. Although in everyday parlance this is the problem of jumping to conclusions, in science this concern has become clearest in the field of quantum physics. The most useful idea I find in Goethe is the metaphor of ‘conversation’ for any scientific research with phenomena. In this Goethe prefigured the coming phenomenological school of thought.
  • Husserl (1859-1938)
    The founder of phenomenology who highlighted the importance of focusing on the process of a thing appearing to us as opposed to the final result of the thing itself. A tricky distinction which I will come back to later.
  • Gadamer (1900-2002)
    Before reading Bortoft’s book I was not familiar with Gadamer’s work and the whole realm of the philosophy of meaning – hermeneutics. Apart from having to rush for the dictionary to check out these new words I relate strongly to the importance of meaning for us. Giving life meaning is something we do all the time, frequently without realizing it. My view is that as we begin to impose our meanings on the world we need to become aware of this aspect of our cognition. (Simon Robinson has produced a great review of Gadamer’s book Truth and Method)

That is the brief overview. Now I will contrast Descartes and Goethe to highlight how these points fit into the discipline of software development.

Descartes and Dualism

Descartes lived between 1596 and 1650. He wanted to be sure of what we could really know, uncluttered by the input from our senses. The foundation that he found was the ‘rock’ that is our thinking – something we can be sure of. Then, given that we know we are thinking, we can be sure that we are thinking with something, an element of our body.

This is how he came to identify the Dualism of the mind and the body, but it is important to realise that he also wanted to fit in with the church’s view at that time, which held that the human being is composed of body and soul. Descartes wanted the church to accept the primacy of thinking. He succeeded, which provided an acceptable foundation for the mathematics that became the basis of the Scientific Revolution. The Scientific Revolution then led to the Industrial Revolution, which has created the world we know today.

An interesting point here is the language he used to describe the mind and body. The mind was ‘res cogitans’ – thinking, a verb: an active principle. The body was ‘res extensa’ – extension, a noun: a passive principle. This further consolidated the view of Francis Bacon that man’s mind was to be master over nature, the world’s body. As we now know, this point of view has had significant negative consequences, since it has led to many of the ecological problems we have today.

Descartes’ ideas and thinking are firmly entrenched in software development, because software development is indeed an exercise in applied mathematics.

Goethe and Delicate Empiricism

Goethe was a natural scientist who was around at the time of the Industrial Revolution and he was against the Baconian approach of the separation of man from the natural world and the mastery of man over nature. He warned of the danger of over-hypothesising and recognised the need to focus on the phenomenon and not on abstract ideas. He realised that over-abstracting away from the phenomenon under observation could cause errors in understanding. He counselled against moving too far and too early into the realm of abstract ideas and coined the term Delicate Empiricism.

Delicate Empiricism requires the observer to be aware of their own process of observation and how it affects their final conclusions. Goethe also coined the term Exact Sensorial Imagination. Rather than indicating an ungrounded fantasy, this is a disciplined process for understanding any phenomenon and how it exists – hence the use of the word ‘Exact’.

In software development we use this process all the time. Whenever I have to imagine how my software is going to work, or try to understand how it may be going wrong, I have to use this disciplined process of imagination to ‘run’ the software’s structures (concepts) in my head. If I am not exact, I will come to incorrect conclusions.
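A small, purely illustrative exercise in this kind of exact imagination (the function and its name are my own invention, not from any particular project): before running the snippet below, try to ‘execute’ it in your head and predict its output. An inexact mental model of how the loop bound works leads straight to a wrong conclusion.

```python
# An exercise in 'exact sensorial imagination':
# predict the output before running this.
def rolling_pairs(items):
    """Return consecutive pairs from a list."""
    # The range stops one short of the end, so the last
    # element only ever appears as the second of a pair.
    return [(items[i], items[i + 1]) for i in range(len(items) - 1)]

print(rolling_pairs([1, 2, 3, 4]))  # -> [(1, 2), (2, 3), (3, 4)]
```

If your inner ‘run’ produced one pair too many, or too few, that is exactly the kind of inexactness Goethe’s discipline is meant to catch.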

However, Goethe dealt only with the natural world, which does not link directly to software development. It was Rudolf Steiner (1861–1925), a Goethean scholar, who realised that the same methods could be applied to the perception of our own thinking, i.e. treating our thought as a phenomenon in the same way.

At the point where we become aware of our own thinking processes when doing software development – something I consider to be crucial – we are treating our thought as a phenomenon. Thus as we observe our own thinking we are able to improve it and become better developers. Needless to say this does not just apply to software development, but this domain makes it easier to see the link to the phenomenological thinkers.


Phenomenology

Phenomenology focuses on the process of the ‘coming into being’ or ‘Appearance of a Thing’, rather than just focusing on ‘Thing’ itself. This is a very difficult concept to understand but I believe this focus on the process of how we see or know anything is key to healing the destructive consequences of the Cartesian subject object split that are so prevalent in our current world.

It has been said that you cannot actually teach this particular way of seeing things but must embark on a process of just trying to do so. It does represent a change of worldview when we come to look at our environment. It means that we need to realise and understand that the ‘Appearance’ and the ‘Thing’ are ONE item. This can also be understood in terms of reconciling the function of the brain hemispheres.

One of the insights about hemispheric brain function that has endured is that the right brain deals with the immediacy of lived experience, i.e. the Appearance of a thing, and the left brain deals with re-presenting what is lived in order to know something, i.e. the Thing itself.

In the words of Iain McGilchrist, the author of ‘The Master and His Emissary’, the left brain deals with ‘Static, Isolated, Fixed, Decontextualised, Denotative issues’. It deals with closed systems and perfection. The right brain, by contrast, deals with ‘Individual, Changing, Evolving, Interconnected, Living ideas’. It handles things that are never fully graspable, never perfectly known.

This we can see when we are looking at something completely new that we do not understand, or do not yet have the concepts to fit. The right brain perceives it while the left tries to make sense of what we are perceiving. When we get it (or ‘grok’ it, as Robert Heinlein would say) we have that Aha! experience, which means we have identified the concept that fits the perception. Only at this point can we know something about what we are seeing.

A Perception Exercise

It is difficult to put over this idea of the unity of the thing and the perceptive process but perhaps I can give you a sense of how our concepts affect what we see.

Next to this text you can see an image of lines that could be a cube. There are two ways to see the image as a cube: (a) with the front face at the lower right of the image, and (b) with the front face at the top left of the image.

So now just play with switching your perception over from state (a) to state (b) and be aware that nothing is changing in the external world. All you are doing is adjusting your conceptual filter that you are applying to the external percept.

Now, can you see another way of seeing it? Try looking at it to find a third way before reading further…

It should be possible to see it as just a flat image with lines on it. This is not easy once you have seen the cube. If you have a laptop, perhaps rotate the display 45 degrees; that will make it easier.

If you spend some time with this you may become more aware of what is happening inside you as you change your concept filters.

I am going to leave it there for now.
In the next post I will spend more time going deeper into the links between software development and phenomenology.

Phenomenal Software Development : Key Observations

The Importance of Energy

A fairly obvious observation but one which needs stating as it is easily forgotten when we acclimatize ourselves to the job of programming.

It takes a significant amount of mental energy to do this job.

When programming we will be making decisions minute by minute, if not second by second, and this requires a large expenditure of mental effort. It means that when I get home after a day at work I just do not want to make decisions. I am all “decided out”. This is why I will rely on my partner to decide what to eat for dinner!

It is obvious how much this affects the rate of progress on a project, and any experienced technical lead will realise that the level of energy in a team is a significant concern. Yet it usually falls below the radar in the heat of trying to hit deadlines.

Non-Linear Thinking

We need to notice the difference between what I call linear and non-linear thinking.

Even though we may not consciously recognize it, a faculty I see time and again – one that highlights the difference between programmers and non-programmers – is the drive to do as little as possible to create a solution with as much impact as possible. Call it laziness if you will, though I prefer non-linearity.

A good programmer will try to minimise the amount of code they create, or will create firm foundations that allow them to minimise the amount of code they have to create later. While this is being done there may not be much outwardly visible progress – maybe none, if it is all in thought patterns. This, along with the fact that programming is not a numbers game (i.e. more people does not equal faster progress), can be a big cause of strife between techies and managers.

See Robert Glass’ book Facts and Fallacies of Software Engineering, fact number 13: “There is a disconnect between software management and their programmers”, which has an interesting story of how different the two disciplines are in their view of projects.

The Foundation in Play

Many early programmers were amateur electronics or radio enthusiasts who then turned professional. I remember going to a radio club meeting where one of the members had built a Z80-based system with 64KB of RAM! A full complement of memory in those days, and we all duly drooled. But at the time there was no software that could make any use of that much memory.

We were enthusiasts and at the beginning of the PC revolution many programs were games. As the field progressed there was a transition from programming as play to programming as an economic activity, though the games drive has remained strong.

So here we are now with a very lucrative IT industry built on this energy, and we forget its foundation in play at our peril. Play is a fundamental human activity, and recent research by Sergio Pellis confirms – as if we needed to be told – that self-directed play fosters resilience and resourcefulness, faculties that any company would want in its employees.

I believe that the next two topics, Boundary Crossing and the Inner World, are the most important observations I have to offer.

Boundary Crossing

What do I mean by boundary crossing? Let us look at the characteristics of technology.

  • Tools or technology are amplifiers.
    We use technology to amplify or extend our capabilities. Cars amplify our movement and aeroplanes allow us to fly. But an amplifier has no knowledge of good or bad and will also amplify and extend our mistakes. During the Industrial Revolution we created devices to amplify or enhance our physical capabilities, but with the development of information technology we have created devices to amplify our thought.
  • Technology use has transitioned from External to Internal problem-solving.
    As the nature of our tools has changed, the effects of our use of them have moved from being externally visible to others to being internal and hence invisible to others – the effects take place in our heads. This is a Significant Shift (which is worth the capitalization) and has many side-effects. My thesis is that this has largely gone unnoticed within IT, especially given our primary focus on the physical gadgets rather than the psychological effects. I take you back to Dijkstra’s comment that:

    “…their influence will be but a ripple on the surface of our culture, compared with the much more profound influence they will have in their capacity of intellectual challenge without precedent in the cultural history of mankind.”

  • The Technological Paradox.
    Tool usage requires us to be more awake, since it amplifies our actions and thoughts. The paradox of technology use is that we tend to go to sleep as we let technology take over our faculties. (Just look at how people drive – but don’t get me started on that.) This is why an incremental process works so well and why Agile techniques have become so popular.

This transition is part of what led me to post about software development requiring an evolution of consciousness and I believe we need an updated vision of the discipline in order for there to be constructive progress.

The Inner World

What I find fascinating about the process of developing software is that despite its heritage in a rational scientific process, we have been led into this inner world and no matter how hard we try, will not escape it. For me it is a source of livingness in the job.

Programming involves the creation of a completely human-made world where there is little feedback and few checks against an external reality. Indeed, we have had to dope silicon away from its natural state to make machines that can be completely controlled by what are effectively our thoughts.

It can be helpful to identify what happens internally when we write software and I believe a simple view goes as follows:

  • First we engage in thinking about the problem, and carry on thinking until we come up with some possible solution. Notice the verb tense here: thinkING.
  • With this thinking we create mental models or more finalised thought constructs that define our solution to the problem.
  • It is these mental models or thought constructs that we can transcode into software.

Thus the computer is running a network of these thought constructs. It is worth noting that this is NOT the same as the process we use when thinkING about the problem and its possible solution. This is why I am highly sceptical about the promise of so-called artificial intelligence.
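As a minimal sketch of those three steps (the example domain and all the names here are my own, purely illustrative), consider the living idea of ‘undoing the most recent action first’. The thinkING crystallises into a thought construct – a last-in, first-out stack – and it is that fixed construct, not the thinking itself, that gets transcoded:

```python
# Transcoding a mental model into code: the living idea of
# "undo the most recent action first" becomes a fixed thought
# construct - a last-in, first-out stack.
class UndoHistory:
    def __init__(self):
        self._actions = []  # most recent action is last

    def record(self, action):
        """Remember an action so it can be undone later."""
        self._actions.append(action)

    def undo(self):
        """Return the most recent action, or None if there is none."""
        return self._actions.pop() if self._actions else None

history = UndoHistory()
history.record("type 'hello'")
history.record("delete word")
print(history.undo())  # -> delete word
print(history.undo())  # -> type 'hello'
print(history.undo())  # -> None
```

What the computer then runs is this frozen construct; the open-ended thinking that produced it never makes it into the machine.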

As they [try to] run these mental model networks computers will mirror back the quality of our thought process as programmers. Thus any self-respecting programmer will need to become self-aware about their own process and learning in this area of their own thinking.

Initiation in Thinking

I believe that this whole domain can be seen as an initiation in thinking, which in former times would have been the province of mystical and religious orders. Current ideas of religion are not appropriate to this domain, although there are some similarities if you think in terms of ritual – but that is a whole other touchy subject, to be dealt with later. This move into the inner world is why a developer’s attitude is as important as their technical ability – if not more so.

Next time I will take a trip into the philosophical side of things and link up to some of the great thinkers of history.