Over the next few posts I am going to cover the ground of Software and Phenomenology that I dealt with in my recent talks at ACCU2013, ACCU Bristol and ACCU Oxford.
Why Explore Phenomenology?
As we have progressed through the industrial revolution into our current wide-ranging use of information technology, there has been a fundamental change in the form of the tools that we use. The massive impact of this transition from external physical tools to internal virtual tools has largely been experienced unconsciously.
Edsger Dijkstra, back in 1972, was a notable exception when he said in his Turing Award lecture, “The Humble Programmer”:
“Automatic computers have now been with us for a quarter of a century. They have had a great impact on our society in their capacity of tools, but in that capacity their influence will be but a ripple on the surface of our culture, compared with the much more profound influence they will have in their capacity of intellectual challenge without precedent in the cultural history of mankind.”
Currently our society is heavily based upon the underlying Cartesian dualistic worldview. Along with this orientation we tend to focus primarily on results, and though this has been necessary, it has some significant negative consequences. I believe that with the move to virtual tools, the cracks are beginning to show in the Cartesian worldview and its appropriateness for modern times. As computing has progressed, so has a questioning of just what it is to be human.
I consider that phenomenology – regardless of whether you can pronounce it or not! – can lead us to a more integrated worldview and I believe the industry needs this more balanced, more human, view if it is to constructively progress.
I will be starting by providing an overview of my own background. This is important so that you can get a sense of the experiences and thinking that have shaped my conclusions. Only then can you be free to decide what you want to take and what you want to leave.
Then I give some key observations that I’ve made through my career, particularly the one about what I call ‘Boundary Crossing’, followed by a short overview of some philosophical ideas. But please note I am not an academic philosopher. Two particular philosophers I highlight are Descartes and Goethe, as they represent two realms of thought that I consider relevant in their impact on software development. Notable issues here are: Knowledge Generation, Imagination and the Patterns movement.
I then have some conclusions about how we might progress into the future – both with technology development and technology use.
A Programmer’s Background : Novice – The Early Years
I started out being interested in electronics at 17 back in 1974. Originally I was a shy young adolescent nerd who found comfort in the inner world of thought. Also I was not good at dealing with members of the opposite sex, which I believe could be quite a common phenomenon among younger software developers.
Thereafter I gained entry to Southampton University to study electronic engineering, gaining my degree in 1979. Even at this stage I realized that I wanted to move from hardware development to software development, although I only had an unconscious sense of this physical to virtual transition.
My early programming was a hobby at the time: writing games in BASIC on computers I had built from kits. There was an initial foray into an IT records management application, which I messed up completely.
Then came the job in the field of media TV and film editing systems where I was definitely feeling that I was working with “cool” tech. Definitely a time of being enticed by the faery glamour of the technological toys.
A Programmer’s Background : Journeyman – The Dangerous Years
It was the next phase of the career that I call the dangerous time. A time characterized by the following traits:
- Wanting to play with more complex and generic structures. (Many of which did not actually get used!)
- A focus on the tools rather than the problem.
- The creation of unnecessarily complex systems, letting the internal idea overshadow the external problem context.
- An arrogance about what could be achieved – soon followed by absolute sheer panic as the system got away from me.
- No realization that the complexity of thought required to debug a system is higher than that required to originally design and code the system.
This phase of a career can last for a long time and highlights the fact that the programmer needs to become more self-aware in order to progress from this stage. In fact some people never do.
This can be a real problem when recruiting experienced programmers. When interviewing I separate the time into two sections. Initially I ensure that the interviewee has the required level of technical competence, and once I feel they are more settled I move on to see just how self-aware they are.
One question I use here is ‘So tell me about some mistakes?’ There are two primary indicators that I am looking for in any response. The first is the pained facial expression as they recall past mistakes they have made in their career and how they have improved in the light of those experiences. The second is the use of the word ‘I’.
‘I’ is an important word for me to hear as it indicates an ownership and awareness of the fact that they make mistakes without externalising or projecting it onto other people or the company. This is important because it will show the degree of openness that the interviewee has to seeing their own mistakes, learning from them, and taking feedback. A programmer who cannot take feedback is not someone I would recruit.
A Programmer’s Background : Grumpy Old Programmer
This ‘Old Grump’ phase is possibly a new one that developers go through before reaching Master level. I hesitate to describe myself as Master but am currently definitely at the Old Grump stage! Traits here I have experienced are:
- Awareness of the limitations of one’s own thinking, after realizing again and again just how many times one has been wrong in the past. Particularly easy to notice when debugging.
- Realization that maintenance is a priority, leading to a drive to make any solutions as simple and clear and minimalist as possible. Naturally the complexity of the solution will need to match if not exceed the complexity of the problem. Once one has experienced the ease with which it is possible to make mistakes it is always worth spending more time making solutions that are as simple as possible, yet do the job. An Appropriate Minimalism.
- Code ends up looking like novice code, reaching for complexity and ‘big guns’ only when required.
- A wish to find the true essence of a problem, but when implementing using balanced judgement to choose between perfection and pragmatism.
- Most people think that because you are more experienced you are able to do more complex work. The paradox is that the reason you do better is that you drop back to a much more simple way of seeing the problem without layering complexity upon complexity. (This strongly correlates with the phenomenological approach)
Next I will be talking about some of the observations I have made throughout my career.
Until next time…
13 thoughts on “Phenomenal Software Development”
Aargh! You got my interest and now I have to wait for the next part. Hope it won’t be long — I find your analyses fascinating.
Hmmm. Sorry about that, Ben. I plan to do them every week…
Well… that’s the plan! Mind you – this stuff has been a long time coming – 30 years or so! So I hope you can hang on.
I’ve been fascinated by Goethe’s approach to scientific inquiry for a few years. Books:
The Wholeness of Nature: Goethe’s Way of Science by Henri Bortoft
Goethe’s Way of Science (Suny Series, Env…) by David Seamon
The Metamorphosis of Plants by Johann Wolfgang von Goethe and Gordon L. Miller
The last is Goethe’s work with new pictures better illustrating what Goethe describes. The book is a work of art.
I read it along with a printed list of definitions for terms that identify plant parts.
To me, the key difference in approach between Goethe and the prevailing tradition(s) [e.g., Descartes, Newton] is that Goethe listened, and reminded himself to keep listening, for what Nature had to say. He worked to tease out messages about process, structure, … from his observations. Newton, Descartes, et al. work to find a model by which to describe “what’s out there” and then use that model as “that through which the world is observed.”
Yes, you make a good point about ‘listening’. I plan to comment more on this in the next post but one of the great metaphors of Goethe’s approach is the idea that we can be seen as having a conversation with the phenomena. If taken seriously this means that one would guard oneself against over-hypothesizing, which is a common trait these days. But this approach is very very ‘tricksy’ (as Gollum would say) and I think it is very easy to fool oneself that one is taking a ‘whole’ approach to looking at the world when one may still be using Cartesian/Newtonian ideas, especially given how we are educated these days. I love Bortoft’s books – indeed the talk I gave at ACCU was inspired by his book “Taking Appearance Seriously”.
Many thanks for your comments
My first programming was in 1965 – age nearly-17 – on a Bendix G-15 computer using a language called Intercom 500. It would now be called an assembly language, in rough form. The main memory was a rotating drum, not the core memory (little donuts of magnetic material strung together with signal wires) that came soon after.
I’ve written programs in FORTRAN, COBOL, many variations of BASIC, APL (an incredibly terse mathematical language for representing data & transforms), PL/I, PDP 11 assembly, and a host of others.
Your asking interviewees about their mistakes, and what you’re looking for as a response – seems a powerful way to get at their basic approach to life.
I appreciate your comments on taking a minimalist approach and other aspects of developing, debugging & maintaining applications.
Wow! I remember having to set up the bootstrap loader program via switches so that the main system could be loaded off tape.
Starts reminding me of the ‘LUXURY, LUXURY’ sketch by Monty Python!
But joking aside, this has meant that people like ourselves have been experientially dragged through the development of computer technology from close to its beginnings to the current day, and so it can be easier for us to grasp the link between the low and high level parts of a system (important, as I mentioned in my earlier post). It is a real problem for students these days to reach this level of knowledge. We had a presentation at work recently about the actual structure of the code generated by a C++ compiler, and the younger member of the group came up with the classic quote that he had “enjoyed it and learnt a lot because before this it had all been MAGIC TO ME”.
I still find it hard to train folk into these ways of thinking, especially appropriate minimalism. I think you actually have to make the mistakes first before you can get it.
Your site looks interesting – I shall be subscribing!
Not sure which site you found – http://www.solbakkn.com (business), http://www.systalk.org (personal site on systems), http://www.solvt.com (personal/professional site), http://techedbits.wordpress.com/ (tech & other tips for educators), or one of the other sites I’ve put out there.
I’ve gravitated towards “small computer” work through my career (though I’ve used “larger” systems as well). That has meant that the client problems I work on are ones where I do everything from client contact through development, customer training, redesign, etc. I’ve used MS Access because it’s such a wonderful tool for prototyping and, for environments with at most a few users, production use. Your work environment sounds entirely different – working on tasks that require dozens or hundreds of people to create the final product. How much are the issues you discuss due to having lots of people involved in the enterprise, and how much due to the intrinsic nature of developing/implementing something new?
I consider that the issues I discuss are primarily due to the size and domain of the system I work on – which has been the product of about 20-30 people over a timespan of 10 years, but with knowledge from the previous 20. There have been a number of attempts at open source development in the area – all of which get to the alpha stage and then seem to founder. The complexity of the domain and its requirements seems to demand a fairly high level of complexity, along with the economic driver.
So I think that we have been developing something new – yes – but also of a fairly high level of complexity. Something which pushes to and past the boundaries of human thought capacity – something I intend to touch on later.
A post from Feb 2012 on “Growth vs Fixed Mindsets” [http://techedbits.wordpress.com/2012/02/04/brainology/] may be of interest.
I have known Charles a long time and I heartily agree with his ideas on simplicity, and the difficulty of debugging complexity. It seems to me that the speed and ability of current computers have led to over-expectation by programmers: it takes just as long as ever to find and debug a single line of code, yet if you have a million lines in the program everyone is surprised how long it takes to debug each problem.

I remember the day when we were pretty certain that our operating system had no faults, and it was wonderful! It took several products using it and several years to get to this point, but everything got so much easier. Later I remember writing an assembler program for a product which took a couple of weeks to get working completely despite having two processors, but the structure was simple and development incremental: each function was fully checked before starting the next one.

Compare this with today, where the features of the compiler are taken for granted, and the lack of error messages is taken as an absolute sign of correctness. So typing is vague due to overloaded operators and automatic typecasting, network messages are often unchecked for correct grammar, task interactions depend on multiple variable event timings, and debugging is very hard indeed. The things to remember when doing the design are the points of difficulty IN DEBUG, adding features which explain the problem clearly to an event file or whatever to enable debugging, and there must always be a way out from any event which fails to happen when expected. Remember the old ASSERT FAILURE (line number) in Cypher, Charles?
Good to hear from you.
We still use the assert failures and will keep some of them even in shipping software in order to stop further corruption of databases and the like. This is one of the horrible trade-offs: given the size of systems now, you cannot be sure that you have ironed out all the faults, so it is better to fail the system rather than let it continue corrupting valuable data. A crash is better than a corrupt database.
Then there will be other areas where you will ‘gracefully’ handle bad parameters and the like.
Once again it is a judgement call which needs the phenomenological balance between a whole view of the problem and a parts view of the system.