ACCU2015: Days 2-4: Asynchronisation & Philosophy

As you might have suspected, I have been either too intellectually stimulated or too beer-fazed to write a blog post from day 2 onwards…

So the time has come to sit down quietly on the Sunday afternoon after the conference and write down some summarising words about the following days.

I had a fair number of insightful moments, either during presentations or subsequent discussions with participants. So much so that I will write about my day 2 lightning talk in a separate post.

Once again there was a great blend of the deeply technical (in this case lots on multi-threading) and the philosophical. This is what I love about this conference because it grounds the philosophical and enlightens the deeply technical. Wonderful.

Olve Maudal & Jon Jagger play Music Mashup



Day 2:
There was a truly scary presentation by Axel Naumann from CERN about how physicists are using the C++ interpreter Cling, and how novices have developed a massive C++ codebase with loads of cut-and-paste. Thanks to the massive data sets that exist, at least they have a large body of test data to verify correctness – but still…

There was also a great quote from Olve Maudal’s presentation on the history of C and C++ saying that enums in these languages are troublesome because Dennis Ritchie hated them and hacked them in as quickly as possible! A classic point that highlights the relationship between the quality of software and whether a programmer is externally or internally motivated.

Phil Nash’s presentation on seeking simplicity was a great presentation about differentiating the Simple/Complex axis from the Easy/Hard axis. It was also interesting for its reference to Cynefin, which was found in other talks too. His references are at http://www.levelofindirection.com/storage/simplerefs.html.

In the bar (thanks to Bloomberg for the free beer) there was a fun music mashup machine which I had a play with. It consisted of a camera below a translucent panel, and blocks carrying QR codes for the four sound types: bass, percussion, melody and voice banks. So intriguing that I actually took a photo of Olve and Jon playing with it.

David Sackstein presenting (with apologies for image quality)



Day 3:
David Sackstein’s presentation on Coroutines was given to a packed audience. Although it was fascinating for its content, I found it more so for its effect on the audience. The coroutine idea is not new, as anyone who used to use setjmp and longjmp will know.

When David explained that a coroutine hands off control to another context (or stack) while the initiating one is held, there were the inevitable cries of: “But that means things are not being done in a multi-threaded way!” The noise of pennies dropping was almost deafening as he explained: “But that is what the operating system is doing for you anyway in a multi-threading environment! It simplifies your code to be more explicit about the handover points.”

Classic! Yeah I know we have multiple cores but if you don’t grok how schedulers work on a single processor you are asking for trouble before you get anywhere near parallel processing.

The great thing here is that there are great simplifying techniques to make the scheduling far more explicit in people’s heads – which is wonderful for educating folk about multi-threading. In this case coroutines simplify the thinking needed to cope with processing in two different contexts, which means I may be able to simplify all my ugly callback code.

This thread (pun intended) during the conference was repeated in other presentations, so it seems like the community is getting its head around asynchronous programming. Maybe I will get more people who survive my interviews when we reach the multi-threading questions! I can hope.

There was also a great, and courageous, talk by Peter Hilton about how to name things. A great reference here was to George Orwell’s essay, Politics and the English Language.

Day 4:
On the final day I really enjoyed Roger Orr’s talk “Coding without Words”, questioning when we should name things rather than rely on C++ features such as lambdas and auto. It helped me realise that using a name instead of an implicit language construct is worthwhile when that name denotes an important piece of knowledge in the application domain or the surrounding code. For me this presentation epitomised the link between technology and philosophy really well.

Roger’s talk also allowed me to unbend my mind after listening to Jonathan Wakely talk about the importance of Application Binary Interfaces (or ABIs). Really interesting, really important and really good – but ouch! Oh yeah – for the programmers out there – in his lightning talk he said PLEASE do NOT use leading underscores in the macro names of your header file include guards – as a library writer that is HIS namespace you are encroaching on! Good one.

The final keynote by Chandler Carruth of Google was an exposition of the set of tools, or “ecosystem”, around the clang compiler.

All in all a great conference. Now all I need to do is go and get my head back into normal mode again – whatever that is!
Until the next time…

ACCU2015: Day1: Bits & Pieces

So it is rather late at the moment. Just had a great evening at a local Tango class with some great live music. Absolutely & totally sublime and completely different to the Nerdfest!

However I did retire to the bar on getting back from the dancing – it is important to network – to find that it was still buzzing. Had a great chat with a guy who works in CFD (Computational Fluid Dynamics), even though it was midnight! Amazing how you can talk about multi-processing late at night after a beer. Or have I been doing this job too long?

As for the day’s conference proceedings: Because Jim Coplien is recovering – thankfully – from a serious illness, today the first keynote was by Pete Goodliffe about “Becoming a Better Programmer”. It was OK but not too many new points for me. Just a light presentation. The main points I took away were:

  • Money incentives work against producing better software. They are different targets. (A subject close to my heart)
  • The four levels of the Maslow Competence Hierarchy (not the needs one): Unconscious Incompetence (dangerous!), Conscious Incompetence, Conscious Competence & Unconscious Competence.
  • The Dunning-Kruger effect: the cognitive bias whereby experts under-estimate their ability while novices over-estimate theirs.
  • Of course there was also the obligatory mention of the Dreyfus model of Skill Acquisition, which gives a good breakdown of the learning levels, along with the observation that it takes 10,000 hours (about 10 years) to become an expert in something.

Next up I went to a talk by Seb Rose, a TDD advocate, on “Less is More”. He made the case for letting intermediate test changes have less ‘fidelity’ to your final test intentions during development, allowing you to converge faster on the final implementation. This left me wondering about the whole stepwise approach to knowledge generation implicit in TDD. Sometimes knowledge does not arrive in this left-brain way of small steps; sometimes there are massive jumps as we tear down old structures and remake them. And whatever happened to the differentiation between growth & development? Subsequent conversations with other participants showed that I was not alone in thinking about this, although, of course, TDD does have its place.

One great point that Seb raised was the importance of saying “I don’t know”, which can be a difficult thing for anyone who is supposed to be considered competent. We need to assume we are initially ignorant and be happy in accepting it.

He cast aspersions on Planning Poker saying that in his experience it has always been wildly inaccurate. For light relief he showed an image of these cards from LunarLogic.
LunarLogic No Bullshit Cards
The main point I took away from this talk was just how much software development is a knowledge generation process (i.e. epistemic) and how we need to be clear about the next smallest question we need answering.

After lunch I split my attendance between a talk called “Talking to the Suits” and a more techie C++ one about the issues in converting the Total War game from Windows to OS X, which was mainly about the differences between the Microsoft and Clang C++ compilers (the Microsoft compiler is much more accepting – which might not help, since it won’t detect errors in template code if the template is never instantiated).

I will only mention the “Suits” talk because there should be a video of the Windows to OS X one.

I came away from the “Suits” talk with some great clichés, e.g. Quality is not job number 1; Business loves legacy. The basic point here is that we have to be more explicit about the quantitative costs/gains when making a case for any technical work. We cannot assume that business leaders will be able to understand the ins and outs of things like IDEs, Technical Debt, etc. A good call to make sure we communicate more effectively about such things. A good idea here was the “Problem : Solution : Results” guideline when presenting information. For example: Problem: “Even simple changes take a lot (quantify) of time”; Solution: “If we improve the design of this part of the system”; Results: “we will save weeks of effort (quantify) when we add new workflows”.

That is probably enough for now since I really need my beauty(!) sleep. I have also put my name forward to give a Lightning Talk on “My Thinking is NOT for Sale”. Oh dear. I need to sort out the slides for that now!

Till later…

ACCU2015: Nerdfest Supreme

Well, another ACCU conference is here and I have just arrived at the hotel today and am settling in, meeting some friends and listening to a talk by one of them, Michael Feathers of Object Mentor.

I am not presenting this year because I do not have my thoughts straight about some of my latest thinking – namely “My Thinking is NOT for Sale” – especially how I can even get our current business culture to start thinking about knowledge work in such a way. Although there are some signs.

One of the interesting things about these conferences is hearing some of the more vocal and opinionated people hold forth. Initially I start off feeling fairly intimidated, but then inevitably I find something to comment about and we end up having great conversations. In truth there is a nice balance between the more vocal folk and the timid ones. But here good logical thinking always gets respect regardless of how quiet or noisy you are – unless of course the participants have stayed too long and late in the bar! Although thinking about it, John Lakos IS a force to be reckoned with!

Unfortunately another person I really really wanted to talk to was Jim Coplien. He was scheduled to deliver a keynote, but I was told he will not be attending due to illness. I was hoping to have heard more about his DCI work (See also his blog post and this video)

But back to Michael’s talk, which was called “Organizational Machinery around Software”. He was arguing for making the code architecture primary and structuring teams around that architecture. Basically saying to flip Conway’s Law and use it as a lever to get better results rather than having it inadvertently mess up your design because you did not structure your teams in the right way. The basic concept is simple. The implementation and convincing of management may be another thing entirely but it is an interesting view of basing your team structure architecturally rather than perhaps by market segment, or in some other way.

One of the things that he said was that we could take lessons from how the military manage personnel rotating through their teams (or crews I guess), and that business could do the same. As you might guess if you read my blog, I find such a thought unsettling, primarily because there is validity in what he says: business is usually run on military lines, whereas I consider there to be a difficult tension between trying to write quality software and its usual economic context. More thought required…

I am looking forward to tomorrow, although I shall be breaking away from the conference in the evening to partake of some Argentinian Tango at some local classes! I might see if I can get some of the techies to come along – could be interesting… This will of course mean that any blog post may be delayed.

Here are some of the notes I made about the talk:

  • Being too conservative with code mods (i.e. not refactoring) will cause a very fast deterioration of the codebase.
  • Interface cruft. Easier to change code either side of an API, rather than mature the API.
  • Legacy comment.
  • Interesting research from Robert Smallshire in the audience: Developer half-life is 3.5 years. Code half-life is 35 years. (See this presentation)
  • We should be able to visualise a system architecture for ANYONE in the organisation to understand. Not just the techies.

Until next time…

My Thinking is NOT for Sale

It’s 2 o’clock in the morning and I find I cannot sleep. A thought that is so off the wall has been gripping my mind for a while now, and I am finding it more and more relevant to what I have seen happen during my career as a programmer.

The title is worth restating:

My Thinking is NOT for Sale

This is not so much a shouted response to all those times that good technical effort has been driven carelessly under the steamroller of prevailing economic needs – usually those of the money-swallowing monsters that are most companies – as it is a statement of an underlying truth, if only I can express it well enough and in shorter sentences. So here goes…

If you pay for software you will not get what you need. In fact you CANNOT buy software because it is not a finished product. The current economic model we have just does not fit and I believe this is why there is so much trouble in this area.

What is important about good software development?

Over my 30-odd years of work the primary creative and energizing point has been the interaction between the developer and the actual user as a system has come into being. The best of it has been the conversation between the two as they navigate the area of the user’s needs. If the developer is skilled, both technically and personally, they help facilitate both parties in mapping an unknown area, probably only vaguely expressed in the “wants” that the user can currently identify.

This is a conversation of human discovery in thinking.

It is priceless.

It is a gift.

It is a Free process. Capital F.

It cannot be bought.
It cannot be sold.
It is NOT a product.

It only makes sense if the effort is freely given by the developer. The inner costs of doing this are so high that it requires a high level of motivation that can ONLY be internal. To try and shoehorn it into our current ways of thinking about money devalues the process and I think this is what is underlying the problems I have seen happen many times.

The kicker here is that it is likely that it can only be funded by gift money. That means that there can be NO LINK between the funding and the final “product”. I use quotes because that word is a misnomer of what is actually going on.

Unrealistic?

Just go and read a book called Turing’s Cathedral by George Dyson and you will see how the Princeton Institute for Advanced Study was funded by donation. This was where John von Neumann worked and developed the architecture that underlies modern computers.

The picture of how the whole current edifice of modern computing was birthed from gift money just blows me away. I find my thinking so bound up in the capitalist model that to separate the resource – i.e. the money to give time for people to think – from the product of that thinking in such a way shows up the illusion of the current funding models for such work.

Is that enough to allow you to see it? Truly?
If you can then maybe you might understand why I am having trouble sleeping because in my tossing and turning my feelings tell me it could change everything…

Or maybe this is all just a dream and I shall be sensible when I wake up.
Hmmmm.

Post-ACCU2014 Thoughts

My thinking has been working overtime since I attended and presented at the ACCU2014 conference in Bristol.

[The delay in producing another post has been due to a lot of rather extensive personal development that has been occurring for me. Add to this some rather surreal experiences with dance – clubbing in Liverpool being one particular – and you might understand the delay. But that will be the subject of a separate post on dancing – I promise!]

But back to thoughts subsequent to my attendance at ACCU2014…

The Myth of Certification

The Bronze Badge. Small but beautiful.
One experience that really got me thinking was a pre-conference talk by Bob Martin reflecting on the path the Agile software development movement has taken since its beginnings. He mentioned an early quote from Kent Beck that Agile was meant to “heal the split between programmers and management”, and that one of the important guiding principles was transparency about the technical process.

But then there was a move to introduce a certification for what are called ‘SCRUM Masters’, key personnel – though not project managers – in an Agile software development approach. The problem is that it is just too simplistic to think that getting a ‘certified’ person involved to ‘manage’ things will sort everything out. This is never how things happen in practice, and despite early successes Bob observed that Agile has subsequently not lived up to its original expectations.

The transparency that the Agile founders were after has once again been lost. I consider that this happened because the crutch of certification has fostered inappropriately simplistic thinking for a domain that is inherently complex.

My inner response to this was: Well what do you expect?

I very much appreciate and value the principles of Agile, but there is a personal dimension here that we cannot get away from. If the individuals concerned do not change their ideas, and hence their behaviour, then how can we expect collective practices to improve? As I experienced when giving my recent workshop, it is so easy to fall prey to the fascination of the technological details and the seeming certainty of defined processes and certified qualifications.

I remember a conversation with my friend and co-researcher Paul in the early days of embarking upon this research into the personal area of software development. We wanted to identify the essential vision of what we were doing. The idea of maybe producing a training course with certification came up. I immediately balked at the thought of certification because I felt that an anonymising label or certificate would not help. But I could not at the time express why. However it seems that Bob’s experience bears this out and this leaves us with the difficult question:
How do we move any technical discipline forward and encourage personal development in sync with technical competence?

The Need for Dynamic Balance

K13 being winch launched, shown here having just left the ground.
This was another insight as to why I enjoy ACCU conferences so much. There is always the possibility of attending workshops about the technical details of software development and new language features on the one hand, along with other workshops that focus on the more ‘fluffy’ human side of the domain.

I live in two worlds:

  1. When programming I need to be thoroughly grounded and critically attend to detail.
  2. I am also drawn to the philosophy (can’t you tell?) and the processes of our inner life.

Perhaps the latter is to be expected after 30 years of seeing gadgets come and go and the same old messes happen. This perspective gives me a more timeless way of looking at the domain. Today’s gadget becomes tomorrow’s dinosaur – I have some of them in my garage – and you can start to see the ephemeral nature of our technology.

This is what is behind the ancient observation that the external world is Maya. For me the true reality is the path we tread as humans developing ourselves.

Also we need to embrace BOTH worlds, the inner and the outer, in order to keep balance. Indeed Balance is a watchword of mine, but I see it as being a dynamic thing. Life means movement. We cannot fall into the stasis of staying at one point between the worlds, we need to move between them and then they will cross-fertilise in a way that takes you from the parts to the whole.

In our current culture technical work is primarily seen in terms of managing details and staying grounded. But as any of my writings will testify, there is devilry lurking in those details that cannot be handled by a purely technical approach.

Teacher As Master

So John - Do I have to wear the silly hat? Well Bill, only if you want to be a REAL glider pilot.
Another epiphany that I experienced at the conference was a deeper insight into the popular misconception that teachers are not competent practitioners. There is the saying that “Those that can – Do. Those that can’t – Teach”. So there I was in a workshop wondering if that meant that because I was teaching programming, was I automatically not as good at the programming? But then a participant highlighted the fact that this was not so in traditional martial arts disciplines.

Indeed – teaching was seen as a step on the path to becoming a master.

We – hopefully – develop competence which over time becomes implicit knowledge, but to develop further we need to start teaching. Teaching forces us to make our knowledge explicit and gives us many more connections of insight, helping us to see the essential aspects of what we already know. There may be a transitional time where our competence suffers – a well-known phase in learning to teach gliding, and a normal part of taking our learning to a higher level.

So I think the saying needs changing:
Those that can Do. Those that are masters – Teach.

ACCU2014 Workshop : Imagination in Software Development

A week ago on Saturday 12th April I facilitated a workshop at ACCU2014 on Imagination in Software Development which I am pleased to say – thanks to the participants – went very well.

Before the workshop I thought I had bitten off more than I could chew, having read through a lot of Iain McGilchrist’s book “The Master and His Emissary” and realising that using analytical thinking for such an exercise is very difficult. However thanks to my long suffering team at work giving me the chance to do a dry run, I was able to get feedback about what did and did not work and so ended up making some rather last minute changes. The final workshop format ended up being completely different to the dry run.

Before moving onto the exercises I gave a half-hour talk about the links between phenomenology; software; and brain hemisphere function, most of which in hindsight could have been left until after the exercises. My main objective, however, was to raise self-awareness about the participants’ internal imaginative processes.

I thought it would be good to highlight some of the primary ideas that came from the exercises, both in terms of the workshop’s preparation and its execution.

The need to get away from the software domain

The exercises in the workshop involved:

  • Listening to a story excerpt from a book.
  • Watching a film clip of the same excerpt.
  • Performing a software design exercise individually.

Each exercise was followed by discussions in pairs. It became abundantly clear that if you give a bunch of programmers a technical exercise, it will behave like a strong gravitational field for any ideas and it will be very difficult to get them to focus on process instead of content. Indeed during the workshop I had to interrupt the pair-based discussions to make sure they were talking about their own inner processes instead of the results of the design exercise I had given them! By reading a story and watching a film clip first it did make it easier to highlight this as a learning point since it was much easier to focus on internal process for the story and film clip.

Individual working instead of in small groups

The trial run with my team at work used small 3-4 person groups. I found that the team dynamics completely overshadowed their individual awareness. I therefore changed the format to make the core design exercise an individual process, followed by discussions in pairs. This had the desired effect of bringing their internal processes into sharper focus. The more you know about an area the more difficult it can be to “go meta” about it.

Some great insights from the participants

STORY
When listening to the story 3 processes were identified which occurred in parallel:

  • Visual – Picturing.
  • Emotional.
  • Logical – Probing.

FILM

  • The film was much more emotionally powerful, to the point of feeling manipulative.
  • But it was felt to be ‘weaker’ due to the imagery being concrete.

DESIGN

  • When performing the design exercise the ideas were experienced as a story, but as a sequential process rather than a parallel one.
  • The logical analysis required thoughts to be made explicit by writing them down otherwise it was hard to hold them in awareness.
  • There was a more conscious awareness of past experience affecting current ideas.
  • The initial analysis was wide-ranging followed by focussing down to the core ideas.

So if any of the participants make it to this page – I would like to say a great big thank you for getting involved.

Slide set follows:

People & Technology : The Boundary Problem : Home Life

If you have seen my earlier post you will know that I am concerned about our lack of awareness of the subtle effects of computer technology on our lives. My deepest concern is about the effects on young children so in this post I am going to talk about the boundaries my wife and I imposed on computer (and TV) use within the home and some of our experiences.

I have been a computer professional since before the early days of the “Personal Computer” boom when we could hardly contemplate that everyone would have their own computer! Many certainly did not even dream of the phenomenal proliferation of “microprocessors” that would take place. That was the word that was used a lot: microprocessor – which highlighted the fact that it was just a super-chip for the electronic nerds like myself. You hardly hear the word mentioned nowadays, but they are still there, usually called just “processors” although thousands of times faster and more powerful and with more fancy names like Core i7 or Phenom.

I also remember sitting at a screen (which was not integral to the computer) typing in commands well into the late hours at work. But at that time of day I was using an early computer game called “Adventure”, and if you really got into a pickle you would just keep getting the response: “You are in a maze of twisty passages, all alike.”, regardless of the command you typed. Such things used to happen at work since that was the only place with enough tech to run such games. Remember no Personal Computer – or PC – yet!

So I was aware from the early days about the addictive nature of this particular beast. Not only was the game playing addictive, but the programming was (and is) addictive. 4 hours can pass in the blink of an eye if you get “in the zone”. According to folks like Mihaly Csikszentmihalyi, author of the book “Flow”, this is because so much attention is required for the task that we do not have enough attention for noticing the passage of time. This is a central facet of computer technology. It sucks in your attention. No wonder there are social problems. How can you give attention to other people if your computer or phone is taking it all? But I am getting ahead of myself…

Back to home life – my awareness of this addictive nature of technology was shared by my wife and we both decided that it was inappropriate for our young children to use them. I know a lot of the world does not agree with me (yet!), and we knew that we could not stop them playing with computers at their friends’ houses, but we decided on the following rules:

Rule 1: No computers/mobile phones/electronic games AT ALL until the children were 12 or 13.
That’s right – none. Occasionally my work would require me to bring one home, but this was closed away in the box room and the kids were not allowed near it. Especially NO COMPUTER GAMES. Of course when they went to their friends’ houses they did have games, but in our house it was traditional toys: wooden train sets, building blocks, Lego and so on. [Also, due to our involvement with the local Steiner school we preferred natural materials over plastic. Hence our preference for wooden toys. I currently think plastic toys are ok, but the wooden ones have a nicer feel.]

Well the kids seemed to be ok with not having computers – but the next rule definitely caused complaints…

Rule 2: No TV.
Eventually we would watch DVDs once the children were 10 or so, but NO TV. And no DVD watching in the bedroom. In fact this was when we got the first family computer in the house to watch the films. If we wanted to watch a film we would congregate around it and have our evening sandwiches watching the film as a family. Why only films? Mainly to place a boundary around our viewing – with TV it is too easy to keep on watching just the next programme, and the next, and the next, and so on… We still have no TV despite me working in the business, and my wife and I are quite happy with that state of affairs.

So what about our experiences with this regime?

Certainly there was some complaining from both our daughter and son about how all their friends had these games, or could watch TV. But we were quite firm and simply said something like: “Yes I know my loves, but we don’t agree with that for you at the moment.”

As I said above, the games issue was not a problem, possibly because (a) I was so sure it was a bad idea and was very firm about it, or (b) they really enjoyed their own games that they would make up themselves. They both have wonderful imaginations and we have many happy photos of them playing without a computer in sight.

The “No TV” rule was more difficult, especially as we would go to their grandparents’ and they would be allowed to watch TV or a video. This was why we introduced watching family films at around 10 years. Even then, with such great imaginations, we had to be careful about the content: even age-appropriate films that seemed innocuous to an adult could have scenes that quite scared the children. I think adults too easily assume that the consciousness of children is very similar to their own.

There is an important story about our experience related to the “No TV” rule:

One day a friend of the children came around to play and had a shock when he could not find the TV! He was quite bewildered. Meanwhile our children got stuck in and started putting the wooden train set together. He just sat and quietly watched what they were doing, initially without taking part, until my son and daughter pulled him in and started showing him how to play. I was amazed and later found out that at home he was allowed unlimited access to the video player and would keep rewinding and replaying his favourite scenes over and over again. This boy had partially lost the “knowledge” of how to play! In the past this would have been considered a pathological problem, and I am convinced this is becoming more of an issue for the children of today.

If we fast-forward to the present day, both kids are now at university, both have their own laptops, both have them in their own bedrooms, both watch DVDs in their bedrooms. It is now a different phase of their life and they need to be part of the current culture, for it is to be their culture. We will see how it develops.

But perhaps some concluding thoughts:

Boundaries must be placed around our use of such gadgetry and in writing this post I have come to see that it is all related to Attention.

The Boundary Problem is giving rise to The Attention Problem.

Our social human communications should not take second place to our electronically mediated communications. You can see an earlier post where I talked about some of the problems inherent with the latter.

Attention is a special thing that we give to the world. Currently we are giving too much attention to our machines, when we need to give more of it to our fellow humans.