Breaking the ubiquity of Stream Mode

A blog post by Luis Suarez has served nicely as a catalyst to start crystallizing some thoughts from the last couple of weeks.


I’ve become increasingly aware of tensions I feel when I think about how I manage my personal sense-making. In hindsight the seeds were sown when taking Harold Jarche’s PKM in 40 days course. During that study I realised that although I “talk the talk” around PKM, mostly what I do is the “Seek” part of Seek-Sense-Share, with sharing only at the level of filtering a set of public bookmarks. My approach to sense-making is opportunistic, driven by the needs of the moment, and often quite ephemeral – knowledge is cast away to the depths of memory when not needed for the task in hand.

I’ve noticed a number of things, which I now suspect are related:

  • I become more and more convinced that email is toxic, yet find myself dragged back into using it by the unhealthy habits of those I work with. Although I find the collaboration in Luis’s #no-email Slack group to be a great support, I spent much of last year not participating
  • I’m increasingly aware of tensions whenever I think about long-form writing and thinking – a whole blog post feels like a lot of pressure! 🙂
    As an aside, I wonder if in fact my long-form thinking is being expressed in a different medium – code – although the job title might mislead you, I write quite a bit of code these days  – and “code is poetry” after all!
  • I find myself attracted to Federated Wiki – the combined timelessness of wiki ( = “no pressure”) with it being “my site”. The ability to quickly bang together related thoughts on pages which are never “finished” feels more accessible than the blank page, dated post, tyranny of the blog
  • I like the speed of Fargo (although the default blog style with dates is a bit reminiscent of timelines). Publishing is baked in, but if I plan to use this tool more I want my own server, not just to syndicate it.
  • Regardless of anything I say in this post, I’m still quite attracted by the immediacy of a Twitter timeline, the frequent updates on Facebook, the serendipitous comments that arise when I post the name of the film I am about to watch – I’ve realised that acknowledging that attraction is a key step in starting to build something else into my practice


Part of recognising an issue is to be aware of the feelings of discomfort, and part is having the right concepts to categorise what is happening.

I came across Mike Caulfield’s multifarious online presences when I started looking into Federated Wiki. The ones that have seemed most useful in this context are where he has drawn out the definition of StreamMode, contrasted with StateMode (blog, highlighted version). In his words,

“You know that you are in StreamMode if you never return to edit the things you are posting on the web”
(context, src)

Mike recognises that StreamMode may have some advantages, and Bill Seitz makes an interesting contrasting link to Tim Kastelle on Managing Knowledge Flow, not Knowledge Stocks:


“We end up hitting Twitter refresh like sad Skinner-boxed lab rats looking for the next pellet instead of collaborating to extend and enhance the scope of human knowledge.”
(context, src)

Or in the words of Jack Dorsey – “Twitter is live, Twitter is real-time” – both its strength and its weakness.

The Firehose is addictive – it makes us feel “in touch with the pulse” at the same time as it weakens our ability to pause and take stock – it is the refined white sugar of the knowledge world.

What’s Next

Change relies on motivation and a plan.

So you have an addiction – do you really want to fix it?

Why would I want to reduce the hold that StreamMode has over my online interactions?

Put very simply, if I’m going to spend time online, I want it to be useful in some way – personally, professionally.

And the plan? Reinforce positive behaviours, and manage those which are inefficient, unhelpful or not-fun.

Although I complained above about being dragged back into the toxicity of email I have had some success with “Working Out Loud”:

  • Using a product management toolset that, within the scope of a single-user licence, allows me to make my planning work visible across the company, and lets colleagues comment and create their own ideas for change (sorry for the plug, but I really like the tool)
  • With internal and external technical teams, reinforcing the use of DVCS repositories, and using comments and pull requests as a way of documenting our design discussions
  • Wherever I can, moving more general internal discussions to a combination of Yammer and internal blogs

So a big part of my strategy is to keep using these, and move more and more of my “inside the firewall” conversations to them.

Outside the firewall, in many ways I feel I have been going backwards, not least because outside the firewall is where the stream is so pervasive.

I think the first step is to take back some control, and of all the things I have read, Luis’s approach to gripping Twitter by the throat and bending it to his will is the most appealing.

So I’m planning my own version of the “Great Unfollowing”.

That will do for a start. There will be more on evolving practices, but another secret to making a change is not to try too many things at once.

And change often needs a public commitment – here it is!


The Architecture of Personal Knowledge Management – 1

Back in July Harold Jarche posted a useful deconstruction of the processes involved in web-based personal knowledge management (PKM). Building on this, and in order to make a lot of implicit stuff in my head explicit, I’ve started developing the model into a full mapping of processes to tools.

I’ve chosen to use Archimate as a modelling language, and as I develop the model offline I will be posting views of it to pages linked from this wiki page.

Harold’s model looks like this:

As I began to unpick Harold’s seven processes I realised that although they are primarily focused on “self”, one key to understanding them is identifying the different roles that “self” (and “others”) play. This aspect of the model so far is shown in the Introductory View:

PKM Architecture - Introductory Viewpoint

Alongside the work of developing models for each of the processes, I began to develop a view of the key information artefacts manipulated by the PKM processes.

PKM Processes - Information View

I’ve also created pages on the wiki for the first iteration of modelling the individual processes, linking them down to a core set of application services, and over the next couple of weeks I’ll write blog posts for those.

Comments welcome to help refine this modelling effort.

Links Roundup for 2007-01-03

Shared bookmarks for user Synesthesia on 2007-01-03

Links Roundup for 2006-03-22

Shared bookmarks for user Synesthesia on 2006-03-22

More about conversations and processes

I’ve a hunch that the conceptual models discussed in Jeremy Aarons’ new paper (as I summarised here) could be a useful lever for unpicking the dilemma I found when I wrote that I prefer conversation, but you need process.

In that post I was drawing on conversations with (amongst others) Earl, Taka, Jon and Ton about the apparent conflict between the desire we all feel as empowered, “wirearchical” knowledge-workers to have systems that support a collaborative and improvisational working style, and the rigid, dehumanised processes that many companies see as a necessary corollary of delivering consistent service.

The particular paradox is that some of us (ok, me!) have on many occasions required companies (typically suppliers of services) to demonstrate those sorts of processes in order to satisfy our demands for clarity and measurability, even though we recognise that we may at the same time be preventing them from delivering the sorts of innovation that would truly delight us.

I find that the Davenport model helps me understand what is going on here – the underlying assumption of companies that apply prescriptive processes seems likely to be that the work involved is on the left-hand side of Davenport’s diagram – the Transaction and Integration models.


The underlying assumption has to be that the nature of the problems faced in these areas does not require interpretation, but rather the application of rules and standards, possibly requiring multiple areas to work together but always within a set of rules. This is almost exactly the model underpinning frameworks such as ITIL.

The other thing that strikes me as I read the contents of the boxes in the model is that they match closely with some of the criteria used in job grading systems. The boxes at the left of the model contain descriptions which are usually associated with lower-graded roles. This would seem to support my assertion from experience that companies which base their core competency around deployment of such rigid processes are primarily concerned with containing costs and at the same time guaranteeing minimum levels of service from a transient workforce.

Work that can be described by the right-hand side of the model (e.g. the Collaboration and Expert models) is typically well-rewarded by job-grading schemes, pragmatic evidence that such skills are in relatively short supply. Professional services firms typically focus on reserving the efforts of these people for critical projects or areas requiring significant interaction. Such firms often also have (or desperately need) a core competence in taking the intellectual products of the right-hand side and “operationalising” them, i.e. turning them into formal processes and standards that can be scaled up and applied by the more numerous group of people paid lower wages to work “in the left-hand side”.

So far, so good – perhaps not a comfortable conclusion, but it would seem that the model works at least acceptably in certain situations. There is a certain basic business logic in reserving your most highly-skilled people for problems that need their attributes, whilst at the same time finding ways to manage the routine at a lower cost.

So where does the paradigm break?

I think there are at least two areas worthy of further exploration:

  • There is an assumption that the market such firms supply will largely pose routine problems which are amenable to a rules-and-standards approach – where does this break down?
  • Underlying the concerns that were expressed in the earlier conversation is a belief or hope that by finding a more integrative approach to knowledge work there is the potential of finding ways that are more rewarding in either a commercial or human sense.

 Ideas for later posts…

Integrating thinking and doing

Jeremy Aarons has blogged the draft of a new paper, Supporting organisational knowledge work: Integrating thinking and doing in task-based support by Jeremy Aarons, Henry Linger & Frada Burstein.

They start by referencing Davenport’s classification structure for knowledge-intensive processes, which analyses knowledge work along the two axes of complexity and interdependence:


Davenport’s classification structure
(Davenport (2005) via Aarons (2006))


However they then go on to criticise this as an analytic model on the grounds that complex work often fits into more than one box. In particular, they suggest that work which (by the Davenport classification) sits largely within the Integration Model often has elements requiring significant precision and judgement from individuals – in other words it mixes in work from the Expert Model.
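Davenport’s two-by-two can be captured as a small lookup – a hypothetical sketch (the book contains no such code), with the quadrant names taken from the classification structure above. It also hints at why the critique bites: real tasks rarely supply one clean pair of coordinates.

```python
def davenport_model(complexity: str, interdependence: str) -> str:
    """Map Davenport's two axes to his four knowledge-work models.

    complexity: "routine" (rules-and-standards work) or
                "interpretation" (judgement-based work)
    interdependence: "individual" or "collaborative"
    """
    quadrants = {
        ("routine", "individual"): "Transaction Model",
        ("routine", "collaborative"): "Integration Model",
        ("interpretation", "individual"): "Expert Model",
        ("interpretation", "collaborative"): "Collaboration Model",
    }
    return quadrants[(complexity, interdependence)]
```

The left-hand side discussed earlier corresponds to the two "routine" entries; Aarons, Linger and Burstein’s point is precisely that much real work needs more than one key of this dictionary at once.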

They suggest then that a more appropriate guiding framework is Burstein and Linger’s Task-Based Knowledge Management, which considers knowledge work as an inherently collaborative activity which mixes pragmatic “doing” work into a conceptual “thinking” framework. In this approach the focus is on supporting rather than managing knowledge-work. The authors express this using the following diagram:



A task-based model of work
(From Aarons (2006))

The rest of the paper is devoted to a case study within the Australian Weather Service which supports the mixed approach, and yields examples of failed business systems which focussed only on the forecast-production aspect of the forecasting task. These are compared with a successful and hugely-popular system which started as a maverick, ground-up project and which expressly addressed and supported the creation and maintenance of conceptual models of weather. This system, which is now the system of choice, only addressed the production of output forecasts as a piece of auxiliary functionality.

More on Business Strategy Patterns

Allan Kelly commented on my post from last year about the possibilities of using pattern languages to describe business strategies, to point out that he has done quite a bit of this already.

So far the only paper I’ve had a chance to read is Business Strategy Patterns for The Innovative Company, which is a set of patterns derived from “Corporate Imagination and Expeditionary Marketing” (Hamel and Prahalad, 1991). In this Allan derives:

  • Innovative Products
  • Expeditionary Marketing
  • Separate Imaginative Teams

Apart from the patterns themselves there were two things I found interesting about this paper:

Firstly, Allan describes a rather rough ride he received at VikingPLoP 2004, where apparently a lot of negative attention was focussed on whether there was “prior art” for these patterns in the pattern field. I think there is something here that any autodidact will feel empathy with. Whereas the scientific community (rightly) puts a lot of emphasis on whether something is new knowledge, in the world of applications there is at least as much value in “new-to-me” knowledge, or even “applications of existing knowledge in a new context”. To me patterns and pattern languages fall firmly into the camps of education, application and transference between domains, not the camp of new knowledge creation. Given that, an over-obsession with “prior art” would seem to be rather inward-looking.

Secondly, Allan goes on to elaborate how his understanding and view of patterns has developed and changed, especially as a result of reading “The Springboard” (Stephen Denning, 2001), and “Patterns of Software” (Dick Gabriel, 1996) and that he now sees them as a particularly-structured form of story about a problem domain. I find this an appealing viewpoint, as it harks back to the fundamental way that human beings pass on knowledge, through the telling of stories. Of course, the nature of stories is that each person who retells a story does so in a subtly different way, and over time the story changes. Extending the simile, patterns too will change over time in a two-way exchange of knowledge between the pattern and the environment of the current user, so to say that a particular pattern is derived from (but not the same as) an earlier pattern is merely to state that evolution has occurred.

Update: Allan’s latest paper Strategies for Technology Companies has more on his interpretation of patterns as stories.

Links Roundup for 2006-03-06

Shared bookmarks for user Synesthesia on 2006-03-06

I prefer conversation, but you need process

I think I’ve just caught myself out in a “one rule for me, another for you” attitude over something… A conversation across several blogs made me realise that I was facing both ways on an issue and hadn’t acknowledged it – oh the power of the internet!

Earl Mardle posted about Information Architecture as Scaffold based on a conversation with Ton (More on Ton’s position here). The gist of the view expressed by Earl and Ton is that all this “knowledge” that companies are seeking to “manage” is really only accessible through relationships, and once the relationship is established then the information that was part of the initial exchange is no longer relevant:

And that, my friends is what information does; it provides the scaffold that bridges the gap between people. A bridge that we call a conversation. And once you have built the bridge, you can take away the scaffold and it doesn’t make any difference, the conversation can continue because it no longer has any need for the information on which it was built, it has its own information; a history of itself, on which to draw and whenever the relationship is invoked, it uses any old bits of information lying around to propagate itself.

Earl then expands his view that in the real world of work, when you need to create some kind of output, you do it based on your own knowledge and the knowledge of your team, rather than through re-purposing some previous piece of corporate “knowledge”.

Several of us joined in the conversation in support of the view – in particular I made the point that the key thing that stands in the way of re-using the typical corporate knowledge artifacts (i.e. documents) is the lack of contextual information about why they were created in the way they were. A good provider of context would be a record of the conversations that happened around the document creation (e.g. through blogs and wikis) but that is still too difficult to add on if it requires people to learn new tools.

As a good counter to all this virulent agreement, Taka disagrees strongly with the concept of information as scaffolding around conversations – in his view the information is the conversation, and the scaffolding is the network of relationships that enables the conversation. That’s probably a difference of opinion over the meaning of words; where it gets interesting is what Taka goes on to say:

This is what I call the McDonalds question: how do you get low-skilled, inexperienced trainees to consistently produce hamburgers and fries to an acceptable level of quality? Process. And it’s the same thing in a corporate environment: how do you get people, who generally don’t really give a toss about what they’re doing, to write proposals and reports and all the other guff to an acceptable level? Document templates and guidelines.

Corporate KM and other such initiatives are our typically short-sighted attempt to find technical solutions to what is actually a people problem. There are plenty of people selling solutions and processes and methodologies to “fix” the information management issues that exist within companies because it’s an easier problem to tackle than the real underlying issue: how do you get people to actually give a damn about what they’re doing?

Which Earl extends and restates:

Underlying what I was talking about in the other post is to make explicit that very fact; organisations that think of their people as fungible will be led inexorably down the path of document management and “knowledge capture” solutions that will not help them survive, and they don’t deserve to.

The kicker for all this came from Euan Semple the other night who told me about a company rep who asked him, “how do you stop corporate knowledge leaving with the person?”

So, to reiterate a point that might have been a bit buried in the verbiage, organisations with a future do not need KM systems because they have active, engaged people who know what the hell they are doing.

And that is where I did the metaphorical forehead-slap.

Because I’m all for work practices based on conversation and shared context where they involve me or my colleagues – of course we are wonderful knowledge-workers who thrive in such an environment! But, as I realised, when it comes to speaking with suppliers of IT services, or designing how our organisation should inter-operate with their organisations, it’s always about process.

In part that’s about how they work: when I am in that purchasing role, how they deliver good, consistent service to the company I am representing is not directly my concern; being sure of what they deliver is. But I’m sure we throw out quite a lot of baby with that bath water. We struggle to find ways of getting the sort of human, responsive service we want at a price we are prepared to pay.

So why is this a problem? The clue is in the words I used – “good, consistent service”. The whole world of out-sourced services companies is about consistency. The way services are usually measured –  “x% of faults fixed within y hours” – is about aggregation, statistics, removing variability. The companies who supply these services, in their turn, are looking for ways to meet those contractual arrangements that allow them to make a profit. The major costs in any service are the people who deliver it, so inevitably there is downward pressure on salaries and a drive to make everything a process that can be automated as far as possible.
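Measures like “x% of faults fixed within y hours” reduce a service to a single aggregate. A minimal sketch of how such an SLA figure is computed (the fault data and the eight-hour threshold are invented for illustration) makes visible everything the number discards – which faults, which customers, how near the misses were:

```python
from datetime import timedelta

def sla_percentage(fix_times, threshold_hours):
    """Percentage of faults fixed within the threshold.

    fix_times: timedeltas from fault report to fix.
    The single number returned removes all variability -
    exactly the aggregation the text describes.
    """
    within = sum(1 for t in fix_times if t <= timedelta(hours=threshold_hours))
    return 100.0 * within / len(fix_times)

# Hypothetical month of faults: four quick fixes and one ten-day saga.
faults = [timedelta(hours=h) for h in (1, 3, 5, 7, 240)]
print(sla_percentage(faults, 8))  # 4 of 5 within 8h -> 80.0
```

An 80% figure meets many contracts, yet says nothing about the customer who waited ten days – which is the baby thrown out with the bath water.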

In that sense, modern out-sourcers truly are the last bastions of Taylorism. Almost as a foregone conclusion, there is low job satisfaction in these bastions of “service”, leading to high turnover of front-line staff, leading in turn to increased management pressure for process and consistency.

I think there are several conflicts at work here:

  • Be consistent v. Delight the customer
  • Maximise productivity by using low-skilled staff v. Maximise productivity by supporting people to use all of their skills and knowledge
  • Protect the service against staff turn-over v. Protect the service by creating an environment where people want to stay and grow
  • Get the lowest cost service from suppliers v. get service that truly helps your business
  • and probably some more…

The simple answer to all of this seems to be “work in small teams” and only use small suppliers, but it’s not clear to me how that scales. When I think about small teams, I can see how a wirearchical approach works when there are several companies involved (in the limit, several individuals), but again, I feel various mental blocks when I think about scaling that. I’m still struggling with these, and other dichotomies, which is probably a good sign that it’s time to draw the CRT! Food for a later post I suspect.

Links Roundup for 2006-02-28

Shared bookmarks for user Synesthesia on 2006-02-28