How to run a technical workshop

I am frequently in charge of running workshops where a group of stakeholders, often experts in their field, come together to agree on how something works or is supposed to work. This happens in the context of language design, where domain experts try to find abstractions that should go into the language to faithfully cover the domain. But it also happens when trying to decide how a particular software system should be architected in order to fulfil certain requirements. Here are some thoughts on how you should approach running such a workshop.

People Stuff

As a moderator, you should do two things. The first one is actually running the meeting. This aspect is mostly about people. It involves

·       ensuring that everybody gets to speak and that no one dominates the discussion or intimidates others into silence,

·       making sure you all stick to the topic at hand,

·       capturing decisions, results, and open issues that should be covered later (potentially in another meeting),

·       detecting and shutting down attempts to “sabotage” the meeting for reasons of ego or company politics,

·       and uncovering people’s attempts to mis-steer the discussion by putting up strawmen: unrealistic, simplified, or generalized statements about the issue at hand.

This is a lot of work, but it is necessary to make the meeting productive. Of course, a clear agenda and appropriate preparation by everybody involved are further factors for a useful outcome. I think everybody will agree on this part. It’s basic moderation skills.

The Moderating Analyst

Now, my text here is not about meetings in general; it is about “workshops [that] come together to agree on how something works or is supposed to work”. In other words, the point of the meeting is an often non-trivial technical analysis. Running such a workshop puts additional requirements on the moderator.

To put the bottom line up front, IMHO you have to be a “moderating analyst”, not just a moderator. It is your responsibility that the result of the discussions makes sense, is consistent and, to the degree it can be known, correct. So what does this entail?

First, you have to be somewhat competent in the matter at hand. When running a domain analysis, you have to know at least the basics of the domain. When trying to come up with an architectural blueprint, you have to roughly know the requirements, and you should know software architecture in general. This is necessary to be able to make sense of what people say, and to detect strawmen.

A potential risk of a moderator who is competent in the topic is that — because of their experience — they are not neutral. Or at least they might not be seen as neutral, which is just as bad. There might be actual or perceived bias. For example, when I run a domain analysis workshop, I am probably biased on whether one should use a DSL to capture the domain or not. I’d like to think I am objective enough not to suggest using a DSL when it is not appropriate, but even if the participants *think* I am biased, then I will have a hard(er) time arguing for using one. So, to minimize this problem, a moderating analyst should be open about their background and potential bias, and actively encourage people to help “detect” when they make statements that are (suspected to be) driven by this bias.

Good and Bad Quarrels

I have recently been a participant in several such workshops, and some of them featured quite strong disagreements among participants about how a particular domain problem should be addressed. Whenever these disagreements started to flare up, the moderator stepped in and stopped the quarrel. From a “generic moderator” perspective, this makes sense. But from a technical perspective it did not, because a) some of these spirited disagreements led to additional insights about the domain, and b) cutting them off meant that some fundamental disagreements remained unresolved and therefore continued smoldering, undetected, flaring up over and over again and disrupting the workshop over days. So what do you do?

If such arguments come up because of ego or politics, you have to detect this and shut it down, especially if it comes from a loud minority. If the disagreement is legitimate (which you must be able to tell!), you should either try to resolve it using the techniques I outline below, or you might want to spin it out into a smaller workshop with only the disagreeing parties.

As a corollary, once an agreement has been reached on something, make sure that everybody understands that there is an agreement, and on exactly what. For example, you could say: “Ok, so we agree on XYZ. Any objections?” This is really important. First of all, it gives everybody a moment to ponder whether they really do agree. More importantly, if and when the question is brought up again later (potentially in bad faith), you can refer back to the agreement: “Hey, we agreed on XYZ before. Why are you bringing this up again?” There are of course good reasons for bringing something up again: maybe there were different understandings about the generality of the agreement, or maybe some corner cases were missed and the agreement is therefore not feasible. But quite often, such re-questioning of an agreement is a sign of one of those undetected brushfires flaring up, caused either by a different worldview (which needs to be addressed) or by personal/political concerns (which must be shut down and then solved on another level).

So then: how do you ensure the outcome of the workshop is useful and “correct” — to the degree it is possible to tell?

Build a Model — and then challenge it!

I think that as a moderating analyst you have to build a model of what it is you are agreeing on: a consistent and complete representation of the topic you are discussing. By this I don’t necessarily mean a formal model in the sense of a language definition or a UML diagram, although these might be useful in some cases. Clear textual statements or diagrams are often enough.

As people explain things about the subject matter, it is your job to detect holes in the argument, uncover suspected corner cases and point out contradictions with what has been said or even agreed on earlier. A consistent model — both in your mind and in the form of diagrams, notes, an actual formal model or even a piece of prototypical software — helps you do this.

Models, by their very nature, are abstract. In my experience, some workshop participants can’t deal with too much abstraction. In this case you have to illustrate what the model means by giving examples. Importantly, you should also give examples of what the model does NOT imply. Both positive and negative examples should be driven by your understanding of the domain and previous discussions, contentious cases, or disagreements. You have to actively (encourage others to) try to shoot down the model!
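To make this concrete, here is a minimal sketch of what an executable model with positive and negative examples might look like. The domain (a made-up insurance tariff), the rules, and all names are purely illustrative assumptions, not taken from any real workshop; the point is the pattern: encode the rules the group has agreed on so far, then attack them with examples the model must accept and examples it must reject.

```python
# Hypothetical domain model for illustration only: a tiny insurance
# tariff with two rules the group has (supposedly) agreed on.
from dataclasses import dataclass


@dataclass
class Tariff:
    base_premium: float   # agreed: must be positive
    risk_factor: float    # agreed: must lie between 1.0 and 5.0

    def premium(self) -> float:
        return self.base_premium * self.risk_factor


def is_valid(t: Tariff) -> bool:
    """Encodes exactly what has been agreed so far -- nothing more."""
    return t.base_premium > 0 and 1.0 <= t.risk_factor <= 5.0


# Positive examples: cases everybody agrees the model must allow.
assert is_valid(Tariff(base_premium=100.0, risk_factor=1.5))
assert Tariff(100.0, 1.5).premium() == 150.0

# Negative examples: cases the model must reject. If a participant
# insists one of these should be valid, a hidden disagreement has
# just surfaced -- which is precisely what you want.
assert not is_valid(Tariff(base_premium=-10.0, risk_factor=2.0))
assert not is_valid(Tariff(base_premium=100.0, risk_factor=7.0))
```

Even a throwaway script like this forces the agreed rules to be stated precisely, and the negative examples document what the model deliberately excludes.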

As you make progress, this model should be what you agree on. Remember when I said above: “Ok, so we agree on XYZ. Any objections?” This is how you get to the XYZ!

Wrap Up

So, summing up: I suggest that in order to run workshops that are really technical (in the sense of a domain or a software engineering topic), a “normal” moderator is not enough. You have to be both a classical moderator and the moderating analyst. This might conflict with a perceived degree of neutrality and with “letting the team come to a decision”, but I find it is much more productive. This is especially true if the participants are not good at building such consistent and complete models, a situation I find myself in quite often.