As I understand your post, the key reason for the lack of negative experience reports has to do with not wanting to dwell in the past or “blame” others for what went wrong.
The reason we have this feeling is that we are not able to write reports that don’t blame people or organizations directly. That problem is related to the fact that we have not found a pattern for a negative experience report that delivers new knowledge without blaming anyone. Are we missing something here? Science has been publishing negative-result papers forever. One of the most famous failed experiments led Einstein towards the theory of relativity. We badly need these negative experience reports, and we may still need to learn how to deliver them in a way that focuses on learning, not blaming…
There is an important difference between science experiments and software projects: the whole purpose of an experiment is to provide new information. If it does that, whether the result is positive or negative, it’s a success.
@Vasco: I hear you. And I do learn from my mistakes, as you and anybody else do. I think this is similar to how Einstein and Edison learned from their experiments.
But I also want to learn from your challenges and those of anybody else in the Agile community.
And those outside our community also want to know what we have solutions for, what we have tried and feel agile does not have a solution for yet, or where agile is not a solution at all.
Another thing that struck me when reading your responses is that I prefer to focus on the good things: what you give attention to, grows.
Another reason might be that it’s just no fun. There are thousands of ways to screw up a project, why would I want to hear about another one? What I really want to hear about is how to do better. And not just speculation, but real experience. And that would make it another success report, of course.
Thanks for this thoughtful post, Yves. I am still mulling it over, but here is a quick thought I had while reading: to me, failure implies giving up, stopping. With the Agile mindset, we learn from mistakes and move on, failing small-and-often and improving onward. That makes it difficult to know when something is a real failure, no? (Also, to know when it’s a real success.) Still, we gladly declare such a work-in-progress a “success” long before the story is over, don’t we? Something interesting I will be thinking about.
Simpler explanation: natural bias? Most agile conferences aren’t looking for negative experience reports since one of their aims is to promote agile. The same goes for individual agilists.
That being said I do recognize the problem. Failures are abundant in the industry and there are probably lessons to be learned there. Marginalizing or dismissing negative experiences is short term thinking. It prevents learning and creates the positive vibe you mentioned that some actually find repulsive. And that hurts the – for lack of a better word – agile cause.
Any healthy community should have a dash of self-reflection and integrity (self-esteem?) to recognize weak spots, acknowledge and improve on them.
@Ari: I don’t believe the explanation of natural bias. In the last year I discussed this at every agile conference I attended (and that was a lot of them), and I know a lot of agile coaches (and newbies) want more of these.
And we do discuss this amongst ourselves. Just not in conference sessions.
That’s good to hear! I am not a regular conference goer, but there has been similar talk in the Finnish agile scene.
I had a proposal for a talk at Agile 2009 about some good and bad experiences with a couple of Agile implementations that included offshore distributed teams. It was not accepted, despite the few comments on the conference website being very encouraging. It would not have been all negative, but it addressed problems I see in large organizations when they jump into Agile with little more than a few days of “training” and without sufficient management buy-in.
I do think negative reports are not well-received publicly, as the response I often hear is that the organization “didn’t do Agile right.” Of course, that is the same response I get when a waterfall-like project goes south: they didn’t follow the process “right.”
My own belief is that, in either case, they didn’t understand the process “right” in the first place, then tried to follow it, but “tailored” it in some way when it conflicted with established ideas they had about what was “right” or “practical.”
True enough for the learning, but “do it right,” for me, often covers up the root problem, which is “understand it.” If the latter is present, then “doing it right” means behaving congruently with that understanding rather than just imitating the form. This allows reasonable adaptation to occur, avoiding both the “purist” and the “…but” approaches.
Perhaps the key example is how Royce’s original article, in which the “waterfall” was diagrammed and actually rejected as useful, was so misunderstood. Somehow, a phased, sequential model with little or no iteration became the “right” lifecycle to emulate. Showing “due diligence” in this lifecycle meant significant early planning and analysis.
@Scott: I think part of why people misunderstood Royce is that the metaphor of a waterfall and his drawing were not in sync. (I know of no waterfall that flows back up – except for the ones in an Escher drawing.)
Well, in negative experience reports, I would be interested in hearing: I tried X, it did not work, and we think it was because of Y.
Then I would know that in circumstances with Y, I have to be careful if I try X.
My proposed talk was like that. It was about two large, distributed efforts to rewrite major sales support systems, with five Scrum teams in each one.
My planned presentation approach was to tell the story of each project’s early (first 3-4) iterations and, at the point a significant issue or decision point occurred, stop and let the audience suggest what they think might best have been done. Then I would reveal the decisions the organizations made, why they decided to do so, and what happened as a result.
It was going to be called “A Tale of Two Cities” and would have combined what I thought were beneficial and detrimental decisions.
My first inclination, with regard to attending negative or positive (subjective) experience reports, is: unless they clearly and objectively lay out the whole context, I’ll never understand what really happened there anyway.
Nonetheless, I have been to a few of those sessions. So, I’m thinking about how I *listen* to such sessions. I don’t tend to buy the whole “do it this way because it works” success story; instead, I’m looking for concepts, practices, (warnings?) to add to my tool kit, which I will use when they seem appropriate in my context.
It seems I can learn such tools from both success stories and failure stories. It’s not the success or failure that’s interesting to me, but the details of what they tried and how they did it. I’m looking for ideas.
It occurs to me: in that case, maybe we shouldn’t label stories as success/failure, but rather categorize them by the kind of problem they needed to solve: pure research, big project, greenfield development, maintenance, distributed team. That would help me more in choosing a session.
For XP Day London, we have only accepted experience reports that describe where things were difficult, how those difficulties were overcome, and what lessons were learned. We avoid reports that just tout how great it is to adopt some by-the-book agile process. I find that to be a reasonable balance: the reports communicate how things can go wrong but are not entirely negative. They celebrate (if that’s the right word) the teams’ ability to notice and solve problematic situations.
I had a similar experience to Scott’s – I submitted an experience report to Agile 2009 that was rejected: “Adoption Lessons from an Experienced Agile Manager at a Distributed Startup.”
I think it is useful to share failures: “Failure is instructive. The person who really thinks learns quite as much from his failures as from his successes.” – John Dewey.
Here is a short description of the talk: This is the story of an Agile transition gone bad. I was a development manager at a startup with a distributed development team. Although I had lots of experience with Agile adoption at other companies, this transition stalled and became yet another case of ScrumBut. You’ll learn about the challenges and warning signs. I’ll also talk about some of the challenges of coaching when you are the manager.