Developer! rid your wip!

October 14, 2010

I’m big on eliminating waste, I’m big on minimizing Work In Progress (WIP).

Here’s my story: a colleague asked me to pair with him because he broke the application and he didn’t know why. So we sat together. My first question was:

– “Did the app work before you made your changes?”
– “Yes”
– “Ok, show me the changes you made”

I saw ~150 unknown files and ~20 modified or added files. I said:

– “That’s a lot of changes. I bet you started doing too many changes at once so you ended up with a lot of WIP. Can we revert all the changes you made?”
– “Errr… I guess so. I know what refactoring I wanted to do. We can do it again”
– “Excellent, let’s clean your workspace now”

We reverted the changes. We also discovered that some core “VCS ignores” were missing (hence so many unknown files in the working copy). We fixed that problem by adding the necessary “VCS ignores”. I continued:

– “Ok, the working copy is clean and the application works OK. Let’s kick off the refactoring you wanted to do. However, this time we will do the refactoring in small steps. We will keep the WIP small by doing frequent check-ins on the way.”
– “Sounds good”

Guess what. We didn’t encounter the initial problem at all. We completed the refactoring in ~1 hour, doing several check-ins in the process. The previous attempt at the refactoring took much more time due to the few hours spent debugging the issue.

Here’s my advice for a developer who wants to debug less & smile more:

  1. Understand what your work in progress is; keep WIP under control; keep it small.
  2. Develop code in small steps.
  3. Keep your workspace clean; avoid unknown or modified files between check-ins.

the net value of tests

July 8, 2010

[Please, read me carefully. If at any point in time you think I’m preaching to stop writing tests then STOP reading further and FORGET what you read so far. Thanks!]

One might think that tests only bring value, that is, muchos dolares. If that is the case, then why do many teams stop maintaining their tests at some point? Or @Ignore a test forever when it is too difficult to fix? Isn’t that like throwing away ‘value’? Why on earth are you throwing away $$$!?

The thing is that tests bear a maintenance cost: keeping them green, refactoring them along with the code, etc. Let’s talk about the net value of a test: Net = Value – Cost. If you write crappy tests then the net value might be negative. Since I have visited quite a few archaeological sites digging in legacy code, I have seen many tests with negative net value.

When half of your team wants to run ‘svn delete’ on the ‘FitnesseRoot’ folder, it might mean your tests’ net value is plummeting. Obviously, fitnesse is just an example; any tool can be a blessing or a curse depending on your skill in misusing it. (Dear god of Agile, I swear I love fitnesse… but only with GivWenZen).

Let’s think about it. I’ve got 3 tests with respective values I conjured using cosmic energy.

Test 1. Adds an article to the content db. Value: 50$, Cost: 20$
Test 2. Adds an article & edits the article. Value: 50$, Cost: 20$
Test 3. Adds an article & edits it & deletes it. Value: 50$, Cost: 20$

The cost is relatively high compared with the value because those tests are deep functional tests that require the whole system to be set up; they use funky frameworks to drive the web browser; they are kind of painful to debug & maintain.

Total cost is easy: just sum up the costs and be scared by 60$.

Total value is tricky. To your great surprise (hopefully ;)) it is not 150$… but 55$ !!! Why? Read test #3 again – it covers the same cases as tests #1 & #2, so the value of those tests does not really add up! I threw in 5$ extra because having the functionality in separate tests gives us better defect localization when tests fail. Hence 55$.

This means the net value of tests is: -5$ (minus five dollars).
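To make it concrete: in a case like this I’d rather fold the three scenarios into a single end-to-end pass and leave defect localization to cheaper unit tests. Below is a rough sketch of that idea in plain JUnit. Keep in mind the real suite was fitnesse tables driving a browser, so ArticleScenario and its methods are made-up stand-ins, not anybody’s real API:

```java
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Sketch only: ArticleScenario stands in for whatever machinery boots the whole
// system and drives the browser (in the real project: fitnesse + GivWenZen).
interface ArticleScenario {
    void addArticle(String title);
    void editArticle(String title, String newBody);
    void deleteArticle(String title);
    boolean articleExists(String title);
}

public abstract class ArticleLifecycleTest {

    // the concrete suite plugs its own (expensive) setup in here
    protected abstract ArticleScenario app();

    @Test
    public void addsEditsAndDeletesAnArticle() {
        // one end-to-end pass covering what tests #1, #2 and #3 paid for separately
        app().addArticle("net value");
        assertTrue(app().articleExists("net value"));

        app().editArticle("net value", "updated body");
        app().deleteArticle("net value");
        assertFalse(app().articleExists("net value"));
    }
}
```

You pay the heavy system setup roughly once instead of three times, and when the merged test fails you drill down with cheap unit tests rather than re-running three browser sessions.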

Funny enough, the tests mentioned are a real case from a gig I was at a few years ago. I argued with the QA lead that keeping those tests separate does not make sense. To my despair, I failed. 6 months later nobody was maintaining the functional suite of web-clicking tests in this project.

My message distilled:

1. Understand the net value of tests.
2. Look after your tests as if they were your newborn daughter!

Now go back to the wilderness and keep writing sane tests!


getting agile via meetings

June 22, 2010

There was a 1-hour agile presentation I attended some time ago. I asked one of the other attendees for comments. She said: ‘the whole presentation was saying “communication is good“. Do we really need an hour to elaborate on it?’. I don’t know the answer but I like her punchline: ‘agile is about communication’.

So what do we do to facilitate ‘communication’? Well… we set up meetings. Meetings. Some more meetings. And more. Now we feel more agile.

Adding meetings to the process does not make the team agile. Remember the manifesto? Individuals & interactions over processes & tools.

Meetings don’t add value. Meetings are useful, sometimes extremely necessary. Nevertheless, they don’t add value. They are your coordination cost. What adds value, then? Coding a feature adds value… so long as you code the right thing. Information discovery (aka testing) adds value. Meetings do not.


The book is out!

April 1, 2010

Toni, Felix and the gang invited me to contribute to a brand new book. The Practices of the Proper Christian Programmer will be available on Amazon shortly!


remote retrospectives

September 21, 2009

Ideally, I wouldn’t do retrospectives remotely, but many times I just have to. I live in the era of distributed teams. If this is not enough, there are usually more prosaic reasons: people working from home, lack of a decent conference room, etc.

Many times I have to help with remote retros where people dial in from their desks, homes and remote offices. They don’t see each other; they only listen to each other. And there’s a gotcha.

The quality of the retrospective depends a lot on communication. We know that a large percentage of communication is contributed by… non-verbal stuff (was it 70%? or 80%? Look it up yourself =). Add to that problems with the quality of phone lines, latency, etc. This practically means remote retros are quite a challenge. Let me share a couple of hints on dealing with remote retrospectives. I’m going to focus mostly on tools and I would really appreciate it if you pointed me to some even better tooling.

1. MS Office communicator and its shared white board. By the Big ‘M’ company that rules the world (Hopefully, I didn’t offend the Big ‘G’). The communicator is an example of a non-web-based tool for shared meetings. We started using it instead of twiddla (see #2). The communicator has a fairly nice shared white board. The only downside I have encountered so far:

  • occasionally someone cannot take part in a shared session.

A successful retrospective with MS communicator might look like this:

[Screenshot: end of remote retro with MS communicator]

I’m sure there are other peer-to-peer shared white board tools. Let me know if you find something interesting.

2. Twiddla is an example of a web-based, multi-user, shared white board. We actually tried it for several retrospectives with varying success. The problems we encountered:

  • too many features, some of them deadly enough to remove everything from the screen
  • voting on items to discuss was somehow difficult
  • we had some minor usability problems and once we couldn’t set up a meeting for an unknown reason

We had successful retrospectives with twiddla, for example:

[Screenshot: end of remote retro with twiddla]

You might want to try out twiddla or look for similar tools out there. There are a bunch of web-based tools that facilitate shared meetings, shared sticky notes, shared white boards, etc.

There is one great benefit of using tools like #1 or #2 for shared meetings. It really does not matter if you use a web-based tool like twiddla or just an instant messenger with a shared white board like MS communicator. Doing a shared meeting with any of those tools allows unsure team members to speak up: they can participate almost anonymously by sticking items on the shared white board. A certain level of anonymity increases the chance that unsure/diffident team members actually participate.

3. Emails. That’s basically my plan B for remote retros. (Although my favorite plan B is forcing plan A). There are a number of ways of doing a remote retro with emails. For example, I can ask everyone to spend the next 5 minutes thinking of three :)’s and three :(‘s and send them to me via email. Then I announce which items most people wrote about and we discuss them. I don’t waste time sharing the contents of the emails. This gives me a unique chance to steer the retro a little bit. If I feel that there’s an item only a few people wrote about but I think it is really important… I can lie just a little bit =)

One other thing I find useful: sometimes it’s better not to discuss the most-voted items. Let’s say 70% of the retro’s participants are developers based in the same location. There’s a big chance they have similar problems and those problems are technical. In that case, most votes might go to technical problems related to that very same group of the team. I don’t like that… Many times I even skip technical problems during the retro so that we don’t waste the customer’s time talking about build issues. (Or am I just too tired of talking about maven?)

***

Finally, I hope your team develops a habit of continuous improvement so that you don’t need a lot of ceremonial retrospectives to get better. Improve daily, but if you don’t know how, then by all means do retrospectives regularly.


excuses for not doing dev testing

May 12, 2009

Yesterday I heard loads of excuses why the tests had not been written. I immediately remembered Misko Hevery’s session at the amazing geecon conference (be sure to attend next year!). Misko told a story about influencing developers to write more tests, testable code and TDD. I liked his slide where he collected all the excuses he heard from devs explaining why they hadn’t written tests. Basically, Misko’s conclusion was that the only valid reason for not writing tests/TDD was lack of skill. TDD, or the ability to write tests, is a skill like any other and it is no shame if you haven’t acquired it yet!

However, some of the excuses I heard yesterday were beyond any common sense:

  • this is not really worth testing
  • this is a small application
  • we don’t have time
  • (my favorite one) tests would make it harder to refactor

That was all bullshit and the only reason the tests were not written was that it was hard to write them. Devs didn’t know how to write tests for the DAO layer (how to ensure the db is in a correct state, how to maintain a test database, etc.). Devs didn’t know how to write integration tests because of a lack of experience with tools like Selenium. So they didn’t write any tests, not even plain jUnits.

Half a year ago, the same development team wrote loads of tests, achieving a high quality code base & high test coverage. Back then they were writing a django application. Django is a modern framework with built-in means of testing. It significantly contributed to the fact that developers wrote tests without cheating themselves with it’s-not-worth-testing kind of excuses (complete list above).

The interesting thing about the situation I encountered yesterday was this: a junior developer wanted to write tests for some DAO code that did not have any tests (actually, there were zero tests at all…). So I asked the senior developer responsible for architecture to help out with setting up a TestCase that could prepare a connection to a test database, etc. Instead of helping, the senior guy started his mantra of excuses, trying to convince the junior developer not to write tests. Fortunately, I was there…
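For the record, the TestCase the junior developer needed is not rocket science. Here’s a minimal sketch, assuming JUnit 4 and an in-memory H2 database (both of those are my assumptions for illustration – the real project’s stack may look different):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Minimal sketch: every test gets a fresh in-memory database with a known schema.
// H2 and the 'article' table are illustrative assumptions; in the real project the
// DAO under test would use this connection instead of the raw JDBC calls below.
public class ArticleDaoTest {

    private Connection connection;

    @Before
    public void prepareTestDatabase() throws Exception {
        // in-memory db: created on connect, dropped when the connection closes
        connection = DriverManager.getConnection("jdbc:h2:mem:daotest");
        connection.createStatement().execute(
                "create table article (id int primary key, title varchar(100))");
    }

    @After
    public void dropTestDatabase() throws Exception {
        connection.close();
    }

    @Test
    public void storesAndReadsBackAnArticle() throws Exception {
        connection.createStatement().execute(
                "insert into article values (1, 'no more excuses')");

        ResultSet rs = connection.createStatement().executeQuery(
                "select title from article where id = 1");
        rs.next();
        assertEquals("no more excuses", rs.getString("title"));
    }
}
```

Once a base like this exists, the DAO tests the junior guy wanted are a matter of handing the connection (or a DataSource wrapping it) to the DAO and asserting on what it saves and loads.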


be careful with fundamentalism

February 8, 2009

One of my colleagues told me the other day:

“Szczepan, last year, when I started working in the X team someone warned me not to speak too loud about unit testing”

Apparently, there were feisty TDDers in the X team and they witch-hunted devs who had not necessarily written test-first. I guess it’s easy to become a fundamentalist of any methodology or technique. Personally, I don’t mind dogmatic coaching… so long as it works. In the X team it clearly didn’t, because devs didn’t feel comfortable even talking about testing.

At devoxx I met Jakub and he told me how he had introduced TDD to his team. Apparently, he used his tech lead mandate and just demanded test-first. After a week or two of resentment, the developers started finding benefits in the new technique. They removed the needles from the voodoo dolls, thanked Jakub for an almost eye-opening experience and continued to test first. Nice work! Hopefully I remember his story right – our conversation might have been after a few beers.


coacheating

December 28, 2008

A bit about coaching, a bit about cheating this time.

A colleague asks me for help with some random problem in some random technology.

I might say: “push here, run that, do this…”

Or I’d say: “I don’t know. What do you think? Hang on… I bet this tool should allow doing something like that. Let’s pair on this for a moment…”

A geek like me tends to enjoy showing off how much he knows. Sometimes it’s really hard to shut up and pretend I don’t know. There is a reward, though. It’s when we finally work out the solution together and I feel we both made a little step towards becoming better software engineers.

One wants to get things done. One aims for fabulous architecture. Many times I seem to have quite a different agenda. I’d rather make someone learn stuff. So I might say “I don’t know” even if it means things are not done as quickly or the design ends up imperfect. Is it clever or just… unethical?

Whoa, deep blogging. I bet it’s this Christmas time inspiration. This blog post is sponsored by Mariah C. (All I want for Christmas is… closure in java?).


talk indicator

November 7, 2008

I did an interesting experiment quite a long time ago and drafted a blog entry about it. Today I’m finishing this draft off.

The experiment was to visualize how much certain individuals speak. Why? Well, I felt that some team members didn’t speak up because they were not loud enough or there was no room for their opinions. It’s worth mentioning that the subject really concerned the entire team. Half-way through the meeting the visualization on the white board became quite interesting. Red bars indicated how much each person had spoken:

Note that some guys hadn’t spoken at all yet! The interesting thing about the loud guys is that they are truly smart, experienced and creative. And yet the team would benefit more if they just shut up… It’s so easy for the silent guys to listen less and less, stop caring and finally disconnect. Fortunately, it didn’t go that badly and here is what we ended up with:

The loud guys didn’t talk as much after they saw how they had monopolized the ether. Hopefully, the drawing on the white board let everyone speak up.

I wanted the “talk indicator” to help involve all the team members in an important meeting about our short-term plans. Hopefully, it helped teach some guys how to listen :)

I guess the experiment may have triggered a positive change. After all, we never needed to do it again…