opinionated or not

June 2, 2012

Opinionated framework on top of an unopinionated toolkit. If I had to choose my favorite engineering principle behind Gradle, there you go.

Sticking to the principle is challenging. Imagine a subdomain where there are no clear conventions, no popular, easy-to-harvest patterns. How do you come up with the “opinion” in such a field? We gather use cases and then try to resolve them in the best way we can. Sometimes the convention emerges organically and we can unanimously rejoice. Sometimes several conventions emerge and we are not quite sure which one to pick. We might choose the most fitting one or wait until we gather more knowledge, more use cases. Sometimes we don’t see any reasonable convention and we refrain from having an opinion for the time being.

Good old ant doesn’t have many opinions. It is a toolkit. It lets me script project automation in an imperative fashion. Kick-starting a new software project almost always started with taking the build.xml from the previous project. The level of copy-paste in build.xmls is legendary. Yet, back in the day, I loved ant as it allowed me to do amazing automation in projects. Thank you ant!

Maven delivers plenty of opinions. It is a framework. However, if my context does not quite fit maven’s opinion then I might be in trouble. It was a great technological leap from ant to maven. I can still remember my feverish efforts advocating maven2, pulling my conservative teammates into the new era of declarative dependencies and convention-over-configuration. Thanks, maven!

We want Gradle to offer the inspiring technological leap you might remember when you migrated your first project from ant to maven. This time, though, you will taste it when upgrading from maven. Some of you have enjoyed that feeling already and I am so glad to have you on board. Some of you will come along at some point, I’m sure. We want to deliver opinionated conventions so that you can build your standard projects with one-line build scripts. We also want to provide an unopinionated toolkit so that you are never locked in by the design choices we have made.
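
For a standard java project, such a one-liner build script already exists today – a sketch, assuming the java plugin and Gradle’s default conventions:

```groovy
// build.gradle - the entire build script for a conventional java project;
// source layout, compile, test and jar tasks all come from the convention
apply plugin: 'java'
```

Everything beyond the convention is still scriptable on top – that’s the unopinionated toolkit part.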

I’ve just got back to Krakow from the amazing city of Berlin. I had the pleasure of brainstorming, debating and drinking cytrynowka with the fellow Gradle gang members. Among the many things that are crystal clear to me now is this one: the question is not IF Gradle will dominate the space of project automation but WHEN. Have a great weekend everybody :)


Mockito at googlecode

September 6, 2011

The http://mockito.org domain fails to redirect to http://code.google.com/p/mockito. Please use the latter link for the mockito site at googlecode. Hopefully it will be fixed soon, and then http://mockito.org will work again.

Have a good one!


Develop incrementally!

July 16, 2011

– “There’s this feature X we should take on!”
– “Hmmm, we cannot deliver it now. We should refactor some parts of our app and deliver feature Y. Then it will be easy to add feature X and Z”
– “I guess we can wait. Feature Y is also interesting.”

I think the concept of an incremental approach is somewhat missing here. There are red warnings blinking when I hear such a conversation: features X, Y, Z may have different priorities among other features in the team’s backlog. Implementing them in one leap means that we may not be working according to the team’s priority queue.

To solve a problem, the software engineer often has the following alternatives:

  • a quick & dirty solution that causes maintenance headaches and other problems down the road
  • the ultimate general solution that solves the particular problem… and a bunch of surrounding problems with varying levels of priority

The first path is considered the easiest. It is the unethical, anti-craftsmanship way. The latter is much more respected in the engineering community. Nevertheless, it’s an easy path as well. The really challenging path, the one with the biggest potential value, is the one that leads through the middle ground. It’s quite an engineering challenge to solve a particular problem without hacking, yet without embarking on solving too many problems at once. That’s the essence of the incremental approach and that’s the software engineering skill of real importance.

Taking the incremental path does not mean that we want to avoid implementing the general solution. We still want the general solution, oh yeah. Yet, we want to get there step by step, maintaining the team’s capability to work according to priorities. I like to couple the incremental approach with simple design. They both feed off each other. Simple design makes the code easy to change, refactor and accommodate new features incrementally. Simple design may mean some additional overhead down the road. Again, it’s an important engineering skill to keep this overhead minimal.

Finally, remember that context is king. There might be situations or working environments where going incremental may not be the best way to go. Nevertheless, if the team too often favors ultimate general solutions, chances are they are not as effective as they could be.

Here’s my message for today: Architect your solutions so that it is possible to work according to priorities. Don’t let the general solution get out of your sight. Reach it incrementally.


Building consistently

June 8, 2011

Last evening my friend told me a cool project automation story. Some guys came over to him:

-“Igor, our maven build fails. It works on my machine but doesn’t work on his.”
-“You don’t need me. Just make sure you have the same environment.”
-“I’m telling you, the environment is the same. We purged the local repo on both machines. The svn revision is the same. This is witchcraft and you need to exorcise our build”
-“Ok, tell me what mvn -v says”
-“3.0.2”
-“Downgrade to maven 2 and stay happy”

Apparently, all team members use maven 2 and the build is not yet ready for an upgrade of the build client. Lessons learned:

  • It’s easy to build your legend in a big company. Find a problem some guys have been struggling with for the past few hours. Then fix it in 5 seconds.
  • It’s easy to fix the ultimate timeless IT problem: “it works on my machine but not on his”. There’s a difference in the environments and you simply have to keep looking until you find it. Just like the find-5-differences game.
  • It’s worth all the money to have an environment that is consistent by design. If developers in a single team work on consistent environments it is heaps easier to tackle many problems. Here’s how you can implement this pattern with this nice build tool called Gradle:

The wrapper feature is something I underestimated when I started using Gradle a couple of years ago. I thought that the main purpose of the wrapper was to enable running the build on environments that did not have Gradle installed. The feature was not very interesting to me because on my environments Gradle was already installed. It was some time later that I discovered the most important edge the wrapper gives: it takes you a few steps closer to consistent environments. Suffice it to say, if you use the Gradle wrapper you will never, ever face the problem I described at the beginning. Every developer will be using the same version of the build tool, giving your team a better, more reliable build environment. I never tire of promoting patterns that increase the reproducibility of builds. That’s what I require from a build: consistent results for a given VCS revision regardless of the environment.

Moreover, the gradle wrapper makes it easy to upgrade the build environment for every developer. You update the version in your gradle/wrapper/gradle-wrapper.properties file and the next time a developer updates his working copy, he will use the newly blessed version of the build tool.
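
The file itself is tiny; the interesting line looks roughly like this (the version and URL below are only illustrative):

```properties
# gradle/wrapper/gradle-wrapper.properties
# pins the build tool version for everyone who runs the wrapper script
distributionUrl=http\://services.gradle.org/distributions/gradle-1.0-milestone-3-bin.zip
```

Bump that one line, commit, and the whole team upgrades on their next checkout.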

The Gradle wrapper is just one of the ways. You can achieve similar results using other build tools if you are creative enough :) I remember years ago I was helping a team that had notorious problems with build stability. After one of the retrospectives the team decided that the *entire* environment would be versioned. They created a VCS root for stuff like the jdk, maven, plugins. They even kept a separate local maven repo for their project. All those efforts to minimize the random build failures and lengthy debugging sessions caused by inconsistent dev environments. That may sound extreme; nevertheless, that’s how the team attempted to solve the problem.
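
On the maven side, a lighter-weight guard against the wrong build client also exists: the maven-enforcer-plugin can refuse to run the build at all. A sketch (the version range below is only illustrative):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>enforce-maven</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <!-- fail fast on maven 3.x until the build is ready for it -->
          <requireMavenVersion>
            <version>[2.0,3.0)</version>
          </requireMavenVersion>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

It doesn’t make the environment consistent by design the way the wrapper does, but at least a mismatched client fails loudly instead of mysteriously.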

Design your build environment so that it is easy to keep it consistent across the team members. You’ll notice you’ll smile more :)

Build consistently every day. Peace.


IDE & patterns for huge maven project

December 11, 2010

I started working on a huge multi-module maven project recently. Trust me, working with ~150 maven modules at a time is a pain for a developer. On the other hand, there’s a lot of room for me to improve the development environment & make dev cycles efficient.

Some more context: the team is pretty big; over 100 developers on a single codebase working in a ‘continuous integration’ fashion: little branching, keeping the integration gap small, automated testing, CI servers, etc. Each developer checks out the entire trunk that contains ~150 maven modules linked via the default ‘x-SNAPSHOT’ dependency. Maven veterans probably flinch now because this way of managing internal dependencies in a large project is probably considered ‘lame’. The maven ‘by-the-book’ approach may put emphasis on pinning internal dependency links to a specific version. I’m a pragmatic developer and I see the pros & cons of both approaches. This blog post, however, does not dwell on the subject of internal dependencies. I want to focus on the most effective dev environment set-up and the choice of the IDE.

“Toggle IDE/local-repo dependency” pattern

I couldn’t find a good name for this pattern. Theoretically, it can give a nice dev performance boost in a huge multi-module project. Here’s the idea: out of the 150 modules we will work on a handful only. Therefore I want to be able to quickly decide which module should be a concrete project/module in the IDE and which should be a mere jar dependency resolved via the local maven repo. It is a sort of toggling a dependency between IDE and local-repo.

Here’s how I see it working:

1. The developer pulls in a smaller module set into an IDE by selecting a relevant sub-parent in the multi-module tree.
2. At some point the developer needs to perform changes in a module that is not pulled into the IDE. He pulls in the relevant module(s). Dependencies inside the IDE are updated from jar-dependencies to project-dependencies (Eclipse) or module-dependencies (IDEA).

Here’s how my team has coped with the project so far and how the above pattern could be used.

eclipse:eclipse or idea:idea

At the moment I think more than half of the team uses the maven eclipse:eclipse plugin to generate configuration that is consumable by Eclipse. In our environment it is not very efficient because after every svn update the developer has to regenerate project configs (in case someone changed internal/external dependencies) and restart Eclipse (occasionally ‘just’ refreshing Eclipse didn’t work well).

Team members who use IntelliJ IDEA never use the idea:idea maven plugin. I cannot confirm that, but I heard feedback that it was not generating correct configurations for the latest & greatest IDEA. However, most devs simply don’t need idea:idea because the maven support built into IDEA 9+ is simply brilliant. I can hardly imagine someone wanting to step back and generate configurations via idea:idea.

Toggling dependencies between IDE and local-repo is possible but somewhat inconvenient. It requires specifying project names by hand at the command line, then refreshing/restarting the IDE.
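
For the record, the toggling looks roughly like this from the command line (module names are made up; the -pl/-am reactor flags need maven 2.1+):

```shell
# regenerate Eclipse configs for the modules I actually work on,
# plus whatever they depend on (-am = also make)
mvn eclipse:eclipse -pl core,web -am
```

Then refresh or restart Eclipse so it picks up the regenerated configs.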

M2Eclipse

I don’t recall anyone using it in my team. The plugin seems to be making progress – I heard several good reviews. In our particular environment developers complained about the reduced perceived performance of Eclipse and they fell back to generating configs via eclipse:eclipse. I tested m2eclipse myself recently with our project; there were things I liked about it. Yet, I still found Eclipse choking (even if I fine-tuned the plugin).

Toggling dependencies between IDE and local-repo is possible but I cannot comment on how it behaves as we haven’t used m2eclipse much.

IntelliJ IDEA

The built-in maven plugin for IDEA is getting more & more popular in my team. It does not seem to reduce the perceived performance of the IDE. However, IDEA tends to be a bit slower than Eclipse anyway. IDEA 9+ is fast enough for me – I tend to use it more often than Eclipse. My experience with IDEA 8 and earlier is limited because I rarely had enough patience to wait until it started. The spanking new IDEA 10 is marketed as faster than 9. I have not tested it yet but I want to extend my thanks to JetBrains for working on it!!!

Back to the maven support in IDEA and the context of our huge multi-module project. Options that we absolutely needed to tweak were:
1. Disable file/folder synchronization on IDEA frame activation. In our case, IDEA choked every time one ran any maven goal outside of IDEA.
2. Disable automatic maven project reimporting (it’s the default anyway). It is related to #1.

Also, be sure to read some of the posts on how to improve IDEA performance in general (most notably: choose wisely the plug-in set enabled in IDEA; get an SSD drive; etc.)

In our project we found this feature very useful: the separation of 2 different actions, ‘reimporting maven projects’ and ‘updating folders’. The first one makes sure the project’s internal/external dependencies are linked well. The latter makes sure sources are generated for modules that need them (generate-sources).

Finally, toggling dependencies between IDE and local-repo works like a charm in IntelliJ!!! Basically, all I need to do is “Import module from existing model” and point it at the relevant pom.xml. IDEA takes care of updating the dependency links of my existing modules to use IDE links rather than local-repo links. How cool is that?!

I still keep myself using both Eclipse & IDEA as I work with various developers. However, it’s going to be harder to get back to Eclipse…

Disclaimer: Please notice the date of this blog post! I don’t know if the problems I mentioned with some of the tools will still be valid in the future. Hopefully all those tools will be great, fast & very convenient for working with huge maven projects. Respect to all the people who work on the dev tools that we use today (Eclipse, IntelliJ IDEA, eclipse:eclipse, idea:idea, m2eclipse). Thanks a lot!!!


XML please, I don’t want to recompile

November 15, 2010

There’s one quite naive argument that sometimes determines the technology choice. I want to write about it because I’ve been hearing this argument on and off for years. It’s about favouring XML to avoid recompilation.

I remember in the late days of EJB2, one of the selling points of Spring was that you could reconfigure the wiring of your beans without recompilation. At that time I was quite enthusiastic about this feature… even though the argument was almost completely negligible.

My friend said today:

“Another advantage [of XML workflow] is you can do some modification to the flow without recompiling the whole app”

It’s true that you don’t need to recompile. However, if you want to fully implement the change to the XML workflow you probably still have to:

  • run tests
  • validate the app works
  • check-in the change to your VCS (new revision created)
  • make sure the integration build on the newly created revision works OK
  • validate that the application (built with the new revision) works on the integration box

In most cases, updating XML by hand on the integration/prod box is not practical. Don’t fall for the ‘no recompilation’ argument because it is negligible. Usually you still need a build & QA cycle.

This post is not intended to claim that XML is bad for you (although I probably agree with that statement =)). This post claims that ‘no recompilation’ argument is void.

Nevertheless, if I have to use a workflow in my java app, it will be declared in java and not in xml. I’m a great fan of fluent, type-safe & intellisense-ready configurations!
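
To illustrate what I mean, here is a minimal sketch of a hypothetical fluent builder (not any particular workflow library):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// A toy fluent workflow configuration: the structure lives in java,
// so typos fail at compile time and the IDE offers completion.
public class Workflow {

    private final List<String> steps = new ArrayList<String>();

    public Workflow step(String name) {
        steps.add(name);
        return this; // fluent: each call returns the builder
    }

    public List<String> steps() {
        return Collections.unmodifiableList(steps);
    }

    public static void main(String[] args) {
        Workflow order = new Workflow()
                .step("validate")
                .step("charge")
                .step("ship");
        System.out.println(order.steps()); // prints [validate, charge, ship]
    }
}
```

A real library would use typed step objects instead of strings, but even this toy shows the point: changing the flow means changing java code, and as argued above that is no more expensive than changing XML once you count the full build & QA cycle.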


Code Retreat in Wroclaw

October 21, 2010

Next weekend there’s a Code Retreat in Wroclaw: http://coderetreat.wroclaw.pl. I’ll be there, so if you want to cut some code together, retreat with us =)


Developer! rid your wip!

October 14, 2010

I’m big on eliminating waste, I’m big on minimizing Work In Progress (WIP).

Here’s my story: a colleague asked me to pair with him because he broke the application and he didn’t know why. So we sat together. My first question was:

– “Did the app work before you made your changes?”
– “Yes”
– “Ok, show me the changes you made”

I saw ~150 unknown files, ~20 modified or added files. I said:

– “That’s a lot of changes. I bet you started doing too many changes at once so you ended up with a lot of WIP. Can we revert all the changes you made?”
– “Errr… I guess so. I know what refactoring I wanted to do. We can do it again”
– “Excellent, let’s clean your workspace now”

We reverted the changes. We also discovered that there were some core “VCS ignores” missing (hence so many unknown files in the working copy). We fixed that problem by adding necessary “VCS ignores”. I continued:

– “Ok, the working copy is clean and the application works OK. Let’s kick off the refactoring you wanted to do. However this time, we will do the refactoring in small steps. We will keep the WIP small by doing frequent check-ins on the way.”
– “Sounds good”

Guess what. We didn’t encounter the initial problem at all. We completed the refactoring in ~1 hour, doing several check-ins in the process. The previous attempt at the refactoring took much more time due to the few hours spent on debugging the issue.

Here’s my advice for a developer who wants to debug less & smile more:

  1. Understand what your work in progress is; keep WIP under control; keep it small.
  2. Develop code in small steps.
  3. Keep your workspace clean, avoid unknown or modified files between check-ins.


expected exception in tests

July 26, 2010

I have a feeling too many people get it wrong so let me stress:

@Test(expected = SomeException.class) // IS BAD FOR YOU

I do enjoy JUnit a lot, thanks guys for sharing! Yet, I don’t like the @Test(expected) feature of JUnit. In my TDD classes I simply teach to avoid using it. I admit the feature can potentially make a few tests cleaner. However, it tends to bring more trouble than value. Newbies too often misuse it, which leads to poor quality tests. Therefore:

Rule 1: don’t use @Test(expected)

My colleague (cheers Bartek!) is pursuing a Master’s at the Uni. They teach him testing. One day a lecturer was presenting some test code:

@Test(expected=Exception.class)
public void testSomething() {
  //line 1: throws NullPointerException
  //lines 2-30: some crappy test code
}

The test is useless. Due to some bug in the setup, the test method breaks in line 1. Therefore most of the test code is not even executed. The expected exception does a great job of hiding this issue; the test is green! Therefore:

Rule 2: If you violate Rule #1 then at least make the exception very specific. Example:

@Test(expected=BreadOutOfStockException.class)
public void shouldBuyBread() {}

Even if you use a specific exception, the test does not document exactly where the exception is thrown. Sometimes it may lead to subtle bugs. Therefore:

Rule 3: Don’t violate Rule #1 when the test method has multiple lines.

Instead you can simply write:

@Test
public void shouldBuyBread() throws Exception {
  //given
  me.goToBakery(bakery);
  bakery.setBreadAmount(0);

  try {
    //when
    me.buyBread();
    //then
    fail();
  } catch (BreadOutOfStockException e) {}
}

The above is my preferred way of documenting ‘exceptional’ behavior. There you go, it is The Ultimate Test Template for ‘exceptional’ behavior.

JUnit is getting better. New goodies for exception handling in tests arrive. @Rule ExpectedException better documents what action triggers the exception. Also, it nicely maps to the //given //when //then style. The downside is that you need some extra code if you want to interrogate the exception, for example if you want to assert that the exception message contains a substring (api as of 07-2010). Nevertheless:
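
For completeness, here is a sketch of the @Rule style, reusing the bakery objects from the template above (me, bakery and BreadOutOfStockException are this post’s hypothetical examples, not a real API):

```java
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;

public class BakeryTest {

    @Rule
    public ExpectedException thrown = ExpectedException.none();

    @Test
    public void shouldBuyBread() throws Exception {
        //given
        me.goToBakery(bakery);
        bakery.setBreadAmount(0);

        //then - declared up front, before the action that triggers it
        thrown.expect(BreadOutOfStockException.class);

        //when
        me.buyBread();
    }
}
```

Note that the expectation has to be declared before the ‘when’ part, so the //given //when //then ordering bends a little.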

Rule 4: Favor @Rule ExpectedException over @Test(expected) when the test method has multiple lines.

Rule 5: Keep an eye on new versions of your testing libraries.

If you are already at RI in TDD then consider my Rule #1 as a safety net. You never know when a new junior team member joins. He may easily misuse the patterns he sees in the codebase. The Javadoc for JUnit does not say how to safely use @Test(expected) (at the moment).

If you are learning testing, don’t get too upset with my dogmatic rules. As long as you write any tests at all, I am smiling already. You smile too, write tests and be happy. Soon you’ll get your RI rank in TDD.

Rule 6: Write tests & smile!


the net value of tests

July 8, 2010

[Please, read me carefully. If at any point in time you think I’m preaching to stop writing tests then STOP reading further and FORGET what you read so far. Thanks!]

One might think that tests only bring value, that is muchos dolares. If this is the case then why do many teams stop maintaining their tests at some point? Or @Ignore a test forever when it is too difficult to fix? Isn’t that like throwing away ‘value’? Why on earth are you throwing away $$$!?

The thing is that tests bear a maintenance cost: keeping them green, refactoring them along with the code, etc. Let’s talk about the net value of a test: Net = Value – Cost. If you write crappy tests then the net value might be negative. Having visited quite a few archeological sites digging in legacy code, I have seen many tests with negative net value.

When half of your team wants to run ‘svn delete‘ on the ‘FitnesseRoot‘ folder, it might mean your tests’ net value is plummeting. Obviously, fitnesse is just an example; any tool can be a blessing or a curse depending on your skill in misusing it. (Dear god of Agile, I swear I love fitnesse… but only with GivWenZen).

Let’s think about it. I’ve got 3 tests with their respective values, which I conjured using cosmic energy.

Test 1. Adds an article to the content db. Value: $50, Cost: $20
Test 2. Adds an article & edits the article. Value: $50, Cost: $20
Test 3. Adds an article & edits it & deletes it. Value: $50, Cost: $20

The cost is relatively high compared with the value because those tests are deep functional tests that require the whole system to be set up; they use funky frameworks to drive the web browser; they are kind of painful to debug & maintain.

The total cost is easy: just sum up the costs and be scared by $60.

The total value is tricky. To your great surprise (hopefully ;)) it is not $150… but $55!!! Why? Read test #3 again – it covers the same cases as tests #1 & #2, so the value of those tests does not really add up! I threw in $5 extra because having the functionality in separate tests gives us better defect localization when tests fail. Hence $55.

This means the net value of the tests is: -$5 (minus five dollars).

Funny enough, the tests mentioned are a real case from a gig I was at a few years ago. I argued with the QA lead that keeping those tests separate did not make sense. To my despair, I failed. 6 months later nobody was maintaining the functional suite of web-clicking tests in this project.

My message distilled:

1. Understand the net value of tests.
2. Look after your tests as if they were your newborn daughter!

Now go back to the wilderness and keep writing sane tests!