Coding Reflections – Part 2

Continuing with my attempt to increase the speed at which I master the craft of programming, here is my second week of reflection.

One thing I noticed about the reflection process itself is that reminding yourself of previous reflections is crucial. Toward the end of the week I found that rereading my reflections from earlier really helped me avoid repeating those previous mistakes.

Week 2:

Always make the minimal amount of changes to go from working to not working. Yes, I know I started with this one last week, but it bears repeating. This week I did a lot of building new things and again found myself slowed down by making too many changes at once and then having to figure out which of those changes didn’t work as intended. Starting with the simplest thing takes discipline. I’m not sure why that is, but our brains seem to want to see big effects, and that desire trips us up. Practicing the simplest thing requires constant vigilance.

Pay careful attention to any error trace. The answer is often buried deep: even after making changes, the trace may look identical to the one before, yet the line where the error actually originates can have shifted a few lines down from the top of the trace.
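As a minimal sketch of this (the method names here are hypothetical, not from the post), notice how the interesting frame is not the only one worth reading — the call site shows up a few lines further down:

```ruby
# Hypothetical example: the failure surfaces inside parse_row, but the
# frame that tells you *which* call triggered it sits lower in the trace.
def parse_row(row)
  Integer(row) # Kernel#Integer raises ArgumentError on non-numeric strings
end

def import(rows)
  rows.map { |row| parse_row(row) }
end

begin
  import(%w[1 two 3])
rescue ArgumentError => e
  # The top frame points inside parse_row; the call site in import only
  # appears a few lines further down the backtrace.
  puts e.backtrace.first(4)
end
```

Scanning those lower frames is what catches the case where a “same-looking” trace is actually coming from a different caller.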

Write bad code. One problem I’ve been having (probably magnified by the fact that I’m working in Ruby on Rails) is that everything I read concentrates on writing and architecting really good code. The problem with that is that it’s really hard to know what the best code is as you go along. As you code, you get paralyzed by the different design choices. You then spend hours trying to make what you think may be a good design work, only to throw it away because the direction of the code means that another design pattern actually makes more sense. Instead, if you write crappy code that just gets the feature complete as quickly as possible, you end up with a complete feature and time left over to refactor (of course, this only works if you’ve written tests to ensure that everything stays working after the refactoring is finished). I think it’s a lot easier to see how to fix bad code than it is to ponder what the best code to write is. Of course, a big part of this strategy is the commitment to go back and actually do the refactoring. Code review helps a lot, as sheer embarrassment can be a great motivator for going back and making sure your code looks good!
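A tiny sketch of what this looks like in practice (the feature here is made up): a crude first pass, a behaviour check, then the refactor, with the check proving nothing broke.

```ruby
# Hypothetical feature: format a user's full name.
# First pass: ugly, but it ships.
def full_name_v1(user)
  name = ""
  name += user[:first] if user[:first]
  name += " " if user[:first] && user[:last]
  name += user[:last] if user[:last]
  name
end

# Later refactor: same behaviour, expressed cleanly.
def full_name_v2(user)
  [user[:first], user[:last]].compact.join(" ")
end

# The "test" that makes the refactor safe to do at all.
[{ first: "Ada", last: "Lovelace" }, { first: "Ada" }, {}].each do |user|
  raise "refactor changed behaviour" unless full_name_v1(user) == full_name_v2(user)
end
```

The point is the loop at the bottom: once it exists, swapping v1 for v2 is a mechanical, low-risk change rather than an hours-long design debate up front.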


Follow the Law of Demeter. There are many complex aspects to this law, but the heuristic I’ve been using is to think of it as “don’t take your toys apart”. If your context has an object, you can call any of its methods or attributes, but you cannot “break the toy apart” and call methods on that object’s attributes. This article does a good job of explaining why breaking the Law would seem silly in a real-world context. When a cashier asks you to pay, they don’t “break you apart” by grabbing your wallet (your attribute) and taking the money straight out. Instead, they ask you for the money and you hand it over. They don’t even have to know about your wallet; maybe you don’t have one and just shove your money into your pockets, which would leave a cashier trained to take money from wallets very confused. Throughout the week I found that code following the law was super easy to debug and refactor, while code that didn’t wasn’t, with fun little errors popping up like “you called x on nil”.
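The cashier analogy translates directly into Ruby. This is my own sketch (the class names are mine, not from the linked article), contrasting the “reach into the wallet” violation with the Demeter-friendly version:

```ruby
class Wallet
  attr_accessor :cash
  def initialize(cash)
    @cash = cash
  end
end

class Customer
  attr_reader :wallet
  def initialize(wallet)
    @wallet = wallet
  end

  # The customer knows how to pay from their own wallet (or pockets).
  def pay(amount)
    raise "not enough cash" if wallet.cash < amount
    wallet.cash -= amount
    amount
  end
end

class Cashier
  # Violation: reaching through the customer into the wallet. This blows
  # up with "undefined method ... for nil" the moment a customer stores
  # their money some other way.
  def charge_badly(customer, amount)
    customer.wallet.cash -= amount
  end

  # Demeter-friendly: ask the customer, who handles their own attributes.
  def charge(customer, amount)
    customer.pay(amount)
  end
end
```

The compliant version is also the one that survives refactoring: `Wallet` can be replaced by anything as long as `Customer#pay` keeps its contract, and the cashier never notices.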

 


3 Comments

  1. Not sure if I agree with your first one. “Always make the minimal amount of changes to go from working to not working”? I feel like I often make life harder on myself by doing exactly that. Instead of re-factoring the object-oriented structure to what it really should be, I’ll just do some hack with the existing API to get things running. That’s fine for one iteration, but when I continually do this it eventually gets to the point where I have a jumbled, poorly factored program which eventually requires refactoring ANYWAYS, but is now much harder to refactor by virtue of being a huge mess…

    (Although I guess the problem you discuss in your third point plays into this, too. You often don’t know the best factoring/oo-design from the start so maybe there’s no way to avoid making it up as you go along and eventually doing a ground-up re-design once your feature-space is better fleshed-out)

    Posted February 20, 2012 at 12:23 pm | Permalink | Reply
  2. Now that I read your third point more carefully I realize you said essentially exactly that…

    Posted February 20, 2012 at 12:24 pm | Permalink | Reply
  3. Andre Malan

    Hey Nick,

    Yeah, the first point was more about micro changes; the third point has more to do with architecture. I think there is a danger of making changes that are “too big”, especially when you know where you are going, because you flesh everything out before testing. What I’m trying to get at is smaller and smaller circles of doing things and then testing that they work.

    Posted February 25, 2012 at 10:21 pm | Permalink | Reply
