

Hey there, folks! Welcome back to the fifth installment of our ‘Coding Horrors’ blog series! The first four posts got loads of engagement, so we’re keeping the adventure going! For those just joining us, here’s what we do in this series: we ask prominent developers to share their tales of dealing with various aspects of software complexity. The whole adventure kicked off with our own coding experiences as developers, which we thought were worth sharing.

This time, we’ve brought back Daniel Beck for the series because we love his stories! Daniel will be sharing tales about dealing with delayed feedback cycles and development uncertainty. Over a long career in software development, Daniel has tackled situations filled with uncertainty, change, and growth: startups transforming into established companies, departments being reshuffled, old products being revamped, and new projects launching from scratch. Take a look at his blog:

Lessons Learned

Customer feedback is very important, but that feedback is, by definition, “too late” – because it indicates that you missed whatever bug they’re reporting.

Refrain from assuming user preferences based on rumor filtered through sales, leadership, product design, etc. You’ll wind up with a completely wrong impression of what the user actually asked for.

Old features are familiar to users: ain’t broke, don’t fix, etc. It is okay to leave code fallow for extended periods if it’s doing what it needs to do and what it needs to do hasn’t needed to change for a long time.

Even poorly written code is good enough if it works well enough that you’re still using it years later: It’s a reasonable choice to focus your finite developer time on that other feature instead. 

The bigger your company gets and the older its codebase, the more old code becomes just part of the landscape. Most of the time, it actually is the right choice to focus on new stuff instead of refining the old stuff.

Miscommunication and conflicting requirements are just… always a thing: resolving that uncertainty in software development is what planning IS. 

Sometimes there are good reasons for the requirements to have changed: Communicate the reasons for the change, and make sure they’re actual reasons before you do so.

Deadlines encourage taking shortcuts and skimping on deliverables, and they shut out new good ideas as “feature creep”.

How did orgs you worked at deal with the uncertainty involved in receiving feedback too late?

I’m going to break this down into four distinct categories. 

External (customer) feedback is very important, but only some of it is truly a matter of arriving “too late”.

The first kind of external feedback is immediate: bug reports, user complaints, or (worst case) customers abandoning your product. That immediate feedback is, by definition, “too late” – because it indicates that you missed whatever bug they’re reporting, or you wrote good code for a feature that misidentified the user’s needs.  

Mitigate this by working iteratively: don’t dump a whole lot of stuff on the user at once; let it trickle out and see how it works before you go too far down the wrong path.
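To make “let it trickle out” a bit more concrete, here’s a minimal sketch of one common way to do it: gating the new work behind a percentage-based rollout flag, so only a small slice of users sees it while you watch the feedback. Everything here (the feature names, the percentages, the is_enabled helper) is made up for illustration rather than taken from any particular tool.

```python
import hashlib

# Hypothetical rollout table: feature name -> percentage of users who see it.
ROLLOUTS = {
    "new_checkout_flow": 5,    # start small and watch the feedback
    "redesigned_search": 50,   # widen once nothing is on fire
}

def is_enabled(feature: str, user_id: str) -> bool:
    """Deterministically bucket a user into 0-99 and compare against the rollout %.

    The same user always lands in the same bucket, so their experience stays
    stable while the percentage is ramped up over time.
    """
    percent = ROLLOUTS.get(feature, 0)
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Ship the code dark, then raise the percentage as real feedback comes in.
if is_enabled("new_checkout_flow", user_id="user-1234"):
    print("show the new flow")
else:
    print("fall back to the existing flow")
```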

The second kind of external feedback is the sort you have to actively seek out, by doing user interviews, usability studies, research, examining usage logs, and choosing when or if to act on what you learn.

Lots of companies don’t do this at all, or don’t do it nearly often enough. It’s really easy to just assume you know what the user wants, or to work based on rumor (the customer talked to the sales rep, who talked to their leadership, who talked to product leadership, who pushed the feature request down to the product designers, who work with the engineers, who, after all that, of course wind up with a completely incorrect impression of what the user actually asked for).

Like external feedback, internal feedback also comes in two types: it’s either changes in requirements, or fallout from miscommunicated (or conflicting) requirements.

Miscommunication and conflicting requirements are just… always a thing.  You try to minimize them by communicating frequently and in detail; you talk through the conflicts until everyone agrees or someone makes the call.   Resolving that uncertainty is what planning is, really: you’re all combining and refining your various ideas of what the feature ought to be and how it should work.

Changes in requirements are another story. I specifically don’t mean by this the process of refining a feature during development, or new feature development that partially replaces old work. That’s good iterative development.   This is more like we decided to build one thing, got partway in, and then changed our minds about how it should work, and now that earlier work has to be redone or thrown out.

Sometimes there are good reasons for the requirements to have changed: maybe a competitor launched a feature that you now have to counter, or maybe there’s a new opportunity you didn’t know about in earlier planning.    Capricious or seemingly directionless changes in requirements, though, can give the team the impression that leadership is, well, capricious and directionless.  Ideally, you want your teams to be under the impression that they’re guided by people who know what they’re doing, so it’s best to avoid acting like you don’t.  Communicate the reasons for the change along with the change, and make sure they’re actual reasons before you do so.

What strategies or tools do you use to track the progress of features when feedback arrives intermittently or with delays?

Mostly whichever one your teammates are going to actually look at. That’s key.

Personally, I find living documents really valuable.  

For larger discussions, the wrangling out of what, specifically, we’re building and how we’re building it, I like to just set up one big shared document (on Confluence, Google Docs, Notion, or whatever tool your team is habituated to using).

This doesn’t need much structure. People wind up making big, complicated document templates for this sort of thing, but I mostly think that’s unnecessary. You write out a description of what you’ve all agreed to build, and everybody just updates that document on an ongoing basis whenever that agreement changes due to any kind of feedback or event.

List out any open questions. Turn them into closed questions when they’re answered, so you don’t accidentally circle back to them. Those lists, and the increasingly detailed description of what you’re building, are pretty much all the structure you need.

This becomes much less useful if it turns into a changelog, where people are adding the latest updates in a list at the end of the doc, or leaving earlier “versions” of ideas in place.  

You don’t need to track when decisions were made or who made them, and you don’t need to keep careful track of what you used to think you were building; changelogs and past versions are just clutter. Just keep updating the current description of the present state of things for as long as it’s useful. If you do it right, the planning document will accidentally turn into the documentation!

Have you encountered ‘new feature bias’ in software engineering? Basically, do you find yourself preferring new and roadmap features over paying down technical debt and collecting and applying feedback on existing functionality?

Old features are familiar to users:  ain’t broke, don’t fix, etc.  It is okay to leave code fallow for extended periods if it’s doing what it needs to do and what it needs to do hasn’t needed to change for a long time.  It’s necessary to do this, in fact, because not every feature needs or wants constant adjustment.  

Even poorly written code is good enough, if it works well enough that you’re still using it years later (with maybe a patch or fix here and there along the way). It’s a reasonable choice to focus your finite developer time on other feature work instead.  

And this is all fine until your company, having done that for a while, gets around to wanting to add a new subfeature to one of those long-untouched features, discovers that nobody still employed at the company has ever touched that code, and suddenly all old code is considered “technical debt”.

Old code is ‘debt’ because you need to relearn how it works, and because it may contain decisions or assumptions that no longer match present reality, which you now have to decide whether to ignore, update, or replace.

The bigger your company gets and the older its codebase, the more you start to have to think about old code as just part of the landscape. New features mostly have to fit into that landscape, and it’s rarely worth razing an entire ancient forest just to be able to add a new tree branch in a somewhat more convenient way.

Stretching the analogy, if that ancient forest is really overgrown with weeds you’re going to end up having to move a lot more slowly and carefully in it than if someone’s been in there pinching out the thorn-bushes before they have a chance to get spiky.  

But weeding is a lot of work, so more often this manifests as tolerating gradual overgrowth until it becomes intolerable, then wading in with a machete and a little bit of rage.

(Side note, you can maybe tell it’s midsummer and my yard is actively being taken over by brambles and vines?)

The most dangerous thorn-bushes, the ones you ought to rip out as soon as you spot them, are outdated pieces of code or logic that require ongoing, active workarounds; when lots of new code has to be written with special cases or complex logic to accommodate the quirks of the old stuff, the patches and workarounds just compound the original problem and make it even harder to root out.  Those are best corrected early on. Imperfect old code is fine, but old code that’s actively impeding you is not.
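A hypothetical example of that kind of thorn-bush, with made-up names: a legacy helper that reports dates as oddly formatted strings, so every new caller grows its own inline workaround. Wrapping (or fixing) the quirk once, early, keeps the special cases from multiplying.

```python
from datetime import datetime
from typing import Optional

def legacy_last_login(user_id: str) -> str:
    """Old code: returns 'DD/MM/YYYY' strings, with '00/00/0000' meaning 'never'."""
    return "07/03/2019"  # stand-in for a real lookup

# The workaround pattern: each new feature re-handles the quirk inline,
# and the special case gets copied into caller after caller.
def days_since_login(user_id: str) -> int:
    raw = legacy_last_login(user_id)
    if raw == "00/00/0000":   # special case, duplicated wherever it's needed
        return -1
    return (datetime.now() - datetime.strptime(raw, "%d/%m/%Y")).days

# The early fix: wrap the quirk once so new code never has to know it exists.
def last_login(user_id: str) -> Optional[datetime]:
    raw = legacy_last_login(user_id)
    if raw == "00/00/0000":
        return None
    return datetime.strptime(raw, "%d/%m/%Y")
```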

Letting things go too long is part of how Big Redesigns and Big Refactors tend to happen: you wait until things get intolerably bad, and then you try to fix it all at once. Big Refactors are also Big Risks. So a better path is to not wait that long, and to fix things up before they need a Big Refactor.

Overripe analogies aside, honestly most of the time in most situations, it actually is the right choice to focus on new stuff instead of refining the old stuff.  Best if you consciously try to avoid setting traps for yourself in the future by taking shortcuts or baking in too-broad assumptions; but you can’t predict everything.  

How do you detect feature creep early on, and reject or postpone most of those creeping additions until after some initial version is operational and deployed?

Try not to approach this as defending against feature creep in the first place. Planning is a prioritization exercise, not a battleground.

At the end of the day, a development plan is a to-do list.  A team can literally only do one thing at a time, so the only options are to decide how frequently they’re going to switch between different things, and in what order they do them.

No matter what methodology you’re following, one way or another it boils down to putting a list of tasks in order.  (And also accepting that you’re never going to reach the items near the end of any list because at some point some other list will take higher priority.)

If you think of your feature planning this way, the concept of feature creep just evaporates: every new idea is just one more new idea, and it’s either worth inserting towards the front of the line, or it’s worth adding somewhere in the middle.  (If a new idea gets prioritized all the way at the end of the to-do list, it’s not worth writing down, because you’re never going to reach it.)  

“Feature creep” is harmful only if you’re inserting new work into the list without acknowledging that this means everything below it is going to take that much longer to be delivered. Teams can spend a lot of time thinking they’re arguing about feature creep when what they’re really arguing about is the schedule.
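Here’s a toy sketch of that point, with invented tasks and estimates: the plan really is just an ordered list, and slotting a new idea in near the front is perfectly fine as long as everyone acknowledges that everything below it now lands later.

```python
# Each task is (name, estimated_weeks); the plan is just an ordered list.
plan = [("search filters", 2), ("bulk export", 3), ("audit log", 2)]

def delivery_schedule(tasks):
    """Cumulative finish time for each task, in weeks from now."""
    elapsed, schedule = 0, []
    for name, weeks in tasks:
        elapsed += weeks
        schedule.append((name, elapsed))
    return schedule

print(delivery_schedule(plan))
# [('search filters', 2), ('bulk export', 5), ('audit log', 7)]

# A new idea arrives and gets slotted in second. That isn't "creep",
# but everything after it now ships two weeks later, and that trade-off
# is the thing that actually has to be said out loud.
plan.insert(1, ("saved searches", 2))
print(delivery_schedule(plan))
# [('search filters', 2), ('saved searches', 4), ('bulk export', 7), ('audit log', 9)]
```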

If you’re working on something that involves a lot of hard, fixed-in-time deadlines, you have to be very careful about prioritizing that list, making accurate predictions about how long each task will take, and rearranging the order of events if something is at risk of not meeting its hard deadline. That takes a lot of thought and effort and probably a fair bit of arguing.

But most of the time, unless you’re doing agency work with contractual due dates, there actually aren’t very many hard deadlines. 

An API is going offline on a certain date? You’ve got to make your updates to stop depending on it before that date. That’s a hard deadline. But “We planned on having Feature X launched in February” isn’t a deadline, and it’s harmful in many ways if you treat it as one.

Ideally, you’re just putting tasks in order, so planning becomes a matter of everyone working together to predict which parts of Feature X can feasibly be built in that timeframe. Or, sometimes, deciding that Feature Y needs to come sooner, so Feature X is going to wait until March.
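Where there is a genuinely hard date in the mix (like the API shutdown above), it can be treated as just one more constraint on the same ordered list: walk the plan with your rough estimates and flag anything whose hard deadline would slip, then reorder. A small sketch, with all dates and estimates invented:

```python
from datetime import date, timedelta

# (task, estimated_weeks, hard_deadline or None). Only the API migration has a
# genuinely hard date; "Feature X in February" is a target, not a deadline.
tasks = [
    ("feature x: minimal usable core", 3, None),
    ("drop the retiring API", 2, date(2025, 9, 30)),
    ("feature x: polish and follow-ups", 2, None),
]

def missed_deadlines(tasks, start=date(2025, 8, 1)):
    """Walk the list in order and flag any hard deadline that would slip."""
    cursor, flagged = start, []
    for name, weeks, deadline in tasks:
        cursor += timedelta(weeks=weeks)
        if deadline and cursor > deadline:
            flagged.append((name, deadline, cursor))
    return flagged

print(missed_deadlines(tasks))
# Prints [] here: with these estimates the API work lands by early September.
# Push it further down the list and it would be flagged, which is the signal
# to reorder before the date actually arrives.
```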

This is not to say that the schedule doesn’t matter! Even when there are no hard deadlines, your sales team needs to know what they’re going to be selling, your marketing team needs to be able to plan and prepare based on your development plan.   So you do need to keep those things in mind as part of prioritizing your task and feature list.   But this is very different from “Feature X must be delivered on Feb 1.”   

Thinking in terms of deadlines like that is harmful: deadlines encourage taking shortcuts and skimping on deliverables, they shut out new good ideas as “feature creep”, and most importantly they encourage the false idea that a feature is “done” after the delivery date.  (It’s much better to deliver the bare minimum usable core of the feature, observe how users deal with it, and adjust your plan from there anyway!)

Happy planning! :) Planning is a prioritization exercise, not a battleground.

Share your tale: We’d love to hear it. Connect with us: Here
