Why Innovation Fails


When entering a new market with a new software solution (be it SaaS or otherwise), innovation of course has all the pitfalls of poor market fit, the wrong product strategy, bad market timing and so on. The concerns generally centre on managing cost and risk, minimising upfront investment and staying flexible enough to deliver and validate the Product in the early days, all before decisions can be made to invest in scale-up and growth.

Technology and its application play a pivotal role in supporting this innovation flow, from execution to technology selection to solution structure. More precisely, it is the day-to-day decisions that determine whether we can get the Product to market fast.

There are three questions we must ask ourselves when deciding what to build, and the answer to each needs to take two main themes into account:

  1. Iterative and incremental: we usually apply this at the feature level when iterating a product, but it can also be applied on a day-to-day basis when iterating the technical solution for a feature.
  2. Daily micro-optimisation: we make multiple decisions every day about how a feature should be realised. When building a feature, or part of one, we routinely decide things like “we might as well build X because we know we need it in the future.” This is the essence of You Aren’t Gonna Need It (YAGNI). It is an old concept, yet where it fails is in making the hard decisions at the daily, micro level. Each micro decision on its own doesn’t seem like much in isolation; it may even look trivial. Put it together with all the other sub-optimal, often disconnected micro decisions, however, and a significant, collective inertia in feature throughput is created that is extremely hard to detect.

With those in mind, let’s get stuck into the three questions.

Will I fail if I make this decision?

This is a key question I learned from a friend who started a SaaS company and sold it after 22 months for $26 million. I joined the company and, as he took me through the tech stack and explained that he had chosen Tomcat as the application server, it was as if he was pre-empting my question. Why choose Tomcat? It’s a good server, but it’s old. There are better options out there. You can scale up more cheaply with servers that use non-blocking I/O, such as Netty and Vert.x. I didn’t get it. And then the lightbulb went on.

In engineering we always want to use the latest and greatest, and we have a collective amnesia about how frustrating it can be to work with a new technology that has multiple issues. Typically there is little or scattered guidance on the web about how to resolve them, save for reading the source code. Tomcat was mature enough not to be one of those technologies. Instead, my friend knew exactly what he was letting himself and his engineers in for when he selected Tomcat. It was a known quantity, with multiple resources on the web about how to fix common issues. The engineers had other, more pressing, market-differentiating problems to solve and could do without wasting precious brain cycles on a part of the technology stack that should “just work”.

So, would they fail if they chose Tomcat? No. Would they get the Product to market faster, thereby achieving faster validation, compared with using another server that had endless issues? Yes. Sure, the solution might be more expensive to scale than with other servers…and that would be a good problem to have.

Bringing it back

Iterative and incremental: if it turned out Tomcat could not scale, they could migrate. Not an insignificant task, but with a suite of tests and full automation, not horrendous. 

Daily micro-optimisations: significantly reduce the effort involved in investigating issues with a supporting technology and redirect that effort to building the Product.

Do I need to build the whole thing now?

When innovating, the answer is almost always no. We want to get something out in front of users as soon as we can. What, specifically? The minimum we need to prove the product has legs. Does this mean building the most sophisticated, most resilient or most performant solution first? Unlikely. As long as we ensure our architecture is decoupled and component-based behind clear (API) contracts, we will be able to isolate the complexity of the upgrade when the time comes.

As an example: we want to publish daily statistics that every user can view on their dashboard. Our ambition is to onboard 1 million users within two years, with thousands of statistics each, plus cross-user/global statistics for the entire user base. However, in the first six months we only expect around 100,000 users with a few user-centric (as opposed to cross-user) statistics as we test the water. The former probably requires a huge amount more compute and a technology that can handle the vast quantities of data making up those statistics (such as Google BigQuery), but the latter can probably get away with a few SQL statements that cover only that user’s data. It would be wasted effort to build the 1 million-user solution. Build the 100,000-user solution and, at the same time, know what would need to be done to scale up to the 1 million-user solution, without doing anything that would preclude that path. Building in optionality is key, and it’s a skill that takes time to master.
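To make that concrete, here is a minimal sketch in Python of what the 100,000-user solution might look like. The schema, names and statistics are all hypothetical; the point is that the dashboard depends only on a small contract, so swapping the per-user SQL backend for a warehouse-backed implementation later is an isolated change rather than a rewrite.

```python
import sqlite3
from typing import Protocol


class StatsProvider(Protocol):
    """The contract the dashboard depends on. Moving to a
    warehouse-backed implementation (e.g. BigQuery) later only means
    providing another class with this method."""
    def daily_stats(self, user_id: int) -> dict: ...


class SqlStatsProvider:
    """The 100,000-user solution: a few SQL statements that cover
    one user's data only."""
    def __init__(self, conn: sqlite3.Connection) -> None:
        self.conn = conn

    def daily_stats(self, user_id: int) -> dict:
        count, total = self.conn.execute(
            "SELECT COUNT(*), COALESCE(SUM(amount), 0) "
            "FROM events WHERE user_id = ?",
            (user_id,),
        ).fetchone()
        return {"event_count": count, "total_amount": total}


# Demo against a hypothetical events table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)", [(1, 10.0), (1, 5.0), (2, 1.0)]
)
stats = SqlStatsProvider(conn).daily_stats(1)
```

Nothing here precludes the 1 million-user path: the calling code never sees the SQL, only the contract.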

Building the simpler solution requires less effort and gets the Product to market faster.

Where this doesn’t work: foundational aspects. These are the core architecture and infrastructure required to deliver the product: the software build pipeline; the platform the product will run on (did you choose Kubernetes? Serverless? Something else?); and any other aspect that is good practice for ensuring confidence that what was delivered actually works, and for documenting how different engineers can work on different parts of the platform seamlessly.

Bringing it back

Iterative and incremental: build the simple solution and iterate as we determine value and increase complexity.

Daily micro-optimisations: the simpler requirements are likely to be met with the current technology stack. This decreases the overhead of learning a new technology, establishing vendor relationships, integrating the technology into other aspects of the platform (e.g. tracing), build/test/deploy mechanisms and so on.

What can I get away with NOT doing?

As engineers, we’re used to always creating the solution that works in both success and failure scenarios, while taking into account a myriad of other system properties and activities, such as: penetration testing, real-time security analysis of the production system, performance testing in all its forms, availability requirements, resilience engineering activities and defining associated metrics such as Mean Time To Recovery. This is not an exhaustive list.

If we fail or there is a security breach, what material impact will it have on the product so early in its lifecycle?  Most likely very little. If we have no users, the effect is almost nil.


What we should be thinking about first and foremost is: how do we get users? How do we keep them? Do they like the product we’re building, and does it impact their lives in a positive way?


Don’t forget about these failure scenarios altogether, though. Some activities are almost free and are a known quantity, such as scanning open source libraries for security vulnerabilities, with (near-)automatic remediation should one be available. Additionally, make sure the architecture can support scaling not just teams (h/t Domain-Driven Design) but also increases in workload. Provision the ability to build in resilience without re-architecting the entire system. As a simple example, store data in a single database instance, but use a technology that has options to run in a resilient configuration later (e.g. active/passive, active/active). Again, building in optionality is the key principle here.
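A small sketch of what that optionality might look like in the data layer, with hypothetical names throughout: day one runs a single primary, but the configuration shape already admits replicas, so an active/passive setup later becomes a configuration change plus a failover routine rather than a re-architecture.

```python
from dataclasses import dataclass, field


@dataclass
class DatabaseConfig:
    """Hypothetical database config that builds in optionality:
    replicas default to none, but calling code never assumes that."""
    primary: str
    replicas: list = field(default_factory=list)

    def write_endpoint(self) -> str:
        # Writes always go to the primary.
        return self.primary

    def read_endpoint(self) -> str:
        # Day one: all reads hit the primary. Later: route reads to
        # a replica without changing any calling code.
        return self.replicas[0] if self.replicas else self.primary


day_one = DatabaseConfig(primary="db-1.internal")
later = DatabaseConfig(primary="db-1.internal", replicas=["db-2.internal"])
```

Callers ask the config for an endpoint instead of hard-coding one, which is the whole trick: the resilient topology arrives later without touching them.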

Now the question is, when do we actually do something about the activities we’ve kicked down the road? This is a co-operative business and technical decision, made at the point where we have hit a critical mass of users, or where a failure or inability to scale would hurt our brand or revenue, or perhaps where another metric deemed important for success has been reached. Then we can justify the cost (read: effort to build, test and validate) and the ROI. I like to use the concept of deferring a decision until the last responsible moment.

Use the effort that was saved from not doing both expensive and inexpensive activities to build more Product. 

A note here: data integrity can never be compromised. Without integral data, a product will not work, and engineers will spend many hours chasing down hard-to-diagnose bugs. Do whatever is necessary to maintain this property, and any other similar properties that are non-negotiable for your product solution.
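One cheap, day-one way to defend that property is to enforce integrity constraints in the database itself, so corrupt writes are rejected at the source. A minimal sketch (schema and column names are hypothetical):

```python
import sqlite3

# Enforce integrity in the database: foreign keys and CHECK
# constraints cost almost nothing to add up front.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE payments (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),
        amount  REAL NOT NULL CHECK (amount > 0)
    )
""")
conn.execute("INSERT INTO users (id) VALUES (1)")
conn.execute("INSERT INTO payments (user_id, amount) VALUES (1, 9.99)")

# A write that would corrupt the data (a payment for a user that
# doesn't exist) is rejected outright by the database.
try:
    conn.execute("INSERT INTO payments (user_id, amount) VALUES (42, 5.0)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

Hours of chasing a phantom payment later are traded for an immediate, loud failure at write time.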

Bringing it back

Iterative and incremental: build in resilience, performance, security and so on as the risk of not doing so increases, delivering them as iterations of the solution.

Daily micro-optimisations: we’ve prioritised what we have to build over what we want to build. Keep efforts focussed on the end goal. Be outcome driven.

So where does that leave us?

The title of this blog is Why Innovation Fails. From a technology solution point of view, I have seen innovation fail through doing more than is necessary to build and validate the Product. Engineering effort is expensive; technology is complex and changes fast. Be hyper-focussed on what needs to be done, build in optionality, and defer decisions and/or activities until the last responsible moment. Deciding what not to do is as important as deciding what to do. Use that freed-up effort to build more Product.

Focus on the key property of Agile development: iterative and incremental. Truly applying this can mean delivering the simple solution as fast as possible, then iterating, then incrementally improving not only the delivered feature but also the operational features, such as security and availability, that support it.

Remember, getting to the moment when large numbers of users are clamouring to use your Product is more important than figuring out a resilient active/active data replication set up across countries.

Do you need help to get your product to market faster – and in the same time, reduce the risk of failure? Get in touch.

Written by Mik Quinlan