Validating startup ideas - our experience
In June 2023, Alex Freas and I teamed up to build Tangential.app. This is my retrospective on the journey and on what we learned about becoming better at validating ideas.
Much has been written about founding startups. While some of that information is useful, it is hard to internalise the lessons unless you experience the situations yourself. By writing this, I hope to retrace the steps of our startup Tangential and share what I’ve learned from real experiences. Perhaps it can help someone out there avoid one of the many pitfalls a founding journey holds.
In this particular piece, I have decided to focus on the birth phase of a startup - validating ideas. I will do my best to write about everything that followed on our startup journey, but for now, let’s dive into validating and testing hypotheses.
Co-founder - swiping right.
Like many relationships nowadays, ours began online. I met my wonderful co-founder Alex Freas on the YC co-founder matching platform. During our second meeting, we bonded over diverse topics and some beers. That evening, we converged on the topic of engineering and our deep appreciation for the complexity hiding behind every software product. Behind every great product is a balancing act of how humans, processes, and technology come together to build something new. Based on that conversation, we knew we wanted to build something that would improve software development.
Exhausted from a rather dry three-month ideation phase, I was happy to find a space we were both passionate about. Passion for a space and its users is definitely a requirement for a founder.
An easy way to an idea - incremental ideation.
Alex was already deep into a project named Truffle, which became our starting point for ideating. Have you ever wondered how efficient a company could be if everyone had the knowledge of the whole organisation at their fingertips? Truffle did exactly that: it is a clever Slack tool that answers questions based on past conversations in Slack channels. Technically, Alex realised this with a combination of a search engine and Large Language Models (LLMs), answering questions with striking relevance.
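To make the pattern concrete, here is a minimal sketch of that search-plus-LLM approach (often called retrieval-augmented generation). The message corpus, the naive token-overlap scoring, and leaving the LLM call as an assembled prompt are my own simplifications for illustration, not Truffle’s actual implementation.

```python
from collections import Counter

# Hypothetical corpus of past Slack messages (stand-ins, not real Truffle data).
MESSAGES = [
    "The payments integration is blocked on the new auth service.",
    "We shipped the retry logic for webhook deliveries yesterday.",
    "Team Atlas owns the auth service migration this quarter.",
]

def score(query: str, message: str) -> int:
    """Naive relevance score: number of overlapping lowercase tokens."""
    q, m = Counter(query.lower().split()), Counter(message.lower().split())
    return sum((q & m).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k messages most relevant to the query (the 'search engine' step)."""
    return sorted(MESSAGES, key=lambda msg: score(query, msg), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the question in the retrieved context before asking the LLM."""
    context = "\n".join(f"- {msg}" for msg in retrieve(query))
    return (
        "Answer the question using only the Slack messages below.\n"
        f"Messages:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("Who owns the auth service work?"))
```

In a real system, the token overlap would be replaced by a proper search index, and the assembled prompt would be sent to an LLM rather than printed.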
Despite its cool factor, we knew Truffle lacked a specific target persona and use case. Without that critical focus, it is hard to argue why potential customers should use the software. Therefore, we agreed to start with the rough idea of Truffle and find a specific user problem.
We dove into our software development experiences, searching for a problem that Truffle could solve. However, our approach was backwards: we were trying to fit a problem to our solution, not the other way around. We were already looking for things in software development that could be solved by having the collective knowledge available in some form.
In software development, teams juggle a lot of moving parts. These could be integrations with external services (e.g. payments) or dependencies introduced by other teams. The common struggle we identified was managing that complexity. Teams were often in the dark about things happening elsewhere in the organisation, which led to duplicate work, missed dependencies, and all kinds of fun inefficiencies.
From our observation, companies try to manage this challenge with meetings or written update messages. Feels inefficient, doesn’t it? That is what we thought - so we deduced that, with so much time lost in engineering teams, this challenge could turn into a real business opportunity.
Mom-testing and hypothesis setting.
Now, we needed to test our assumption and decided to mom-test it. We did not write it down at the beginning (we should have), but our assumption looked something like this:
Product Managers and Engineering Managers want a better way to stay on top of what is happening in adjacent teams.
Looking back, this hypothesis is lacking in multiple dimensions.
We should have put more emphasis on the specific problem. The truth is, people will only work with a scrappy startup if their problem is big enough and the downstream effects are truly painful.
Additionally, we mainly relied on deductive thinking. From our core assumption that inefficiencies existed, we deduced that our persona must care about those inefficiencies. This type of thinking led us to wishful optimism and wrong assumptions.
Ideally, we should have spent more time crafting a cleaner hypothesis - one focused on validating the core assumptions about the pain point and the willingness to change. If the pain point can’t be validated, it’s time to move on from the idea.
A first indication that we were chasing a fake problem was the response rate. Some of the interviews came from our network, but we acquired most through automated LinkedIn outbound, where the connection acceptance rate was around 5%. There could be many reasons for that, but I believe we did not nail a pain point in the message. I will write more about our learnings on user outreach in a separate article.
From that outreach message, it is painfully clear that we were already solution focused. If we had written down our hypothesis and been honest about it, we would have had to admit that we were already set on solving specific inefficiencies - concretely, those resulting from text-based communication and the poor discoverability of team updates.
As you can see, not setting a clear hypothesis around the pain point and the willingness to change leads to ineffectiveness and fuzziness in the validation process.
Take-away 1:
Make sure to get your hypothesis right. Focus on the problem you are trying to validate.
Getting lucky and invalidating the first idea.
We got lucky and were still able to invalidate our first hypothesis, because we were more diligent about the questions we asked in interviews. One question we asked was: “Were there any recent instances where things went wrong because of incomplete communication or a missing big picture?” Across the roughly 25 interviews we did, the lack of interest was easy to spot. It materialised in answers like “I don’t recall an instance where…” or “I usually know what is happening in our team/squad.” A VP of development even told us that this whole cluster of problems did not exist for them, as they had a great software architecture practice and used the C4 model to communicate the details. So it turned out that most EMs and PMs did not manage an overwhelming number of topics and therefore did not have the problem.
In the end, I think we fell into a very typical trap many founders face: the potential pain point is already solved in many companies by adhering to good processes. In this case, good processes and architecture made handling the complexity easier.
In early July, we had enough signals to understand that we were chasing a phantom pain point. This is when we started writing down our hypotheses and our results.
For me, there are a few more take-aways.
Be aware of deductive and incremental ideas. It is easy to carry over wrong assumptions and solution bias.
Clarify the user’s problem in the hypothesis. Iterate until the problem is so easy to communicate that a kid would understand it. Otherwise, you might not be drilling deep enough.
By now, we were in love with the idea that someone in software development must face the challenge of managing complexity by better understanding the big picture. While we hypothesised that it might be more of an issue for the higher-ups, one tip brought us to the next iteration: an engineering director we talked to mentioned that this problem usually exists for technical program managers (TPMs) - and it did! I will write more about how we ended up building something there, and how validation could have been accelerated by better outreach and a stronger push for commitment.
If you are going through validation yourself, please feel free to reach out to me. Avoiding the many traps of validation is extremely challenging, and I am happy to share more of our experiences with fellow founders.