By this point you may understand that building or enhancing your features with generative AI would increase the value delivered to users and help you fend off existing or new competitors, but you may be wondering: how do you make a go/no-go decision on this, or prioritize it above other items on the roadmap?
If, after considering the technology and the features that could come from it, it is clear that it would not deliver any meaningful progress towards your goals, then most likely you should not be investing in it at this time. Less likely, but worth noting: there is a small chance that the goals themselves are no longer appropriate, and that is worth revisiting too.
I appreciate that it may seem like I have skipped the details of how to judge "meaningful progress", but every team does this differently, and we're not here to change that. What we are looking to explore in this document is how to run that evaluation in the first place.
First off: whilst the technology is so new and powerful that it can seem magical when demoed in the right environment, we need to remove the shine and return to first principles to evaluate its utility and implementation.
So replace generative AI with a white-labeled "new technology" in your mind. It could be a smartphone, or a Google Home, for all we care; we just need to get comfortable with properly evaluating a new piece of technology.
Now imagine you are running the same company without any knowledge of this new technology, and a PM comes to you and says, "We need to think about integrating this technology into our product." How do you respond?
Hopefully, you’ll see that your response would be effectively the same no matter what technology is brought to you.
You need to evaluate the new technology against the existing vision, then strategy, then goals of what you are working on.
Every item in your backlog and roadmap should have a justified purpose: moving your product towards your vision and strategy, as measured by your goals.
For most teams, the reality is that no one can know in advance that a feature will move a goal; they can only research and estimate that it will. Every build can be treated like a well-educated experiment, and any new-technology feature experiment should be evaluated fairly against the other experiments.
For many teams, that takes the form of an ICE (Impact, Confidence, Effort) score or something similar, and that is what we would recommend in this case.
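To make the scoring concrete, here is a minimal sketch of ranking backlog experiments by the ICE formulation the text uses (Impact and Confidence reward a feature, Effort penalizes it). The scoring scale, the division-by-effort formula, and all the example experiments and numbers are illustrative assumptions, not prescriptions; teams vary the formula (some multiply an "Ease" score instead of dividing by Effort).

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    impact: int      # expected impact on goals, 1-10 (assumed scale)
    confidence: int  # confidence in the estimates, 1-10
    effort: int      # estimated effort, 1-10 (higher = more work)

    @property
    def ice(self) -> float:
        # One common variant: reward impact and confidence,
        # penalize effort by dividing it out.
        return self.impact * self.confidence / self.effort

# Hypothetical backlog; the generative AI feature competes
# like any other experiment.
backlog = [
    Experiment("Generative AI summaries", impact=8, confidence=4, effort=7),
    Experiment("Checkout flow polish", impact=5, confidence=8, effort=3),
    Experiment("New onboarding email", impact=4, confidence=7, effort=2),
]

for exp in sorted(backlog, key=lambda e: e.ice, reverse=True):
    print(f"{exp.name}: {exp.ice:.1f}")
```

Note how a shiny new technology can still rank below mundane work when its confidence is low and its effort is high; that is the point of scoring it fairly.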
The key takeaways from our opinion on generative AI are these: