Looking back over the past 10 years we can see a graveyard of technology that received huge hype, but ultimately never became “a thing”. Whether it failed to cross the chasm or its adoption is simply still too early, you can picture exactly what I am talking about.
What’s more, every time a new hype cycle started there were people in every organization screaming that the product was going to be left behind and the organization was doomed if it didn’t embrace the wave.
Years on, can you imagine what a complete flop your product would have been if you had gone all in on experiences in VR, issuing NFTs for the most inconsequential things or even fully integrating bitcoin into your checkout experience?
While it would be easy to deal with the distraction of generative AI, whether annoying or anxiety-inducing, by mentally putting it in this graveyard, it would be foolish.
Pandora’s box is open. Anyone with an hour of time, Google and some curiosity can see that this new generative AI technology is real and already here.
Changing technology, constant principles
Much like the tablet, smartphone, phone and PC before it, our underlying technologies are constantly changing, whilst the underlying principles of users and their problems remain the same.
Innovation comes from the discovery of user problems that users themselves value, and solving them in faster, cheaper and more effortless ways.
The core problem of getting from A to B was solved better by the car than the horse, but the problem remained the same.
What this means for generative AI is that the paradigm of problems that can and should be solved needs to shift.
What was previously so impossible and unaffordable it didn’t make sense to think about, can now be done through an API in a couple of minutes.
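To make that claim concrete, here is a minimal sketch of what “through an API” looks like in practice. The endpoint, model name and prompt below are illustrative assumptions, not a recommendation of any particular vendor; the point is how little code stands between a product team and a capability that was recently out of reach.

```python
import json

# Placeholder endpoint for a hosted chat-completion style API.
# The URL and model identifier are assumptions for illustration only.
API_URL = "https://api.example.com/v1/chat/completions"


def build_summarize_request(feedback: str) -> dict:
    """Build a request payload asking a language model to summarize
    a piece of raw customer feedback — a task that was previously
    unaffordable to automate at scale."""
    return {
        "model": "example-model",  # assumed model identifier
        "messages": [
            {"role": "system",
             "content": "Summarize this customer feedback in one sentence."},
            {"role": "user", "content": feedback},
        ],
    }


payload = build_summarize_request("Checkout was slow and confusing on mobile.")
body = json.dumps(payload)  # this JSON body is all the API would need
```

In a real integration the payload would be POSTed to the provider’s endpoint with an API key; everything else — hosting, training, inference — is the provider’s problem, which is exactly why the cost of experimenting has collapsed.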
All of a sudden, the painful problems that have been too hard to solve require your attention.
Even though, as established, generative AI is real and already here, there are loud voices from the top and the bottom screaming either that you need AI in everything, everywhere, all at once, or that you should ignore it because “it’s a distraction”.
As with all new product initiatives, be careful. Reacting impulsively to these voices can lead to very public failures and, in many cases, skips the core strategic process that we believe all product teams should be going through.
To make sure you don’t ship dud projects, focus on the principles and maintain the expectation that you are experimenting with AI. After all, your product may still reach the market too early, or some unforeseeable circumstance may lead to failure.
We are sure there are loud voices inside the blogging CMS platforms screaming that they need to add one-click language models to an author’s blog archive, such that any blog reader can “chat” with a blog.
But does that make any sense?
Building such a feature would surely be a decent investment of time and attention, but what will likely happen when it ships?
Our bet - crickets.
Do users primarily read personal blogs for specific technical answers? Likely not. Users read the blogs of people they admire: yes, to learn, but also to understand the author’s perspective, to be entertained by their curation of ideas and opinions, and to stay up to date with recent developments.
The user doesn’t know what to prompt an AI about in a chat interface.
Now, would a technical blog on the mechanical intricacies of cars create value by providing a chat interface? More than likely, as we would bet most visitors come to such a blog looking for specific answers to specific questions - questions that could just as easily be asked of a chat interface.
If you ship the former out of pressure rather than the latter, you’ve ignored the underlying principles and shipped a dud. The quieting of voices from delivering what was asked will only be temporary before you’re put on the spot for being so reactive with your roadmap.
Model for AI strategy
The first step of the model for reviewing or building your AI product strategy is completely non-negotiable: you need to know your users and understand their problems. Going with your gut on what you think you know, without having done a store visit or a user interview in a couple of quarters, will lead to failure here. Be honest with yourself, act, and continue.
Once you have this knowledge you need to explore and document the jobs-to-be-done for your customer. We’re not necessarily advocating you go all in on JTBD if it is not native to your existing organization, but questioning and documenting why users have chosen you, and what they are hiring you to get done, is fundamental here.
With an outline of what you know your customer wants you to do for them, you need to evaluate how well your product or organization is performing in completing this job or task, and by what measure.
And the final step is to understand whether AI can increase this performance or render the problem/job completely irrelevant.
An example of the final step: can you use AI to better deliver a single sign-on experience, or can you use AI through a user’s webcam to continually authorize access to applications (think always-on Face ID) and remove the need for SSO altogether?
For many organizations if AI is to render the problem/job completely irrelevant, they will not seek to be the replacement as it can be a hot button political issue. A Kodak moment if you will. Now is the time to avoid disaster and raise this issue if you would like to continue innovating.
If it seems that generative AI can enhance how your product/organization completes the job for the user, a product leader needs to be asking “Can AI impact this measure in a material way?” The critical element of materiality comes from understanding by what measure a user is judging success with your product.
Granularity of evaluation
Up until this point we have been specifically vague as to what the product or size of product is that you are evaluating the strategy for.
That’s because PMs all the way up to the C-suite need to be working on the AI strategy for what is under their purview, whether that be a customer app or a whole swathe of departments. Many products that PMs work on will be made completely irrelevant by a higher-level strategy to replace a department.
Again, whilst this may be an uncomfortable topic to think about, the market is moving, and as practitioners we have a duty to consider the impact of AI.
The evaluation levels likely look like this: