This is a really strong breakdown of why AI products struggle to stick, especially around trust and predictability.
One thing I keep seeing, though, is how much of the conversation centers on automation, when that’s only one slice of what AI can actually do. A lot of the trust issues come from trying to force AI into fully autonomous roles before people are ready to rely on it that way.
In practice, some of the most valuable use cases are the ones that don’t try to replace the human at all. They help people think, surface patterns, pressure test decisions, or make sense of complexity faster. Those uses tend to build trust more naturally because they support judgment instead of bypassing it.
If we expand how we think about where AI fits, not just what it can take over, we will probably see a very different adoption curve.
I think what people underestimate is that humans will always need critical thinking when writing the prompt that directs the AI's reasoning. And sadly, students are no longer developing critical thinking in academia because AI is writing all their essays.
The trust tax shows up inside organizations too, not just consumer products. An AI tool makes one bad recommendation in front of a client and the whole team stops using it. The demo worked, the pilot looked good, and then one incident undoes months of adoption work.
In luxury especially, where the margin for error with clients is close to zero, trust is built slowly and lost instantly. AI that cannot be reliably right about what matters most is worse than no AI at all. It adds noise without adding signal.
Agreed, this post will become a staple at my business going forward. As for automation, most if not all of that can be done via programmatic algorithms, bypassing the need for AI entirely. But it's the hot new thing, and most didn't give it the due diligence it needs to know what not to do.
Really useful content. Keep it up, guys!
Just an old animator's perspective... https://growingupaspen.substack.com/p/the-responsibility-behind-ai-innovation?utm_campaign=post-expanded-share&utm_medium=web