Discussion about this post

Sherry Heyl

This is a really strong breakdown of why AI products struggle to stick, especially around trust and predictability.

One thing I keep seeing, though, is how much of the conversation centers on automation, when that’s only one slice of what AI can actually do. A lot of the trust issues come from trying to force AI into fully autonomous roles before people are ready to rely on it that way.

In practice, some of the most valuable use cases are the ones that don’t try to replace the human at all. They help people think, surface patterns, pressure test decisions, or make sense of complexity faster. Those uses tend to build trust more naturally because they support judgment instead of bypassing it.

If we expand how we think about where AI fits, not just what it can take over, we'd probably see a very different adoption curve.

Jake Powell

I think people consistently underestimate how much critical thinking humans still need when writing prompts to direct AI's reasoning. And sadly, students are no longer developing critical thinking in academia because AI is writing all their essays.

