Imagine sitting on the roof of a tall building, looking down. A friend waves at you. You want to go down and meet them. The only options that cross your mind are taking the stairs or the elevator. Jumping off the rooftop is not an option you ever consider, and for good reason (call it gravity). The thing with AI is that those restrictions no longer apply. Jumping is now an option, because with AI you have the power to fly.
Being an engineer is a double-edged sword when it comes to using AI. On one hand, you know how to ask the right questions and guide AI toward a solid implementation. A non-technical user might never think about the security implications of an implementation, or about concepts like reusability, maintainability, and extensibility. They are likely to create something that works, but is also fragile, buggy, and vulnerable.
On the other hand, knowing so much about how elevators and stairs work can make engineers too narrow-minded, blinded to the crazy possibilities AI brings. In our example, they never consider jumping as an option. Gravity pins not just their bodies but their minds to the ground.
That is the unexpected advantage of a non-technical user. These users are like aliens teleported to that rooftop straight from their home planet. They do not know what stairs or elevators are, and they do not fully grasp the concept of gravity in this foreign world. So when they want to get down to the street, their natural instinct is to jump, seeing it as the simplest path forward.
The real challenge for an engineer is not only to avoid building fragile systems but also to make the mental leap from what has always worked to what could be possible now. Engineers are trained to reason from constraints, to optimize within known boundaries, and to avoid risks. Those habits are valuable, yet they can become a form of self-imposed limitation when the environment itself changes.
With AI, some constraints vanish or shift. Tasks that once required complex, rule-based pipelines can now be reimagined as data-driven, adaptive systems. The first step is a mindset shift: treat existing constraints as hypotheses to test rather than as immutable laws.
That means designing small experiments to check whether a constraint still applies; building quick prototypes to demonstrate new possibilities and learn from real feedback; pairing early with curious non-technical thinkers who suggest unexpected approaches, while using engineering judgment to make those ideas safe; and creating controlled sandboxes where creative exploration can happen without endangering production systems. It also means documenting assumptions explicitly so you can revisit them as the technology evolves. The goal is to combine the engineer’s discipline with a willingness to explore unconventional solutions.
Engineers do not need to abandon their strengths; they need to add one more skill: the ability to suspend a few long-held assumptions long enough to see whether a radically different path works better. When discipline meets curiosity, AI stops being a threat to good engineering and becomes a multiplier for it.