#1 Intro
Back in the day, I spent 2.5 years writing a master's thesis about how cities transform themselves. I had the time of my life doing that. Then I went into practice, the thick of it: masterplans, presentations, budgets, construction sites, the stress of missed deadlines, the arguing with consultants. Every time I had the option to “chill and just write” I was pulled back into real life.
But writing is fun, it’s good for you, and maybe somebody will read it and you’ll have a discussion. So I'm starting a personal challenge: writing about design, the kind of writing where you sit with an idea until it becomes yours.
Here's why now: in the era of AI-generated everything, opinion and taste are the currency of the future. It looks like anyone can produce an image or generate a plan. The thing that can't be automated is a point of view formed by years of building things, fighting for design intent, and watching ideas die in coordination meetings.
I'm going to write about that.
#2 Fundamental Flaw of AI in Design
Designers often can't describe the image in their head, and that's not really a lack of skill; it's the way creativity flows through the mind. And it's exactly why AI image generation has a fundamental flaw baked into it.
What doesn't work is typing a paragraph describing your vision and expecting anything real to come out right away. To be fair, sometimes it works, and honestly, that's when I start to breathe faster and check unemployment rates. But mostly it doesn't, because if you're really after a creative process - unfortunately for our productivity (but fortunately for us human beings) - there's no way to describe the image in your head until you put it in front of you. That image exists as a feeling, a spatial instinct, that only becomes real when your hand hits paper (or the cursor hits the endless digital canvas).
Donald Schön called this "reflection-in-action" - a conversation with the material where you draw, see what emerged, and respond to it. Gabriela Goldschmidt took it further: her research on sketching showed that designers literally discover ideas in their own drawings that they didn't consciously put there. Early design sketches don't depict a finished mental image; they help to generate it. And that matters for AI, because the weakness of language-first image generation is exactly this mismatch with how design ideas form in the first place.
So where does AI fit? I see two useful ways to nanobanana, at least for now:
First: the What-If. Feed the machine a mash-up like "boho aesthetics cafe themed for millennial culture" and react to what comes back, not as a final image, obviously, but as friction. The tricky part is to listen to yourself, hand-pick what works, and articulate what's good and what's not; that's where the experienced eye and cultural awareness kick in (aka Taste and Opinion).
Second: the Loop. Start with what's given, sometimes a "white box". Describe one layer of elements, see what comes back, and refine your description. Then, once you're happy with it, move to another layer. Every step is a micro-conversation - reflection-in-action, like peeling an onion in reverse. The AI doesn't replace the designer's eye (aka Experience), it makes the loop faster.
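If it helps to see the shape of the Loop, here's a toy sketch in Python. None of this is a real API: generate_image() is a placeholder for whatever model you actually use, and every judgment call stays with the human, at the input() prompts.

```python
# A toy sketch of the Loop, not a real tool. generate_image() stands in
# for whatever image model you call; the looking and judging stays with you.

def generate_image(current_state: str, prompt: str) -> str:
    # Placeholder: a real version would call an image model here.
    return f"{current_state} + [{prompt}]"

def design_loop(white_box: str, layers: list[str]) -> str:
    image = white_box  # start with what's given
    for layer in layers:
        prompt = input(f"Describe the '{layer}' layer: ")
        while True:
            image = generate_image(image, prompt)
            print("Came back:", image)
            # Reflection-in-action: look at what emerged, then decide.
            if input("Happy with this layer? (y/n) ").lower() == "y":
                break  # move to the next layer of the onion
            prompt = input("Refine your description: ")
    return image

design_loop("white box", ["massing", "materials", "light"])
```

The point isn't the code, it's the structure: one layer at a time, refine until it feels right, then move on.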
Writing a prompt from pure imagination is a myth and a path to disappointment with technology. You need to see first, then describe, then refine. In the era of AI, the designer's job isn't generating images, it's knowing what to feel when you look at one.
#3 Jony Ive knows something
I used to sketch all the time, and then I joined a big office and quietly stopped, not as a conscious decision, more like the environment just didn't ask for it. Big offices have their own momentum and logic, and honestly a lot of it is genuinely impressive. The coordination and the ability to push enormous projects forward - I respect that, but somewhere along the way the sketch was lost.
Perhaps I didn’t want to be too loud. Sketching carries a pressure that digital work doesn't: when you draw, you have to decide what matters. Reality around us is endless in its detail, but a sketch forces you to pick the essence, what's enough of the space (or the idea), and that's uncomfortable.
However, I can't stop thinking about this: neuroscientist Giacomo Rizzolatti, who spent decades studying mirror neurons, makes one profound point in Mirrors in the Brain: our motor system - the part of the brain that moves the body - is how we understand the world in the first place. We perceive space through what our bodies can do with it: how we can reach into it, move through it, feel it with our hands. The motor system is not a servant of cognition; it is cognition at its most fundamental layer.
Reality is physical, bodily, gestural, and our hands are our guides into the deepest layers of the brain.
Draw the space and you discover what your body already knows about that space: the weight of it, the tension, the proportion that makes you feel a certain way before you can explain why. A prompt can't access that, because words are not a spatial act.
Now I'm pushing myself back into a somewhat uncomfortable zone of picking up the pen again. And the thing is, AI makes this more urgent. The more content we can generate effortlessly, the more critical it becomes to know what to generate.
When I read about Jony Ive (allegedly) working on an AI pen, I thought: omg, that guy knows something.