Build in Public #17: "She Looks Frumpy"
My mother-in-law looked at her AI-generated book and said she looked frumpy. She was right. It exposed a blind spot I had about age, identity, and what AI actually assumes about older people.
I build Memolio, a personalised book for grandparents made from real memories and photos. Every Monday I write about what’s actually happening: the things that ship, the things that break, the things I didn’t see coming.
My mother-in-law looked at her illustrated book and her reaction was immediate. Not “oh it’s lovely” and not “I have some notes.” Just: “I look frumpy.”
I found it funny, honestly. It wasn’t the kind of feedback I was expecting from a user test. But then my wife looked over her shoulder and agreed. And at that point I stopped finding it funny and started paying attention, because two people who know this woman well had the same reaction to how the AI had drawn her.
I went back and looked at the code.
The problem was hiding in plain sight
Every grandparent in every Memolio book — regardless of their actual age, regardless of anything the user had told us about them — was being described to the image model as “an elderly woman” or “an elderly man” in “age-appropriate casual clothing.”
Those two phrases together. That’s basically a prompt recipe for someone hunched in a cardigan.
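If you want to picture it, the default amounted to something like this. A minimal sketch in TypeScript, since the actual code isn't in this post and the names here are mine:

```typescript
// A minimal sketch of the buried default. The real Memolio code isn't in
// this post, so the function and field names here are hypothetical.
function describeSubject(person: { gender: "woman" | "man" }): string {
  // Every grandparent got this exact description, regardless of their
  // actual age or anything the user had entered about them.
  return `an elderly ${person.gender} in age-appropriate casual clothing`;
}

// That description was then baked into every scene prompt sent to the
// image model.
const scenePrompt =
  `Storybook illustration: ${describeSubject({ gender: "woman" })} ` +
  `baking bread with her grandchildren`;
```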
The “how did I not see this” feeling hit pretty hard. I’d been so focused on getting the faces right, the illustration style consistent, the story prompt producing the right scenes, that I’d never stopped to think about what the model was actually being told about how this person dressed. It had never been an explicit decision. It was just a default that crept in, and I’d never questioned it.
The thing is, I had an image of “older people” in my head that came from my own grandmother. Her generation. Her clothes. And I’d just... baked that into the product without realising it.
A generational shift I’d completely missed
Here’s the thing about someone who is 65 or 70 right now: they grew up in the 60s and 70s. For a lot of people that age, fashion wasn’t just something they wore. It was a core part of their identity. They had a look. They had opinions about it. Some of them still do.
The “age-appropriate casual clothing” default had no way of knowing that. It was treating every grandparent as a generic category of person rather than as an individual who has spent decades figuring out how they want to present themselves to the world.
The fix was straightforward once I saw it: ask the user. What’s this person’s clothing style? We added a simple preference input — smart and polished, casual and relaxed, sporty, artistic, that kind of thing. Now a woman in her late 50s who describes herself as “smart and polished” gets illustrated in tailored trousers and crisp shirts instead of shapeless knitwear. Four files, about thirty minutes of work. The kind of change that probably matters more to customer satisfaction than anything else I’ve shipped this month.
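In code, the change looks roughly like this. Same caveat as before: the option list and the clothing descriptions are illustrative, not our exact production wording.

```typescript
// A sketch of the fix. The style options and descriptions are
// illustrative, not Memolio's real implementation.
type ClothingStyle =
  | "smart and polished"
  | "casual and relaxed"
  | "sporty"
  | "artistic";

// Translate each user-facing preference into concrete clothing the image
// model can actually draw, replacing the one-size-fits-all default.
const CLOTHING_BY_STYLE: Record<ClothingStyle, string> = {
  "smart and polished": "tailored trousers and a crisp shirt",
  "casual and relaxed": "well-fitted jeans and a soft jumper",
  "sporty": "modern activewear and clean trainers",
  "artistic": "bold colours and layered, expressive fabrics",
};

function describeSubject(person: {
  gender: "woman" | "man";
  style?: ClothingStyle;
}): string {
  // Fall back to a neutral description when no preference was given.
  const clothing = person.style
    ? CLOTHING_BY_STYLE[person.style]
    : "comfortable everyday clothing";
  // This sketch also drops "elderly"; whether any age word belongs in the
  // prompt at all is a separate question the post doesn't settle.
  return `a ${person.gender} wearing ${clothing}`;
}

// "a woman wearing tailored trousers and a crisp shirt"
console.log(describeSubject({ gender: "woman", style: "smart and polished" }));
```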
But the real lesson is bigger than the fix
What I keep coming back to is how the model ended up with those assumptions in the first place. I wrote “elderly” and “age-appropriate casual clothing” into the prompt, but the model decided what those words look like. It learned that from images. And images, across decades of media and stock photography, have a very specific idea of what an older person looks like. Dignified. Soft. Comfortable. Sensible shoes.
AI reflects the assumptions baked into the culture that produced the training data. That’s obvious when you say it out loud. It’s much less obvious when you’re in the middle of building something and just trying to get a feature to work.
This is why talking to actual users matters so much. Not surveys, not analytics, not me reviewing test books on my laptop. My mother-in-law looking at a picture of herself and saying “I look frumpy” is information I could not have generated any other way. My blind spot about what “older” looks like is a human blind spot, shaped by my own experience. The model has the same blind spot, at scale, shaped by decades of the same cultural assumptions.
The only way to find those gaps is to put the thing in front of real people and let them tell you what’s wrong.
The harder problem is still ahead
The fix I shipped is good but not complete. Asking people to describe their clothing style works when they have a clear sense of their own aesthetic. A lot of people don’t, or they describe themselves in ways that don’t map cleanly onto what the image model thinks “smart casual” or “artistic” looks like. Most people don’t neatly fit into a box.
The ideal version of this is probably more granular: specific items, specific eras, specific references. But that’s a longer form to fill in and a harder prompt to write. For now, having any preference at all is a massive improvement over a universal cardigans-for-everyone default.
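If that granular version ever gets built, it might take a shape like this. Purely speculative; nothing like it exists in the product yet, and the field names are invented:

```typescript
// Speculative shape for a more granular wardrobe preference. Nothing like
// this exists in the product yet; field names and examples are invented.
interface WardrobeProfile {
  signatureItems?: string[]; // e.g. ["silk scarves", "Chelsea boots"]
  era?: string;              // e.g. "early-70s mod"
  references?: string[];     // e.g. ["dresses like a retired architect"]
}
```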
Small change. Meaningful impact. One mother-in-law who might now look like herself in the book her family is making for her.
If you’re building something where AI has to represent real people, I’d genuinely like to know how you’re handling this. It feels like an unsolved problem across a lot of products. Drop a reply or find me on the blog.
And if you have a grandparent whose story deserves to be told, join the waitlist for early access when Memolio opens up properly: https://blog.memolio.io/subscribe
Memolio creates personalised illustrated books for grandparents from real memories and photos. It’s not yet publicly available. Join the waitlist: https://blog.memolio.io/subscribe
