
LLMs are neither smart nor dumb. Period. They’re tools.

4 min read · Jun 16, 2025


Both in your toolkit — and, sometimes, like the guy who doesn’t know he’s being used.

Let me tell you a story.

Recently, I was working on localizing one of my apps into several Asian languages — specifically Korean, Japanese, and Chinese. For Korean, I could give some feedback since I happen to know a little. Even Japanese wasn’t totally alien; surprisingly, it shares some structure with Korean. But Chinese? To me, it just looked like… Chinese. (Pun fully intended.)

Normally, for tasks like this, I turn to my go-to coding sidekicks — Claude and ChatGPT — and they usually perform like productivity powerhouses. But this wasn’t a regular prompt-response session. It was localization — hundreds of keys, deeply contextual language, and nuances that only make sense when embedded properly. So I figured I’d take advantage of Gemini Pro’s much-touted massive context window — up to a million tokens. If I’m not wrong, that’s enough to fit all of Chekhov’s short stories and still leave room for a vacuum cleaner manual.
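For a job like this, even a million-token window benefits from some discipline. A minimal sketch of how one might batch hundreds of localization keys so each request stays within a comfortable budget; the 4-characters-per-token heuristic, the `batch_keys` helper, and the budget value are all assumptions for illustration, not anything from the actual workflow:

```python
# Hypothetical sketch: splitting {key: source_string} pairs into
# prompt-sized batches. The tokens-per-character ratio and the budget
# are rough assumptions, not measured values for any particular model.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English-like text.
    return max(1, len(text) // 4)

def batch_keys(entries: dict[str, str], budget: int = 8000) -> list[dict[str, str]]:
    """Group localization entries into batches whose estimated token
    count stays under `budget`, so no single prompt grows so large
    that the model starts skipping or repeating sections."""
    batches: list[dict[str, str]] = []
    current: dict[str, str] = {}
    used = 0
    for key, text in entries.items():
        cost = estimate_tokens(key) + estimate_tokens(text)
        if current and used + cost > budget:
            batches.append(current)
            current, used = {}, 0
        current[key] = text
        used += cost
    if current:
        batches.append(current)
    return batches

# Example: 500 keys split into budget-sized chunks.
strings = {f"menu.item_{i}": "Some source text to localize" for i in range(500)}
batches = batch_keys(strings, budget=2000)
print(len(batches), sum(len(b) for b in batches))
```

The point of chunking is exactly the failure mode described below: past a certain prompt size, even large-context models tend to drop or repeat keys, so smaller batches trade round-trips for reliability.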

For a while, it worked. We juggled hundreds of localized keys and got pretty decent results. But gradually, cracks began to show. Gemini started skipping parts of the prompt. Then, it repeated earlier sections I didn’t ask for. Something was… off. Not dramatically broken…


Written by Dragos Roua

Story teller, geek, light seeker and runner. Not necessarily in that order.
