Bilingual register in AI scout messaging — keigo, plain form, and the failure modes
Most failed AI-generated Japanese scout mails don’t fail on grammar — they fail on register. The grammar can be technically correct while the register is wrong for the candidate’s seniority, the company’s culture, or the relationship distance the message is establishing. This guide walks through the register mechanics that determine whether a scout mail produces a 3% reply rate or a 0.3% one, the keigo pitfalls AI consistently mishandles, and what the platform’s drafting layer needs to get right before unedited operation makes sense.
Japanese scout-mail register depends on four mechanics that English-trained AI consistently mishandles: keigo level (尊敬語/謙譲語/丁寧語 are not interchangeable for a stranger reaching out), formal opener choice (拝啓 / 敬具 vs simple greeting), paragraph break density (Japanese business mail runs longer between breaks than English-translated equivalents suggest), and JD-to-candidate hook translation (English JD bullets translated literally into Japanese rarely read as compelling hooks). Get all four right and the reply rate sits in the 2–4% range against passive-receive Japanese candidates. Miss any one consistently and the reply rate compresses below 1%. The structural difference between platforms isn’t language proficiency — it’s register awareness at the model layer.
Mechanic 1 — keigo level for the cold reach
A recruiter contacting a candidate they don’t know operates in a specific register space — formal but not rigid, respectful but not subordinate, professional but warm enough to invite a reply. In Japanese, this maps to consistent 丁寧語 (teineigo) for the body, 尊敬語 (sonkeigo) when referring to the candidate’s actions and accomplishments, and 謙譲語 (kenjougo) when referring to the recruiter’s own actions of contacting and proposing. The three are not interchangeable. A scout that uses 丁寧語 throughout reads as friendly but flat; one that overuses 尊敬語 reads as obsequious; one that misuses 謙譲語 for the candidate’s actions (e.g. attaching the humble 伺う to the candidate, as in 「お伺いいただく」 where 「お越しいただく」 is called for) reads as comically confused.
English-trained AI tends to default to a single register tier across the message — usually 丁寧語 — because that maps cleanly to neutral-formal English. The result is grammatically correct Japanese that reads as flat. The fix at the model layer is explicit register-tier handling per clause: who is the actor in this clause, are we describing the candidate’s action or the recruiter’s action, what’s the appropriate register tier for each. We tune for this in the platform’s drafting layer; many other AI scout systems don’t, which is why their unedited Japanese output reads as machine-translation-grade even when the grammar is correct.
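The per-clause routing described above can be sketched as a small lookup: resolve the actor of each clause first, then pick the keigo tier, then select the honorific verb form. This is an illustrative sketch only — the verb table, function names, and tier labels are invented for the example, not the platform's actual implementation.

```python
# Hypothetical sketch of per-clause register-tier routing.
# Verb table entries are illustrative, not exhaustive.
HONORIFIC_FORMS = {
    "聞く": {
        "sonkeigo": "お聞きになる",  # candidate's action (respectful)
        "kenjougo": "伺う",          # recruiter's own action (humble)
        "teineigo": "聞きます",      # neutral-polite body text
    },
    "見る": {
        "sonkeigo": "ご覧になる",
        "kenjougo": "拝見する",
        "teineigo": "見ます",
    },
}

def register_tier(actor: str) -> str:
    """Route a clause to a keigo tier based on who performs the action."""
    if actor == "candidate":
        return "sonkeigo"   # exalt the candidate's actions
    if actor == "recruiter":
        return "kenjougo"   # humble the recruiter's own actions
    return "teineigo"       # neutral-polite for everything else

def conjugate(verb: str, actor: str) -> str:
    """Pick the honorific form for a verb given the clause's actor."""
    return HONORIFIC_FORMS[verb][register_tier(actor)]

print(conjugate("見る", "candidate"))  # ご覧になる
print(conjugate("聞く", "recruiter"))  # 伺う
```

The point of the sketch is the routing step, not the table: a single-tier model skips `register_tier` entirely and emits 丁寧語 everywhere, which is exactly the flat output described above.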
Mechanic 2 — formal opener choice
Japanese business correspondence has a formal-opener convention — 拝啓 (haikei) at the open and 敬具 (keigu) at the close — that signals high formality. For a recruiter scout, the opener convention is genuinely contested: some recruiting cultures use it for executive-level cold reaches; some skip it for mid-career; almost none use it for tech-segment cold reaches where the candidate would read it as overly stiff. The decision is contextual.
Most AI scout systems either use 拝啓/敬具 universally (over-formal in 80% of cases) or skip it universally (under-formal for the 20% where it’s expected). The right answer is conditional: use it for senior bilingual roles in traditional Japanese industries (finance, manufacturing, certain consulting), skip it for mid-career bilingual roles in tech and digital, and tune the threshold based on the candidate’s tenure pattern (a candidate with 15 years at one mega-bank reads as expecting more formality than a candidate with three startup tenures averaging 18 months each). The platform’s drafting layer makes this call from the JD’s industry signal and the candidate’s tenure-pattern signal; the call is wrong roughly 5–8% of the time, which is one source of recruiter override.
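The conditional call described above can be expressed as a small decision rule over the two signals named in the text: the JD's industry and the candidate's tenure pattern. The industry set and the tenure threshold below are invented for illustration — the production threshold is tuned, not hard-coded.

```python
# Illustrative-only decision rule for the 拝啓/敬具 opener call.
# Industry list and tenure threshold are hypothetical values.
TRADITIONAL_INDUSTRIES = {"finance", "manufacturing", "consulting"}

def use_formal_opener(industry: str, tenures_months: list[int]) -> bool:
    """Return True when 拝啓/敬具 framing is likely expected."""
    if industry not in TRADITIONAL_INDUSTRIES:
        # Tech / digital cold reaches read the convention as overly stiff.
        return False
    avg_tenure = sum(tenures_months) / len(tenures_months)
    # Long single-employer tenure signals an expectation of formality;
    # short startup-style tenures signal the opposite. 60 months is an
    # invented threshold for the sketch.
    return avg_tenure >= 60

print(use_formal_opener("finance", [180]))      # 15 years at one bank
print(use_formal_opener("tech", [18, 20, 16]))  # three short startup stints
```

Either universal policy — always-on or always-off — collapses this into a constant, which is the 80/20 mismatch described above.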
Mechanic 3 — paragraph break density
An English business email runs roughly 60–80 words per paragraph in cold-reach context. A Japanese business email runs roughly 100–150 words (5–7 lines on a typical mobile reading width) per paragraph. AI translating from English to Japanese tends to preserve the English paragraph density, which produces a Japanese message that reads as choppy — too many breaks, too many discrete thoughts surfaced separately rather than flowing together.
The fix is paragraph-density target setting at the generation layer rather than at the translation layer. The model generates Japanese with explicit paragraph-density targets (typically 4–6 paragraphs total, 100–140 words each, totaling 500–700 words for a senior-level scout) rather than mirroring the English source’s structure. This shifts the failure mode from "sounds machine-translated" to "reads like a real Japanese business mail." The reply-rate impact is meaningful — production data shows roughly 1.5–2× reply rate gain from this mechanic alone vs density-mirrored translation.
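A density check at the generation layer can be sketched against the targets quoted above (4–6 paragraphs, roughly 100–140 words each, 500–700 total for a senior scout). The sketch uses naive whitespace splitting as a stand-in for real Japanese tokenization, which has no whitespace word boundaries — the counting method, like the function name, is an assumption for illustration.

```python
# Minimal sketch of a paragraph-density check using the targets from
# the text. Whitespace splitting is a placeholder for a real Japanese
# tokenizer (e.g. morphological analysis); targets are per the article.
def density_report(paragraphs: list[str]) -> dict:
    """Check generated paragraphs against native-density targets."""
    counts = [len(p.split()) for p in paragraphs]
    return {
        "paragraphs": len(paragraphs),
        "per_paragraph": counts,
        "total": sum(counts),
        "in_target": (
            4 <= len(paragraphs) <= 6
            and all(100 <= c <= 140 for c in counts)
            and 500 <= sum(counts) <= 700
        ),
    }
```

The key design point is where the check runs: as a generation target the model writes toward, not as a post-hoc filter on a translation that already mirrors English paragraph breaks.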
Mechanic 4 — JD-to-hook translation
An English JD bullet might read "Lead a team of 8 backend engineers building the next-generation pricing platform." A naive Japanese translation reads as the same bullet structure with no narrative bridge to why the candidate would care. The hook in Japanese needs to be reframed — not just translated — into the question or proposition that would actually engage the candidate. Something like "プライシングプラットフォームの次世代基盤を、8名のエンジニアと共に率いるリードロールにおいて、Senior Engineering Manager として技術的な意思決定に責任を持っていただくポジションです" — same content, candidate-perspective framing, complete sentence rather than bullet.
AI systems that translate the JD bullet by bullet produce hookless Japanese scouts. AI systems that re-narrate the JD into candidate-perspective Japanese sentences produce hooks that work. The latter requires the model to actually understand what about the role would be the proposition for the candidate, which depends on the JD being parseable into role-attractive components rather than just a feature list. The platform’s drafting layer attempts this; the failure mode when the JD is too sparse to parse this way is one of the named drafting limits — sparse JDs produce flat scouts even at the model layer.
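The sparse-JD gate named above can be sketched as a parseability check run before re-narration: does the JD decompose into enough role-attractive components to build a candidate-perspective hook? The component names, keyword tagging, and two-component threshold below are all invented for the sketch — production parsing is model-driven, not keyword-driven.

```python
# Hypothetical sparse-JD gate. Component taxonomy and keyword rules
# are illustrative stand-ins for model-based JD parsing.
def parse_jd_components(jd_bullets: list[str]) -> dict:
    """Naively tag JD bullets into hook-building components."""
    components: dict[str, list[str]] = {"scope": [], "team": [], "mission": []}
    for bullet in jd_bullets:
        lower = bullet.lower()
        if "lead" in lower or "own" in lower:
            components["scope"].append(bullet)
        if "team" in lower or "engineers" in lower:
            components["team"].append(bullet)
        if "platform" in lower or "next-generation" in lower:
            components["mission"].append(bullet)
    return components

def hook_ready(components: dict) -> bool:
    """A JD is hook-ready when at least two component types are populated."""
    return sum(1 for bullets in components.values() if bullets) >= 2

jd = ["Lead a team of 8 backend engineers building the "
      "next-generation pricing platform."]
print(hook_ready(parse_jd_components(jd)))  # True
```

When the gate fails, the honest behavior is to flag the JD as too sparse for re-narration rather than emit a flat, hookless translation — which is the named drafting limit.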
What this means for unedited operation
The 16-week 2026 production cohort covered 123,675 candidates contacted with platform-drafted bilingual scout mails, with no human review of the AI output. Reply rate landed at 3.13%, which is in the upper range for senior-bilingual cold reaches in Japan and well above the typical agency baseline of 0.5–1.5% on template-substituted mails. The four mechanics above are why. The model gets keigo level right per-clause, makes a contextual call on formal-opener convention, generates with native paragraph density rather than translating English structure, and re-narrates JD components into candidate-perspective hooks.
What this doesn’t mean: that AI-generated bilingual scout mails are perfect. They’re not. Roughly 5–8% of platform outputs hit a register edge case where the model picks the wrong keigo tier or the wrong formal-opener convention. The recruiter override on those cases is appropriate; the production cohort numbers are based on no override at all, which is the harder test the platform was deliberately exposed to. The reply rate is what the platform produces in unedited operation against a Japan-bilingual passive-receive cohort; it is not what perfect human writing would produce, but it’s a meaningful baseline above what most agency teams are running today.
Frequently asked
Can a non-Japanese-fluent recruiter actually use this without supervision?
Yes for the unedited workflow, with the caveat that the recruiter should still spot-check the first 30–50 outputs and read them through with a Japanese-speaking colleague to confirm the register feels right for the role types they’re sending to. After that initial calibration, the unedited workflow holds. The 2026 cohort included recruiter populations who don’t write fluent Japanese themselves — the platform’s drafting layer carries the register burden. We don’t recommend unedited operation for senior bilingual searches above ¥20M base salary without periodic register-quality review by a fluent reviewer; the failure-cost asymmetry is too high there.
What’s the failure cost of getting register wrong on a senior cold reach?
Two costs. First, the immediate reply-rate hit — a register-wrong scout to a senior bilingual candidate produces near-zero reply rate against a register-right scout’s 3–5% range. Second, the reputational hit — senior Japanese executives talk to each other; a recruiter or platform that consistently sends register-wrong messages can develop a reputation that dampens reply rate across an entire candidate network. The senior-tier register quality matters disproportionately because the network compounds the cost. We back-test register quality on senior outputs at higher frequency than mid-career for this reason.
How does the platform handle gendered language in scout mails?
The platform defaults to gender-neutral phrasing throughout. Japanese has fewer gendered grammatical forms than European languages, but real conventions exist in honorific structure and certain modal expressions. The platform’s drafting layer is tuned to neutral defaults; recruiter override is available when the candidate’s prior interaction history (rare in cold reach) or the role’s required register suggests otherwise. Gender-neutral default is the production posture.
What about register for English-language scout mails to non-Japanese candidates in Japan?
English business register in Japan recruiting context is different from US-equivalent. The candidate is a non-Japanese professional working in Japan, often for a Japanese employer — they’re calibrated to a specific cross-cultural register that’s slightly more formal than US standard but less formal than UK old-school. The platform’s English drafting calibrates to this register. American-trained AI tends to default to US-direct, which reads as casually presumptuous to this audience. The platform’s English output sits in a register space that’s better described as "Tokyo bilingual professional" than as US- or UK-equivalent business English.
Can I see actual sample outputs?
Yes — the platform’s free-credit signup includes generating a sample scout mail in EN and JA against a JD you provide. That’s the test that ends most evaluations one way or the other. We don’t publish marketing samples in static documentation because the register quality is best evaluated against a JD you actually care about, not a synthetic example.
Sources
Reply-rate figures from the 16-week 2026 production cohort: 123,675 candidates contacted, 3,868 replies (3.13% reply rate), 32.57% reply-to-meeting conversion, all on platform-drafted bilingual scout mails with no human review of AI output. Operated by ESAI Agency K.K. Register-mechanic data and override-rate figures (5–8%) are from internal model-monitoring runbooks and recruiter-rating divergence analyses. Native-density paragraph targets (100–140 words per paragraph for senior-level scouts) are from internal copy-quality review against the production cohort. Methodology, sample sizes, and statistical methods on our methodology page. Production-cohort details documented in the 17.2× ROI briefing.
Generate a sample scout for your role
Ten free credits at signup, no card required. Generate a bilingual scout against your JD; review the register quality directly. That’s the test that ends most evaluations.