Editorial standards
How we write, who reviews it, and what we will and will not publish.
Most writing on the internet about AI recruiting in Japan is unsigned, undated, and unsourced. This page documents the opposite posture — how Headhunt.AI’s articles are produced, who is named on them, what we cite, and what we deliberately do not publish.
Last updated: 2026-05-03
Named authorship
Every article on this site is published under the byline of a named human author with a public profile, professional credentials, and a documented track record on the topic. The named author is the person responsible for the argument, the data references, and the conclusions in the piece. We do not publish under generic "team" bylines, ghost-author bylines, or content-marketing pseudonyms.
Our three named authors are Ken Charles (CEO, ExecutiveSearch.AI K.K.), Cody Pettit (Co-Founder, Head of Data Operations, ExecutiveSearch.AI K.K.), and Gary Schrader (Partner and Board Director, ESAI Agency K.K.). Each author publishes only on the topical territory documented on their author page. We do not extend bylines to topics outside an author’s stated areas of expertise.
AI-assisted drafting and human review
We use AI tools — including large language models — to assist with research, structuring, drafting, and editing. We disclose this openly because (a) it is true, and (b) it is part of how we ship at the volume and quality we ship at. AI assistance does not replace authorship; it is a writing tool, the way a word processor is a writing tool.
Every article is reviewed and edited by its named human author before publication. The named author is the person responsible for the article’s accuracy, framing, and conclusions. If a reader finds a problem with a piece, the named author owns it. We will not hide behind "an AI wrote it." If our name is on it, we wrote it — using whatever tools we used to do so.
Trade-secret protections — what Cody can and cannot write about
Cody Pettit’s authorship territory covers AI candidate scoring methodology, production AI operations, and the data architecture of AI-first recruiting in Japan. Within that territory, an explicit boundary exists between what is publishable and what is not.
Publishable: high-level methodology (the architectural approach, the kinds of signals scored, the validation approach), aggregate outcomes (production reply rates, conversion metrics, ROI math, quarterly performance bands), the rationale behind specific design choices (why we chose to consolidate API calls, why we chose hands-off as the base rate), and observations about where the system works and where it does not.
Not publishable: the production prompt text, specific scoring weights or thresholds, model identifiers and exact context-window configurations, the internal candidate-matching algorithms, the format of internal AI-evaluation rubrics, third-party tooling identifiers used in our outbound or enrichment infrastructure, and any artifact that would let a competitor replicate the engineering work that took eight years to develop. These are the company’s 営業秘密 (trade secrets).
The test we apply: if a reader finishes Cody’s article understanding what we do and why we get the results we get — but could not replicate the system without significant independent engineering work — the article is properly bounded. If a reader could replicate it, the article has leaked IP and should not have been published in that form.
Source verification policy
Every numerical claim in our articles ties to a source, and the source is named inline or in an explicit Sources section at the end of the piece. We prefer original sources to aggregators: company filings (TSE disclosures, securities filings), regulatory aggregates (MHLW, PPC, METI publications), peer-reviewed research, and named industry data over secondary write-ups of the same material. Where we cite our own production data, we declare it as our production data with a sample size and time window — not as anonymous "industry data."
When we cite a Japanese statute, regulation, or ministerial guideline, we cite the article number and the responsible regulator (e.g., 個人情報保護法 第28条 (APPI, Article 28) / Personal Information Protection Commission). For court precedent or regulatory enforcement actions, we cite the case name, the date, the issuing body, and, where a public record exists, a link to it.
Expert review for YMYL articles (監修 protocol)
"YMYL" — your money or your life — is the search-industry term for articles that, if wrong, could materially harm a reader. For Headhunt.AI, this category includes any article in Japanese that opines on the application of Japanese law to a specific situation: APPI compliance for AI sourcing, Employment Security Act registration obligations, the labor-law treatment of AI-driven hiring practices, or the criminal exposure attached to non-compliant operation.
For Japanese YMYL articles, our protocol is: the named author drafts the piece, the article is reviewed and signed off by a qualified Japanese legal supervisor (弁護士 監修), and the supervisor’s name and 弁護士登録番号 (bar registration number) are displayed on the published article. The supervisor reviews each piece against the relevant Japanese statutory framework, the relevant precedent (e.g., the 2019 Rikunabi DMP Follow case for AI scoring), and the current ministerial guidelines.
Status as of May 2026. Our retained 弁護士 supervisor for Japanese YMYL content is currently being identified and engaged. Until that engagement is in place, we do not publish new Japanese-language YMYL articles. English-language articles touching the same subject matter are written and published by Ken Charles and clearly labeled as educational discussion of Japanese law rather than legal advice; we still note that consultation with qualified Japanese counsel is necessary for any specific compliance question. Our existing compliance briefing on the AI sourcing stack in Japan remains the canonical English-language resource on this site for the regulatory framework.
Update cadence
Articles are not static. Each article displays a "Last updated" date and carries a Changelog at the bottom listing material changes. Our standing schedule:
- Cornerstone pillar articles (Hub master pages) — reviewed and refreshed quarterly at minimum.
- Legal and regulatory articles — refreshed within 14 days of any relevant statutory amendment, ministerial guideline update, or material enforcement action.
- Statistics-driven articles — refreshed every 6 months at minimum, with a fresh data window where applicable.
- Quarterly market reports — full rewrite each quarter; the previous quarter’s report is archived under its original URL with a clear cross-link to the current period.
Correction policy
If we publish a factual error, we correct it in place, log the correction in the article’s Changelog with the date and a brief description of the change, and — if the error materially affected the conclusion of the piece — append a "Correction" note at the top of the article.
If you find a factual error in any article on this site, write to editorial@executivesearch.ai. We acknowledge correction requests within one business day and respond with a determination — including the reasoning, even if we decide not to make the change — within five business days.
What we will not publish
A short, explicit list of things this site will not publish, even when the topic is in our editorial territory:
- Articles naming individual customers of Headhunt.AI or ESAI Agency K.K. without that customer’s prior written consent. Our customers’ relationships with us are confidential by default.
- Persuasive content attributing fabricated quotes to real public figures.
- Comparative content that names a specific competitor in a way that could constitute defamation; comparisons we do publish are factual, sourced, and where possible draw on the competitor’s own public filings or marketing claims.
- Trade-secret-class technical detail per the boundary above.
- Content generated solely by AI without named-author review and accountability.
Why this page exists
Search engines, AI search engines, and human readers all need a way to evaluate whether a publisher’s writing can be trusted. The honest answer is to document the editorial process, name the people responsible, and be explicit about both what we do and what we deliberately do not. That is the function of this page.
Questions on a specific article?
For editorial questions, correction requests, or source verification, write to editorial@executivesearch.ai.