Methodology
Where the numbers in our writing come from.
When Headhunt.AI articles cite production figures — reply rates, ROI, conversion data, market shifts — those numbers come from operating data inside ExecutiveSearch.AI K.K. and ESAI Agency K.K. This page documents what we measure, what we publish, what we deliberately do not, and why our data is representative of the Japan recruiting market the articles describe.
Last updated: 2026-05-03
Where the data is collected
All production figures cited on this site come from one of two operating environments: the Headhunt.AI platform itself (candidate scoring volumes, reply rates, conversion metrics) or the ESAI Agency K.K. recruiting business that runs on Headhunt.AI as its first customer (placement fees, meeting-to-placement ratios, hiring-funnel conversion at our corporate clients). The two environments are operationally separate, but both have long continuous operating histories — every figure we publish has been generated under live commercial conditions, not in a lab study.
When an article cites a number, the article identifies which environment the number came from, the time window from which it is drawn, and the relevant denominator. Numbers presented without a denominator and a window do not appear in our writing.
The published datasets
Three primary datasets back the bulk of the figures cited across the Insights briefings and the /learn articles. Each is a representative published sample drawn from our internal operating data, not a complete record of firm performance. Aggregate figures and methodology are public; the full underlying production data, the complete placement record across all clients, and per-client breakdowns are confidential. We share what’s clean enough to share without compromising client confidentiality or competitive position.
Each dataset is summarized below at the cohort level. The underlying records remain confidential to the operating entity in question; the aggregate figures shown are the published sample.
ESAI Agency 2026 — 16-week autonomous outreach
16 consecutive weeks of fully autonomous candidate outreach run on Headhunt.AI inside ESAI Agency K.K. 100% of candidates were sourced and scored by the platform; 100% of scout messages were drafted by the platform; no human review occurred between scoring and send. The 17.2× return on credits cited in the Trusting the AI briefing is calculated against the cohort’s measured average placement fee (¥4,266,675) and trailing meetings-to-placement ratio (39.625 qualified meetings per placement), producing ¥107,676 of expected revenue per qualified meeting.
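The arithmetic behind that per-meeting figure can be sketched in a few lines. The two inputs are the published cohort figures above; the credit cost per meeting is not a published number, so it is left as a parameter rather than guessed:

```python
# Expected-revenue-per-meeting arithmetic for the 2026 outreach cohort.
# The fee and ratio are the published cohort figures; credit cost is a
# hypothetical placeholder, not a published number.

avg_placement_fee_jpy = 4_266_675   # measured average placement fee
meetings_per_placement = 39.625     # trailing meetings-to-placement ratio

revenue_per_meeting = avg_placement_fee_jpy / meetings_per_placement
print(round(revenue_per_meeting))   # ≈ 107,676 JPY per qualified meeting

def roi_multiple(credit_cost_per_meeting: float) -> float:
    """Return on credits: expected revenue per meeting over credit spend."""
    return revenue_per_meeting / credit_cost_per_meeting
```

Dividing the average fee by the meetings-to-placement ratio gives the expected value of a single qualified meeting; the return multiple then falls out of comparing that value to whatever the credits spent per meeting cost.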
ExecutiveSearch.AI Corporate Funnel — 25 months
A representative 25-month sample of stage-by-stage placement funnel data drawn from ExecutiveSearch.AI K.K. corporate clients in Japan. This is the published validation slice — a representative subset we share for external scrutiny. The firm’s complete placement record across the full client portfolio is not disclosed. The published sample documents the 49% → 33% decline in 2nd-to-Final advance rate (Mann-Kendall non-parametric trend test, z = −2.42, p = 0.015) and the simultaneous rise in Final-to-Offer conversion from 40% to 58%. Used in the Decision Gap briefing and in /learn articles addressed to corporate hiring leaders.
ExecutiveSearch.AI · 8-year AI-first agency operation
Eight years of operational data from running an AI-first recruiting agency in Tokyo, beginning Feb 1, 2018 and continuing under Monstarlab Inc. (TSE: 5255) ownership from October 2023 onward. Provides longitudinal context for claims about market structure shifts (RPO migration, in-house TA absorption, licensed-firm count growth, agency bankruptcy patterns) — when those claims are sourced from public filings (MHLW aggregates, Tokyo Shoko Research, TSE-listed company disclosures) we cite the public source; when sourced from our own observation across multiple cycles, we say so.
Anonymization policy
Our customers’ relationships with us are confidential by default. Articles citing production data refer to clients only at the cohort level — for example, "ExecutiveSearch.AI K.K. corporate clients in Japan, March 2024 through March 2026" — and never name an individual client without that client’s prior written consent. The same posture applies to candidates: where a case study or pattern is illustrative, we describe the role family, seniority band, language signal, and outcome, but we do not identify any individual candidate.
Where a Japanese public-company filing or court precedent identifies a third party (for example, the For Startups, Inc. timely disclosure cited in the Database Tax briefing, or the Recruit Career corrective recommendation cited in the compliance briefing), we cite the third party as named in the public document because the public document is itself the primary source. We do not anonymize public-record information.
Statistical methods
When we make a statistical claim — for example, that the 2nd-to-Final decline in the corporate funnel dataset is significant rather than noise — we name the method. The Decision Gap briefing uses a Mann-Kendall non-parametric trend test on rolling 6-month conversion rates, chosen because it does not assume normality and is robust to outliers; reported result z = −2.42, p = 0.015. The same dataset’s breakpoint analysis tested every possible split in the monthly series; strongest break at July 2025 (before: 52% average, after: 37% average; Welch’s t-test on the split, t = 1.32, p = 0.20). The breakpoint individually does not reach p < 0.05; the overall trend does. We report both numbers because both are relevant.
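For readers reproducing the trend test, a minimal Mann-Kendall implementation (normal approximation, ties correction omitted for brevity) looks like this. The series shown is hypothetical, for illustration only — it is not the production conversion data:

```python
from math import sqrt, erf

def mann_kendall_z(series):
    """Mann-Kendall trend test: S statistic with normal approximation.
    Ties correction omitted for brevity; rates here rarely tie exactly."""
    n = len(series)
    # S counts concordant minus discordant pairs across all i < j.
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / sqrt(var_s)
    elif s < 0:
        z = (s + 1) / sqrt(var_s)
    else:
        z = 0.0
    # Two-sided p-value from the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Hypothetical declining rolling conversion rates, NOT the production series.
rates = [0.52, 0.51, 0.53, 0.49, 0.48, 0.46,
         0.44, 0.45, 0.41, 0.39, 0.38, 0.36]
z, p = mann_kendall_z(rates)
print(f"z = {z:.2f}, p = {p:.4f}")  # negative z for a declining series
```

The test ranks only the direction of pairwise differences, which is why it needs no normality assumption and shrugs off a single outlier month.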
When we report a percentage that is the result of a small numerator, we report the numerator and denominator. When we report a rate that compounds across stages of a funnel, we show the funnel itself. When we report a number from public filings or third-party industry data, we name the source on the same line.
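A minimal sketch of why we show the funnel itself: stage rates compound multiplicatively, so a sequence of individually unremarkable rates can imply a very small end-to-end rate. The counts below are hypothetical, for illustration only:

```python
from fractions import Fraction

# Hypothetical funnel counts, for illustration only — not published figures.
# Each stage's denominator is the previous stage's numerator.
funnel = [
    ("scored -> contacted", 1_000, 10_000),
    ("contacted -> replied",  120,  1_000),
    ("replied -> meeting",     40,    120),
]

overall = Fraction(1)
for stage, num, den in funnel:
    rate = Fraction(num, den)
    overall *= rate
    print(f"{stage}: {num}/{den} = {float(rate):.1%}")

# Because the denominators chain, the product of stage rates equals the
# final numerator over the first denominator: 40 / 10,000.
print(f"scored -> meeting: {float(overall):.2%}")
```

Reporting the stage counts alongside the compounded rate lets a reader verify the multiplication instead of taking the headline percentage on faith.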
Why this data is representative of Japan mid-career and senior recruiting
ExecutiveSearch.AI K.K. and ESAI Agency K.K. have operated in the Japan mid-career and senior bilingual recruiting market continuously since 2018, working across the full range of segments where AI sourcing produces clean signal — bilingual finance, IT, sales, commercial, supply chain, product, marketing, HR, legal, GTM, and most engineering disciplines. The candidate database the platform scores against is 4M+ Japan-focused profiles, built primarily from public LinkedIn data through commercial licensing arrangements with global data providers, with additional public social signals from X (formerly Twitter), GitHub, Facebook, and Instagram layered in where candidates have visible activity on those surfaces. LinkedIn is the majority of the professional surface for the Japan mid-career and senior segment; the additional sources fill in the picture for engineers (GitHub), public-thinking practitioners (X), and candidates whose public footprint is partly off LinkedIn. The cohort sizes documented above are large enough to support stable rate estimates at the funnel-stage level and at the full-funnel level for the segments covered.
The data is not representative of every segment of Japanese hiring. AI scoring genuinely struggles in narrow technical specialties where expertise is invisible from a public profile — certain hardware engineering subfields, deep regulatory niches, some compliance specialties. It is also less informative for volume mid-market hiring filtered primarily by active intent, where job-board databases of registered candidates carry signal our public-profile-based dataset does not. We try to be explicit in articles when a finding applies to a specific segment versus when it generalizes.
What we deliberately do not publish
Per our editorial standards, certain categories of information remain unpublished: the production prompt text used by Headhunt.AI’s AI scoring, specific scoring weights, internal candidate-matching algorithms, the internal evaluation logic used to validate candidate scores, third-party tooling identifiers used in our outbound or enrichment infrastructure, and any artifact that would let a competitor replicate the engineering work that took eight years to develop. The aggregate outcomes are public; the inside of the system is not.
Caveats and limits
Three honest caveats. First, the windows above are finite: 16 weeks for the 2026 outreach cohort, 25 months for the corporate funnel cohort. Quarterly or longer aggregations are the minimum reliable forecasting unit; weekly or monthly figures inside these windows can vary materially without indicating a trend. Second, our cohorts skew toward bilingual mid-career and senior segments where AI scoring works well. We are explicit when extrapolating to other segments, and we recommend treating those extrapolations with skepticism. Third, the 2026 outreach cohort represents the base rate of a hands-off autonomous configuration; results above the base rate are achievable with selective recruiter override, but those override-improved figures are not what we publish — we publish the base rate because base rates are what a business can plan against.
Questions on the methodology
If you are evaluating a citation in one of our articles, doing your own analysis, or considering Headhunt.AI for procurement and need additional methodological detail, write to editorial@executivesearch.ai. We respond within five business days with whatever further detail can be shared without crossing the trade-secret boundary above.
Read further
The editorial standards governing every article. The named authors writing them. The Insights briefings the datasets above support.