The State of Verified Work: Bondex Research Hub

Bondex's research hub on verified work. Methodology, data sources, definitions, and the framework every future Bondex report plugs into. Citation-ready.

Labor-economics research counts jobs. HR-tech research surveys recruiters. Web3-native research tracks tokens. None measures the layer that decides whether a hire works: the proof underneath the claim.

That gap is the reason this hub exists. The State of Verified Work is the canonical methodology, data-source disclosure, and definition framework that every Bondex report plugs into, so that a number you read on a Bondex page traces back to a dataset, a sample size, a time window, and an instrument.

We prove, not posture. This is what proving looks like at the methodology layer.


The State of Verified Work is Bondex’s research hub: a canonical methodology, data-source disclosure, and definition framework that every future Bondex report plugs into. It defines verified work as the labor-economics category produced by cryptographically anchored attestations, and lays out the measurement instruments, citation conventions, and evidence-trail structure used across all research.

What verified work is

Verified work is labor activity (employment, project participation, skill demonstration, peer endorsement) that has been attested by an issuer who can confirm it (employer, institution, peer, on-chain protocol) and cryptographically anchored, so any verifier can independently confirm the claim without trusting the platform that hosts it.

Three properties make a piece of work “verified” in the way this research uses the term:

  1. Attested. A specific issuer with first-hand knowledge of the work signed the claim.
  2. Cryptographically anchored. The signature can be verified against an immutable record (typically an on-chain attestation or a W3C Verifiable Credential).
  3. Portable. The proof belongs to the worker, not the platform. It survives the platform going away.
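The three-property boundary can be expressed as a simple predicate. A minimal sketch, assuming an illustrative record shape — the field names here are hypothetical, not the Bondex schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkClaim:
    """Illustrative claim record; field names are hypothetical, not the Bondex schema."""
    issuer: Optional[str]     # who signed the claim (None = self-reported)
    anchored_on_chain: bool   # signature verifiable against an immutable record
    holder_controlled: bool   # proof belongs to the worker, not the platform

def is_verified_work(claim: WorkClaim) -> bool:
    """Verified work requires all three properties: attested, anchored, portable."""
    attested = claim.issuer is not None
    return attested and claim.anchored_on_chain and claim.holder_controlled

# A CV line: no issuer, no anchor, platform-hosted -> unverified work
cv_line = WorkClaim(issuer=None, anchored_on_chain=False, holder_controlled=False)

# An employer-signed, on-chain-anchored, worker-held credential -> verified work
credential = WorkClaim(issuer="employer:acme", anchored_on_chain=True, holder_controlled=True)
```

Missing any one property pushes a claim outside the category, which is what makes the boundary a measurement choice rather than a value judgment.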

Anything missing one of those properties is unverified work, including most of what currently fills the legacy labor record: CV claims, LinkedIn endorsements, references that confirm only dates and titles, single-skill test results. All of it sits outside this category. The boundary is a measurement choice, not a value judgment.

Why a verified-work research category exists

Existing research covers labor from three angles, and none of them touches the proof layer.

Labor-economics research

(the U.S. Bureau of Labor Statistics, OECD employment data, national statistics offices) measures job counts, wages, unemployment rates, and sectoral shifts.

The instruments are employer surveys, payroll administrative data, and household surveys. The unit of observation is the job, not the worker, and never the worker’s verifiable record.

You can read every BLS report ever published and not learn whether the people in those jobs can prove they did the work.

HR-tech research

(SHRM, the Josh Bersin Company, Gartner HR) measures the recruiter and the hiring process. Surveys ask employers about time-to-hire, source-of-hire, retention, ATS spend. The vendor-side perspective is useful for benchmarking hiring operations, but it studies the buyer of the labor market, not the proof underneath any individual hire.

Web3-native research

(Dune, Messari, on-chain analytics) measures token flows, protocol revenue, governance participation. Excellent for crypto-native phenomena. Not designed to study labor markets, and in most cases not capable of distinguishing between an on-chain action that proves work and an on-chain action that proves nothing.

Verified-work research sits between those three. It is candidate-side rather than employer-side. It is attestation-anchored rather than self-reported. It crosses employer boundaries by construction, because the proof belongs to the worker. The category needs different instruments (attestation registries, issuer-side ledgers, cross-platform reputation graphs) and a different unit of observation: the verifiable record, not the survey response.

The category did not exist as a research surface five years ago, because the underlying infrastructure did not exist at scale. Now there are 300K+ on-chain attestations issued through Bondex alone, employer-side verification flows that produce signed credentials, and a growing graph of cross-platform proofs. The data exists. Somebody has to define how to measure it without distortion. That is the work this hub does.


The definition framework

Every Bondex report uses the same definitions. Pinning them down once means a stat in a 2026 report compares cleanly to a stat in a 2028 report. The four core terms:

Verified work

Labor activity attested by an issuer and cryptographically anchored. Already defined above. The umbrella term from which the rest of the framework hangs.

Verified credential

A claim about a worker (qualification, employment, role, skill) that has been issued by a party with first-hand knowledge of the claim and signed in a way that any third party can verify without contacting the issuer. The technical reference is the W3C Verifiable Credentials Data Model. Where this hub uses the term, the underlying instrument is either a W3C VC or an on-chain attestation that satisfies the same three properties (attested, anchored, portable).

Continuous verification

A reputation signal that updates as the worker accumulates new attested activity, rather than being fixed at hire. The contrast is point-in-time evaluation (interview, skills test, reference call), which captures one moment and decays from there. Continuous verification is the architectural shift cybersecurity made with zero-trust a decade ago, applied to labor.

Reputation portability

The property that a worker’s verified record can travel across employers, platforms, and borders without depending on any single party to keep it alive. A reputation that lives inside one company’s HR system dies when the relationship ends. A portable reputation does not.

These four terms are load-bearing. Every Bondex report defines new constructs against them, never around them.

Data sources

Verified-work research draws from four primary sources. Each has explicit coverage and explicit bias. Both are disclosed every time the data is used.

| Source | Coverage | Sample weight | Known bias |
| --- | --- | --- | --- |
| Bondex platform | 2M+ users · 6M+ app downloads · 1.5M peak monthly active mobile · 93 countries · 300K+ on-chain attestations | Primary | Over-represents Web3-native, mobile-first, and verification-curious workers. Under-represents traditional labor markets. |
| Web3.career | 1.7M+ monthly visits · 100K+ talent profiles · 30–50 new positions/day | Primary (job-market data) | Web3-native job market only. Excludes traditional sectors. |
| Remote3 | Remote Web3 roles, acquired Nov 2025 | Supplementary | Remote-only subset, narrower geographic distribution than parent platform. |
| On-chain attestation registries | Publicly verifiable credential issuances | Supplementary | Issuer-dependent; captures only attestations that opted into public anchoring. |

External sources are used where appropriate and cited at point of use.

Each data point published in a Bondex report is tagged with its source, sample size, and time window. The default citation format appears in the next section.

Methodology

Methodology is the part of research most often skipped and most consequential when the work is challenged. The Bondex methodology stack is published in full, not summarized, so any reader can rebuild the analysis from the disclosed steps.

Sampling

For any research question, the sampling decision is explicit. Three of the most common cases:

  • Cross-sectional snapshot. When the question is “what is the state of X as of date Y,” the sample is every record on the Bondex platform meeting the inclusion criteria at the snapshot timestamp. The full record is used, not a sub-sample, except where stated.
  • Longitudinal series. When the question is “how has X changed over time,” the sample is every record meeting the inclusion criteria at each measurement window (typically quarterly). Each measurement is timestamped; cohorts are tracked separately when membership changes between windows.
  • Issuer-stratified analysis. When the question is “how does issuer type affect signal,” the sample is stratified by issuer category (employer, institution, peer, on-chain protocol). Sample sizes per stratum are reported individually.
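For the issuer-stratified case, reporting per-stratum sample sizes is a one-step aggregation. A minimal sketch with hypothetical records (the issuer categories are the four named above):

```python
from collections import Counter
from typing import Dict, List, Tuple

# Hypothetical records: (worker_id, issuer_category)
records: List[Tuple[str, str]] = [
    ("w1", "employer"), ("w2", "peer"), ("w3", "employer"),
    ("w4", "institution"), ("w5", "on-chain protocol"), ("w6", "peer"),
]

def stratum_sizes(recs: List[Tuple[str, str]]) -> Dict[str, int]:
    """Sample size per issuer stratum, reported individually as the methodology requires."""
    return dict(Counter(category for _, category in recs))

print(stratum_sizes(records))
# -> {'employer': 2, 'peer': 2, 'institution': 1, 'on-chain protocol': 1}
```

Publishing these counts alongside any stratified statistic is what lets a reader judge whether a thin stratum is driving the headline number.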

Selection criteria (what counts as “in” the sample for a given research question) are specified at the top of every report, before any numbers appear.

Measurement instruments

A research question reads signals. The set of signals read, and the set explicitly ignored, is part of the methodology.

For example, when measuring fraud-resistance of a credential type, the instrument reads: issuer identity, signature validity, anchoring transaction, issuance date, revocation status, and any peer counter-attestations. The instrument ignores: self-reported claims on the same profile, unsigned references, and any signal that cannot be independently verified.

That ignore-list matters as much as the read-list. It determines what the resulting number means.
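The read-list/ignore-list pair can be made mechanical: the instrument is a projection of the raw record onto the read-list. A sketch using the signals named above (record keys are illustrative):

```python
from typing import Any, Dict

# Signals the fraud-resistance instrument reads (from the example above)
READ_SIGNALS = {
    "issuer_identity", "signature_validity", "anchoring_tx",
    "issuance_date", "revocation_status", "peer_counter_attestations",
}

def instrument_view(record: Dict[str, Any]) -> Dict[str, Any]:
    """Project a raw record onto the read-list; everything else is ignored by construction."""
    return {key: value for key, value in record.items() if key in READ_SIGNALS}

raw = {
    "issuer_identity": "employer:acme",
    "signature_validity": True,
    "self_reported_claims": ["10x engineer"],  # on the ignore-list: dropped
    "unsigned_references": ["great colleague"],  # on the ignore-list: dropped
}
print(instrument_view(raw))
# -> {'issuer_identity': 'employer:acme', 'signature_validity': True}
```

Encoding the ignore-list as "anything not on the read-list" means a new, unvetted signal cannot silently leak into the measurement.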

Time-series treatment

Time-series data on a growing verification network is easy to misread. A quarterly attestation count rising over time can reflect more workers, more attestations per worker, more issuers, or all three. The default treatment in Bondex reports:

  • Headline numbers are absolute counts at the snapshot date.
  • Trend numbers are normalized (per worker, per issuer, or per active month) so growth in the underlying pool does not masquerade as a behavior change.
  • Cohort numbers track a defined cohort (e.g., workers who joined in Q1 2025) across subsequent quarters, so platform growth and cohort behavior are separated.

When a report shows a percentage change, the denominator is named in the same paragraph.
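The normalization rule above is worth seeing in numbers. In this hypothetical series, absolute attestation counts more than quadruple while the per-worker rate stays flat, so the "growth" is pool growth, not a behavior change:

```python
from typing import List

def normalized_trend(counts: List[int], pool_sizes: List[int]) -> List[float]:
    """Attestations per active worker per window, so pool growth
    does not masquerade as a behavior change."""
    return [round(count / pool, 2) for count, pool in zip(counts, pool_sizes)]

# Hypothetical quarterly data
quarterly_attestations = [10_000, 22_000, 45_000]  # absolute counts (headline numbers)
active_workers         = [5_000, 11_000, 22_500]   # pool size per window (the denominator)

print(normalized_trend(quarterly_attestations, active_workers))
# -> [2.0, 2.0, 2.0]  (flat per-worker rate despite 4.5x absolute growth)
```

This is also why the denominator is named next to every percentage change: the same counts against a different denominator tell a different story.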

Evidence trail

Every reported number traces to source data through a documented path. The components of the path:

  1. The query. The literal definition used to select records (often as a parameter set against the platform’s data layer).
  2. The transformation. Any aggregation, filter, deduplication, or normalization applied between raw records and the published statistic.
  3. The cut date. The timestamp at which the data was pulled.
  4. The output. The statistic as published.
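The four components form a fixed record shape. A minimal sketch of an evidence trail for the attestation headline (the query and transformation strings are illustrative, not the platform's actual data layer):

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass(frozen=True)
class EvidenceTrail:
    """The documented path from source data to a published number (illustrative shape)."""
    query: str           # literal record-selection definition
    transformation: str  # aggregation/filter/dedup/normalization applied
    cut_date: date       # timestamp of the data pull
    output: str          # the statistic as published

trail = EvidenceTrail(
    query="attestations WHERE anchoring_status = 'confirmed'",      # hypothetical
    transformation="count(distinct attestation_id)",                 # hypothetical
    cut_date=date(2026, 4, 30),
    output="300K+ on-chain attestations",
)
print(asdict(trail)["output"])
```

Freezing the dataclass mirrors the intent: once published, the trail is a record to be challenged, not edited in place.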

A reader who wants to challenge a Bondex number can request the evidence trail for that number. The methodology hub welcomes the scrutiny; the integrity of the research category depends on it.

Citation conventions

Every statistic Bondex publishes is cited in a standard format, so the number can be quoted accurately and the methodology behind it can be located.

The default citation block for a Bondex internal statistic:

[Statistic.]

Dataset: [source]. Sample: [n records, scope]. Window: [date range]. Last refreshed: [date]. Source: [Bondex internal data | named external source].

A worked example (illustrative format; actual dates are verified against the platform ledger before publication):

300K+ on-chain attestations issued through the Bondex network

Dataset: Bondex platform attestation ledger. Sample: all attestations with successful on-chain anchoring. Window: [platform launch date] through [most recent refresh]. Last refreshed: [refresh date]. Source: Bondex internal data.
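Because the block is a fixed template, it can be rendered mechanically. A sketch of a formatter for the default citation block (the function name is illustrative; bracketed placeholders are kept as-is from the worked example):

```python
def citation_block(statistic: str, dataset: str, sample: str,
                   window: str, refreshed: str, source: str) -> str:
    """Render the default Bondex citation block from its six components."""
    return (
        f"{statistic}\n\n"
        f"Dataset: {dataset}. Sample: {sample}. "
        f"Window: {window}. Last refreshed: {refreshed}. Source: {source}."
    )

block = citation_block(
    statistic="300K+ on-chain attestations issued through the Bondex network",
    dataset="Bondex platform attestation ledger",
    sample="all attestations with successful on-chain anchoring",
    window="[platform launch date] through [most recent refresh]",
    refreshed="[refresh date]",
    source="Bondex internal data",
)
print(block)
```

Templating the block is what keeps a 2026 citation structurally identical to a 2028 one.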

Three conventions sit underneath that format:

  • “Bondex internal data” is always flagged. Where a number comes from the Bondex platform rather than an external source, the report says so. Internal data is not lesser (it is unique to the verified-work category), but readers should know which is which.
  • External sources follow academic conventions. Author or organization, title, publisher, date, URL where applicable. No anonymous “studies show.”
  • Freshness is disclosed. Every report carries a “data current as of” date. Numbers older than 90 days are flagged. Numbers older than a year are refreshed before re-publication.

The third convention matters more than it sounds. Stale hiring data is a common failure mode in industry research: a stat from 2017 quoted in 2026 reads as a current measurement. Bondex reports never inherit that ambiguity.
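The freshness convention reduces to two thresholds. A sketch of the disclosure rule as a check (function name is illustrative; the 90-day and one-year thresholds come from the conventions above):

```python
from datetime import date

def freshness_flag(last_refreshed: date, today: date) -> str:
    """Apply the freshness-disclosure rule: flag numbers older than 90 days,
    refresh numbers older than a year before re-publication."""
    age_days = (today - last_refreshed).days
    if age_days > 365:
        return "refresh before re-publication"
    if age_days > 90:
        return "flag as stale"
    return "current"

# 120 days old -> flagged; one day old -> current
print(freshness_flag(date(2026, 1, 1), date(2026, 5, 1)))   # flag as stale
print(freshness_flag(date(2026, 4, 30), date(2026, 5, 1)))  # current
```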

How research drops are structured

The hub organizes Bondex research into four types. Every published piece announces its type so readers know what to expect.

Quarterly reports

Time-series data on Web3 hiring, attestation volume, verification activity, and reputation graph density. Released within thirty days of quarter close. Quarterly reports lean on the longitudinal methodology: same instruments, same definitions, different window.

Annual reports

Synthesis pieces. The annual State of Verified Work report is the canonical year-end compilation: aggregate platform statistics, cohort behavior across the year, comparison against external labor-market data, and the year’s methodology updates. Annual reports include a backward-looking appendix listing every quarterly stat that was revised.

Methodology pieces

How-we-measure-it explainers. When a new construct enters the framework — say, a way to score reputation portability across attestation registries — it gets a methodology piece before it gets used in headline reports. The order matters: methodology is published, then the data that depends on it.

Topic studies

Focused research on specific phenomena: credential fraud at hire, the geography of verified work, the AI-application flood, the impact of acquisitions on platform reputation graphs. Topic studies pull from the standing methodology but answer a narrower question than the quarterly or annual cadence covers.

The four types share definitions, sources, citation format, and evidence trail. The hub is the spine. Every report plugs in.

Limitations and what we don’t claim

The hub welcomes scrutiny because the methodology has named limits. Three of them are worth stating up front.

The data is candidate-side first

Bondex’s primary signal source is the worker: what they attest, what their issuers attest about them, what their peers endorse. Employer-side data (applicant volume per role, ATS funnel conversion, recruiter behavior) is incomplete by construction. Where employer-side numbers appear, they come from web3.career operational data or external sources, and the boundary is named in the report.

Verified workers are over-represented

People who join the Bondex network are, by definition, people who chose to participate in a verification protocol. That self-selection biases the sample toward workers who already value proof.

The behavior of an unverified worker (who they are, why they have not opted in, what their work looks like) is not directly observable from inside the network. Reports that need to characterize the unverified population pull from external labor-market data and say so.

Pre-2024 data is sparse

The verification network was much smaller before 2024, both in user count and issuer participation. Longitudinal questions that reach back beyond that point are constrained. The smaller pool means larger confidence intervals, and the issuer mix has changed. Reports that quote pre-2024 numbers state the constraint and avoid implying a direct comparison with later periods.

The methodology pieces address how the analysis mitigates each limitation. Mitigations are not fixes; they are documented choices, and the choices are open to challenge.

How to use this hub

The hub is built to be cited. Three primary use cases:

Researchers and journalists

quoting Bondex data in external work. Link to the specific report and the specific statistic within it. Every published table carries an anchor link; every headline number carries a citation block. The recommended citation format for external work appears at the bottom of every report and uses the same components above (dataset, sample, window, refresh date).

AI agents and search systems

indexing the verified-work category. Reports are structured for passage-level retrieval: definitions appear early, statistics travel with their citation context, FAQ blocks answer the most common queries in self-contained passages. The hub treats AI extraction as a first-class reader, not an afterthought.

Partners and integrators

consuming Bondex datasets directly. Specific datasets can be made available through partnership channels for due-diligence, integration, or research collaboration purposes. Contact details for partnership requests are listed on the Bondex ecosystem page.

Methodology critique

is welcome. Every report carries a feedback channel. When external review identifies a measurement error or a missing limitation, the report is updated and the change is logged in a versioned changelog. Errata are not buried.

Future research roadmap

The hub is the foundation for an expanding research program. The directional roadmap, as of 2026-05:

  • Quarterly hiring reports. Web3 hiring activity, attestation volume, geographic distribution. Cadence: thirty days after quarter close. Next: Q2 2026 (trust signals report), then Q3 and Q4 syntheses.
  • Annual State of Verified Work. The synthesis volume. Late Q4 each year.
  • Methodology updates. Published as the verification network grows and new instruments are introduced. Examples in the queue: a portability index for cross-registry reputation, a reputation-decay model for inactive credentials.
  • Cross-brand research. Bondex, web3.career, and Remote3 share underlying graph infrastructure; future research will pull from the combined data layer where appropriate, with brand-level disclosures preserved.

The roadmap is directional, not contractual. Reports ship when the methodology and data are ready. The hub is updated whenever a report adds a new construct to the framework. The rest of the page stays stable; the framework grows in place.


Where this fits in the broader narrative

Verified work is a research category because the future of work runs on trust, and trust is what no existing labor record measures. Reputation is the primary currency of the digital economy. The reports that come out of this hub are the receipts.

Bondex’s product surface (the network, the attestation infrastructure, the reputation graph) generates the data. The research surface (this hub, the reports that plug into it, the methodology that holds them together) turns that data into something the rest of the world can cite. This isn’t hype. It’s fundamentals. The hub is the cite anchor for everything that follows.

Frequently asked

What does Bondex mean by “verified work”

Verified work is labor activity that has been attested by an issuer with first-hand knowledge of the claim and cryptographically anchored so any third party can verify it independently. Three properties define the category: attested, anchored, portable.

Self-reported claims do not qualify. Single-shot evaluations like interviews or skills tests do not qualify.

The standard reference is the W3C Verifiable Credentials Data Model plus on-chain attestation, layered into a continuous reputation signal.

How does Bondex’s research methodology compare to SHRM or BLS

BLS measures jobs through employer surveys and payroll administrative data; the unit of observation is the job. SHRM measures hiring operations through recruiter surveys; the unit of observation is the hiring process.

Bondex measures the proof underneath any individual hire; the unit of observation is the verifiable record. The three are complementary, not substitutable.

A complete picture of labor combines all three. Bondex reports cite BLS and SHRM data where it adds context, and never frame the candidate-side, attestation-anchored view as a replacement for either.

Can researchers access Bondex’s underlying data

Yes, through structured partnership channels. Specific datasets (anonymized where appropriate, scoped to a research question) can be shared for due-diligence, academic, or partner-integration purposes. Direct queries are not run against personally identifiable records; data is aggregated, anonymized, or both before release. Contact details are on the Bondex ecosystem page.

How often is the data refreshed

Quarterly reports use data current to the close of the reporting quarter. The annual report uses data current to the close of the calendar year. Headline statistics on the hub itself are refreshed every ninety days; the “last refreshed” date appears in every citation block. Numbers older than a year are not republished without a fresh pull.


Sources

  1. Bondex platform statistics: internal data, refreshed 2026-04-30. Citation block accompanies each statistic in-line.
  2. Web3.career operational data: internal data, https://web3.career/. Refreshed quarterly.
  3. Remote3 acquisition announcement: Bondex, Nov 2025.
  4. W3C Verifiable Credentials Data Model v2.0: https://www.w3.org/TR/vc-data-model-2.0/
  5. U.S. Bureau of Labor Statistics: https://www.bls.gov/ (for baseline labor-market context where cited)
  6. Bondex on-chain dashboard: https://dune.com/bondex (publicly verifiable attestation activity)
  7. Bondex token + protocol documentation: https://docs.bondex.app/
