Transparency

Methodology & Data

Our commitment to transparency: exploring how we calculate risk in a rapidly evolving technological landscape.

Data Sources

We aggregate data from three primary sources to ensure a balanced view of automation potential:

  • O*NET OnLine: Detailed task-level breakdowns of over 900 occupations, providing the granular foundation for our analysis.
  • OpenAI & LLM Benchmarks: Performance metrics on coding, reasoning, and creative tasks relative to human baselines.
  • GitHub Trends: Real-time analysis of repository automation and copilot adoption rates.

The Scoring Model

Our "Risk Score" is a weighted average of task automatability. We analyze an occupation's specific daily responsibilities and assign each task an "Allocatable Probability" score based on the capabilities of current state-of-the-art (SOTA) models. The question is not just whether a task *can* be done by AI, but whether AI is *economically viable* and *reliable* enough to replace human effort on that task.
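The weighted-average idea can be sketched in a few lines. This is an illustrative toy, not our production model: the field names (`automatability`, `viability`, `hours_per_week`) and the choice of time-based weights are assumptions made for the example.

```python
# Toy sketch of a time-weighted risk score. Each task carries an
# automatability probability (0-1), an economic-viability discount (0-1),
# and a weight given by hours spent on it per week. All names and weights
# here are hypothetical, for illustration only.

def risk_score(tasks):
    """Return a 0-1 risk score: hours-weighted mean of
    automatability x viability across an occupation's tasks."""
    total_hours = sum(t["hours_per_week"] for t in tasks)
    if total_hours == 0:
        return 0.0
    weighted = sum(
        t["automatability"] * t["viability"] * t["hours_per_week"]
        for t in tasks
    )
    return weighted / total_hours

# Two made-up tasks: one highly automatable but only half viable,
# one moderately automatable and fully viable.
example_tasks = [
    {"automatability": 0.8, "viability": 0.5, "hours_per_week": 10},
    {"automatability": 0.6, "viability": 1.0, "hours_per_week": 10},
]
print(risk_score(example_tasks))  # 0.5
```

Weighting by hours captures the intuition that automating a task someone spends half their week on matters more than automating a five-minute chore, and multiplying by a viability factor discounts tasks that AI can technically do but not yet cheaply or reliably.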

Limitations & Context

Predictions are probabilistic, not deterministic. Human adaptability, regulatory changes, and economic factors are wildcards that AI models cannot fully predict. Our goal is to provide a "signal" amidst the noise, not a crystal ball. Use this data as a strategic tool for career planning, not as a definitive verdict.