Financial Services Task Inventory

A comprehensive inventory of 1,545 financial services tasks mapped to a 4-level taxonomy — providing the foundation for understanding what work gets done, what skills it requires, and how roles are constructed from tasks.

Tasks Inventoried: 1,545
Business Functions: 15
Activity Categories: 368
Tasks, Skills & Roles

Understanding the building blocks of work — and why tasks are the right unit of analysis.

Most workforce discussions start with roles — job titles, org charts, headcount. But roles are composites. To understand work at its most fundamental level, you need to decompose roles into their constituent parts: the tasks people perform and the skills those tasks require.

The Three Building Blocks

Task

A discrete unit of work with a clear verb-object structure: "Reconcile daily trade settlements," "Conduct annual credit reviews." Tasks are observable, measurable, and assignable. They are the atomic level of work.

This inventory captures 1,545 tasks at the L4 level — the most granular layer of the taxonomy.

Skill

A learned capability required to perform a task. Skills can be technical (financial modeling, SQL, KYC/AML), cognitive (analytical reasoning, judgment under ambiguity), or interpersonal (client advisory, negotiation). A single task typically requires 2–5 skills.

Each task in this inventory lists its required skills, enabling skill-based workforce analysis.

Role

A bundle of tasks assigned to one person or job title. Roles exist because organizations need to group tasks into manageable work packages. But role boundaries are often inherited from legacy structures rather than designed from first principles.

The inventory maps tasks to primary roles and O*NET occupational codes for cross-referencing.

How They Relate

Tasks → Skills: Every task demands specific skills to execute. By inventorying tasks, you implicitly map the skills landscape of your organization. Skill gaps become visible when you can see exactly which tasks require capabilities your workforce hasn't developed.

Tasks → Roles: A role is defined by which tasks it contains. Two roles with different titles but overlapping tasks may be candidates for consolidation. A role whose tasks span wildly different skill profiles may be a candidate for splitting.

Skills → Roles: Shared skills create natural job families — groups of roles with overlapping competency requirements. Career mobility is highest within a job family because the skill transfer cost is lowest. This is the foundation of career pathing.
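The three relationships above can be sketched as a minimal data model. This is an illustrative sketch, not the inventory's implementation; the Task and Role classes and the sample task IDs and skills are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    name: str
    skills_required: set  # Tasks -> Skills: each task demands specific skills

@dataclass
class Role:
    title: str
    tasks: list  # Tasks -> Roles: a role is a bundle of tasks

    def skills_footprint(self) -> set:
        # Skills -> Roles: a role's capability set is the union of its tasks' skills
        return set().union(*(t.skills_required for t in self.tasks)) if self.tasks else set()

# Two illustrative tasks bundled into one role
t1 = Task("RB.001", "Reconcile daily trade settlements", {"reconciliation", "SQL"})
t2 = Task("RB.002", "Conduct annual credit reviews", {"credit analysis", "judgment"})
rm = Role("Relationship Manager", [t1, t2])
print(rm.skills_footprint())
```

Comparing footprints across roles is then a set operation, which is what makes job families and skill gaps computable.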

Why Start with Tasks?

Tasks Are Stable

Job titles change with reorganizations, mergers, and market trends. But the underlying work — reconciling ledgers, assessing credit risk, advising clients — persists. Tasks are the durable unit.

Tasks Are Comparable

A "Relationship Manager" at one bank may do very different work than a "Relationship Manager" at another. But a task like "Conduct annual credit reviews and covenant compliance checks" means the same thing everywhere.

Tasks Are Measurable

Each task can be independently assessed for complexity (Bloom's taxonomy), frequency, regulatory burden, cross-functional scope, and — critically — its exposure to technological change including AI.

Tasks Enable Redesign

When you understand work at the task level, you can reassemble it: combine tasks into new roles, identify which tasks can be automated or augmented, and design job hierarchies grounded in what people actually do rather than inherited structures.

The 4-Level Taxonomy

This inventory organizes financial services work into a structured hierarchy that moves from the broadest organizational level down to individual tasks:

L1 Function: 15 business functions (e.g., Retail Banking, Risk Management)

L2 Process: 92 process groups (e.g., Consumer Lending, Branch Sales)

L3 Activity: 368 activity clusters (e.g., Mortgage Origination, Credit Underwriting)

L4 Task: 1,545 discrete tasks (e.g., Originate Residential Mortgage Applications)

Navigating this tool: The Dashboard gives you a statistical overview. The Explorer lets you filter, search, and drill into individual tasks. The advisory sections — Role Mapping, SWP, and Job Hierarchy Redesign — show how to apply task-level data to organizational decisions.
Inventory at a Glance

Key metrics and distributions across the financial services task inventory.

Total Tasks: 1,545
Median Bloom's Level: 4
Regulatory-Driven: 61.3%
Cross-Functional: 51.6%
Mean AI Exposure Score: 50.3
Tasks with Augment Disposition: 1,185

Charts (interactive): Cognitive Complexity (Bloom's); Tasks by Function & Disposition; AI Exposure Score Distribution; Defense Line Composition

Key Insight: The inventory spans the full spectrum of cognitive complexity, from Bloom's level 1 (recall and data entry) to level 6 (strategic creation and design). 61.3% of tasks are regulatory-driven, reflecting the heavily governed nature of financial services. The AI exposure scoring provides one lens for understanding each task's characteristics — alongside complexity, frequency, skills, and cross-functional scope.
Methodology & Data Sources

A transparent accounting of how this inventory was constructed, enriched, and validated.

1. Taxonomy Construction

Built a 4-level hierarchy: 15 L1 business functions, 92 L2 processes, 368 L3 activities, and 1,545 L4 tasks. Anchored to O*NET 30.2 occupational data (279 financial services occupations, 5,317 task statements) and validated against Canadian banking operations.

2. Task Enrichment

Each task is characterized across multiple dimensions: cognitive complexity (Bloom's taxonomy 1–6), business importance (1–5), frequency, regulatory classification, cross-functional scope, defense line, required skills, and primary roles. These attributes support diverse analytical use cases — from skills gap analysis to organizational design.

3. AI Exposure Assessment

As one analytical lens, each task is scored 0–100 for AI exposure using a 7-factor engine, and assigned a strategic disposition (Automate, Augment, Restructure, No_Change). This is one of many ways to analyze the inventory — useful for technology planning but not the sole purpose of the data.

Scoring Engine: 7 Weighted Criteria
| Criterion | Range | High Score Triggers | Low Score Triggers |
|---|---|---|---|
| Input Structure | 0–18 | Structured data (transactions, records) | Unstructured inputs (conversation, negotiation) |
| Output Determinism | 0–18 | Deterministic outputs (approve/deny) | Subjective outputs (recommend, advise) |
| Judgment Requirement | −10 to +15 | Bloom's 1–2 (remember, understand) | Bloom's 5–6 (evaluate, create) |
| Data Availability | 0–13 | Enterprise system data (accounts, trades) | Limited or unavailable data |
| Creativity Needed | −3 to +10 | Routine, formulaic tasks | Creative, strategic tasks |
| Regulatory Constraint | −8 to +8 | No regulatory requirements | Heavy regulation (FINTRAC, OSFI) |
| Current Digitization | 0–9 | Fully digital/platform-based | Manual, in-person work |
Data Sources
| Source | Description | Contribution |
|---|---|---|
| O*NET 30.2 | U.S. Dept of Labor occupational database; 279 FS occupations | Baseline task statements & SOC mapping |
| Bank-Specific | Canadian banking domain expertise not in O*NET | Institution-unique processes |
| Regulatory | FINTRAC, OSFI, Basel III, IFRS 9, TCFD frameworks | Compliance obligations |
| Certification | FINRA Series 7, CFA, FRM, CISSP outlines | Professional knowledge standards |
| AI-Era | Emerging tasks from AI/ML adoption in banking | MLOps, responsible AI, bias testing |
Task Schema: 18 Fields
| Field | Type | Description |
|---|---|---|
| task_id | String | Unique ID encoding taxonomy path (e.g., RB.DEP.ACT.001) |
| task_name | String | Verb-object task name |
| task_description | String | Full description with regulatory/business context |
| L1_function | String | Business function (1 of 15) |
| L2_process | String | Process group within L1 |
| L3_activity | String | Activity cluster within L2 |
| onet_soc_codes | Array | O*NET Standard Occupational Classification codes |
| primary_roles | Array | Job titles that typically perform this task |
| importance | 1–5 | Business criticality rating |
| frequency | String | How often the task is performed |
| cognitive_complexity | 1–6 | Bloom's taxonomy level |
| regulatory_driven | Boolean | Whether driven by regulatory requirement |
| cross_functional | Boolean | Whether spans multiple functions |
| ai_exposure_score | 0–100 | Composite AI exposure assessment |
| ai_disposition | String | Automate, Augment, Restructure, No_Change |
| skills_required | Array | Key skills needed |
| defense_line | String | Risk governance (1st, 2nd, 3rd, NA) |
| source | String | Data provenance category |
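As a quick illustration of working with the schema, the sketch below splits the documented task_id encoding and range-checks the numeric fields. The record values and the split_task_id helper are hypothetical; only the field names, the ID example, and the valid ranges come from the schema:

```python
# Hypothetical record using a subset of the 18 schema fields; values are illustrative.
task = {
    "task_id": "RB.DEP.ACT.001",
    "task_name": "Originate Residential Mortgage Applications",
    "cognitive_complexity": 3,   # Bloom's level, 1-6
    "importance": 4,             # business criticality, 1-5
    "ai_exposure_score": 62,     # composite score, 0-100
    "regulatory_driven": True,
}

def split_task_id(task_id: str) -> dict:
    # Assumes the documented encoding: function.process.activity.sequence
    parts = task_id.split(".")
    return {"function": parts[0], "process": parts[1],
            "activity": parts[2], "sequence": parts[3]}

print(split_task_id(task["task_id"]))

# Basic range checks on the bounded numeric fields
assert 1 <= task["cognitive_complexity"] <= 6
assert 1 <= task["importance"] <= 5
assert 0 <= task["ai_exposure_score"] <= 100
```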
Important: This is a reference model based on external data. It is not derived from any specific institution's internal data. Scores should be validated against your organization's actual operating model.
Task Explorer

Filter, search, and drill into 1,545 financial services tasks. Click any row to expand full details.

Filters: Business Function, AI Disposition, AI Exposure Score (0–100), Bloom's Complexity, Defense Line.
Table columns: ID, Task, Function, Disposition, AI Score, Bloom.
Mapping to Organizational Roles

A practical guide for connecting this reference inventory to your organization's actual job architecture.

This inventory uses generic role titles. Your organization will have different titles, structures, and task bundles. The mapping process below helps you translate between the two — revealing how tasks cluster into roles, what skills each role requires, and where role boundaries may need to shift.
1. Build Your Role–Task Alignment

List your actual job titles within each business function. For each role, identify which L2 processes and L3 activities they touch, and estimate the percentage of effort in each area.

Example

Your Role: "Client Service Associate — Branch"
L1: Retail Banking
L2 Processes: Deposit Products (60%), Consumer Lending (25%), Branch Sales (15%)
Reference Tasks: ~45 tasks from those L3 activities apply
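The alignment in the example can be expressed as a small effort-weighted lookup. A minimal sketch; the per-process task counts are illustrative figures chosen to total the ~45 reference tasks mentioned in the example:

```python
# Illustrative alignment for one role: L2 process -> share of effort (assumptions)
role_alignment = {
    "Client Service Associate - Branch": {
        "Deposit Products": 0.60,
        "Consumer Lending": 0.25,
        "Branch Sales": 0.15,
    }
}

# Hypothetical count of reference tasks under each L2 process's L3 activities
tasks_per_process = {"Deposit Products": 25, "Consumer Lending": 12, "Branch Sales": 8}

for role, processes in role_alignment.items():
    # Effort shares must account for 100% of the role's time
    assert abs(sum(processes.values()) - 1.0) < 1e-9
    total = sum(tasks_per_process[p] for p in processes)
    print(f"{role}: {total} reference tasks apply")
```

Extending the dictionary with every role and process pair yields the full role-task alignment of step 1.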

2. Build a Role Complexity & Skills Profile

For each role, aggregate the task-level attributes to understand the role's overall character.

What You Can Derive

Complexity Profile: Distribution of Bloom's levels across the role's tasks — is this a primarily execution role (Bloom 1–2), analytical role (3–4), or strategic role (5–6)?

Skills Footprint: Union of all skills_required across the role's tasks — what is the full capability set this role demands?

Regulatory Burden: What percentage of the role's tasks are regulatory-driven? This affects change velocity and training requirements.
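Once tasks are mapped to a role, all three derivations are simple aggregations. A minimal pandas sketch with invented task data, using the inventory's field names:

```python
import pandas as pd

# Hypothetical tasks mapped to one role (column names follow the inventory schema)
tasks = pd.DataFrame({
    "cognitive_complexity": [2, 3, 3, 4, 2],
    "regulatory_driven": [True, False, True, True, False],
    "skills_required": [["KYC/AML"], ["SQL"], ["credit analysis"], ["judgment"], ["KYC/AML"]],
})

# Complexity profile: share of the role's tasks at each Bloom's level
bloom_profile = tasks["cognitive_complexity"].value_counts(normalize=True).sort_index()

# Skills footprint: union of skills_required across the role's tasks
skills_footprint = set().union(*tasks["skills_required"])

# Regulatory burden: fraction of tasks that are regulatory-driven
regulatory_burden = tasks["regulatory_driven"].mean()

print(bloom_profile.to_dict())
print(sorted(skills_footprint))
print(f"{regulatory_burden:.0%} of tasks are regulatory-driven")
```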

3. Analyze Role Composition

With tasks mapped and profiled, several analyses become possible:

  • Task Overlap: Which roles share significant task overlap? These may be candidates for consolidation or clearer boundary definition.
  • Skill Adjacency: Which roles share skill requirements? These form natural job families and career mobility paths.
  • Complexity Span: Does a role bundle tasks across too wide a Bloom's range? Roles spanning 4+ levels may need to split into tiered positions.
  • AI Exposure (Optional): Overlay AI scores and dispositions to understand which tasks within a role are most affected by technology change.
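Task overlap between two roles can be quantified with a simple Jaccard similarity over each role's task set. A sketch with hypothetical role-task assignments (the role names and task IDs are invented):

```python
def jaccard(a: set, b: set) -> float:
    # Intersection over union of the two task sets; 0.0 when both are empty
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical role -> task-ID assignments
role_tasks = {
    "Mortgage Processor": {"RB.001", "RB.002", "RB.003", "RB.004"},
    "Loan Administrator": {"RB.002", "RB.003", "RB.004", "RB.005"},
    "Credit Risk Reviewer": {"RM.010", "RM.011"},
}

for r1, r2 in [("Mortgage Processor", "Loan Administrator"),
               ("Mortgage Processor", "Credit Risk Reviewer")]:
    print(r1, "vs", r2, "->", round(jaccard(role_tasks[r1], role_tasks[r2]), 2))
```

High-overlap pairs are the consolidation candidates described above; the same function applied to skills_required sets measures skill adjacency.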
Tip: Export filtered task data from the Explorer, then map against your HRIS headcount data to produce headcount-weighted profiles for each role.
Strategic Workforce Planning Integration

How to integrate task-level data into your SWP cycle — for skills planning, capacity modeling, and organizational change.

1. Assess Current State

Use Role Mapping to establish a baseline of your workforce's task composition.

  • Pull headcount from HRIS by job title and function
  • Map job titles to reference tasks using the Role Mapping framework
  • Profile each role's complexity distribution, skill requirements, and regulatory burden
  • Identify roles with the highest task diversity (spanning many L2 processes) — these are your most complex workforce planning targets
2. Identify Gaps & Risks

Compare the desired future state against current capabilities across multiple dimensions.

  • Skills Gap: Which skills appear in high-complexity tasks but are underrepresented in your current workforce?
  • Capacity Risk: Are critical tasks concentrated in too few roles or individuals? What happens if those roles turn over?
  • Regulatory Exposure: Which roles carry heavy regulatory task loads? These require specialized succession planning.
  • Technology Impact: Use AI exposure scores to identify which tasks (and therefore roles) are most affected by technology change, including AI adoption.
3. Model Scenarios

Build scenarios to bound workforce evolution under different strategic assumptions.

Scenario Dimensions

Organizational Change: What if you consolidate roles within an L2 process? Model headcount and skill implications.

Technology Adoption: What if tasks with AI score >75 are automated within 24 months? Where does freed capacity go?

Regulatory Shift: What if new regulations add compliance tasks? Which roles absorb the load?
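The technology-adoption scenario can be bounded with simple arithmetic. A sketch assuming hypothetical roles, FTE counts, and effort shares; the score >75 threshold is the one stated in the scenario above:

```python
import pandas as pd

# Hypothetical role-level data: FTEs plus share of effort spent on tasks
# with AI exposure score > 75 (all figures are assumptions)
roles = pd.DataFrame({
    "role": ["Ops Specialist", "Reconciliation Analyst", "Client Advisor"],
    "fte": [120, 40, 80],
    "effort_on_tasks_above_75": [0.50, 0.70, 0.10],
})

# Scenario: tasks scoring > 75 are automated within 24 months
roles["freed_fte"] = roles["fte"] * roles["effort_on_tasks_above_75"]
print(roles[["role", "freed_fte"]])
print("Total freed capacity:", round(roles["freed_fte"].sum(), 1), "FTE")
```

The "freed capacity" figure is an upper bound; redeployment, reskilling time, and regulatory constraints all reduce it in practice.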

4. Implement & Monitor

Execute workforce transitions with measurable indicators.

  • Leading: Reskilling enrollment, internal mobility rate, time-to-fill for redesigned roles
  • Lagging: Productivity per role, cost-to-serve, customer satisfaction, error rates
  • Governance: Monthly reviews, cross-functional steering, union consultation where applicable
Job Hierarchy Redesign

Using the task inventory to rethink how roles, job families, and organizational layers are structured — grounded in the Hay Method for job evaluation.

The task inventory reveals what people actually do at the L4 level. This makes it possible to challenge existing job boundaries, identify where roles can be consolidated or split, and design a future-state hierarchy grounded in real task clusters rather than inherited org charts.

The Problem with Current Job Hierarchies

Most financial services job hierarchies evolved organically — roles were added, titles inflated, and boundaries hardened around legacy processes. When the underlying work changes (through technology, regulation, or market shifts), the hierarchy itself may no longer reflect the actual nature of the work being done. Two symptoms emerge:

Fragmented Roles

A single end-to-end process is split across 3–5 job titles, each owning a narrow slice. The result: duplicated skills, unclear accountability, and roles that lack the critical mass to justify a distinct grade.

Bloated Roles

A single title bundles unrelated tasks from different L2 processes. The role holder is a generalist by accident, not design — making it difficult to evaluate the role consistently or plan career progression.

The Hay Method & Task-Level Data

The Hay Method (Korn Ferry) is the most widely used job evaluation framework in financial services. It evaluates jobs on three core factors: Know-How, Problem Solving, and Accountability. Traditionally, these are assessed through job descriptions and interviews — a subjective, time-consuming process. The task inventory provides an empirical foundation for each factor.

Know-How

The sum of knowledge, skills, and experience required to perform the job competently.

Inventory fields that inform Know-How:

  • skills_required — directly enumerates the technical and interpersonal skills each task demands
  • cognitive_complexity — Bloom's level indicates the depth of knowledge application (recall vs. analysis vs. creation)
  • regulatory_driven — regulatory tasks typically require specialized, certified knowledge (AML, OSFI, Basel)
  • onet_soc_codes — links to O*NET's detailed knowledge and education requirements per occupation

Hay Application

Aggregate skills_required across all tasks in a role to measure the breadth of know-how. Use the maximum Bloom's level to gauge depth. Count distinct L2 processes to assess management breadth.

Problem Solving

The thinking required to analyze, evaluate, reason, and arrive at conclusions within the job's environment.

Inventory fields that inform Problem Solving:

  • cognitive_complexity — Bloom's taxonomy directly measures thinking demand: levels 1–2 (routine/guided), 3–4 (analytical/applied), 5–6 (evaluative/creative)
  • task_description — verb patterns reveal the thinking environment (e.g., "execute" = well-defined; "assess" = semi-variable; "design strategy" = abstract)
  • cross_functional — cross-functional tasks require navigating ambiguity across organizational boundaries
  • ai_exposure_score — lower scores often correlate with tasks requiring more novel, unstructured thinking

Hay Application

Map the role's Bloom's distribution to Hay's Thinking Challenge scale. Use the proportion of cross-functional tasks to assess the Thinking Environment (how much guidance or precedent exists).

Accountability

The answerability for actions and their consequences — encompassing freedom to act, magnitude of impact, and directness of impact.

Inventory fields that inform Accountability:

  • defense_line — 1st line (direct execution/ownership), 2nd line (oversight/monitoring), 3rd line (independent assurance) map directly to freedom-to-act levels
  • importance — business criticality rating (1–5) indicates the magnitude of impact if the task fails
  • regulatory_driven — regulatory tasks carry external accountability to supervisors, auditors, and regulators
  • L1_function / L2_process — the organizational scope of the task indicates whether impact is local (branch) or enterprise-wide

Hay Application

Use defense_line to assign Freedom to Act. Weight importance scores by frequency to calculate Magnitude. Assess whether the role's tasks have direct (1st line) or indirect/advisory (2nd/3rd line) impact.
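Taken together, the three factor mappings can be drafted from task data in a few lines. A minimal sketch with invented tasks; the profile keys are loose stand-ins for Hay dimensions, not official Korn Ferry scales:

```python
import pandas as pd

# Hypothetical tasks for one role, using inventory field names
tasks = pd.DataFrame({
    "cognitive_complexity": [3, 4, 4, 5],
    "cross_functional": [False, True, True, False],
    "defense_line": ["1st", "1st", "1st", "1st"],
    "importance": [3, 4, 5, 4],
    "skills_required": [["credit analysis"], ["SQL"],
                        ["credit analysis", "judgment"], ["negotiation"]],
})

profile = {
    # Know-How: breadth via distinct skills, depth via max Bloom's level
    "know_how_breadth": len(set().union(*tasks["skills_required"])),
    "know_how_depth": int(tasks["cognitive_complexity"].max()),
    # Problem Solving: average thinking demand and share of ambiguous, cross-boundary work
    "thinking_challenge": float(tasks["cognitive_complexity"].mean()),
    "thinking_environment": float(tasks["cross_functional"].mean()),
    # Accountability: predominant defense line and importance-based magnitude
    "freedom_to_act": tasks["defense_line"].mode()[0],
    "magnitude": float(tasks["importance"].mean()),
}
print(profile)
```

A draft like this is a starting point for a formal Hay evaluation conversation, not a replacement for it.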

Four-Phase Hierarchy Redesign Process

1. Task Cluster Analysis

Start by grouping tasks from the inventory into natural clusters based on shared attributes, rather than inheriting current role boundaries.

  • By L2 Process: Which tasks belong together because they serve the same process end-to-end?
  • By Cognitive Complexity: Separate high-judgment (Bloom 4–6) from routine execution (Bloom 1–2) — these correspond to different Hay grades and should often be different roles.
  • By Skills Required: Tasks sharing common skill profiles are natural candidates for a single role family. Shared skills = shared Know-How = same Hay job family.
  • By Defense Line: Tasks on different defense lines carry fundamentally different accountability profiles and should not be combined in the same role.
Use the Explorer to filter by L1, Bloom's level, and defense line. Export the results and sort by skills_required to see natural groupings emerge. Each cluster is a candidate role.
2. Role Boundary Redefinition

Once task clusters are identified, draw new role boundaries around them and evaluate each using Hay criteria:

  • Critical Mass Test: Does the cluster contain enough tasks to justify a full-time position? If not, merge with an adjacent cluster that shares the same Know-How profile.
  • Hay Coherence: Do all tasks in the proposed role land within 1–2 Bloom levels (Problem Solving), the same defense line (Accountability), and overlapping skill sets (Know-How)? If not, the role is trying to span too many Hay grades.
  • Span of Complexity: Roles spanning more than 2–3 Bloom levels should split into tiered positions (e.g., Analyst vs. Senior Analyst). This directly maps to different Hay evaluation points.
  • Cross-Functional Alignment: Tasks flagged as cross_functional=true may indicate roles that should sit in a shared service or center of excellence, which changes the Accountability profile (broader magnitude, more indirect impact).

Example: Retail Lending Hierarchy

Current: Mortgage Intake Clerk → Mortgage Processor → Underwriter → Closing Coordinator → Post-Close Auditor (5 roles, 3 layers)

Hay Analysis: Intake and Processing tasks are Bloom 1–2 with overlapping skills. Underwriting is Bloom 4 with distinct regulatory know-how. Audit is 3rd-line with different accountability. Three natural Hay clusters, not five.

Redesigned: Origination Advisor (client-facing, Bloom 4–5, 1st line) + Lending Operations Specialist (process + exception, Bloom 2–3, 1st line) + Credit Risk Reviewer (2nd/3rd line, Bloom 4–5). Three roles, two layers, each internally coherent against Hay criteria.

3. Job Family & Career Level Architecture

Organize the new roles into job families and define Hay-aligned career progression:

  • Job Families: Group roles by shared Know-How domains (skill overlap). Families might be “Client Advisory,” “Risk & Control,” “Data & Intelligence,” “Regulatory Operations” — defined by the skills that their constituent tasks share.
  • Career Levels (Hay Grades): Within each family, Bloom's levels provide a natural grading structure. Bloom 1–2 tasks define entry/associate grades (lower Know-How, guided Problem Solving). Bloom 3–4 define mid-level grades (analytical Problem Solving, broader Accountability). Bloom 5–6 define senior/leadership grades (evaluative/creative thinking, enterprise-wide impact).
  • Progression Paths: Career mobility between levels is defined by which new tasks the next level adds. This makes promotion criteria objective: can the person perform the higher-Bloom tasks that define the next grade?
  • Compensation Banding: Hay evaluation points (derived from task-level Know-How, Problem Solving, and Accountability) provide a defensible, data-backed foundation for pay banding rather than market-matching by title alone.
4. Transition Mapping & Governance

The new hierarchy is a target state. Getting there requires managed transitions:

  • Current → Future Role Map: For each existing role, define the target role(s) it maps to. Export the Role Mapping Template from the Export Center and populate with your org's current titles.
  • Hay Re-Evaluation: Use the task data to draft Hay evaluation profiles for each new role. This accelerates the traditionally manual evaluation process because the Know-How, Problem Solving, and Accountability inputs are already captured in the inventory.
  • Skill Gap Analysis: Compare skills_required of the future role against current role holders. The delta defines training and reskilling needs.
  • Phased Rollout: Sequence by business impact. Start with functions where role fragmentation or bloat is most severe, and where the Hay re-evaluation reveals the largest gap between current grading and task-based grading.
  • Governance: Hierarchy redesign crosses HR, Compensation, business lines, and risk. Establish a cross-functional steering group with sign-off authority on role and grade changes.
Key Principle: The hierarchy should be designed around task clusters evaluated against Hay criteria — not inherited titles or current headcount. Let the tasks define the roles, and let the task attributes define the grades.

Using the Inventory for Hay-Aligned Hierarchy Analysis

Identify Consolidation Opportunities

Filter by L2 process and examine how many distinct primary_roles appear. If 4+ roles share the same L2, similar Bloom levels, and overlapping skills, they occupy the same Hay territory and consolidation is likely warranted.

Detect Grade Misalignment

Sort by cognitive_complexity within an L1 function. If roles at adjacent Bloom levels have identical task types and defense lines, they may be graded differently but doing the same work — a Hay evaluation would merge them.

Map Know-How Clusters

Export tasks and group by skills_required. Roles that share >70% of their skill footprint belong in the same job family. Roles that share <30% may be misclassified in the current hierarchy.
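The footprint-overlap rule can be sketched directly. The roles and skills below are invented; the 70%/30% thresholds are the ones stated above:

```python
def skill_overlap(a: set, b: set) -> float:
    # Share of the smaller footprint covered by the intersection
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0

# Hypothetical skill footprints per role
footprints = {
    "Credit Analyst": {"credit analysis", "financial modeling", "SQL", "report writing"},
    "Senior Credit Analyst": {"credit analysis", "financial modeling", "SQL", "negotiation"},
    "Branch Teller": {"cash handling", "customer service", "core banking system", "SQL"},
}

def classify(a: str, b: str) -> str:
    s = skill_overlap(footprints[a], footprints[b])
    if s > 0.70:
        return "same job family"
    if s < 0.30:
        return "different families"
    return "review manually"

print(classify("Credit Analyst", "Senior Credit Analyst"))
print(classify("Credit Analyst", "Branch Teller"))
```

Note the similarity measure is a design choice: overlap relative to the smaller footprint flags subset relationships (junior/senior pairs) that plain Jaccard would dilute.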

Validate Accountability Structures

Ensure the redesigned hierarchy maintains separation of duties. No role should mix 1st-line and 2nd/3rd-line tasks — the defense_line field makes this auditable and maps directly to Hay's Freedom to Act dimension.

Summary: The task inventory provides the raw material that Hay evaluations require — but captured systematically at scale rather than through role-by-role interviews. By aggregating task-level Know-How (skills, Bloom's, regulatory knowledge), Problem Solving (Bloom's distribution, cross-functional scope), and Accountability (defense line, importance, impact scope), organizations can draft Hay-aligned role evaluations directly from the data, dramatically accelerating the job architecture redesign process.
Implementation Runbook

How to recreate and extend this analysis internally, combining your organization's proprietary data with the external reference inventory.

This inventory was built entirely from external, publicly available data. Its value as a reference model is that it provides a validated starting point — a comprehensive task taxonomy, scoring engine, and enrichment schema — that any organization can adapt without starting from scratch. The sections below describe exactly how to do that.

Reusable Artifacts from This Analysis

The following outputs from this project can be used directly in your internal implementation. They represent significant upfront work that does not need to be repeated:

4-Level Taxonomy Structure

The hierarchy of 15 L1 Functions → 92 L2 Processes → 368 L3 Activities provides a ready-made classification framework. Your organization can adopt it as-is or modify branches to reflect your specific operating model.

Export: Full JSON from the Export Center. Extract unique L1/L2/L3 combinations to get the taxonomy tree.

Reference Task Library (1,545 tasks)

Each task is a verb-object statement with a full description, skills, roles, and classification metadata. Use as a starting checklist: walk through each L3 activity and confirm which tasks exist in your org, which need rewording, and which are missing.

Export: Full CSV. Filter by L1 function to produce function-specific worksheets for SME validation.

18-Field Task Schema

The schema (task_id, task_name, task_description, L1–L3, SOC codes, roles, importance, frequency, Bloom's, regulatory, cross-functional, AI score, disposition, skills, defense line, source) is designed for analytical versatility. Adopt it as your internal data standard.

Export: JSON schema is self-documenting. See the Methodology tab for field definitions.

AI Exposure Scoring Engine

The 7-factor scoring algorithm (input structure, output determinism, judgment requirement, data availability, creativity, regulatory constraint, digitization) can be applied to any task inventory. Weights are transparent and adjustable.

Export: The scoring logic is documented in the Methodology tab. The runbook below provides pseudocode for reimplementation.

O*NET SOC Code Mappings

Each task is linked to O*NET Standard Occupational Classification codes, connecting the inventory to the U.S. Department of Labor's occupational database (knowledge requirements, education levels, wage data, projected growth).

Export: SOC codes are included in every CSV/JSON export. Cross-reference against the free O*NET 30.2 database.

Hay Method Mapping Framework

The Job Hierarchy Redesign tab documents how inventory fields map to Hay's three evaluation factors (Know-How, Problem Solving, Accountability). This mapping template accelerates Hay-aligned job architecture work.

Export: Conceptual framework documented in the Redesign tab. Apply it to your org-specific task data.

Internal Data Sources to Integrate

To move from a reference model to an org-specific analysis, you need to overlay your proprietary data. Here are the key internal sources and what they contribute:

| Internal Source | What It Provides | How It Integrates |
|---|---|---|
| HRIS / Workday | Job titles, headcount, grades, compensation bands, reporting lines, org structure | Map job titles to reference tasks (Role Mapping step 1). Headcount-weight the analysis to show FTE impact, not just task count. |
| Job Descriptions (JDs) | Official role responsibilities, qualifications, competency requirements | Validate and customize the reference task list. Add org-specific tasks not in the external inventory. Confirm Bloom's levels match internal expectations. |
| Process Maps / SOPs | Documented workflows, system touchpoints, handoff points | Validate L2/L3 taxonomy alignment. Identify tasks that are split across roles differently than the reference model assumes. |
| Time & Motion / Activity Data | How staff actually spend their time (if available from workforce analytics tools) | Replace estimated effort weights with actual observed data. This is the single highest-value internal dataset for this analysis. |
| Learning Management System (LMS) | Training records, certifications, competency assessments | Map to skills_required to identify existing capability vs. gaps. Feeds directly into SWP skill gap analysis. |
| Hay / Korn Ferry Evaluations | Existing job evaluation scores, grade structures, point profiles | Compare current Hay grades against the task-derived grades from the Hierarchy Redesign framework. Identify misalignment between current grading and actual task composition. |
| Incident / Issue Registers | Operational errors, compliance findings, audit issues | Correlate with task-level data to identify which tasks (and therefore roles) are highest risk. Informs importance scoring and defense line validation. |
| Technology Inventory | Systems, platforms, automation tools currently in use | Informs the "current digitization" scoring factor. Tasks performed on modern platforms score higher for AI readiness. |

External Data Sources (Publicly Available)

These are the external sources used to build this reference inventory. All are freely or commercially available:

| Source | Access | What to Extract |
|---|---|---|
| O*NET 30.2 Database | Free download: onetonline.org | Task statements, knowledge domains, skills, abilities, education requirements, and wage data for 1,000+ occupations. Filter by SOC codes relevant to financial services (13-xxxx, 15-xxxx, 43-xxxx). |
| Regulatory Frameworks | FINTRAC, OSFI, Basel III/IV, IFRS 9, TCFD — all published online | Compliance obligations that generate regulatory-driven tasks. These define the "non-negotiable" task layer that cannot be eliminated. |
| Professional Certifications | CFA Institute, GARP (FRM), (ISC)² (CISSP), FINRA | Certification body-of-knowledge outlines define the skill and knowledge standards for professional roles. Use to validate skills_required fields. |
| Industry Job Postings | Careers pages, Indeed, LinkedIn, Glassdoor | Real-world role descriptions and responsibilities. Useful for validating that the inventory covers actual market roles (see the BMO Coverage Analysis for an example of this validation). |
| Bloom's Taxonomy Reference | Standard educational framework (widely published) | Provides the 6-level cognitive complexity scale: Remember (1), Understand (2), Apply (3), Analyze (4), Evaluate (5), Create (6). Used to score each task. |

Data Scientist Runbook

The following is a step-by-step technical guide for a data scientist to build the internal integration pipeline. Each step includes the inputs, outputs, and pseudocode logic.

1. Load & Validate the Reference Inventory

Start by loading the reference data and confirming its structure.

# Step 1: Load reference inventory
import pandas as pd, json

with open('task_inventory.json') as f:
    ref = json.load(f)
ref_tasks = pd.DataFrame(ref['tasks'])
print(f"Reference: {len(ref_tasks)} tasks, {ref_tasks.L1_function.nunique()} L1 functions")
print(f"Schema: {list(ref_tasks.columns)}")

# Validate: no nulls in key fields
assert ref_tasks[['task_id','task_name','L1_function','L2_process','L3_activity']].notna().all().all()

# Extract taxonomy tree for reuse
taxonomy = ref_tasks[['L1_function','L2_process','L3_activity']].drop_duplicates().sort_values(
    ['L1_function','L2_process','L3_activity']
)
2. Load Internal HRIS Data & Build Role–Task Map

Pull your HRIS export and map each internal job title to reference tasks. This is the most labor-intensive step and typically requires SME input.

# Step 2: Load HRIS and build role-task mapping
hris = pd.read_csv('hris_export.csv')  # columns: employee_id, job_title, department, grade, fte_count
print(f"HRIS: {hris.job_title.nunique()} unique titles, {hris.fte_count.sum()} FTEs")

# Option A: Manual mapping worksheet (SME-assisted)
# Export reference tasks by L1, have SMEs mark which tasks apply to each job title
role_task_map = pd.read_csv('role_task_mapping.csv')  # columns: job_title, task_id, effort_pct

# Option B: Automated fuzzy matching (augments manual mapping)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Match JD text against reference task descriptions
jd_titles, jd_texts = load_job_descriptions()  # your JD corpus: parallel lists of titles and texts
ref_descs = ref_tasks['task_description'].tolist()
vectorizer = TfidfVectorizer(stop_words='english', max_features=5000)
tfidf = vectorizer.fit_transform(ref_descs + jd_texts)
ref_vecs = tfidf[:len(ref_descs)]
jd_vecs = tfidf[len(ref_descs):]
similarities = cosine_similarity(jd_vecs, ref_vecs)

# For each JD, get top-N matching reference tasks above threshold
THRESHOLD = 0.25
for i, jd_title in enumerate(jd_titles):
    top_matches = similarities[i].argsort()[::-1][:20]
    matched = [(ref_tasks.iloc[j].task_id, similarities[i][j])
               for j in top_matches if similarities[i][j] > THRESHOLD]
3

Add Org-Specific Tasks

The reference inventory covers the industry broadly but won't capture every organization-specific process. Add custom tasks using the same schema.

# Step 3: Add org-specific tasks
# Use the same 18-field schema
custom_tasks = []
custom_tasks.append({
    'task_id': 'CUSTOM.RB.001',  # prefix with CUSTOM to distinguish
    'task_name': 'Process Internal Transfer via Proprietary Platform',
    'task_description': 'Execute inter-branch account transfers using [OrgSystem]...',
    'L1_function': 'Retail Banking',
    'L2_process': 'Deposit Products & Services',
    'L3_activity': 'Transaction Processing',
    'onet_soc_codes': ['43-3071.00'],
    'primary_roles': ['Branch Operations Specialist'],
    'importance': 3,
    'frequency': 'Daily',
    'cognitive_complexity': 2,  # Bloom's level
    'regulatory_driven': False,
    'cross_functional': False,
    'ai_exposure_score': None,  # will be scored in Step 4
    'ai_disposition': None,
    'skills_required': ['Core Banking System', 'Transaction Processing'],
    'defense_line': '1st',
    'source': 'Internal'
})

# Merge with reference
all_tasks = pd.concat([ref_tasks, pd.DataFrame(custom_tasks)], ignore_index=True)
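Before merging, it is worth checking that every custom record actually carries the full schema. A minimal validation sketch, with two caveats: the field list is read off the example record above rather than a published spec, and `validate_custom_task` is a hypothetical helper, not part of the inventory.

```python
# Field list transcribed from the example record above (assumed to be the
# complete 18-field schema); validate_custom_task is a hypothetical helper.
REQUIRED_FIELDS = {
    'task_id', 'task_name', 'task_description', 'L1_function', 'L2_process',
    'L3_activity', 'onet_soc_codes', 'primary_roles', 'importance', 'frequency',
    'cognitive_complexity', 'regulatory_driven', 'cross_functional',
    'ai_exposure_score', 'ai_disposition', 'skills_required', 'defense_line', 'source',
}

def validate_custom_task(task: dict) -> list:
    """Return a list of problems; an empty list means the record is mergeable."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - task.keys())]
    if not str(task.get('task_id', '')).startswith('CUSTOM.'):
        problems.append("task_id should carry the CUSTOM. prefix")
    return problems

# Usage: an incomplete record with an unprefixed id
problems = validate_custom_task({'task_id': 'RB.001'})
print(len(problems))  # 17 missing fields + 1 prefix warning -> prints 18
```

Running this over every dict in `custom_tasks` before the `pd.concat` keeps schema drift out of the merged inventory.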
4

Apply the AI Exposure Scoring Engine

Score any new or modified tasks using the 7-factor algorithm. The engine is deterministic and keyword-driven — it can be applied to any task that follows the schema.

# Step 4: Score AI exposure (7-factor engine)
def score_ai_exposure(task):
    score = 0
    desc = task['task_description'].lower()
    bloom = task['cognitive_complexity']
    regulatory = task['regulatory_driven']

    # Factor 1: Input Structure (0-18)
    structured = ['process', 'calculate', 'reconcile', 'verify data', 'enter', 'monitor metric',
                  'balance', 'payment', 'transaction', 'filing', 'report', 'extract', 'validate']
    semi = ['analyze', 'review', 'evaluate', 'assess', 'investigate', 'examine', 'monitor']
    unstructured = ['negotiate', 'advise', 'design', 'develop strategy', 'build relationship',
                    'coach', 'present', 'persuade', 'formulate', 'orchestrate']
    if any(kw in desc for kw in structured):
        score += 18
    elif any(kw in desc for kw in semi):
        score += 10
    elif any(kw in desc for kw in unstructured):
        score += 3
    else:
        score += 10

    # Factor 2: Output Determinism (0-18)
    deterministic = ['file report', 'generate report', 'calculate', 'reconcile', 'balance',
                     'process payment', 'execute transfer', 'data entry', 'verify']
    probabilistic = ['analyze', 'assess risk', 'evaluate', 'score', 'classify', 'screen', 'model']
    subjective = ['recommend', 'advise', 'negotiate', 'design', 'structure deal', 'judgment',
                  'strategy', 'create', 'formulate']
    if any(kw in desc for kw in deterministic):
        score += 18
    elif any(kw in desc for kw in probabilistic):
        score += 10
    elif any(kw in desc for kw in subjective):
        score += 2
    else:
        score += 10

    # Factor 3: Judgment Requirement (-10 to +15) — based on Bloom's
    score += {1: 15, 2: 15, 3: 8, 4: 0, 5: -5, 6: -10}.get(bloom, 0)

    # Factor 4: Data Availability (0-13)
    data_rich = ['transaction', 'payment', 'account', 'balance', 'report', 'metric', 'portfolio',
                 'statement', 'records', 'system', 'database', 'position', 'trade', 'holdings']
    if any(kw in desc for kw in data_rich):
        score += 13
    elif any(kw in desc for kw in ['analyze', 'review', 'evaluate', 'model']):
        score += 8
    else:
        score += 3

    # Factor 5: Creativity Needed (-3 to +10)
    routine = ['process', 'execute', 'file', 'submit', 'verify', 'reconcile', 'monitor', 'track']
    creative = ['design', 'develop strategy', 'create', 'customize', 'structure', 'innovate']
    if any(kw in desc for kw in routine):
        score += 10
    elif any(kw in desc for kw in creative):
        score -= 3
    else:
        score += 5

    # Factor 6: Regulatory Constraint (-8 to +8)
    if regulatory:
        heavy = ['fintrac', 'sanctions', 'approval decision', 'lending authority',
                 'compliance testing', 'enforcement', 'prospectus', 'osfi']
        score += -8 if any(kw in desc for kw in heavy) else -3
    else:
        score += 8

    # Factor 7: Current Digitization (0-9)
    digital = ['digital', 'online', 'system', 'automated', 'platform', 'api', 'software', 'ai']
    manual = ['on-site', 'field', 'branch', 'vault', 'physical', 'in-person', 'face-to-face']
    if any(kw in desc for kw in digital):
        score += 9
    elif any(kw in desc for kw in manual):
        score += 2
    else:
        score += 5

    return max(0, min(100, score))


def assign_disposition(score, bloom, regulatory):
    if score >= 75:
        return 'Augment' if (regulatory and bloom >= 4) else 'Automate'
    elif score >= 50:
        return 'Augment'
    elif score >= 30:
        return 'Augment' if bloom >= 3 else 'Restructure'
    else:
        return 'No_Change' if bloom >= 5 else 'Restructure'


# Apply to all tasks missing scores
for idx, row in all_tasks[all_tasks.ai_exposure_score.isna()].iterrows():
    all_tasks.at[idx, 'ai_exposure_score'] = score_ai_exposure(row)
    all_tasks.at[idx, 'ai_disposition'] = assign_disposition(
        all_tasks.at[idx, 'ai_exposure_score'], row['cognitive_complexity'], row['regulatory_driven']
    )
5

Build Role-Level Profiles

Aggregate task data up to the role level to produce the analytical outputs described in Role Mapping and Job Hierarchy Redesign.

# Step 5: Aggregate to role-level profiles
merged = role_task_map.merge(all_tasks, on='task_id')
merged = merged.merge(hris[['job_title', 'fte_count', 'grade']].drop_duplicates(), on='job_title')

role_profiles = merged.groupby('job_title').agg(
    task_count=('task_id', 'count'),
    fte_count=('fte_count', 'first'),
    current_grade=('grade', 'first'),
    # Know-How proxies (Hay)
    unique_skills=('skills_required', lambda x: len(set(s for sl in x for s in sl))),
    max_bloom=('cognitive_complexity', 'max'),
    mean_bloom=('cognitive_complexity', 'mean'),
    l2_breadth=('L2_process', 'nunique'),
    # Problem Solving proxies (Hay)
    bloom_std=('cognitive_complexity', 'std'),
    pct_cross_functional=('cross_functional', 'mean'),
    # Accountability proxies (Hay)
    primary_defense_line=('defense_line', lambda x: x.mode().iloc[0]),
    mean_importance=('importance', 'mean'),
    pct_regulatory=('regulatory_driven', 'mean'),
    # AI exposure
    weighted_ai_score=('ai_exposure_score', lambda x: (x * merged.loc[x.index, 'effort_pct']).sum()),
    pct_automate=('ai_disposition', lambda x: (x == 'Automate').mean()),
    pct_augment=('ai_disposition', lambda x: (x == 'Augment').mean()),
    pct_no_change=('ai_disposition', lambda x: (x == 'No_Change').mean()),
).round(2)


# Derive suggested Hay career level from Bloom's
def suggest_level(max_bloom):
    if max_bloom <= 2:
        return 'Associate'
    elif max_bloom <= 3:
        return 'Analyst'
    elif max_bloom <= 4:
        return 'Senior / Specialist'
    else:
        return 'Lead / Director'

role_profiles['suggested_level'] = role_profiles.max_bloom.apply(suggest_level)
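One assumption in the `weighted_ai_score` aggregation deserves a callout: it only yields a role score on the same 0-100 scale as the task scores if `effort_pct` is stored as a fraction that sums to 1 within each role (if it were stored as a 0-100 percentage, the result would be inflated 100-fold). A toy check of the weighting, with invented numbers:

```python
import pandas as pd

# Illustrative fixture: one role, three tasks
tasks = pd.DataFrame({
    'job_title': ['Teller', 'Teller', 'Teller'],
    'effort_pct': [0.5, 0.3, 0.2],        # assumed fractions summing to 1 per role
    'ai_exposure_score': [80, 40, 20],
})

# Same effort-weighted sum as the Step 5 lambda, applied to the toy data
weighted = (tasks.ai_exposure_score * tasks.effort_pct).groupby(tasks.job_title).sum()
print(weighted['Teller'])  # 0.5*80 + 0.3*40 + 0.2*20 = 56.0
```

If your mapping worksheet captures effort as whole percentages, normalize (`effort_pct / 100`, or divide by the per-role sum) before aggregating.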
6

Generate Hierarchy Redesign Outputs

Use the role profiles to produce the deliverables described in the advisory sections.

# Step 6: Analytical outputs

# A. Skill gap matrix
current_skills = load_lms_data()  # skills held by current staff
required_skills = merged.explode('skills_required').groupby(
    ['job_title', 'skills_required']).size().unstack(fill_value=0)
gap_matrix = required_skills.subtract(current_skills, fill_value=0)

# B. Role consolidation candidates
from itertools import combinations

for (r1, r2) in combinations(role_profiles.index, 2):
    tasks_r1 = set(merged[merged.job_title == r1].task_id)
    tasks_r2 = set(merged[merged.job_title == r2].task_id)
    overlap = len(tasks_r1 & tasks_r2) / len(tasks_r1 | tasks_r2)
    if overlap > 0.6:
        print(f"Consolidation candidate: {r1} + {r2} (overlap: {overlap:.0%})")

# C. Hay evaluation draft
for title, profile in role_profiles.iterrows():
    print(f"--- {title} (current grade: {profile.current_grade}) ---")
    print(f"Know-How: {profile.unique_skills} skills, "
          f"max Bloom {profile.max_bloom}, {profile.l2_breadth} L2 processes")
    print(f"Problem Solving: mean Bloom {profile.mean_bloom:.1f}, "
          f"std {profile.bloom_std:.1f}, {profile.pct_cross_functional:.0%} cross-functional")
    print(f"Accountability: {profile.primary_defense_line} line, "
          f"importance {profile.mean_importance:.1f}, {profile.pct_regulatory:.0%} regulatory")
    print(f"Suggested level: {profile.suggested_level}")

# D. SWP scenario model
def model_scenario(roles_df, automate_threshold):
    """Model headcount impact if tasks above threshold are automated."""
    at_risk = roles_df[roles_df.pct_automate > automate_threshold]
    fte_impact = at_risk.fte_count.sum() * at_risk.pct_automate.mean()
    return {'roles_affected': len(at_risk), 'estimated_fte_impact': round(fte_impact)}

conservative = model_scenario(role_profiles, 0.5)
balanced = model_scenario(role_profiles, 0.3)
aggressive = model_scenario(role_profiles, 0.15)
7

Validate & Iterate

Quality assurance steps before presenting results to stakeholders.

  • SME Review: Have business leads review the role–task mapping for their function. Flag tasks that are missing, misattributed, or obsolete.
  • Bloom's Calibration: Spot-check 10% of tasks per L1 function to confirm cognitive complexity ratings match SME judgment.
  • Hay Cross-Check: Compare derived Hay profiles against existing evaluations (if available). Large discrepancies indicate either scoring issues or genuinely misgraded roles.
  • Coverage Test: Confirm that every internal job title maps to at least 5 reference tasks. Titles with <5 matches may indicate gaps in the inventory or misclassification.
  • Defense Line Audit: Verify no role mixes 1st-line and 2nd/3rd-line tasks. Flag violations for review with Risk and Compliance.
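Two of these checks, the coverage test and the defense line audit, can be automated directly against the role-task mapping rather than run by hand. A minimal sketch, assuming the column names used in Steps 2-5 (`job_title`, `task_id`, `defense_line`); the fixture data is invented for illustration:

```python
import pandas as pd

MIN_TASKS_PER_TITLE = 5  # coverage threshold stated in the checklist

def coverage_gaps(role_task_map: pd.DataFrame) -> pd.Series:
    """Return titles mapped to fewer than MIN_TASKS_PER_TITLE reference tasks."""
    counts = role_task_map.groupby('job_title')['task_id'].nunique()
    return counts[counts < MIN_TASKS_PER_TITLE]

def defense_line_violations(merged: pd.DataFrame) -> list:
    """Flag roles whose tasks span 1st line plus 2nd/3rd line."""
    violations = []
    for title, grp in merged.groupby('job_title'):
        lines = set(grp['defense_line'])
        if '1st' in lines and lines & {'2nd', '3rd'}:
            violations.append(title)
    return violations

# Tiny illustrative fixture (not real inventory data)
demo = pd.DataFrame({
    'job_title': ['Teller'] * 6 + ['Compliance Tester'] * 2,
    'task_id': [f'T{i}' for i in range(6)] + ['T10', 'T11'],
    'defense_line': ['1st'] * 6 + ['2nd', '1st'],
})
print(coverage_gaps(demo).index.tolist())  # ['Compliance Tester']: only 2 tasks mapped
print(defense_line_violations(demo))       # ['Compliance Tester']: mixes 1st and 2nd line
```

Both functions take the dataframes built in Steps 2 and 5, so they can run as a gate before the stakeholder readout.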

Pipeline Architecture Summary

Data Flow: Reference Inventory (JSON) + HRIS Export + Job Descriptions + LMS Data → Role–Task Mapping (manual + fuzzy match) → Org-Specific Task Inventory (merged) → Scoring Engine (AI exposure + Bloom's) → Role Profiles (aggregated) → Analytical Outputs (Hay evaluation drafts, skill gap matrices, consolidation candidates, SWP scenarios)

Estimated Effort

4–8 weeks for a mid-sized bank (<20k FTEs). Primary bottleneck is SME validation of role–task mappings (Step 2).

Team Composition

1 data scientist (pipeline), 1 HR/workforce planning analyst (mapping), SMEs from each L1 function (validation), 1 project lead.

Technology Stack

Python (pandas, scikit-learn), any SQL database for storage, BI tool (Power BI / Tableau) for visualization, Excel for SME worksheets.

Getting Started: Export the full JSON from the Export Center. That file contains the complete reference inventory, ready to load into your pipeline as Step 1. The Role Mapping Template CSV provides a blank worksheet for Step 2.
Export Center

Download the full inventory or a filtered subset.

Full Task Inventory (CSV)

1,545 tasks × 18 fields

Filtered View (CSV)

Tasks matching current Explorer filters

Full Task Inventory (JSON)

Complete dataset for system integration

Role Mapping Template (CSV)

Blank worksheet for mapping your roles