AI Readiness Assessment Glossary

The complete reference for Recser's 39 AI readiness criteria across 10 dimensions. Each criterion includes four scored maturity-level options — from Explorer (0 pts) to Visionary (3 pts) — covering Strategy and Vision, Data Readiness, Technology and Infrastructure, Talent and Skills, AI Governance, Organisational Culture, Strategic Deployment, Business Impact, Scaling AI, and Ecosystem and Partnerships.

About This Glossary

This is the complete reference for the Recser AI Readiness Assessment — Canon V4.2, 533A Panel Approved. The assessment covers 39 criteria across 10 dimensions. Each criterion is scored 0–3 points corresponding to four maturity levels: Explorer (0), Builder (1), Scaler (2), and Visionary (3).

Module 1: Strategy and Vision — Where are we going with AI?

Evaluates if you have a clear, owned, and funded plan for AI adoption.

  1. Documentation: Which statement best describes your organisation's AI strategy?
    • (0 pts) No written AI strategy — Conversations are informal and occasional
    • (1 pt) Emerging AI roadmap — Exists but not formally approved or integrated into business plans
    • (2 pts) Formal AI strategy — Approved by leadership and aligned with key business goals
    • (3 pts) AI-Centric Business Strategy — Every major decision considers AI implications
  2. Ownership: Who owns AI strategy in your organisation?
    • (0 pts) No Clear Owner — AI is discussed informally; no one is accountable
    • (1 pt) IT or Mid-level Manager — AI treated as technical project, not business priority
    • (2 pts) Senior Executive (C-Suite) — Specific leader has budget and accountability for AI success
    • (3 pts) CEO or Board Level — Strategic priority integrated into the entire business
  3. Pipeline: How developed is your pipeline of AI projects?
    • (0 pts) Brainstorming Ideas — No active projects or clear value defined yet
    • (1 pt) Running Pilots — Testing initial use cases to prove value
    • (2 pts) Use Cases in Production — Several solutions deployed and delivering measurable value
    • (3 pts) Continuous Innovation Pipeline — AI embedded across business functions with systematic scaling
  4. Objectives: What are your primary AI goals?
    • (0 pts) Learning & Exploration — Understand basics and identify potential applications
    • (1 pt) Proving Value — Run pilots to demonstrate feasibility and ROI
    • (2 pts) Operational Efficiency — Improve speed, cost, and quality across the organisation
    • (3 pts) Business Transformation — Create new business models, products, or markets

Module 2: Data Readiness — Do we have the data foundation?

Checks if your data is clean and available enough to fuel AI.

  1. Quality: Which statement best describes the quality and reliability of your data?
    • (0 pts) Fragmented & Inconsistent — Data trapped in silos with frequent errors and gaps
    • (1 pt) Usable but Manual — Reliable data requires significant manual cleaning before use
    • (2 pts) High Quality & Trusted — Centralized data with automated validation and team trust
    • (3 pts) Real-time Precision — Continuously monitored, self-correcting data with industry-leading accuracy
  2. Access: How quickly can teams access the data they need for AI projects?
    • (0 pts) Days or Weeks — Manual IT requests; data is trapped in silos
    • (1 pt) Within 24 Hours — Partial self-service; teams view reports but need data help
    • (2 pts) On-Demand (Minutes) — Unified platform allows authorised users immediate access
    • (3 pts) Instant & Automated — Direct APIs allow AI systems to fetch real-time data
  3. Governance: Which statement best describes your data governance maturity?
    • (0 pts) Ad-Hoc & Informal — No formal policies; ownership unclear and managed reactively
    • (1 pt) Basic Policies — Drafts exist; key stewards appointed for main datasets
    • (2 pts) Defined Framework — Comprehensive rules; ownership, privacy, and security standards enforced
    • (3 pts) Automated & Embedded — Governance baked into platform; rules enforced automatically
  4. Infrastructure: How mature is your data infrastructure (storage and pipelines)?
    • (0 pts) Manual & Siloed — Manual preparation in spreadsheets; high effort to access data
    • (1 pt) Semi-Automated — Basic scripts move data, but pipelines are fragile and break
    • (2 pts) Scalable Data Platform — Centralized warehouse with reliable, fully automated DataOps pipelines
    • (3 pts) Real-Time & AI-Native — Infrastructure supports streaming data and dedicated Feature Stores

Module 3: Technology and Infrastructure — Do we have the tools?

Determines whether your tooling and compute infrastructure can support modern AI.

  1. Deployment: What is the level of sophistication of your AI deployment?
    • (0 pts) Individual Ad-hoc Use — Staff use tools like ChatGPT individually without integration
    • (1 pt) Standard Integrations — Using AI features built into existing software with minimal configuration
    • (2 pts) Customized Context — AI models connected to company data to perform specific tasks
    • (3 pts) Autonomous Agents — AI acts autonomously to execute complex workflows across multiple systems
  2. Models: How do you manage the lifecycle of your AI models (versioning, deployment, and monitoring)?
    • (0 pts) Ad-hoc & Manual — Models stored locally with no version history or tracking
    • (1 pt) Basic Versioning — Code tracked in Git but deployment remains a manual process
    • (2 pts) Automated Pipelines — Fully automated deployment with monitoring and rollback capabilities
    • (3 pts) Continuous Operations — Self-optimizing loops that automatically retrain based on new data
  3. Compute: What computing resources support your AI workloads?
    • (0 pts) Standard Hardware — No specialised chips; runs on standard laptops or servers
    • (1 pt) On-Demand Cloud — Rent access to cloud GPUs or models when needed
    • (2 pts) Scalable Infrastructure — Automated clusters scale up for heavy training or inference
    • (3 pts) Orchestrated Ecosystem — Workloads automatically route to the most efficient hardware
  4. Integration: How integrated are AI capabilities with your existing business systems?
    • (0 pts) Isolated & Standalone — Tools used separately with no connection to internal systems
    • (1 pt) Point-to-Point Connections — Direct links exist for specific use cases but are fragile
    • (2 pts) Integrated Platform — AI built into core applications allowing smooth data flow
    • (3 pts) Composable Ecosystem — Modular layer connecting everything; models swapped without breaking systems

Module 4: Talent and Skills — Do we have the people?

Assesses if your people have the skills to execute AI initiatives.

  1. Specialists: How is your AI talent organized and resourced?
    • (0 pts) Outsourced or None — Rely entirely on external vendors; no dedicated internal staff
    • (1 pt) Centralised Core Team — Small central team handles all AI requests for the organisation
    • (2 pts) Embedded Experts — Specialists sit within business units to drive specific goals
    • (3 pts) World-Class Talent — Top-tier global talent driving proprietary innovation
  2. Literacy: How widespread is AI literacy across your organisation?
    • (0 pts) Limited to Tech Teams — Only IT specialists understand AI; others are unaware
    • (1 pt) Role-Specific Training — Upskilling limited to specific technical or analytical roles
    • (2 pts) Company-Wide Foundations — AI literacy included in onboarding; staff understand basics
    • (3 pts) Universal Fluency — AI is core competency; staff use tools daily
  3. Development: Which statement describes your AI talent development?
    • (0 pts) Self-Driven Only — Staff learn on their own time using free resources
    • (1 pt) Ad-Hoc Training — Occasional workshops or subscriptions for interested individuals
    • (2 pts) Structured Pathways — Formal certifications and clear career tracks for AI roles
    • (3 pts) Innovation Culture — Hackathons, R&D time, and knowledge sharing are standard
  4. Collaboration: How do technical and business teams collaborate on AI?
    • (0 pts) Siloed (IT-Led) — Tech builds in isolation; business units are passive customers
    • (1 pt) Consultative Approach — Tech and Business collaborate on projects but remain separate
    • (2 pts) Integrated Squads — Cross-functional teams sit and work together permanently
    • (3 pts) Federated Model — Business units lead initiatives, supported by central platform team
  5. Retention: How successful are you at attracting and retaining top AI talent?
    • (0 pts) Struggling to Hire — Unable to recruit qualified staff; high turnover or contractor reliance
    • (1 pt) Vendor Dependent — Rely on external partners for complex or specialised AI work
    • (2 pts) Competitive Employer — Successfully recruit and retain experienced AI engineers and scientists
    • (3 pts) Talent Magnet — Destination of choice; top-tier talent proactively seeks us out

Module 5: AI Governance — Do we have the rules?

Measures if you are building solutions safely, ethically, and legally.

  1. Ethics: How are AI ethics and safety principles applied in your organisation?
    • (0 pts) No Formal Policy — No documented principles; decisions left to individual judgment
    • (1 pt) Guiding Principles — High-level values written down but not strictly enforced
    • (2 pts) Mandatory Review — Ethics checklist or committee review required before launch
    • (3 pts) Ethical by Design — Safety and fairness checks built into development process
  2. Oversight: How do you ensure humans review and approve AI decisions?
    • (0 pts) No Formal Oversight — Rely on individual users; no mandatory review process
    • (1 pt) Ad-Hoc Review — Humans intervene only for high-risk or suspicious decisions
    • (2 pts) Mandatory Human-in-the-Loop — Key decisions automatically pause for human approval before proceeding
    • (3 pts) Independent Audit — Independent teams regularly audit effectiveness of human reviews
  3. Fairness: How do you monitor AI systems for bias and fairness?
    • (0 pts) No Monitoring — We do not check for bias; assume outputs are objective
    • (1 pt) Reactive Checks — Manual audits performed only when users complain or issues suspected
    • (2 pts) Continuous Auditing — Automated tools regularly scan outputs for bias and compliance
    • (3 pts) Real-Time Mitigation — System automatically detects and blocks biased outputs in real-time
  4. Compliance: How mature is your AI risk management and regulatory compliance?
    • (0 pts) Unmanaged Exposure — No process to identify or mitigate AI regulatory risks
    • (1 pt) Ad-Hoc Assessments — Manual risk checks for specific projects; consistency varies
    • (2 pts) Systematic Framework — Standard framework followed with clear incident response plans
    • (3 pts) Continuous Compliance — Automated checks aligned with global standards like ISO 42001

Module 6: Organisational Culture — Is our organisation ready to change?

Tests your team's willingness to experiment, fail, learn, and adapt.

  1. Sentiment: What is the predominant employee attitude toward AI?
    • (0 pts) Fear & Resistance — High anxiety about job loss; active resistance to tools
    • (1 pt) Cautious Curiosity — Interest exists, but adoption is slow due to uncertainty
    • (2 pts) Proactive Experimentation — Teams willingly try tools; failure seen as learning
    • (3 pts) Innovation DNA — AI culturally embraced; staff actively seek automation opportunities
  2. Leadership: Which leadership style best characterises your organisation's approach to innovation?
    • (0 pts) Command & Control — Top-down decisions; innovation stifled by strict hierarchy
    • (1 pt) Risk-Averse Management — Focus on stability; pilots permitted but hesitation to scale
    • (2 pts) Collaborative Agile — Leaders actively encourage cross-team feedback and rapid iteration
    • (3 pts) Transformational Leadership — Leaders remove roadblocks and empower teams to take risks
  3. Failure: How does your organisation handle AI experiments that fail?
    • (0 pts) Failure Punished — Failed projects damage careers; teams avoid risk to protect reputation
    • (1 pt) Failure Tolerated — Failures accepted if cheap, but rarely discussed openly
    • (2 pts) Structured Reviews — Mandatory post-mortems; lessons documented to prevent repeating mistakes
    • (3 pts) Fail-Fast Culture — Failure expected and budgeted for; insights applied to next sprint
  4. Workflows: How deeply is AI embedded into daily roles and performance goals?
    • (0 pts) Not Defined — AI absent from job descriptions; usage is informal
    • (1 pt) Specialist Focus — Responsibilities limited to data and tech teams
    • (2 pts) Standard Enabler — Teams use tools; proficiency tracked in key roles
    • (3 pts) Reinvented Roles — Jobs redesigned; staff rewarded for automating workflows

Module 7: Strategic Deployment — How are we deploying AI?

Assesses how effectively you delegate to AI without losing control.

  1. Selection: How do you decide which tasks are safe and suitable for AI automation?
    • (0 pts) Gut Feeling — No formal method; choices based on excitement or need
    • (1 pt) Basic Feasibility — Check technical capability but without deep risk analysis
    • (2 pts) Risk vs Value Mapping — Categorise tasks by cost of error and potential value
    • (3 pts) Knowledge-Structure Matrix — Separate repetitive, rule-based tasks from those needing human judgment
  2. Oversight: How do you determine the level of human oversight for different AI applications?
    • (0 pts) One-Size-Fits-All — Same testing and oversight applied regardless of risk
    • (1 pt) Ad-Hoc Judgment — Supervision levels decided by leads based on intuition
    • (2 pts) Risk-Based Tiers — Tools strictly classified based on their risk level
    • (3 pts) Adaptive Autonomy — System runs autonomously but flags humans when needed
  3. Transparency: How much visibility do you have into why an AI system made a specific decision?
    • (0 pts) Black Box — Input and output visible; the internal logic connecting them remains unknown
    • (1 pt) Behavioral Testing — Rigorous testing maps likely system behaviour without internal visibility
    • (2 pts) Partial Visibility — Top factors influencing results are identified and understood
    • (3 pts) Auditable Reasoning — System provides citations or chain-of-thought for full logic audit

Module 8: Business Impact — What value are we creating?

Tracks the real financial and mission value you are creating with AI.

  1. Costs: How do you manage and optimize the running costs of your AI systems?
    • (0 pts) Unmeasured Costs — No visibility; costs buried in general IT budgets
    • (1 pt) Total Cost Monitoring — Track total monthly bill but cannot break it down further
    • (2 pts) Attributed Costs — Accurately allocate costs to specific projects or departments
    • (3 pts) Unit Economics Optimised — Measure cost per transaction and actively optimise for sustainable scaling
  2. ROI: How do you measure the return on investment or mission value of your AI projects?
    • (0 pts) No Formal Tracking — Rely on anecdotes; value is assumed but not calculated
    • (1 pt) Technical Metrics Only — Track model performance but not business or mission outcomes
    • (2 pts) Efficiency Gains — Track resource savings like reduced costs or time saved
    • (3 pts) Strategic Value Realisation — Measure returns against revenue, social impact, or core goals
  3. Speed: On average, how long does it take to move an AI concept into production?
    • (0 pts) No Production Deployment — Have not yet successfully deployed an AI model to production
    • (1 pt) Long Cycles (12+ Months) — Slow manual deployment; projects often stuck in pilot phase
    • (2 pts) Standard Cycles (6–12 Months) — Reliable process, but scaling requires significant custom effort
    • (3 pts) Rapid Agility (< 6 Months) — Automated pipelines allow us to deploy value quickly
  4. Maturity: What is the highest level of value AI has actively delivered to date?
    • (0 pts) No Proven Value — Experimenting or piloting; no measurable results observed yet
    • (1 pt) Operational Efficiency — Automating existing tasks to save time or reduce costs
    • (2 pts) Product & Service Enhancement — Improving quality or user experience of existing offerings
    • (3 pts) Business Transformation — Creating new revenue streams or new mission delivery models

Module 9: Scaling AI — How do we grow AI initiatives?

Evaluates your ability to move from pilot to mass production.

  1. Expansion: How do you expand successful AI pilots to the rest of the organisation?
    • (0 pts) Stuck in Pilot — Pilots remain isolated; no plan for wider adoption
    • (1 pt) Ad-Hoc Expansion — Scaling is manual; relies on leaders pushing case-by-case
    • (2 pts) Standardised Playbook — Documented process to roll out successful tools organisation-wide
    • (3 pts) Industrialised AI Factory — Shared infrastructure makes scaling rapid, repeatable, and automated
  2. Platform: How are your AI solutions architected to support scaling and integration?
    • (0 pts) Isolated Silos — Standalone experiments with no reusable code or architecture
    • (1 pt) Centralised Hosting — Common environment used; integration remains difficult and manual
    • (2 pts) API-First Design — Models built as APIs to plug into existing software
    • (3 pts) Universal Deployment — Containerised solutions run seamlessly across any environment
  3. Resources: How do you ensure long-term funding and resources for AI scaling?
    • (0 pts) No Dedicated Budget — Resources are hunted project-by-project without long-term security
    • (1 pt) Pilot-Only Funding — Budget exists for pilots but not for long-term maintenance
    • (2 pts) Dedicated Allocation — Recurring budget or grant specifically allocated to support AI scaling
    • (3 pts) Sustainable Economics — Value generated effectively covers the cost of ongoing operations
  4. Assurance: How do you maintain high performance as you scale AI across the organisation?
    • (0 pts) Reactive Fixes — Fix problems only on user complaint; no proactive testing
    • (1 pt) Manual Quality Checks — Occasional human spot-checks; quality dips as volume increases
    • (2 pts) Standardised QA Protocols — Strict testing rules ensure consistency across all teams
    • (3 pts) Automated Retraining Loop — Systems detect drift and trigger retraining without manual intervention

Module 10: Ecosystem and Partnerships — Who do we work with?

Assesses the strength of your network of vendors and experts.

  1. Vendors: How does your organisation engage with external AI technology providers?
    • (0 pts) Ad-Hoc Procurement — Buy off-the-shelf tools as needed with no deeper relationship
    • (1 pt) Managed Relationships — Selected key vendors; interactions limited to support and licensing
    • (2 pts) Strategic Integration — Work closely with vendors to customise tools for specific workflows
    • (3 pts) Co-Innovation Ecosystem — Actively co-develop solutions creating new IP and shared capabilities
  2. Research: How deeply do you partner with external research bodies on AI initiatives?
    • (0 pts) No Research Engagement — No engagement with research institutions regarding AI
    • (1 pt) Passive Consumer — Attend conferences and read reports; no active collaboration
    • (2 pts) Project-Based Experimentation — Provide data or problems for student projects or pilots
    • (3 pts) Joint R&D Partnership — Formally co-develop new models, IP, or impact studies
  3. Sharing: How do you share knowledge and learn from industry peers regarding AI?
    • (0 pts) Internal Focus — Focus on internal projects; no active external engagement
    • (1 pt) Passive Participant — Attend webinars and read reports; rarely contribute insights
    • (2 pts) Active Contributor — Regularly share learnings at conferences and working groups
    • (3 pts) Ecosystem Leader — Actively define industry standards and frameworks others adopt

Maturity Levels

Each criterion is scored 0–3 points. Overall results, expressed as a percentage of the maximum available points, map to four maturity levels: L1 Explorer (0–25%, early-stage capabilities), L2 Builder (26–50%, developing foundational capabilities), L3 Scaler (51–75%, established and expanding capabilities), and L4 Visionary (76–100%, industry-leading capabilities). Scoring is aligned with ISO/IEC 42001 and the NIST AI Risk Management Framework.
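The scoring arithmetic above can be sketched in a few lines. This is a hypothetical illustration, not official Recser tooling; it assumes the percentage bands apply to the overall total across all 39 criteria (0–3 points each, 117 points maximum).

```python
# A minimal sketch of the assessment scoring (not official Recser tooling).
# Totals the 39 per-criterion scores, converts the total to a percentage of
# the 117-point maximum, and maps it to the four maturity bands above.

MAX_SCORE = 39 * 3  # 39 criteria, 3 points each = 117

# Upper band boundaries taken from the maturity-level definitions above.
BANDS = [
    (25, "L1 Explorer"),
    (50, "L2 Builder"),
    (75, "L3 Scaler"),
    (100, "L4 Visionary"),
]

def maturity_level(scores):
    """Return (percentage, band label) for a full set of 39 criterion scores."""
    if len(scores) != 39 or any(s not in (0, 1, 2, 3) for s in scores):
        raise ValueError("expected 39 scores, each an integer from 0 to 3")
    pct = 100 * sum(scores) / MAX_SCORE
    for upper, label in BANDS:
        if pct <= upper:
            return pct, label

# Example: scoring 2 ("Scaler") on every criterion gives 78/117, about 67%,
# which falls inside the L3 Scaler band (51-75%).
print(maturity_level([2] * 39))
```

Note that an organisation's overall band need not match its typical per-criterion level: a few 0-point criteria can pull an otherwise strong profile down a band, which is why the percentage is computed over the full total rather than averaged per module.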