Four ways to engage.

Most engagements start with the free Snapshot. The AI Readiness Assessment is the anchor offering. Tier 1 and Tier 3 serve specific contexts on either side of the anchor: a single AI vendor decision on one side, an ongoing governance partnership on the other.

Tier 0
AI Governance Snapshot
Free · 5 business days · 30-min readout

Automated diagnostic across the five dimensions. Structured intake required. Uploading existing policies or vendor agreements sharpens the findings, but a scored report is produced either way. No commitment.

Tier 1
AI Vendor Review
$4,000 per vendor · 5 business days
Scale: +$2,000 for vendors with direct PHI access

A single AI vendor evaluated against your regulatory posture. Vendor risk memo, recommended controls, contractual asks, and a clear go or no-go recommendation. For when an AI procurement decision is on your desk.

Tier 2 · Anchor
AI Readiness Assessment
$18,000 base · 2–3 weeks · 10 deliverables
Scale: +20% per additional facility, entity, or product family

A full org-level assessment across all five dimensions. Maturity scoring with radar chart, control-by-control gap report, remediation roadmap, three policy drafts, governance charter, contract rider, executive summary, and board briefing deck.

Tier 3
AI Governance Partner
$7,000/month base · Retainer
Scale: +15% per additional entity · Requires completed Tier 2 in prior 12 months

Ongoing AI governance partnership. Quarterly reassessment, on-call vendor reviews absorbed into the retainer, policy maintenance, regulatory change alerts via Sentinel continuous monitoring, and board-level reporting.

Four stages. Most organizations
are at Stage 1 or 2.

Stages describe how mature the governance program is. They are one axis of the maturity model. The five dimensions below are the other axis. The Snapshot scores your organization on both, producing a per-dimension stage position rather than a single overall number.

Most common
Stage 1
Ad Hoc

AI use is occurring without organizational awareness. No policies, no approved tools list, no governance structure. Shadow AI is the norm.

Stage 2
Reactive

The organization knows AI is being used but is responding incident-by-incident. Some policies exist but are incomplete, unenforced, or out of date.

Stage 3
Structured

AI governance is documented and enforced. Approved tools, formal policies, staff training, and a vendor evaluation process are in place.

Stage 4
Optimized

AI governance is continuous. Monitoring in place, policies updated as AI capabilities evolve, governance committee meeting regularly, board-level visibility.

Five dimensions of
AI governance.

Each dimension is scored against the four stages above. The five dimensions are the columns of the matrix. The four stages are the rows. Every cell is mapped to HIPAA requirements and relevant regulatory frameworks. Scores are evidence-cited and grounded in your actual documents, not self-reported.

D-01
AI Policy & Governance Framework

Evaluates whether the organization has formal AI governance structures: approved tools list, acceptable use policy, decision rights, oversight committee, and board-level visibility. Maps to HIPAA administrative safeguards and OCR risk management guidance.

HIPAA Admin Safeguards · OCR Guidance
D-02
Staff Training & Awareness

Assesses whether workforce members understand AI limitations, PHI handling requirements when using AI tools, and the organization's AI-specific policies. Evaluates training content, delivery cadence, and completion tracking.

HIPAA Training Requirements · 164.308(a)(5)
D-03
Vendor Management & Procurement

Reviews how the organization evaluates, contracts with, and monitors AI vendors. Includes BAA coverage for AI vendors accessing PHI, vendor security assessment process, and sub-processor disclosure.

BAA Coverage · AI Vendor Risk
D-04
Incident Response & Risk Management

Evaluates the organization's ability to identify, respond to, and document AI-specific incidents: unauthorized tool use, PHI exposure via AI, hallucination-driven clinical errors, prompt injection. Looks for AI-specific IR procedures beyond the standard HIPAA incident response plan.

HIPAA Incident Response · AI-Specific Scenarios
D-05
Monitoring & Continuous Improvement

Assesses whether AI governance is an ongoing program (regular reviews, tool approval workflow, policy update process) or a one-time documentation exercise.

Sentinel-Ready · Continuous Governance
How the axes combine

The output is a position on every cell of the 5×4 matrix. Each dimension gets its own stage placement, so the result is a radar chart across all five axes rather than a single overall number. Most healthcare organizations are uneven across columns. Stage 2 on Policy is common, Stage 1 on Awareness is nearly universal, and Stage 3 on Incident Response shows up for orgs with a mature HIPAA program that has not yet been extended to AI.

The asymmetry is the diagnostic value. The weakest dimension is usually where the most leveraged remediation lives. The Snapshot surfaces it. The Assessment closes the gap. The full rubric is below.
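The per-dimension placement described above can be sketched as a plain mapping. The scores here are hypothetical examples for illustration, not output from a real engagement:

```python
# Illustrative Snapshot result: one stage placement (1-4) per dimension.
# The scores below are hypothetical examples, not real assessment output.
snapshot = {
    "Awareness": 1,
    "Policy & Governance": 2,
    "Vendor Management": 2,
    "Training & Adoption": 2,
    "Monitoring & Incident Response": 3,
}

# The weakest dimension is usually where the most leveraged remediation lives.
weakest = min(snapshot, key=snapshot.get)
print(weakest)  # the lowest-scoring dimension
```

The result is five independent stage positions, not an average: flattening them into a single overall number would hide exactly the asymmetry the diagnostic is looking for.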

Maturity Model

The full 5×4 rubric

Each cell describes observable, behavioral criteria — what the organization does, not what it intends. HIPAA and NIST CSF 2.0 anchors are listed per dimension. The Snapshot produces a per-dimension stage placement against this rubric, evidence-cited from your documentation.

Columns, in order: Awareness · Policy & Governance · Vendor Management · Training & Adoption · Monitoring & Incident Response
Stage 1
Ad Hoc
"AI is happening to us."

No inventory of AI tools in use. Staff using consumer tools (ChatGPT, Claude, Copilot) without IT or Compliance awareness. No central record of what AI touches what data.

No AI-specific policy. Existing policies do not address AI. Staff guidance is verbal or absent entirely.

No formal process. Tools acquired by department or individual decision. Procurement may not know AI is involved. No BAA evaluation for AI vendors.

No AI-specific training. Staff figure it out individually, often using consumer tools without understanding the PHI implications.

No monitoring of AI use. No AI-specific incident response plan. An AI-related incident would be handled entirely ad hoc.

Stage 2
Reactive
"We know we have a gap."

Partial inventory exists, usually a spreadsheet maintained by IT or Compliance. Captures sanctioned tools; does not reach shadow or personal use. Recognition that more is occurring than is tracked.

Informal guidance circulated (memo, intranet post, paragraph in onboarding deck). Not part of the formal policy framework. Not signed or formally acknowledged.

Case-by-case review when concerns surface, often after deployment. Compliance consulted ad hoc. No standard checklist. BAA coverage inconsistent.

Awareness-level communication: a memo, an all-hands mention, slides in annual security awareness training. No role-specific guidance, no acknowledgment, no completion tracking.

Manual spot checks when concerns arise. Some logging in sanctioned tools. Generic incident response that does not address AI-specific scenarios.

Stage 3
Structured
"We have a program."

Maintained inventory of sanctioned AI tools with use cases, data flows, and PHI touchpoints documented. Intake process exists for new tools. Reviewed at a defined cadence (e.g., quarterly).

AI Acceptable Use Policy and AI Vendor Management Policy in the formal P&P framework. Staff acknowledgment required at hire and annually. Mapped to HIPAA Security Rule 164.308 administrative safeguards.

Documented AI vendor review process integrated into procurement. Checklist covers data handling, training data use, BAA execution, model provenance, sub-processors, audit rights, and indemnification. Decision logged in vendor file.

Role-based training with distinct curricula for clinical staff, support, IT/admins, and leadership. Acknowledgment required. Completion tracked. Concrete guidance on what is and is not permitted.

Periodic audits of AI use against policy. Defined review cadence. Incident response includes AI-specific scenarios (data leak via prompt, hallucinated output, unauthorized tool use). Roles and response timelines assigned.

Stage 4
Optimized
"We maintain and evolve."

Continuous discovery via DLP/CASB/SaaS-management tooling. Inventory tied to the vendor risk register and data classification program. Lifecycle tracking. Feeds executive and board reporting.

Policies updated on a regulatory change cadence (OCR guidance, state law, framework updates). Tied to control objectives. Versioned with audit trail. Cross-referenced from incident response, vendor management, and risk assessment.

Integrated with the broader vendor risk management program. Review depth tiered by PHI exposure and criticality. Annual reassessment cadence. AI-specific clauses standardized in BAA template. Vendor performance tracked against agreed controls.

Ongoing reinforcement. Microlearning at point of use. Role-specific content updated as policies and tools change. AI-use drills conducted. Training metrics in compliance reporting.

Continuous monitoring (DLP, CASB, prompt logging where available). Alerting on policy violations. Board-level reporting on AI use, incidents, and remediation. Lessons learned feed back into policy and training updates.

NIST CSF 2.0 function codes: GV = Govern, ID = Identify, PR = Protect, DE = Detect, RS = Respond. HIPAA citations reference 45 CFR Part 164.

Ten deliverables.
Two to three weeks.

Every deliverable is produced against your actual documentation. The analysis runs through Rote's compliance workflows. Dan reviews and refines each output before delivery. The result is a complete AI governance program foundation with the implementation work mapped, scoped, and ready to execute.

D-01
AI Governance Maturity Assessment

Your organization scored across all five dimensions. Radar chart visualization of your position. The foundation for every other deliverable.

Scored · Radar Chart
D-02
Prioritized Gap Report

Control-by-control findings mapped to HIPAA Security and Privacy Rules and NIST CSF 2.0. Every finding includes evidence from your documents, gap description, and severity rating.

HIPAA · NIST CSF 2.0 · Evidence-Cited
D-03
Remediation Roadmap

30/60/90 day action plan with effort estimates and ownership assignments. Grouped by horizon: quick wins first, structural changes later.

30/60/90 Day · Effort-Estimated
D-04
AI Acceptable Use Policy

A draft AI usage policy specific to your organization type, regulatory environment, and approved tool stack. Covers PHI handling, verification requirements, prohibited uses, and governance workflow.

Draft Policy · HIPAA-Aware
D-05
AI Vendor Management Policy

A draft vendor management policy for AI procurement: evaluation criteria, due diligence requirements, approval workflow, BAA coverage requirements, and monitoring obligations.

Draft Policy · Procurement-Oriented
D-06
AI Incident Response Policy

A draft incident response policy for AI-specific scenarios: unauthorized tool use, PHI exposure via AI, hallucination-driven clinical errors, prompt injection. Distinct from your standard HIPAA IR plan.

Draft Policy · AI-Specific Scenarios
D-07
AI Governance Charter

A 4–6 page document that defines who governs AI at your organization: committee composition, decision rights, tool approval workflow, escalation paths, review cadence. Transforms the assessment from a report into a program launch.

Governance Structure · Decision Rights
D-08
AI Contract Rider

Standard contractual AI clauses ready for your legal team to attach to AI vendor agreements. Covers liability, transparency obligations, data ownership, regulatory notification timelines, and audit rights.

Legal-Ready · Vendor Contracts
D-09
Board-Ready Executive Summary

A 1–2 page written summary with radar chart, top risks, top quick wins, and recommended next steps. Designed for board reporting and executive presentation.

Board-Level · Executive Summary
D-10
AI Readiness Board Briefing Deck

4–6 slides in Rote trade dress. Designed so you can paste them directly into your board presentation without additional design work. Covers maturity position, top risks, governance roadmap, and program ownership.

Rote-Branded · Board Slides · Client-Pasteable

Three steps. No billing surprises.

The assessment is document-driven: you provide your existing policies, BAAs, vendor agreements, and any prior risk assessments. Rote runs the analysis. Dan reviews, refines, and delivers. No 12-week interview cycle.

Intake

A 30-minute kickoff call and a document collection intake. You share your existing compliance documentation: policies, BAAs, vendor agreements, prior assessments. Whatever you have is the starting point. A blank slate also works.

Analysis

Rote runs the compliance workflows against your documents: maturity scoring across five dimensions, control-by-control HIPAA gap analysis, BAA coverage review, and risk assessment. Dan reviews every output and refines the findings. The platform does the document hunting. Dan applies the judgment.

Delivery

Ten deliverables in 2–3 weeks: full assessment report, gap findings, roadmap, three policy drafts, governance charter, contract rider, executive summary, and board deck. A 60-minute delivery call walks through the findings. 30 days of async follow-up included.

Why 2–3 weeks instead of 12–16

Traditional compliance consultancies run 12–16 weeks for comparable scope because the mechanical analysis work (reading policies against controls, mapping findings to frameworks, extracting evidence citations) is billed as senior consultant hours. Rote runs that analysis in hours. The 2–3 week timeline reflects delivery on the full deliverable set at full scope.

A healthcare organization
that needed it.

A healthcare operations organization came in with staff using multiple consumer AI tools across their operations team. None approved. None reviewed for PHI handling. None covered under BAA language that addressed AI use. They needed to demonstrate AI governance to a health plan partner that was asking.

In three weeks: a complete AI acceptable use policy, a staff training curriculum with six modular one-pagers, and a vendor management framework for evaluating future AI tools. The compliance officer presented the policy to the health plan within the engagement window.

[Healthcare operations company, Pacific Northwest, engagement 2025]

Why now

Joint Commission-CHAI guidance (2025) created explicit AI governance expectations for accredited organizations. Organizations without documented governance risk adverse findings at accreditation review.

OCR enforcement has called out unsanctioned AI use and vendor AI access as HIPAA risk management obligations, treating AI governance as part of the required HIPAA risk analysis rather than separate from it.

State-level AI legislation is advancing. The organizations that have documented AI governance programs when legislation passes will have a shorter compliance path than those starting from zero.

Enterprise buyers and health plan partners are starting to ask. AI governance questions are appearing in vendor questionnaires and procurement requirements. A documented program answers the question before it becomes a blocker.

This work comes from
doing it, not studying it.

The AI Readiness Assessment is a service of Dang's Solutions LLC, delivered by Dan Gonzalez, JD. The Rote Compliance platform powers it. The same compliance analysis workflows that run inside the platform generate the findings in the assessment. Dan reviews every output before delivery.

The methodology behind this assessment comes from 12+ years of healthcare compliance work: HITRUST audits across 200+ controls, SOC 1/2 certifications, CMS authorization, BAA review at scale for health plan partners. The AI Policy Generator skill that powers three of the ten deliverables started as a policy Dan wrote for a healthcare operations client who needed to demonstrate AI governance to a health plan partner in under a month. The AI Contract Rider exists because the contract provisions need to come from someone who has negotiated BAAs, not someone who has only read them.

Dan Gonzalez, JD  ·  Full background at dangssolutions.com

$18,000 base.
Predictable scale.

The full 10-deliverable engagement is $18,000 for a single-entity organization with standard documentation. Multi-facility organizations, parent companies with subsidiary entities, and product families with distinct compliance surface areas scale on a fixed formula instead of an opaque quote.

Scale formula
Configuration · Engagement fee
Single entity (base) · $18,000
Two facilities or entities (+20%) · $21,600
Three facilities or entities (+40%) · $25,200
Each additional facility, entity, or product family · +20% of base

A "facility, entity, or product family" is a distinct compliance surface area: a separate clinical site, an acquired entity with its own BAA stack, or a product line with materially different AI integrations. Read-only or branch operations that share the parent's documentation do not count separately.
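The scale formula reduces to simple arithmetic. A minimal sketch, where the function name and the integer-dollar convention are illustrative rather than part of the offering:

```python
def tier2_fee(units: int, base: int = 18_000) -> int:
    """Tier 2 engagement fee in whole dollars: base plus 20% of base
    for each facility, entity, or product family beyond the first.
    Illustrative helper; 'units' counts distinct compliance surface areas."""
    if units < 1:
        raise ValueError("at least one entity is required")
    # Integer percent math avoids floating-point rounding on dollar amounts.
    return base * (100 + 20 * (units - 1)) // 100

print(tier2_fee(1))  # 18000
print(tier2_fee(3))  # 25200
```

Because the increment is a fixed percentage of base, the fee for any configuration can be computed before a scoping call rather than negotiated afterward.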

What the same deliverables cost separately
Deliverable · Standalone rate
HIPAA + AI gap report · $5K–$10K
Remediation roadmap · $3K–$5K
AI Acceptable Use Policy (draft) · $2.5K–$5K
AI Vendor Management Policy · $2K–$4K
AI Incident Response Policy · $2K–$4K
AI Governance Charter · $3K–$5K
Board-ready executive summary + deck · $2K–$3K
Total standalone · $19.5K–$36K

The assessment is priced at or below the standalone sum for the same deliverables, and delivered in 2–3 weeks rather than the 6–12 weeks it takes to source, engage, and coordinate multiple specialists. Same scope. Less overhead.

Not sure if you're ready for the full assessment? The Snapshot is free. It's a fast automated diagnostic that scores your organization across all five dimensions and shows you where you land on the maturity model. No commitment required.

Get a free Snapshot →

Common questions.

What is the AI Governance Snapshot and how is it different from the assessment?

The Snapshot is a free, fast diagnostic that scores your organization across the five governance dimensions and locates the stage you are at. It is automated. A structured intake is required. Uploading existing policies, BAAs, or vendor agreements improves the specificity of findings, but the Snapshot produces a scored report either way. The AI Readiness Assessment is the full engagement: document-based analysis, 10 deliverables, Dan's review and judgment on every finding. The Snapshot tells you where you are. The assessment closes the gap.

Who is the assessment for?

Mid-size healthcare organizations (health systems, managed care organizations, DMEs, and healthtech companies) that have AI use occurring in their environment but have not built a formal governance program around it. Also appropriate for organizations responding to a health plan partner's vendor questionnaire, preparing for Joint Commission accreditation review, or responding to an OCR audit request.

What documents do I need to provide?

Whatever you have. If you have existing policies, bring them. The assessment uses them as the starting point and identifies what is missing or insufficient. If you have nothing, the assessment starts from a blank slate and produces the full documentation package from scratch. Most organizations are somewhere in between: a general security policy that does not mention AI, and maybe a handful of vendor agreements.

How is this different from hiring a traditional consulting firm?

Traditional compliance firms charge $50K to $150K for comparable scope and take 12 to 16 weeks because the mechanical analysis work (reading your policies against HIPAA controls, extracting evidence, mapping findings to frameworks) is billed as senior consultant hours. Rote runs that analysis in hours. The 2 to 3 week timeline and $18K base price reflect delivery time and scope, not reduced analysis depth. Every finding is evidence-cited. Every policy is produced for your specific organization, not from a generic template.

Does Dan have healthcare legal credentials?

Yes. Dan is a JD with a Health Law Certificate and 12+ years of healthcare compliance experience: HITRUST audit leadership, SOC 1/2, CMS authorization, BAA review at scale, and direct practitioner experience building compliance programs at regulated technology companies. The AI Contract Rider is possible because of the legal background. Pure compliance firms cannot produce it.

What happens after the assessment?

The 30-day async follow-up period is included. After that, the natural next step is ongoing monitoring: either the Tier 3 Governance Partner engagement (which includes Sentinel continuous regulatory monitoring) or a lighter retainer for ongoing compliance advisory. Neither is required. Many organizations run the assessment, implement the roadmap internally, and return when regulations shift.

Is this related to the Rote Compliance platform?

Yes. The AI Readiness Assessment is delivered using Rote's compliance methodology: the same HIPAA gap analysis, risk assessment, and BAA review skills available as open source. Dan reviews every output before delivery. The assessment is the advisory layer on top of the methodology, not a separate product. Organizations that want to run the skills themselves can access them via the tools page or directly on GitHub.

How can the Snapshot be free? It looks like a significant amount of work.

The Snapshot is automated. The analysis that drives the cost of a traditional compliance diagnostic — reading documentation against control frameworks, extracting evidence, mapping gaps to regulatory requirements — Rote runs in minutes. Dan reviews the output before delivery, but the mechanical work ordinarily billed as senior consultant hours is handled by the platform. The scale dimension is worth addressing directly: no compliance professional holds 20 vendor BAAs in working memory and cross-references each against the full set of HIPAA Security Rule controls simultaneously. At 50 agreements, accuracy degrades. At 250, it requires a team and weeks of effort. The methodology applies the same standard to every document in the set, regardless of volume. The Snapshot does not reduce rigor to achieve the free tier. It changes the cost structure — which is also why the full Assessment is $18,000 rather than $80,000.

When does the Tier 1 Vendor Review make sense?

Tier 1 is the right starting point when you have a specific AI vendor decision in front of you and a full governance program is not the immediate need. If your organization is evaluating a clinical AI tool, a new EHR module with AI features, or any vendor that touches PHI through an AI layer, the Tier 1 review produces a structured BAA assessment, risk tier classification, and contract rider recommendations for that vendor. It is $4,000 and takes 5 business days. The +$2,000 increment applies to vendors with direct PHI access rather than de-identified or derived data, where the BAA and data handling obligations are materially different. Organizations that later move to the full AI Readiness Assessment can fold the Tier 1 findings directly into that engagement — it is not duplicated effort.

How does Tier 3 Governance Partner work month to month?

Tier 3 is a governance partnership, not a deliverables contract. After the Tier 2 Assessment establishes the baseline program, Tier 3 maintains it. Sentinel monitors federal and state regulatory sources continuously and flags changes that affect your governance posture. Dan reviews flagged changes and provides a monthly update covering what shifted, what it means for your program, and whether any policy or contract language needs revision. Most months the regulatory surface is stable and the update is brief. When something material moves — a new OCR enforcement pattern, a state AI law, an HHS guidance update — the update addresses it specifically. The $7,000/month base covers one entity. Multi-entity organizations scale at +15% per additional entity. Tier 2 in the prior 12 months is required to enter Tier 3; the program Sentinel monitors needs to exist before it can be monitored.

Start with a free Snapshot.
Know where you stand before you commit.

The Snapshot is a fast automated diagnostic. Structured intake required, no commitment. Uploading your existing documentation sharpens the findings. A scored report is produced either way. If the assessment makes sense after that, we will talk about scope.