Model Governance and Intelligence Controls
AI governance as implemented: approved use lanes, prohibited uses, human review enforcement, and a maintained model inventory.
What the Intelligence Layer Does
The CreditAxis intelligence layer assists with structured credit drafting and analysis within governed workflows. Current approved use lanes:
- Intelligence assist lane — Credit memo drafts, risk summaries, exception justifications, and committee summaries. All outputs are presented as advisory drafts for human review. Output is not committed to a deal record without explicit user action.
- Help/copilot lane — Platform navigation guidance, credit concept explanation, and general support responses. No deal-specific approvals or regulatory guidance.
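The two lanes above can be sketched as a small policy table. This is an illustrative sketch only; the lane names, fields, and output types below are assumptions, not the CreditAxis schema.

```python
# Hypothetical policy table for the two approved use lanes.
# Field names and output types are illustrative assumptions.
APPROVED_LANES = {
    "intelligence_assist": {
        "outputs": ["credit_memo_draft", "risk_summary",
                    "exception_justification", "committee_summary"],
        "advisory_only": True,          # drafts only; human review required
        "writes_to_deal_record": False, # never committed without user action
    },
    "help_copilot": {
        "outputs": ["navigation_guidance", "concept_explanation",
                    "support_response"],
        "advisory_only": True,
        "writes_to_deal_record": False,
    },
}

def is_allowed(lane: str, output_type: str) -> bool:
    """Return True only if the output type is in the lane's approved list."""
    cfg = APPROVED_LANES.get(lane)
    return bool(cfg) and output_type in cfg["outputs"]
```

A request for a deal-specific approval from the help/copilot lane, for example, falls outside its approved output list and is refused.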
What It Does Not Do
CreditAxis does not make credit decisions. The intelligence layer is not an autonomous loan approval engine and is not a substitute for bank credit judgment, compliance review, legal review, or delegated approval authority.
Explicitly prohibited uses:
- Autonomous final credit approval or rejection
- Final regulatory determinations or adverse-action reason generation without human review
- Issuing exception approvals without an explicit human approval action
- Credit conclusions that bypass the governed approval workflow
- Use outside authorized workflow boundaries
Human Review Requirement
All AI-assisted outputs require human review and approval before institutional use. AI outputs are presented in a review lane. No AI output is applied to a deal record, approval record, or exception record without explicit human action.
The platform enforces this structurally — there is no API path by which an AI output can be auto-committed to a governance record.
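A structural gate of this kind can be sketched as follows. This is a minimal illustration under assumed names (`AIOutput`, `commit_to_deal_record` are hypothetical), showing the key property: the write path simply has no branch that commits an unreviewed output.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIOutput:
    lane: str
    content: str
    reviewed_by: Optional[str] = None  # set only by an explicit human action

def commit_to_deal_record(output: AIOutput, record: list) -> bool:
    """Structural gate: refuse to write any AI output that lacks an
    explicit human review action. There is no bypass parameter."""
    if output.reviewed_by is None:
        return False                   # rejected: no human review recorded
    record.append((output.reviewed_by, output.content))
    return True
```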
_Evidence: Available under NDA (ai-governance-standard, model-validation-summary)._
Model Inventory
Active models as of April 2026:
| Model | Inference Provider | Lane | Human Review |
|---|---|---|---|
| Llama-3.1-70B-Instruct | Hugging Face | Intelligence assist | Required |
| Llama-3.1-8B-Instruct | Hugging Face | Help / copilot | Required |
Full model registry with prompt versions, output schema versions, and validation records is available under NDA.
_Evidence: Available under NDA (model-inventory-summary)._
Prompt and Version Governance
Prompts are versioned and governed under change control. Prompt changes follow the same review and approval process as code changes. The current prompt version for each active model is tracked in the model registry.
System instructions include explicit prohibition language — AI models are instructed not to issue final decisions, regulatory conclusions, or approval actions.
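A registry entry tying a model to its governed prompt version might look like the sketch below. The field names and version identifiers are assumptions for illustration; the actual registry schema is NDA material.

```python
# Hypothetical registry entry; field names and values are assumptions.
PROMPT_REGISTRY = {
    "Llama-3.1-70B-Instruct": {
        "prompt_version": "v3",        # tracked under change control
        "approved_by": "model-risk",   # same review path as code changes
        "system_instructions": (
            "You draft advisory material only. Do not issue final credit "
            "decisions, regulatory conclusions, or approval actions."
        ),
    },
}

def active_prompt(model: str) -> str:
    """Resolve the current governed system instructions for a model."""
    return PROMPT_REGISTRY[model]["system_instructions"]
```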
Output Schema Control
AI output schemas are defined and validated. Intelligence-assist outputs are expected to conform to a structured JSON schema. The parsing layer validates output structure before presenting it to the user. Malformed outputs are rejected, not silently used.
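The reject-rather-than-silently-use behavior can be sketched with a simple structural check. The required keys below are invented for illustration, not the actual CreditAxis output schema.

```python
import json
from typing import Optional

# Assumed required fields; the real schema is tracked in the model registry.
REQUIRED_KEYS = {"summary", "risk_factors", "confidence_note"}

def parse_ai_output(raw: str) -> Optional[dict]:
    """Validate output structure before presenting it to the reviewer.
    Malformed output is rejected (returns None), never silently used."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or not REQUIRED_KEYS <= data.keys():
        return None
    return data
```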
Validation Checks
All active models have completed the following validation types:
| Validation Type | Result |
|---|---|
| Prompt review | Pass |
| Output schema validation | Pass |
| Human approval gate check | Pass |
| Prohibited use enforcement check | Pass |
_Evidence: Available under NDA (model-validation-summary)._
Provider and Subprocessor
Hugging Face is engaged as an AI/ML inference subprocessor when the Intelligence module is active. Input data consists of deal narrative context — no PII or full borrower records are transmitted. Hugging Face is included in the subprocessor inventory and DPA.
Rollback and Disable Path
Models can be individually disabled or rolled back without platform downtime. The model registry tracks the status of each model (active, limited, disabled, in_validation). A rollback procedure is documented and reviewed.
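The per-model status tracking can be sketched with an enum over the four states named above. Registry structure and function names are illustrative assumptions.

```python
from enum import Enum

class ModelStatus(Enum):
    ACTIVE = "active"
    LIMITED = "limited"
    DISABLED = "disabled"
    IN_VALIDATION = "in_validation"

# Hypothetical registry; the real one also tracks prompt and schema versions.
registry = {
    "Llama-3.1-70B-Instruct": ModelStatus.ACTIVE,
    "Llama-3.1-8B-Instruct": ModelStatus.ACTIVE,
}

def disable(model: str) -> None:
    """Flip a single model off without touching any other model or
    requiring platform downtime."""
    registry[model] = ModelStatus.DISABLED
```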
_Evidence: Available under NDA (ai-governance-standard)._
Auditability
AI-assisted actions are recorded in the audit log with actor identity, model lane, output reference, and user review action. The audit record establishes that a human reviewed and acted on the AI output — not that an AI acted autonomously.
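An audit record of this shape can be sketched as a dataclass keyed on the human actor. Field names are assumptions; the point is that an entry cannot be written without a human identity attached.

```python
from dataclasses import dataclass, field
import datetime

@dataclass
class AuditEntry:
    actor: str            # human identity, never a model
    model_lane: str       # e.g. "intelligence_assist" (assumed value)
    output_ref: str       # reference to the reviewed AI output
    review_action: str    # e.g. "accepted_draft", "edited_then_accepted"
    timestamp: str = field(default_factory=lambda: datetime.datetime.now(
        datetime.timezone.utc).isoformat())

AUDIT_LOG: list = []

def record_review(entry: AuditEntry) -> None:
    """Log an AI-assisted action; an empty actor identity is rejected so
    the record always shows a human reviewed and acted on the output."""
    if not entry.actor:
        raise ValueError("audit entry requires a human actor identity")
    AUDIT_LOG.append(entry)
```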
Planned Controls
- External model risk review (H2 2026)
- Automated output policy check on each prompt change
- Hallucination detection layer for high-stakes drafts