What if your infrastructure team had domain-specific GPTs, each running locally on a dedicated Nutanix AHV AI cluster?
- John Goulden
- May 2
- 1 min read

Here's what that unlocks for any enterprise running Nutanix AHV:
Your internal GPT stack:
- Nutanix Design Assistant – cluster sizing, deployment patterns, Calm blueprint support
- Commvault Design Assistant – backup strategy, retention policy design, recovery workflows
- NetApp Design Assistant – volume planning, ONTAP tuning, SnapMirror logic
- Ansible Design Assistant – playbooks, roles, inventory logic, multi-environment support
- Red Hat Enterprise Design Assistant – RHEL profiles, Satellite use, tuning guides
- Network Design Assistant – logical segmentation, routing, L2/L3 validation
- Cyber Sentry – config audits, anomaly detection, CIS/STIG mapping
- Intelligent Enterprise & Cloud Architect – app modernization, FinOps alignment, hybrid cloud mapping
- Datacenter Operations Expert – firmware baselines, patch cadence, lifecycle workflows, energy efficiency tips
All self-hosted, inference-capable GPTs running on Ollama with LLaMA2/Mistral, in isolated VMs or containers (see the sketch after this list):
- Managed through Prism Central, Calm, and Flow
- Fully audit-friendly, with an air-gap option
- No public cloud dependency
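As a rough illustration of how one of these assistants might be wired up, here is a minimal Python sketch that sends a domain-specific system prompt to a locally hosted model through Ollama's REST chat endpoint. The endpoint address, model name, and prompt wording are assumptions for illustration, not part of the original design.

```python
# Minimal sketch: query a self-hosted Ollama endpoint as a "Nutanix Design Assistant".
# Assumptions: Ollama is reachable at http://localhost:11434 inside the VM/container,
# a "mistral" model has been pulled, and the system prompt below is illustrative only.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama port; adjust per deployment

SYSTEM_PROMPT = (
    "You are the Nutanix Design Assistant. Answer questions about AHV cluster "
    "sizing, deployment patterns, and Calm blueprints. State your assumptions explicitly."
)

def ask_assistant(question: str, model: str = "mistral") -> str:
    """Send one chat turn to the local model and return the reply text."""
    payload = {
        "model": model,
        "stream": False,  # return a single JSON object instead of a token stream
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body["message"]["content"]

if __name__ == "__main__":
    print(ask_assistant("Suggest a starting node count for a 200-VM general-purpose AHV cluster."))
```

The same pattern would cover every assistant in the stack by swapping only the system prompt and, where useful, the underlying model, with all traffic staying inside the AHV cluster.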
You control the context, retention, and security, with no SaaS dependency. Could this augment your team's capability? I'd be interested in your feedback.