SystemView 2 – Technical Infrastructure Requirements
For more information on SystemView 2, contact your Customer Success Manager.
Overview
This article defines the minimum supported infrastructure configuration for SystemView 2 deployments. All specifications are per environment unless otherwise stated.
Your specific requirements will be right-sized during technical planning based on your actual bed count, concurrent users, data volumes, historical data retention, and operational requirements.
Key Terminology
Before reviewing the technical specifications, familiarise yourself with these key terms:
- CSI (Container Storage Interface): Standard interface for storage in Kubernetes
- Node: Virtual machine instance in the Kubernetes cluster
- Pool: Group of nodes dedicated to specific workloads
- Premium SSD: High-performance storage tier
- Workload Isolation: Separating different types of processing into dedicated resources
Environment Requirements
Mandatory Environments
All deployments must include:
- Development – Initial configuration, integration development, early validation
- Test – Quality assurance and testing
- UAT – User acceptance testing prior to production releases
- Production – Live operational environment
Environment Consistency
- Production and UAT should be architecturally equivalent
- Test and Development may be right-sized, subject to HealthCare Logic confirmation
- Node counts should remain consistent even where node sizes (CPU/RAM) are adjusted for smaller deployments
Phased Environment Provisioning
For new installations, the Development environment is required first to support initial configuration and integration development. Test, UAT, and Production environments may be provisioned later based on agreed timelines. This phased approach must be raised at project mobilisation.
Operating System Requirements
SystemView 2 supports two operating systems for Kubernetes nodes:
Recommended: Talos Linux / Native Azure Kubernetes OS
- Immutable, purpose-built operating system for Kubernetes
- Available for most virtualisation platforms (Talos)
- Minimal attack surface and reduced maintenance overhead
- Requires a high level of automation during deployment and configuration (Talos)
Alternative: Ubuntu 24.04 LTS
- Widely supported general-purpose Linux distribution
- Requires more manual operating-system-level maintenance
- Suitable for organisations with existing Ubuntu infrastructure and expertise
Note: If using Azure Kubernetes Service (the default) with node pool resources provided natively by Azure, no explicit OS selection is required. For licensing reasons, we recommend Linux nodes, as they can save a significant percentage on hourly cost.
Kubernetes Deployment Models
SystemView 2 is deployed via Kubernetes. Three supported deployment configurations exist:
Quick Selection Guide
| Your Environment | Recommended Configuration |
|---|---|
| <1000 beds + cloud (Azure/AWS/GCP) | Configuration 1 (Simple Pools) |
| <1000 beds + on-premises/bare metal | Configuration 2 (consider storage complexity) |
| 1000-5000 beds | Configuration 3 (Workload-Based Pools) |
| >5000 beds | Contact HealthCare Logic for architectural review |
Configuration 1: Simple Pools & Managed Storage
Recommended for: Smaller hospital installations (<1000 beds)
Architecture: Combines workloads into two primary pools for operational simplicity
Minimum Requirements (per environment):
| Pool | Min Nodes | vCPU | RAM | OS Disk | Data Disk | Performance | Azure VM Reference |
|---|---|---|---|---|---|---|---|
| System Pool | 2 | 4 cores | 8 GiB | 128 GB SSD | 64 GB SSD | Standard SSD | Standard_D2as_v6 |
| User Pool | 5 | 6 cores | 64 GiB | 128 GB SSD | 64 GB SSD | Premium SSD | Standard_D4as_v6 |
Total minimum nodes per environment: 7 (2 System + 5 User)
System Pool hosts: Monitoring, logging, Vault, policy management, certificates
User Pool hosts: SystemView 1.0, SystemView 2.0, Data Warehouse, Analytics, Keycloak authentication
Additional Components (for AI Insights):
- Storage Account for the LLM file share (mounted to the Kubernetes cluster)
- App Service Plan: Linux (P0v3), or an existing plan with sufficient capacity for the FastAPI Python application
Configuration 2: Workload-Based Pools & BYO Storage
Recommended for: Smaller hospital installations where storage complexity needs to be managed. Bring-your-own storage can quickly increase storage management complexity as the environment scales.
Architecture: Workload isolation separates system services, user applications, data warehouse, analytics, and interoperability services into dedicated pools. Requires customer-managed storage cluster if not using managed Kubernetes services or enterprise CSI providers.
Minimum Requirements (per environment):
| Pool | Min Nodes | vCPU | RAM | OS Disk | Data Disk | Performance | Azure VM Reference |
|---|---|---|---|---|---|---|---|
| System Pool | 2 | 4 cores | 8 GiB | 128 GB SSD | 64 GB SSD | Standard SSD | Standard_D2as_v6 |
| User Pool | 2 | 6 cores | 64 GiB | 128 GB SSD | 64 GB SSD | Premium SSD | Standard_D4as_v6 |
| Warehouse Pool | 1 | 4 cores | 32 GiB | 128 GB SSD | 64 GB SSD | Premium SSD | Standard_E4as_v5 |
| Analytics Pool | 1 | 4 cores | 32 GiB | 128 GB SSD | 64 GB SSD | Premium SSD | Standard_D4ps_v6 |
| Interop Pool* | 1 | 4 cores | 32 GiB | 128 GB SSD | 64 GB SSD | Premium SSD | Standard_D4ps_v6 |
| Storage Pool | 3 | 4 cores | 4 GiB | 128 GB SSD | 64 GB SSD | Premium SSD | (implied) |
*Interop Pool required only if FHIR or other interoperability services are enabled
Total minimum nodes per environment: 6 base + 1 optional (Interop) + 3 storage = 9-10 nodes
IMPORTANT NOTE: The Storage Pool (3 nodes minimum) is optional. It is not required if the client uses either a managed Kubernetes service (such as Azure AKS) or the VMware vSphere CSI provider.
Storage Pool Specifications:
- Minimum nodes: 3
- Type: Intel (recommended)
- vCPU: 4 cores, 4 logical processors
- RAM: 4 GiB
- OS Disk: 128 GB SSD (Premium performance)
- Data Disk: 64 GB SSD (Premium performance)
- Cluster Storage: Customer-dependent, based on replication factor, ingestion volumes, and historical trend requirements
- Performance: Premium SSD required
Pool Responsibilities:
- System Pool hosts: Monitoring, logging, Vault, policy management, certificates
- User Pool hosts: SystemView 1.0, SystemView 2.0, Keycloak authentication
- Warehouse Pool hosts: Data warehouse processing (isolated to prevent noisy-neighbour effects)
- Analytics Pool hosts: Analytics OLAP processing (isolated to prevent query contention)
- Interop Pool hosts: FHIR interoperability services to customer systems (if required)
- Storage Pool: Storage provider for the Kubernetes cluster
Note on Virtual Machines: The virtual machine count is high, but smaller instances allow the load to be spread more evenly across any virtualised farm (each instance requires fewer cores). Each machine is a clone/worker; no machine-specific installation is needed beyond membership of the Kubernetes cluster and access to the required storage.
Configuration 3: Workload-Based Pools & Managed Storage
Recommended for: Standard hospital installations (1000-5000 beds)
Architecture: Workload isolation separates system services, user applications, data warehouse, analytics, and interoperability services into dedicated pools
Minimum Requirements (per environment):
| Pool | Min Nodes | vCPU | RAM | OS Disk | Data Disk | Performance | Azure VM Reference |
|---|---|---|---|---|---|---|---|
| System Pool | 2 | 4 cores | 8 GiB | 128 GB SSD | 64 GB SSD | Standard SSD | Standard_D2as_v6 |
| User Pool | 2 | 6 cores | 64 GiB | 128 GB SSD | 64 GB SSD | Premium SSD | Standard_D4as_v6 |
| Warehouse Pool | 1 | 4 cores | 32 GiB | 128 GB SSD | 64 GB SSD | Premium SSD | Standard_E4as_v5 |
| Analytics Pool | 1 | 4 cores | 32 GiB | 128 GB SSD | 64 GB SSD | Premium SSD | Standard_D4ps_v6 |
| Interop Pool* | 1 | 4 cores | 32 GiB | 128 GB SSD | 64 GB SSD | Premium SSD | Standard_D4ps_v6 |
*Interop Pool required only if FHIR or other interoperability services are enabled
Total minimum nodes per environment: 6 base + 1 optional (if Interop enabled) = 6-7 nodes
System Pool hosts: Monitoring, logging, Vault, policy management, certificates
User Pool hosts: SystemView 1.0, SystemView 2.0, Keycloak authentication
Warehouse Pool hosts: Data warehouse processing (isolated to prevent noisy neighbour effects)
Analytics Pool hosts: Analytics OLAP processing (isolated to prevent query contention)
Interop Pool hosts: FHIR interoperability services to customer systems (if required)
Note on Virtual Machines: The virtual machine count is high, but smaller instances allow the load to be spread more evenly across any virtualised farm (each instance requires fewer cores). Each machine is a clone/worker; no machine-specific installation is needed beyond membership of the Kubernetes cluster and access to the required storage.
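The node arithmetic for Configuration 3 can be tallied as follows. This is an illustrative check only; the pool names and minimums are taken from the table above.

```python
# Illustrative tally of the Configuration 3 pool minimums above.

POOL_MIN_NODES = {
    "System": 2,
    "User": 2,
    "Warehouse": 1,
    "Analytics": 1,
}
INTEROP_NODES = 1  # only if FHIR/interoperability services are enabled

base = sum(POOL_MIN_NODES.values())
print(f"Base nodes per environment: {base}")            # 6
print(f"With Interop enabled: {base + INTEROP_NODES}")  # 7
```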
Storage Options - Decision Matrix
Important: Managed and vendor-provided storage options are recommended where available, as they eliminate the need for customer-managed storage clusters.
The following table helps determine which storage solution is appropriate and whether a dedicated storage cluster is required:
| Customer Scenario | Storage Solution | Requires Storage Cluster? | CPU/RAM per Storage Node |
|---|---|---|---|
| Cloud hosted (Azure AKS, AWS EKS, GCP GKE) | Managed Kubernetes (includes storage) | No | N/A - built-in CSI |
| VMware vSphere + vSAN | vSphere CSI | No | N/A - uses vSphere CSI |
| Azure Stack Edge / Azure Local | AKS on HCI CSI | Yes - 3 nodes minimum | 4 cores / 4 GiB |
| NetApp / Dell / Pure Storage / HPE | Vendor CSI driver | No | N/A - uses vendor driver |
| Large budget, multi-site DR | Enterprise storage | Vendor dependent | As per vendor specs |
| Hyper-V (no HCI) | Customer evaluation required | Potentially | TBD |
| Bare metal, no external storage | Longhorn | Yes - 3 nodes minimum | 4 cores / 4 GiB |
| Small budget, multiple sites (e.g., 12 sites) | Longhorn or managed service | Depends on choice | 4 cores / 4 GiB if Longhorn |
Key Principle: Storage clusters (3 nodes minimum) are only required when:
- Using Azure Stack Edge / Azure Local with AKS on HCI CSI
- Using bare metal infrastructure without external storage (Longhorn)
Storage clusters are NOT required when:
- Using managed Kubernetes services (Azure AKS, AWS EKS, GCP GKE) - these include built-in CSI
- Using VMware vSphere with vSAN - uses the vSphere CSI provider
- Using enterprise storage solutions (NetApp, Dell, Pure Storage, HPE) - uses vendor CSI drivers
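The decision matrix above reduces to a simple lookup. The sketch below is illustrative only (the scenario keys and `storage_plan` helper are assumptions for this example, not HealthCare Logic tooling), but it captures the key principle: only the Azure Local and bare-metal scenarios need a dedicated storage cluster.

```python
# Illustrative lookup over the storage decision matrix above.
# Scenario keys and this helper are assumptions for the example.

STORAGE_MATRIX = {
    "managed_kubernetes": ("Built-in CSI", False),
    "vsphere_vsan":       ("vSphere CSI", False),
    "azure_local":        ("AKS on HCI CSI", True),
    "vendor_storage":     ("Vendor CSI driver", False),
    "bare_metal":         ("Longhorn", True),
}

def storage_plan(scenario: str) -> dict:
    """Return the CSI provider and, where a dedicated storage
    cluster is required, its minimum node specification."""
    provider, needs_cluster = STORAGE_MATRIX[scenario]
    plan = {"csi_provider": provider, "storage_cluster": needs_cluster}
    if needs_cluster:
        # Minimum storage-node spec from the matrix: 3 nodes, 4 cores / 4 GiB.
        plan["min_nodes"] = 3
        plan["node_spec"] = {"vcpu": 4, "ram_gib": 4}
    return plan

print(storage_plan("managed_kubernetes"))
print(storage_plan("bare_metal"))
```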
Enterprise Deployments (>5000 beds)
Enterprise deployments require:
- Workload isolation across all pools (Configuration 3 as the minimum baseline)
- Premium SSD storage for all production workloads
- A documented disaster recovery strategy
- An infrastructure architectural review with HealthCare Logic
Node sizing and concurrency modelling are validated during joint architectural review. Enterprise customers should engage HealthCare Logic early in the procurement process.
Sizing Methodology
The bed count classifications above (Small: <1000 beds, Normal: 1000-5000 beds, Enterprise: >5000 beds) provide an initial framework for infrastructure planning. However, bed count is an indicative guide, not a definitive calculator.
Your actual infrastructure requirements depend on multiple factors:
- Number of facilities and their geographic distribution
- Concurrent user count during peak operational periods (e.g., morning flow meetings, executive reviews)
- Data ingestion volumes and refresh frequency
- Historical data retention requirements
- Number of enabled clinical domains
- Reporting complexity and analytical query patterns
- Integration complexity with source hospital systems
- Planned use of AI Insights features (if applicable)
During technical planning, HealthCare Logic will assess these factors holistically to recommend the appropriate configuration for your environment. For organisations near classification boundaries (e.g., 950 beds or 5,200 beds), we may recommend the higher tier to ensure performance headroom and accommodate future growth.
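As a naive illustration of the classification above, bed count might be mapped to an indicative tier as follows. The 10% boundary-headroom rule is an assumption invented for this example; actual sizing weighs all the factors listed and is agreed during technical planning.

```python
# Illustrative sketch of the bed-count classification above.
# The 10% boundary-headroom rule is an example assumption, not a
# HealthCare Logic sizing formula.

def indicative_tier(beds: int, headroom: float = 0.10) -> str:
    """Map a bed count to an indicative configuration tier, stepping up
    a tier when within `headroom` of the next classification boundary."""
    if beds > 5000:
        return "architectural review"   # Enterprise (>5000 beds)
    if beds >= 5000 * (1 - headroom):
        return "architectural review"   # near the enterprise boundary
    if beds >= 1000 * (1 - headroom):
        # 1000-5000 beds, or near the boundary (e.g., 950 beds)
        return "Configuration 3 (Workload-Based Pools)"
    return "Configuration 1 or 2 (Simple or Workload-Based Pools)"

print(indicative_tier(800))    # small deployment
print(indicative_tier(950))    # near boundary: stepped up to Configuration 3
print(indicative_tier(3000))   # standard deployment
```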
AI Insights (Optional Module)
Important: The AI Insights feature is being progressively developed and implemented. Timing and final specifications will be confirmed between all parties once available.
When enabled, AI Insights requires the following additional resources per environment:
Always-On System Node
- vCPU: 2 cores, 2 logical processors, burstable
- RAM: 4 GiB
- Storage: OS Disk 128 GB SSD (Premium)
- Azure VM: Standard_B2s
GPU Node
- vCPU: 4 cores, 4 logical processors
- RAM: 28 GiB
- GPU: 1x NVIDIA Tesla T4 (16 GB VRAM)
- Storage: Ephemeral OS disks (100 GiB) for faster boot times and zero cost when the pool is not in use
- Azure VM: Standard_NC4as_T4_v3
Additional AI Infrastructure
- Storage Account: Hosts the LLM in an Azure Storage Account (File Share) mounted to the Kubernetes cluster
- App Service Plan: P0v3 (Linux), for the FastAPI Python application managing communication between the web interface, database, and AI model
Storage Requirements
Production Storage Standards
Production storage must be:
- CSI-compliant (Container Storage Interface)
- Premium SSD for user-facing and analytics workloads (User Pool, Warehouse Pool, Analytics Pool, Interop Pool)
- Standard SSD acceptable for System Pool components only
Critical: HDD production storage is not supported for any workload pools.
Supported Storage Options
| Storage Environment | CSI Provider | Dedicated Storage Cluster Required |
|---|---|---|
| Managed Kubernetes services (Azure AKS, AWS EKS, GCP GKE) | Built-in CSI | No |
| VMware vSphere + vSAN | vSphere CSI | No |
| Azure Stack Edge / Azure Local | AKS on HCI CSI | Yes (3 nodes minimum) |
| NetApp / Dell / Pure Storage / HPE | Vendor CSI driver | No |
| Bare metal (no external storage) | Longhorn | Yes (3 nodes minimum) |
Dedicated Storage Cluster Specifications
When Required:
- Azure Stack Edge / Azure Local deployments using AKS on HCI CSI
- Bare metal infrastructure without external storage, using Longhorn
- See the Storage Options Decision Matrix above for full guidance
Minimum Specifications (when required):
Each storage node requires:
- Minimum nodes: 3
- vCPU: 4 cores, 4 logical processors (Intel recommended)
- RAM: 4 GiB
- OS Disk: 128 GB SSD (Premium performance)
- Data Disk: 64 GB SSD (Premium performance)
- Cluster Storage: Customer-dependent, based on:
  - Replication factor (typically 3x)
  - Daily ingestion volumes
  - Historical trend retention requirements
  - Number of enabled clinical domains
  - Analytics workload patterns
- Performance Tier: Premium SSD required
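As a worked example of how these factors combine, raw cluster capacity can be estimated as daily ingestion × retention × replication factor, plus operating headroom. The input figures and the 30% headroom below are assumptions for illustration only; actual sizing is customer-dependent.

```python
# Illustrative estimate of raw storage-cluster capacity from the factors
# listed above. All input figures and the 30% headroom are example
# assumptions, agreed in practice during technical planning.

def raw_cluster_capacity_gib(daily_ingest_gib: float,
                             retention_days: int,
                             replication_factor: int = 3,
                             headroom: float = 0.30) -> float:
    """Raw capacity = logical data x replication factor, plus headroom."""
    logical = daily_ingest_gib * retention_days
    return logical * replication_factor * (1 + headroom)

# Example: 2 GiB/day ingested, 2 years of trend history, 3x replication.
capacity = raw_cluster_capacity_gib(2.0, 730)
print(f"Raw capacity required: {capacity:.0f} GiB")  # about 5694 GiB
```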
Important: Storage clusters are NOT required if using:
- Managed Kubernetes services (Azure AKS, AWS EKS, GCP GKE) - built-in CSI provider
- VMware vSphere + vSAN - vSphere CSI provider
- Enterprise storage with vendor CSI drivers (NetApp, Dell, Pure Storage, HPE)
High Availability Expectations
For medium and large deployments (≥1000 beds), the following are required:
- Multi-node cluster architecture (per the specifications above)
- A documented backup policy
- Snapshot capability prior to platform upgrades
- A documented disaster recovery approach
- Regular testing of backup and recovery procedures
Enterprise deployments should additionally consider:
- Availability zone distribution where supported by the infrastructure
- Geographic redundancy for multi-site health systems
- Automated failover capabilities
Responsibilities
Customer Responsibilities
- Infrastructure provisioning and hosting
- Kubernetes cluster deployment and management
- Storage configuration and CSI provider setup
- Backup and disaster recovery implementation and testing
- Operating system security patching and updates
- Network configuration, connectivity, and firewall rules
- Access management and authentication integration
HealthCare Logic Responsibilities
- SystemView application deployment and configuration
- Kubernetes workload orchestration and management
- Data feed and integration configuration
- Clinical domain configuration and customisation
- Integration development and data mapping
- Performance optimisation within supported infrastructure
- Application-level monitoring and support
Common Infrastructure Pitfalls
The following infrastructure issues are the most common causes of degraded performance or instability:
Under-Sized User Node Pool
Reducing RAM or CPU allocation below recommended levels can result in:
- Slower dashboard rendering and query response times
- Resource contention between concurrent users
- Unexpected application restarts during peak usage
- Memory pressure causing pod evictions
Mitigation: Maintain minimum node specifications or increase node counts to distribute load.
Insufficient Storage Performance
Using non-Premium SSD storage for production user-facing workloads may cause:
- Delayed integration processing and extended refresh windows
- Increased refresh times affecting data currency
- Data warehouse query latency
- Dashboard timeouts during complex analytical queries
Mitigation: Production environments require Premium SSD storage for User, Warehouse, Analytics, and Interop pools.
Removing Workload Isolation
Collapsing warehouse, analytics, and user workloads into a single pool in medium or large hospitals (≥1000 beds) can lead to:
- Noisy-neighbour effects where analytical processing impacts live dashboard performance
- Query contention between operational and analytical workloads
- Analytical cube processing blocking real-time user queries
- Unpredictable performance during heavy analytical processing
Mitigation: Workload-based isolation (Configuration 3) is strongly recommended for hospitals ≥1000 beds.
Inconsistent UAT and Production Architecture
Significant architectural differences between UAT and Production environments can result in:
- Successful UAT validation but degraded production performance
- Unpredictable upgrade behaviour in production
- Inability to accurately test performance at scale
- Go-live surprises that could have been identified in UAT
Mitigation: Production and UAT environments should remain materially equivalent in architecture and node specifications.
Shared Clusters with Unrelated High-Load Applications
Deploying SystemView 2 into Kubernetes clusters shared with other high-throughput systems without capacity modelling may introduce:
- Resource starvation affecting SystemView performance
- Latency spikes during peak usage of other applications
- Pod scheduling delays due to insufficient cluster capacity
- Unpredictable performance characteristics
Mitigation: Use a dedicated Kubernetes cluster for SystemView, or perform careful capacity planning for shared clusters.
Lack of Backup & Snapshot Policy
Failure to implement regular backups and pre-upgrade snapshots may:
- Increase recovery time in the event of infrastructure issues
- Complicate rollback procedures after failed upgrades
- Risk data loss in disaster scenarios
- Violate organisational data governance requirements
Mitigation: Implement documented backup policy with regular testing and pre-upgrade snapshot procedures.
Using Inappropriate Storage Architecture
Deploying storage clusters when managed services are available, or failing to deploy storage clusters when required, can lead to:
- Unnecessary infrastructure complexity and maintenance overhead
- Performance degradation due to insufficient storage nodes
- Failed deployments due to missing storage infrastructure
- Increased costs from over-provisioning
Mitigation: Consult the Storage Options Decision Matrix above to determine the appropriate storage architecture for your environment. Use managed Kubernetes services or vendor CSI providers when available to avoid the complexity of managing dedicated storage clusters.
Utility Server
- Provides an Azure DevOps runner for HCL to deploy SystemView
- Allows HCL support personnel to conduct ad-hoc investigations on the platform as required
- Provides the SSH and HTTPS/TLS access to internal platform components required for troubleshooting and administration
| Component | Specification |
|---|---|
| Operating System | Windows Server 2022 Datacenter |
| vCPU | 4 cores, 4 logical processors (Intel or AMD) |
| RAM | 16 GB |
| Server Type | Virtual Machine |
| C: Drive | 126 GB (Standard SSD) |
| D: Drive | 32 GB (Standard SSD) |
| T: Drive | 150 GB (Standard SSD) |