When setting up a Proxmox virtualisation environment with network-attached storage, ensuring your NFS share can handle VM workloads is crucial. Poor storage performance can turn your virtualisation dream into a sluggish nightmare. In this comprehensive guide, I’ll walk you through exactly how to test your NFS storage performance for Proxmox VMs, using real-world testing scenarios and professional benchmarking tools.
Why NFS Storage Performance Testing Matters
Virtual machines are incredibly sensitive to storage performance. Unlike traditional applications that might tolerate occasional slowdowns, VMs experience:
- Boot delays from poor random read performance
- Application freezing during high I/O operations
- User frustration from inconsistent response times
- Cascading performance issues affecting multiple VMs
Before deploying production VMs on NFS storage, you need to verify it can handle your workload demands.
Our Test Environment Setup
For this guide, we’re testing a TrueNAS NFS export with:
- Storage: 2x 1TB SSDs in a ZFS pool
- Network: 1Gbps connection
- NFS Export: /mnt/VM_PRX/vmstore
- Client: Proxmox VE host
Step 1: Mounting the NFS Share
First, we need to mount the NFS share on our Proxmox host for testing.
Troubleshooting NFS Mount Issues
Before mounting, check what NFS exports are available:
showmount -e 192.168.1.xxx
This revealed an important discovery – our target export was restricted to specific IP addresses:
Export list for 192.168.1.xxx:
/mnt/NVME_FAST/VM_ProxMox *
/mnt/BIgData/QNAP_Migration 192.168.1.0/24
/mnt/VM_PRX/vmstore 192.168.1.xxx,192.168.1.xxx,192.168.1.xxx
Common NFS Mount Issues:
- Access denied: Check authorized networks in TrueNAS NFS export settings
- Connection refused: Verify NFS service is running
- Wrong path: Use the exact path shown in the showmount -e output
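A couple of quick checks from the Proxmox host help narrow these down (the redacted IP is the same TrueNAS address used throughout this guide):
# Confirm the server's RPC services (mountd, nfs) are registered and reachable
rpcinfo -p 192.168.1.xxx
# Re-list the exports after adjusting the authorized hosts/networks in TrueNAS
showmount -e 192.168.1.xxx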
Successful Mount Command
Once the NFS permissions were configured correctly:
mkdir -p /mnt/truenas-test
mount -t nfs -o vers=3,hard,intr,rsize=8192,wsize=8192 192.168.1.xxx:/mnt/VM_PRX/vmstore /mnt/truenas-test
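Once mounted, it's worth a quick sanity check that the share is attached with the options you asked for (nfsstat ships with the nfs-common package):
# Confirm the share is mounted and shows the expected capacity
df -h /mnt/truenas-test
# Show the negotiated NFS version and effective mount options
nfsstat -m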
Step 2: Installing Performance Testing Tools
We’ll use FIO (Flexible I/O Tester), the industry standard for storage benchmarking:
apt install fio hdparm sysstat -y
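Before committing to the full suite below, a 30-second smoke test (my own quick check, not part of the main script) confirms fio can actually write to the mount:
# Quick 30-second random-write burst against the NFS mount
fio --name=smoke-test --directory=/mnt/truenas-test \
    --rw=randwrite --bs=4k --size=256M \
    --time_based --runtime=30 --group_reporting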
Step 3: The Ultimate VM Storage Testing Script
Here’s the comprehensive testing script that simulates real VM workloads:
#!/bin/bash
TEST_DIR="/mnt/truenas-test"
LOG_DIR="/tmp/fio_results"
mkdir -p $LOG_DIR
echo "=== VM Storage Performance Testing Started ==="
echo "Test directory: $TEST_DIR"
echo "Results will be saved in: $LOG_DIR"
echo
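# Note: these jobs use fio's default sync ioengine with buffered I/O, so the
# iodepth settings have little effect and the client page cache contributes to
# the numbers. That resembles how VMs actually behave, but it is not a
# raw-disk benchmark.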
# Test 1: Random 4K Read (Critical for VM boot and application loading)
echo "1/6: Running Random 4K Read Test (VM boot performance)..."
fio --name=vm-rand-read-4k \
--directory=$TEST_DIR \
--rw=randread \
--bs=4k \
--size=1G \
--numjobs=4 \
--iodepth=32 \
--time_based \
--runtime=120 \
--group_reporting \
--output=$LOG_DIR/rand_read_4k.log \
--output-format=normal > /dev/null
echo " Completed. Results saved."
# Test 2: Random 4K Write (Critical for VM operations)
echo "2/6: Running Random 4K Write Test (VM operations)..."
fio --name=vm-rand-write-4k \
--directory=$TEST_DIR \
--rw=randwrite \
--bs=4k \
--size=1G \
--numjobs=4 \
--iodepth=32 \
--time_based \
--runtime=120 \
--group_reporting \
--output=$LOG_DIR/rand_write_4k.log \
--output-format=normal > /dev/null
echo " Completed. Results saved."
# Test 3: Mixed Workload (Typical VM pattern - 70% read, 30% write)
echo "3/6: Running Mixed Workload Test (typical VM usage)..."
fio --name=vm-mixed-workload \
--directory=$TEST_DIR \
--rw=randrw \
--rwmixread=70 \
--bs=4k \
--size=2G \
--numjobs=8 \
--iodepth=16 \
--time_based \
--runtime=300 \
--group_reporting \
--output=$LOG_DIR/mixed_workload.log \
--output-format=normal > /dev/null
echo " Completed. Results saved."
# Test 4: Sequential Throughput (Large file operations)
echo "4/6: Running Sequential Throughput Test (large file transfers)..."
fio --name=vm-seq-throughput \
--directory=$TEST_DIR \
--rw=rw \
--rwmixread=50 \
--bs=1M \
--size=4G \
--numjobs=2 \
--time_based \
--runtime=180 \
--group_reporting \
--output=$LOG_DIR/seq_throughput.log \
--output-format=normal > /dev/null
echo " Completed. Results saved."
# Test 5: Latency Test (VM responsiveness)
echo "5/6: Running Latency Test (VM responsiveness)..."
fio --name=vm-latency-test \
--directory=$TEST_DIR \
--rw=randread \
--bs=4k \
--size=1G \
--numjobs=1 \
--iodepth=1 \
--time_based \
--runtime=120 \
--lat_percentiles=1 \
--group_reporting \
--output=$LOG_DIR/latency.log \
--output-format=normal > /dev/null
echo " Completed. Results saved."
# Test 6: Multi-VM Simulation (Multiple concurrent VMs)
echo "6/6: Running Multi-VM Simulation Test..."
fio --name=multi-vm-simulation \
--directory=$TEST_DIR \
--rw=randrw \
--rwmixread=65 \
--bs=4k \
--size=500M \
--numjobs=16 \
--iodepth=8 \
--time_based \
--runtime=300 \
--group_reporting \
--output=$LOG_DIR/multi_vm.log \
--output-format=normal > /dev/null
echo " Completed. Results saved."
echo
echo "=== All Tests Completed! ==="
echo
echo "=== PERFORMANCE SUMMARY ==="
# Extract and display key metrics
echo "1. Random 4K Read Performance:"
grep -E "(read:.*IOPS|lat.*avg)" $LOG_DIR/rand_read_4k.log | head -2
echo
echo "2. Random 4K Write Performance:"
grep -E "(write:.*IOPS|lat.*avg)" $LOG_DIR/rand_write_4k.log | head -2
echo
echo "3. Mixed Workload Performance:"
grep -E "(read:.*IOPS|write:.*IOPS)" $LOG_DIR/mixed_workload.log | head -2
echo
echo "4. Sequential Throughput:"
grep -E "(read:.*MiB/s|write:.*MiB/s)" $LOG_DIR/seq_throughput.log | head -2
echo
echo "5. Latency Analysis:"
grep -E "(lat.*avg|95.00th)" $LOG_DIR/latency.log | head -4
echo
echo "=== RECOMMENDATIONS ==="
echo "For VM storage, you want:"
echo "- Random 4K Read: >5,000 IOPS (>10,000 excellent)"
echo "- Random 4K Write: >2,000 IOPS (>5,000 excellent)"
echo "- Average Latency: <10ms (for responsive VMs)"
echo "- 95th Percentile: <50ms (to avoid noticeable pauses)"
echo
echo "Full detailed results available in: $LOG_DIR"
Step 4: Running the Tests
Save the script as /tmp/vm_storage_test.sh, make it executable, and run it:
chmod +x /tmp/vm_storage_test.sh
/tmp/vm_storage_test.sh
The complete test suite takes approximately 20-25 minutes and covers six critical performance scenarios that mirror real VM workloads.
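The earlier apt line also installed sysstat; while the suite runs, you can watch network throughput from a second shell to see how close the tests get to saturating the 1 Gbps link:
# Per-interface throughput every 5 seconds; watch the rxkB/s and txkB/s
# columns for the NIC that carries your NFS traffic
sar -n DEV 5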
Interpreting Your Results: Real-World Performance Analysis
Our test results revealed excellent performance characteristics:
Random 4K Read Performance: 14,200 IOPS ⭐⭐⭐⭐⭐
read: IOPS=14.2k, BW=55.5MiB/s (58.2MB/s)(6657MiB/120001msec)
clat (usec): min=137, max=5642, avg=279.86, stdev=163.14
Analysis: Outstanding performance with 0.28ms average latency. VMs will boot quickly and applications will load without delays.
Random 4K Write Performance: 197,000 IOPS ⭐⭐⭐⭐⭐
write: IOPS=197k, BW=769MiB/s (806MB/s)(90.1GiB/120035msec)
Analysis: Exceptional on paper, but note that 769 MiB/s is far more than a 1Gbps link can carry (roughly 112 MiB/s), so this figure largely reflects asynchronous write buffering on the client and in ZFS rather than sustained disk throughput. For bursty VM writes such as file saves, database commits, and system updates, that buffering is exactly what keeps things feeling instant; sustained heavy writes will settle closer to the network limit.
Mixed Workload Results ⭐⭐⭐⭐⭐
read: IOPS=18.0k, BW=70.5MiB/s (73.9MB/s)(20.6GiB/300022msec)
write: IOPS=7734, BW=30.2MiB/s (31.7MB/s)(9064MiB/300022msec)
Analysis: Excellent mixed workload handling typical of real VM usage patterns.
Sequential Throughput ⭐⭐⭐
read: IOPS=221, BW=222MiB/s (232MB/s)(39.0GiB/180037msec)
write: IOPS=222, BW=223MiB/s (233MB/s)(39.1GiB/180037msec)
Analysis: Solid throughput for this setup. A 1Gbps link tops out around 112 MiB/s per direction, so the figures here include some client-side caching; sustained large transfers to the NAS will sit at or below that network ceiling, which is perfectly normal for a 1Gbps network.
Latency Consistency ⭐⭐⭐⭐⭐
95.00th=[ 396] (microseconds)
Analysis: Outstanding latency consistency with 95th percentile at 0.396ms – no performance spikes.
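The script's summary only greps a couple of lines; if you want the full picture, the complete completion-latency percentile table is in the latency log it saves (adjust the -A count if your fio version prints a longer table):
# Full completion-latency percentile table from the single-threaded latency test
grep -A 8 "clat percentiles" /tmp/fio_results/latency.log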
Performance Benchmarks for VM Workloads
Random 4K Read IOPS (VM Boot & Application Loading)
- < 1,000: Poor – Sluggish VMs, slow boot times
- 1,000-3,000: Acceptable for basic VMs
- 3,000-8,000: Good for business VMs
- 8,000-15,000: Excellent for demanding applications
- > 15,000: Outstanding – Enterprise grade
Random 4K Write IOPS (VM Operations)
- < 500: Poor – VMs will freeze during writes
- 500-1,500: Acceptable for read-heavy workloads
- 1,500-4,000: Good for typical business VMs
- 4,000-8,000: Excellent for database VMs
- > 8,000: Outstanding
Latency (VM Responsiveness)
- > 50ms: Poor – Noticeable delays
- 20-50ms: Acceptable for non-critical VMs
- 10-20ms: Good for business applications
- 5-10ms: Excellent – Very responsive
- < 5ms: Outstanding
Real-World VM Capacity Recommendations
Based on our test results, this NFS storage can confidently handle:
- 10-20 Windows VMs simultaneously
- 15-30 Linux VMs for web servers and applications
- Multiple database VMs with excellent performance
- 20-30 mixed VMs in a typical business environment
Common Performance Issues and Solutions
Issue: Low Random IOPS
Symptoms: Slow VM boot times, application delays
Solutions:
- Check TrueNAS ZFS record size settings
- Verify SSD alignment and over-provisioning
- Consider adding more drives to the pool
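For the first item, record size and sync behaviour can be inspected from the TrueNAS shell. The dataset name below is inferred from this guide's export path, so substitute your own:
# Dataset name inferred from the export path /mnt/VM_PRX/vmstore; adjust to yours
zfs get recordsize,sync,compression VM_PRX/vmstore
# A smaller recordsize (e.g. 16K) often suits 4K VM I/O better than the 128K
# default; it only affects newly written blocks, so test before and after
zfs set recordsize=16K VM_PRX/vmstore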
Issue: High Latency
Symptoms: VM freezing, inconsistent performance
Solutions:
- Check network configuration and MTU settings
- Verify NFS mount options (hard vs soft mounts)
- Monitor network utilization during peak times
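Two quick checks for the first two items: if you run jumbo frames, confirm the MTU survives the whole path, and watch per-mount NFS latency while a VM is busy (nfsiostat is part of nfs-common):
# 8972 = 9000-byte MTU minus 28 bytes of IP/ICMP headers; failures here mean
# a switch or NIC in the path is not passing jumbo frames
ping -M do -s 8972 -c 4 192.168.1.xxx
# Per-mount NFS read/write latency, sampled every 5 seconds
nfsiostat 5 /mnt/truenas-test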
Issue: Poor Sequential Performance
Symptoms: Slow large file transfers, backup delays
Solutions:
- Upgrade network infrastructure (1Gbps to 10Gbps)
- Optimize NFS block sizes (rsize/wsize parameters)
- Check for network bottlenecks
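On the rsize/wsize point: the 8192 values used earlier are conservative, and modern NFS servers will negotiate up to 1 MB. A remount sketch to compare against:
# Remount with the maximum block sizes and re-run the sequential test
umount /mnt/truenas-test
mount -t nfs -o vers=3,hard,rsize=1048576,wsize=1048576 \
    192.168.1.xxx:/mnt/VM_PRX/vmstore /mnt/truenas-test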
Adding NFS Storage to Proxmox
Once testing confirms good performance, add the storage to Proxmox:
pvesm add nfs truenas-vmstore \
--server 192.168.1.xxx \
--export /mnt/VM_PRX/vmstore \
--content images,vztmpl,iso,backup \
--options vers=3,hard,intr,rsize=8192,wsize=8192
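Once added, a quick status check confirms Proxmox has activated the storage and can see its free space:
# Should report the storage as active with the pool's capacity
pvesm status --storage truenas-vmstore
# Lists any existing content (empty output is expected on a fresh share)
pvesm list truenas-vmstore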
Conclusion
Proper NFS storage testing is essential before deploying production VMs. Our comprehensive testing approach revealed enterprise-grade performance from a 2x SSD TrueNAS setup, with outstanding random I/O performance and excellent latency characteristics.
The testing script provided covers all critical VM workload patterns and gives you confidence in your storage infrastructure. With 14,200 random read IOPS and sub-millisecond latency, this storage setup will deliver excellent VM performance for any business workload.
Remember: Random I/O performance and low latency matter most for VMs – sequential throughput, while important, is often network-limited and less critical for typical VM operations.
Ready to test your own NFS storage? Download the script, run the tests, and ensure your virtualisation infrastructure can handle your workload demands!
About Performance Testing: Regular storage performance testing should be part of your infrastructure maintenance routine. Test quarterly or after any significant changes to your storage or network configuration.
Next Steps: Consider implementing continuous monitoring of your NFS performance using tools like Prometheus and Grafana to track performance trends over time.