Pure Technical Services

Provisioning VMs on a Pure Volume Fails with a No-Space Error


Problem

The customer is unable to provision new VMs on a new Pure volume, even though the volume is not full and its size has already been increased.

In this example, the Pure datastore shows only 52% full on the ESXi host (see the attached "Puredatastore" screenshot).

~ # df -h

Filesystem   Size   Used Available Use% Mounted on

VMFS-5      41.0T  21.2T     19.8T  52% /vmfs/volumes/puredatastore

The Pure FlashArray shows the volume provisioned at 41 TB, but only 1.2 TB in total is used, with a high data reduction ratio.

purevol list --space
Name                                        Size     Thin Provisioning  Data Reduction  Total Reduction  Volume   Snapshots  Shared Space  System  Total  
VM_Storage                                  41T      73%                7.9 to 1        30.2 to 1        939.07G  287.75G    -             -       1.20T  

The error in the VMkernel log shows:

FS3DM: 2004: status No space left on device copying 1 extents between two files, bytesTransferred = 0 extentsTransferred: 0
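On ESXi 5.x the VMkernel log normally lives at /var/log/vmkernel.log and can be searched for this message. A minimal sketch; a sample line stands in for the real log here so the snippet is self-contained:

```shell
# Count occurrences of the no-space copy error. On a live host, replace the
# sample with: grep -c 'No space left on device' /var/log/vmkernel.log
printf '%s\n' 'FS3DM: 2004: status No space left on device copying 1 extents between two files' \
  | grep -c 'No space left on device'   # prints: 1
```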

Impact

The customer is unable to provision larger size VMs on a Pure datastore mounted to the ESXi host. Creating a VM which is 10G of size works, but creating a 30GB VM fails.

Solution

Follow the solution in VMware KB article
https://kb.vmware.com/selfservice/mi...rnalId=1007638 to gather the following output and troubleshoot the issue:

vmkfstools -P -v 10 /vmfs/volumes/datastore_name

The following output shows that the datastore is running low on pointer (Ptr) blocks: the file system's metadata pool, not its data space, is effectively full.

~ # vmkfstools -P -v 10 /vmfs/volumes/puredatastore/
VMFS-5.60 file system spanning 1 partitions.
File system label (if any): puredatastore
Mode: public ATS-only
Capacity 45079708303360 (42991360 file blocks * 1048576), 21463906123776 (20469576 blocks) avail, max file size 69201586814976
Volume Creation Time: Wed Aug 13 06:40:01 2014
Files (max/free): 130000/116451
Ptr Blocks (max/free): 64512/245
Sub Blocks (max/free): 32000/29350
Secondary Ptr Blocks (max/free): 256/256
File Blocks (overcommit/used/overcommit %): 0/22521784/0
Ptr Blocks  (overcommit/used/overcommit %): 0/64267/0
Sub Blocks  (overcommit/used/overcommit %): 0/2650/0
Volume Metadata size: 1023770624
UUID: 53eb0841-1faf6578-b865-ecf4bbc519f8
Logical device: 53eb083d-9bd41bc0-17ca-ecf4bbc519f8
Partitions spanned (on "lvm"):
        naa.624a9370a2aedf261ad6c61800011010:1
Is Native Snapshot Capable: YES
OBJLIB-LIB: ObjLib cleanup done.
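Utilization of the pointer-block pool can be computed with plain shell arithmetic, using the values copied from the "Ptr Blocks (max/free)" line above:

```shell
# Values from the vmkfstools output above: Ptr Blocks (max/free): 64512/245
max=64512
free=245
used=$((max - free))
pct=$((used * 100 / max))
echo "Ptr blocks used: $used of $max (${pct}%)"   # prints: Ptr blocks used: 64267 of 64512 (99%)
```

By contrast, the Files pool (130000 max / 116451 free) and Sub Blocks pool are nowhere near exhausted, which is why only larger allocations fail.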

There are not enough free Ptr blocks to satisfy a larger VM, which requires more pointer blocks. The solution is to do one of the following:

1. Delete some of the VMs, files, or templates from the datastore to release Ptr blocks so more VMs can be created.
2. Create a new datastore and place new VMs on that datastore.

This is a fairly common issue with larger datastores (30+ TB), and the typical workaround is to split the capacity across multiple datastores of around 30 TB each.
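To catch this condition before provisioning fails, free Ptr blocks could be checked periodically. The following is a hypothetical monitoring sketch (the 1000-block threshold is an illustrative assumption); on a live host the input line would come from `vmkfstools -P -v 10`, but a sample line from the output above is used here so it runs anywhere:

```shell
# Hypothetical sketch: alert when free Ptr blocks fall below a chosen
# threshold. "sample" stands in for a line of live vmkfstools output.
sample='Ptr Blocks (max/free): 64512/245'
free=$(printf '%s\n' "$sample" | sed 's|.*/||')   # strips through the last "/" -> 245
threshold=1000                                    # illustrative value, not a Pure/VMware recommendation
if [ "$free" -lt "$threshold" ]; then
  echo "WARNING: only $free Ptr blocks free"      # fires for this sample
fi
```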