I created a folder on my VSAN datastore, but how do I delete it?

I created a folder on my VSAN datastore using the vSphere Web Client, but when I tried to delete it I received an error message saying that wasn't possible. So how do you delete a VSAN folder when you no longer need it? It is fairly straightforward: open an SSH session to your host and do the following:

  • change directory to /vmfs/volumes/vsanDatastore
  • run "ls -l" to identify the folder you want to delete
  • run "/usr/lib/vmware/osfs/bin/osfs-rmdir <name-of-the-folder>" to delete the folder

This is what it would look like:

/vmfs/volumes/vsan:5261f0c54e0c785a-81e199f6c9a23d73 # ls -lah
total 6144
drwxr-xr-x    1 root     root         512 Sep 27 03:17 .
drwxr-xr-x    1 root     root         512 Sep 27 03:17 ..
drwxr-xr-t    1 root     root        1.4K Sep 24 05:38 16254152-1469-2c18-3319-002590c0c254
drwxr-xr-t    1 root     root        1.2K Sep 26 01:21 85803a52-6858-ded5-b40b-00259088447a
lrwxr-xr-x    1 root     root          36 Sep 27 03:17 ISO -> e64d1b52-1828-04ca-95a8-00259088447e
lrwxr-xr-x    1 root     root          36 Sep 27 03:17 TestVM -> ed31d351-a222-83bf-bb70-002590884480
drwxr-xr-t    1 root     root        1.4K Sep 27 01:40 cc8ebe51-6881-7dc8-37f8-00259088447e
drwxr-xr-t    1 root     root        1.2K Sep 27 01:52 e64d1b52-1828-04ca-95a8-00259088447e
drwxr-xr-t    1 root     root        1.2K Jul  3 07:52 ed31d351-a222-83bf-bb70-002590884480
lrwxr-xr-x    1 root     root          36 Sep 27 03:17 iso -> 16254152-1469-2c18-3319-002590c0c254
lrwxr-xr-x    1 root     root          36 Sep 27 03:17 las-fg01-vc01.vmwcs.com -> cc8ebe51-6881-7dc8-37f8-00259088447e
lrwxr-xr-x    1 root     root          36 Sep 27 03:17 vmw-iol-01 -> 85803a52-6858-ded5-b40b-00259088447a

/vmfs/volumes/vsan:5261f0c54e0c785a-81e199f6c9a23d73 # /usr/lib/vmware/osfs/bin/osfs-rmdir vmw-iol-01

Deleting directory 85803a52-6858-ded5-b40b-00259088447a in container id 5261f0c54e0c785a81e199f6c9a23d73 backed by vsan

Be careful though, because when you delete the folder it is gone for good. Not being able to delete it through the Web Client is a known issue, and a fix is on the roadmap.


    Comments

    1. GS says

      Seriously massive thank you for this. I’ve been testing VSAN with XenDesktop and had to delete a load of folders manually when some of the desktop creation got messed up.

    2. Martin says

      I have a problem: I don't have that vsanDatastore folder,
      so I can't remove it. But I get the error message:
      "vsan datastore does not have capacity"
      //Martin

    3. Cary says

      Same warning message here. I’m not able to find any way to get rid of it. I was hoping that removing the trial license key might do something, but you can’t remove it. ARGH!!!

    4. says

      You may first check whether the host is part of a VSAN cluster by running "esxcli vsan cluster get"; if it is, running "esxcli vsan cluster leave" will detach the host and remove the VSAN datastore from it as well.
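
      The check-then-leave sequence described above, run from an ESXi shell, would look roughly like this (a sketch only; "esxcli vsan cluster leave" exists only on VSAN-enabled hosts, and detaching a host from the cluster should be done with care):

      ```shell
      # Check whether this host is currently part of a VSAN cluster
      esxcli vsan cluster get

      # If it is, and you are sure you want this host out of the cluster,
      # leave it -- this also removes the VSAN datastore from the host
      esxcli vsan cluster leave
      ```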

    5. says

      It seems I lost a few VMDKs during a power outage in which the VSAN cluster was not cleanly shut down. When trying to delete some of the remaining orphaned folders on the VSAN datastore using these commands, I get the following message:

      /vmfs/volumes/vsan:524658e8ea1e236a-0aa2c7892dd37f1a # /usr/lib/vmware/osfs/bin/osfs-rmdir UI\ VM
      Deleting directory eb0cf452-48b9-f7aa-61c1-00a0d1eb90cc in container id 524658e8ea1e236a0aa2c7892dd37f1a backed by vsan
      Failed. Search vmkernel log and osfsd log for opID 'osfsIpc-1392338648.21'.
      From /var/log/osfsd.log:

      2014-02-14T00:44:08Z osfsd: 33939:IPCRecv:565: Full IpcRequest received, dumping it now
      2014-02-14T00:44:08Z osfsd: 33939:IPCDump:801: {version: 0x00000001; op: DELETE; friendlyName: ; objectUuid: eb0cf452-48b9-f7aa-61c1-00a0d1eb90cc; providerID: [ vsan]; containerID: 524658e8ea1e236a-0aa2c7892dd37f1a; vobCtxHanlde: 0xFFFFFFFFFFFFFFFF; vimOpID: osfsIpc-1392338648.21; bufferSize: 0x0;}
      2014-02-14T00:44:08Z osfsd: 33939:Provider_Lookup:437: Found matching driver for ID [ vsan]
      2014-02-14T00:44:08Z osfsd: Enqueue a new work item in threadpool osfsd-vsan
      2014-02-14T00:44:08Z osfsd: Add the new item into the pending list
      2014-02-14T00:44:08Z osfsd: 86099:Provider_Lookup:437: Found matching driver for ID [ vsan]
      2014-02-14T00:44:08Z osfsd: 86100:VsanDirGetMaxDOMNameLookupTries:1712: [opID=1be7d7b4] VSAN max DOM name lookup retries: 8
      2014-02-14T00:44:08Z osfsd: 86100:VsanDirGetDOMNameLookupRetryDelay:1737: [opID=1be7d7b4] VSAN DOM name lookup retry delay: 1
      2014-02-14T00:44:08Z osfsd: 86100:VsanObj_Open:397: [opID=1be7d7b4] VsanObj_Open successful: Already exists
      2014-02-14T00:44:08Z osfsd: 86100:VsanDeleteObjectInt:1382: [opID=1be7d7b4] VSAN object open for mount: 'Already exists', UUID eb0cf452-48b9-f7aa-61c1-00a0d1eb90cc
      2014-02-14T00:44:08Z osfsd: 86100:VsanOsfs_Mount:63: [opID=1be7d7b4] Trying to probe VSAN object (uuid: eb0cf452-48b9-f7aa-61c1-00a0d1eb90cc, container: vsan:524658e8ea1e236a-0aa2c7892dd37f1a)
      2014-02-14T00:44:08Z osfsd: 86100:VsanOsfs_Mount:102: [opID=1be7d7b4] Error setting OSFS VSI node for 'eb0cf452-48b9-f7aa-61c1-00a0d1eb90cc': Already exists
      2014-02-14T00:44:08Z osfsd: 86100:DumpAllNonVmfsFileNames:674: [opID=1be7d7b4] Dumping list of non-VMFS system files found in directory /vmfs/volumes/vsan:524658e8ea1e236a-0aa2c7892dd37f1a/eb0cf452-48b9-f7aa-61c1-00a0d1eb90cc
      2014-02-14T00:44:08Z osfsd: 86100:DumpAllNonVmfsFileNames:683: [opID=1be7d7b4] Directory /vmfs/volumes/vsan:524658e8ea1e236a-0aa2c7892dd37f1a/eb0cf452-48b9-f7aa-61c1-00a0d1eb90cc, file .f50cf452-c878-ee4a-d3e2-00a0d1eb90cc.lck
      2014-02-14T00:44:08Z osfsd: 86100:VsanDeleteObjectInt:1395: [opID=1be7d7b4] Can not delete VSAN object (eb0cf452-48b9-f7aa-61c1-00a0d1eb90cc): Directory not empty
      2014-02-14T00:44:08Z osfsd: 86100:VsanFinishOp:470: [opID=1be7d7b4] Operation completed with status: Directory not empty
      2014-02-14T00:44:08Z osfsd: 86100:IPCCompletionFn:902: [opID=1be7d7b4] IPC completed: Directory not empty
      2014-02-14T00:44:08Z osfsd: 33939:Event_Pump:286: PumpEvents: Interrupted system call, continuing
      2014-02-14T00:44:13Z osfsd: Completed main loop for the worker thread osfsd-vsan
      2014-02-14T00:44:23Z osfsd: 33939:Provider_Lookup:437: Found matching driver for ID [ vsan]
      2014-02-14T00:44:23Z osfsd: Enqueue a new work item in threadpool osfsd-vsan
      2014-02-14T00:44:23Z osfsd: Add the new item into the pending list
      2014-02-14T00:44:23Z osfsd: Adding a thread to threadpool osfsd-vsan
      2014-02-14T00:44:23Z osfsd: Starting main loop for the worker thread osfsd-vsan
      2014-02-14T00:44:23Z osfsd: Added a new thread to threadpool (osfsd-vsan), numThreadsInPool (1)
      2014-02-14T00:44:23Z osfsd: 86131:AutoUnmountProviderInt:267: [ vsan] Prepared to auto unmount directories
      2014-02-14T00:44:23Z osfsd: 86131:AutoUnmountEachContainer:168: [ vsan] Autounmount processing container vsan:524658e8ea1e236a-0aa2c7892dd37f1a
      2014-02-14T00:44:23Z osfsd: 86131:AutoUnmountPruneChildren:54: Trying to auto unmount children in container: vsan:524658e8ea1e236a-0aa2c7892dd37f1a
      2014-02-14T00:44:23Z osfsd: 86131:AutoUnmountPruneChildren:100: Autounmounted children in container 'vsan:524658e8ea1e236a-0aa2c7892dd37f1a'
      2014-02-14T00:44:23Z osfsd: 86131:AutoUnmountProviderInt:279: [ vsan] Completed auto unmount of inactive directories
      2014-02-14T00:44:23Z osfsd: 33939:Event_Pump:286: PumpEvents: Interrupted system call, continuing
      2014-02-14T00:44:28Z osfsd: Completed main loop for the worker thread osfsd-vsan
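
      The "Directory not empty" failure in the log above points at a leftover lock file (the .f50cf452-…lck entry) inside the object's directory. Assuming the object really is orphaned and no host still holds that lock, a possible manual cleanup would be the following sketch (double-check the UUIDs against your own log, and make absolutely sure nothing still uses the object before deleting anything):

      ```shell
      cd /vmfs/volumes/vsan:524658e8ea1e236a-0aa2c7892dd37f1a

      # Remove the stale lock file the osfsd log complains about
      rm eb0cf452-48b9-f7aa-61c1-00a0d1eb90cc/.f50cf452-c878-ee4a-d3e2-00a0d1eb90cc.lck

      # Retry the delete now that the directory is empty
      /usr/lib/vmware/osfs/bin/osfs-rmdir eb0cf452-48b9-f7aa-61c1-00a0d1eb90cc
      ```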
