Partitioning your ESX host – part II

A while back I published an article on partitioning your ESX host. That article was based on ESX 3.5, and of course with vSphere this has changed slightly. Let me start by quoting a section from the install and configure guide.

You cannot define the sizes of the /boot, vmkcore, and /vmfs partitions when you use the graphical or text installation modes. You can define these partition sizes when you do a scripted installation.

The ESX boot disk requires 1.25GB of free space and includes the /boot and vmkcore partitions. The /boot partition alone requires 1100MB.

The reason for this is that the Service Console is now a VMDK. This VMDK is stored on the local VMFS volume by default in the following location: esxconsole-<system-uuid>/esxconsole.vmdk. By the way, “/boot” has been increased as a “safety net” for future upgrades to ESX(i).

So for manual installations there are three fewer partitions to worry about. I would advise using the following sizes for the remaining partitions, and I would also recommend renaming the local VMFS partition during installation. The default name is “Storage1”; my recommendation would be “<hostname>-localstorage”.

Primary:
/     - 5120MB
Swap  - 1600MB
Extended Partition:
/var  - 4096MB
/home - 2048MB
/opt  - 2048MB
/tmp  - 2048MB

With the disk sizes these days you should have more than enough space; ESX needs roughly 18GB in total.
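Since /boot, vmkcore and the VMFS partition can only be sized during a scripted installation, the layout above could be expressed in a kickstart file along these lines. This is only a sketch following the ESX 4 scripted-install syntax; the vmkcore size, the VMFS sizing and the datastore name are assumptions, so verify the exact directives against the installation guide for your build:

```text
# Sketch of an ESX 4 kickstart partitioning section (sizes from the table above)
clearpart --firstdisk --overwritevmfs

# Physical partitions: /boot, vmkcore and the local VMFS volume
part /boot --fstype=ext3 --size=1100 --onfirstdisk
part None --fstype=vmkcore --size=110 --onfirstdisk
part esx01-localstorage --fstype=vmfs3 --size=20480 --grow --onfirstdisk

# Virtual disk that holds the Service Console (esxconsole.vmdk)
virtualdisk cos --size=18000 --onfirstvmfs

# Partitions inside the Service Console VMDK
part /     --fstype=ext3 --size=5120 --onvirtualdisk=cos
part swap  --fstype=swap --size=1600 --onvirtualdisk=cos
part /var  --fstype=ext3 --size=4096 --onvirtualdisk=cos
part /home --fstype=ext3 --size=2048 --onvirtualdisk=cos
part /opt  --fstype=ext3 --size=2048 --onvirtualdisk=cos
part /tmp  --fstype=ext3 --size=2048 --onvirtualdisk=cos
```

The datastore name “esx01-localstorage” is just an example of the “<hostname>-localstorage” convention recommended above.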


    Comments

    1. says

      Duncan,

      Any idea why the 1100MB requirement for /boot? I have a scripted ESX4 installation using 47MB out of a 250MB /boot partition. What’s with all the extra space?

    2. Tomi says

      Does ESX 4.0 store any persistent data into /var/run or /var/lock?

      I have been using tmpfs for those (and for /tmp) on my ESX 3.5 scripted installs and it works fine but I haven’t yet done enough testing with ESX 4.0 on similar config.
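The tmpfs setup described here would look something like the following /etc/fstab entries. This is a sketch only; the size limits are arbitrary examples, not ESX defaults:

```text
# Hypothetical /etc/fstab tmpfs entries for the mounts mentioned above
tmpfs  /var/run   tmpfs  defaults,size=32m   0 0
tmpfs  /var/lock  tmpfs  defaults,size=16m   0 0
tmpfs  /tmp       tmpfs  defaults,size=512m  0 0
```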

    3. says

Odd. When I look at my freshly installed ESX 4 server and do an fdisk -l, the results are not what you describe. The VMKCORE is NOT included in the Service Console partition.

      [root@esx4 core-dumps]# fdisk -l

      Disk /dev/sda: 250.0 GB, 250059350016 bytes
      64 heads, 32 sectors/track, 238475 cylinders
      Units = cylinders of 2048 * 512 = 1048576 bytes

Device     Boot  Start     End      Blocks     Id  System
/dev/sda1  *         1    1100     1126384    83  Linux
/dev/sda2         1101    1210      112640    fc  VMware VMKCORE
/dev/sda3         1211  238475   242959360     5  Extended
/dev/sda5         1211  238475   242959344    fb  VMware VMFS

      Disk /dev/sdb: 7973 MB, 7973371904 bytes
      255 heads, 63 sectors/track, 969 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes

Device     Boot  Start  End  Blocks    Id  System
/dev/sdb1             1   76  610438+  82  Linux swap / Solaris
/dev/sdb2            77  331 2048287+  83  Linux
/dev/sdb3           332  969 5124735    5  Extended
/dev/sdb5           332  969 5124703+  83  Linux

      [root@esx4 core-dumps]# df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sdb5   4.9G  1.7G   3.0G   36%  /
/dev/sda1   1.1G   74M   954M    8%  /boot
/dev/sdb2   2.0G   40M   1.8G    3%  /var/log

    4. says

      Scott,
I suspect the boot partition is so large in order to accommodate future ESX upgrades. It would allow upgrades to dump the files on a local partition and not require mounting the VMFS volume to get at the files.
      Forbes.

    5. The-Kevster says

So to boot ESX 4 (if I understand correctly), ESXi loads and boots the Service Console VM, and then the users’ VMs run on the ESXi hypervisor?

    6. says

With issues pertaining to core files, sometimes proliferated by agents installed in the visor, I would add a separate /var/core and possibly a /var/log file system. This is especially true when things like the N1KV VEM are installed on the host.

    7. Sven says

      Hi Duncan,

You suggest creating /var. The graphical installer default is /var/log. Should I just adjust /var/log to 4096 MB, or change it to /var?

How can I define a partition as an extended partition? Or is any ext3 partition automatically an extended partition?

      Best regards, Sven

    8. c0de4badf00d says

Hey, I have RAID 5 with 4x500GB SATA, a total of approximately 1.3 TB of disk. Now I want to make a partition for ESXi 4.0 and use the rest as the datastore for VMs.

Someone happened to mention to keep the two separate.

Basically I don’t want to give the total of 1.3 TB to the host OS boot, and want to keep the datastore on the same machine on a different partition.

Suggestions:
      How do I do that at the time of install itself?
      Or
      How do I do that after the install?

    9. Ian says

      Hi Duncan,

      I’m with you all the way with 1600MB swap, 2GB for /tmp and I used to put in a 4GB /var in VI3 (reserving judgement on the need for as much as 4GB in vSphere for the moment).

What I’m not so sure about is the need for /opt, since V3.5 put the HA logs down below /var/log and we’re moving away from needing additional software in the Service Console which may use /opt.

Then there’s 2GB for /home. I don’t understand why you need so much. Surely you only need a couple of hundred MB (1GB tops if you like to work in GBs) for the home directories of additional Service Console users (to store their MP3s).

      Just interested in the reasons for your approach.

      Regards

      Ian

    10. Ian says

      I agree with the “why take a risk” approach of putting in the separate partitions. Just wondered why the sizing. Though as you say you’ve got the 72GB local drive normally.

    11. aenagy says

With HDDs at a minimum of 72 GB, soon to be 144 GB, what’s the point of splitting out the partitions other than “/boot”, “/”, swap and vmkcore? If you have proper monitoring on your ESX host (this is assumed), then setting the alarm threshold at 25%, for example, should capture problems that are starting to spiral out of control. I have encountered problems in the past with ESX 3.0.x where “/var/log” had filled up and rendered the host unmanageable; I couldn’t even log in via the local console. This in my mind shoots a hole in the whole let’s-create-separate-partitions-in-case-they-fill-up theory.

The other issue is how to configure the local VMFS volume to be only big enough for the Console and no more. Otherwise I would have planners and VM admins saying “Ooo, more datastores with free space! Let’s build some VMs there!” Bzzzt, no, sorry, but thanks for playing. Been there, done that, and don’t want to go back. Yes, I realize that datastore ACLs would solve this problem, but I would get people bugging me about all that free space we are not using.

    12. says

Take a look at this: http://www.dailyhypervisor.com/?p=1479. You could script your installations and create multiple VMFS datastores on your drive. If you create a COS-only datastore and size it to the COS, you can isolate it from having other VMs placed on it. Alternatively you could just create the COS disk as large as you want, to almost fill a single VMFS partition.

    13. richardg says

I have a question. In the past, on 3.5, it was recommended to have your vmkcore and swap on different arrays. Is this no longer needed either? We have been building our servers out with four 72GB drives split into two arrays of 69GB each and splitting up the partitions. Is this still needed?
Thanks for the help.

    14. mickier says

I’m seeing something I don’t understand. I have 6x1TB drives in RAID 6, which gives me 3.9TB. I install ESXi 4, and it says the total space on this array is 1.9TB.
It cuts the space in half…?!

    15. Manfred says

      Hi

Found this today:
      http://www.experts-exchange.com/Software/VMWare/Q_23835860.html
      >>
      With ESXi you cannot change this during the install, but you can create a new volume afterwards to replace the datastore that is made with installation.

      *************************************************************************************************************************
      ************ CAUTION: following the steps below WILL remove all data in the existing datastore. *******
      *************************************************************************************************************************
      First, log into your VMware Infrastructure client. Highlight the virtual host from the left pane (should be the only thing there, since this is a fresh install). Go to the configuration tab, and select “Storage” under the Hardware heading. To the right you will see your datastore, and a column labelled “device.” Record the data from this column (mine was vmhba1:0:0:3). You will need this later.

      1) At the VMware console (where you see the machine’s IP) press Alt – F1 to get to a new console window with some log information.
      2) Type “unsupported” (no quotes) and hit enter. You will not see the characters as you type them
      3) Enter the “root” password — you are now at a commandline
4) Enter the following command: vmkfstools --createfs vmfs3 --blocksize 8M vmhba1:0:0:3
Replace the blocksize parameter with whatever you need (I used 2M, to get virtual machines up to 500-ish gigs). Replace vmhba1:0:0:3 with the name that you recorded earlier. All set!

      Shalom
      Manfred

    16. Razor says

      Hi,

I’m installing VMware vSphere 4 and am brand new to this field. I have two 146 GB HDDs using RAID 1. The partitions are as follows:

Mount Point   Size (MB)

/             10240
swap          1600
/var          6142
/var/core     15360
/opt          2048
/home         2048
/tmp          2048

      This makes up around 40 GB with /boot and vmkcore partitions.

The rest of the space is for the VMFS datastore to store virtual machines. But as I will be using a SAN, I guess 100 GB remains unused.

Please correct me.

      Thanks,

      Razor.

    17. dhunt40 says

      I have the same question as Razor. What do I do with all of the additional local storage on mirrored 144 GB drives? Also, with this much space, why partition anything other than boosting the swap partition to 1.6 GB?

      Dan

    18. says

Because the size of your mount points will not increase. Remember, the COS is a VMDK on a VMFS volume. This VMFS volume will be roughly 143GB, and the partitions within the VMDK will not be able to grow.

    19. wilson says

      @aenagy : “With HDDs at a minimum of 72 GB, soon to be 144 GB, what’s the point of splitting out the partitions other than “/boot”, “/”, swap and vmcore?”

      Quite simply, VMware ESX (including 3.5U4) will occasionally have a problem and start spewing logs like crazy. This will then run the system out of disk space and consequently cause esx.conf to truncate, which will then render your system quite unmanageable.

      I’ll prefer to keep my partitions separate, thank you very much.

    20. dhunt40 says

      Wilson, I see your point on the log files filling up even a large volume. So would you proportionally adjust the size of the various partitions so that the aggregate size of the partitions fill up the 144 GB drive space? Or would you leave some disk space free and if so, for what?

      thanks,

      dan

    21. CleverTrevor says

      Hi

I have two mirrored 14.7 GB SSDs. Any suggestions on the best way to configure the partitions? I am a Windows dude and don’t really understand this stuff.

      Thanks

      CT

    22. Saiparasad says

Hi, I am new to virtualization.
In case I ask anything stupid, please forgive me ;)

1st question:

Four 300GB hard disks with RAID 1.

In the device details, capacity shows 278.46 GB for the 1st disk.
Primary Partitions:
1. Linux native 1.08 GB
2. VMware Diagnostic 117.66 MB
3. Extended 277.29 GB

And for the other disk it shows 278.46 GB (2nd disk).
Primary Partitions:
1. DOS 16-bit >= 32M 4.01 GB
2. VMFS 274.47 GB
I get one disk at 277.25 GB and the other at 274.25 GB. Why? What does that mean?

      I could not find all these settings which you guys have mentioned above.
      Primary:
      / – 5120MB
      Swap – 1600MB
      Extended Partition:
      /var – 4096MB
      /home – 2048MB
      /opt – 2048MB
      /tmp – 2048MB

      I want to make sure i am doing the right thing.

2nd question:

We are implementing ESX 4 Update 2 in our network and moving machines to it.
On some VMs I have used thick and on some thin provisioning.
Recently I created a new VM, A (thin provisioned); at that point Storage1 had 55 GB remaining.
I allocated 50 GB for the new machine. Storage1 now shows 49 GB free, i.e. 6 GB is used by the new installation (VM A).
I know thin provisioning uses space as required, but what happens when VM B (thin provisioned) reaches its allocated space? For example, VM B has been given 40 GB and is using just 20 GB of it; what happens when it occupies the other 20 GB?
Out of Storage1’s 49 GB, 20 GB would then be used, so only 29 GB is free.
Then VM A starts using its allocated space, i.e. 50 GB, of which only 6 GB was used earlier.
Storage1 has only 29 GB remaining. What happens?

      I hope i have not confused any one ;)

      Thanks in advance.

    23. Jack says

      Hi Duncan,

I wonder: I have RAID 1 (15K 146GB SAS) on an R710, and VMFS is going to be on an EQL array.

What I did is use up ALL the space on the 146GB. After adding this ESX host to vCenter, VC started to complain about two things:

1. localstorage health check warning
2. Folder rootFolder warning

I thought it was OK to use all the space for / (50GB) and /var (50GB) in my case.

      Could you kindly clarify please?

      Thanks,
      Jack

    24. neil hunter says

Making the boot partition that size is advisable. I’ve inherited 20 ESX 3.5 servers and all of them have 100MB boot partitions.

      I’ve just upgraded the first host to 4.1, which went fine. I then applied update 1 and the server kakked itself. It said that there was a missing file and that there was a signature mismatch between vmkctl and vmkernel.

It turns out that the problem was the lack of space on the boot partition. Because I had less than 24MB available on the boot partition, it wasn’t able to apply the update correctly.

To fix the issue I had to delete some unused ramdisks from /boot and do an esxcfg-boot -b, and then it sprang back to life after a restart.
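The recovery steps described here can be sketched roughly as follows. The filenames are hypothetical examples; check which ramdisk images are actually unused on your host before deleting anything:

```shell
# Sketch of the /boot cleanup described above (run in the Service Console)
cd /boot
ls -lh initrd*          # identify old, unused ramdisk images
rm initrd-old.img       # hypothetical leftover image; verify before deleting
esxcfg-boot -b          # regenerate the boot configuration
reboot
```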

It’s a shame I can’t resize that boot partition; VMware have told me it’s a destructive rebuild to do so.

      Unless they just mean there isn’t a supported method…..
