Introduction
If you are new to VMware or just getting started in virtualization, you might feel a bit overwhelmed by the product names and concepts. (Well, at least I did when I began!) VMware’s ecosystem has tons of products, and it’s not immediately obvious how they all fit together. For example, many beginners struggle to understand the difference between ESXi and vSphere, or how vCenter ties in. What should you do if you’re confused? The same as any tech question: search online, ask fellow admins, read forums and documentation. This post isn’t a step-by-step tutorial from zero to hero, but rather a collection of fundamental things I wish I had known when I started. These are based on personal experience (much of it earned by fixing my own mistakes). So, let’s start with VMware and cover some basics!
Glossary
To set the stage, it helps to clarify a few key terms that often confuse beginners. Here are some VMware definitions in simple terms:
- VMware – The company name (the folks who make all the vSphere products).
- VMware vSphere – The suite of VMware’s server virtualization products. This includes the ESXi hypervisor, vCenter Server, and other components (similar to how Microsoft Office is a suite containing Word, Excel, etc.).
- VMware ESXi – The bare-metal hypervisor that you install on a physical server to run virtual machines. ESXi is the core engine that allows you to create and run VMs and their guest operating systems.
- VMware vCenter Server – The management server for vSphere. vCenter lets you centrally manage multiple ESXi hosts and all the VMs on those hosts from one console.
- VMware vSphere Client – The administration interface for vSphere. In modern versions, this is a web-based app you access through a browser (older versions had a Windows application or a Flash-based client, but today it’s unified as an HTML5 web client). The vSphere Client connects to vCenter Server (or directly to an ESXi host) so you can configure VMs, networks, storage, etc.
To help visualize how these pieces fit, here’s a simple diagram of a VMware environment:
In the image, a vCenter Server manages a few VMware ESXi hosts (each running multiple VMs), and an administrator’s laptop uses the vSphere Client to connect to vCenter and control the whole setup. Essentially, vCenter is the brains, ESXi hosts are the muscle, and the vSphere Client is how you interact with the system.
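By the way, everything the vSphere Client shows you is also available through an API, which comes in handy once you start automating things. Here's a minimal connection sketch using the open-source pyVmomi Python library – the hostname and credentials are placeholders for your own environment, and certificate checking is disabled purely for lab convenience:

```python
# Minimal pyVmomi connection sketch (pip install pyvmomi).
# Hostname and credentials below are placeholders - use your own.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()            # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local",       # vCenter Server, or an ESXi host directly
                  user="administrator@vsphere.local",
                  pwd="YourPassword",
                  sslContext=ctx)
content = si.RetrieveContent()
print("Connected to:", content.about.fullName)    # prints the vCenter/ESXi product string
Disconnect(si)
```

The later sketches in this post reuse the same `content` object, so they only show the part that is specific to each topic.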
VMware ESXi used to be free… but not anymore
Until early 2024, VMware offered a free version of its ESXi hypervisor, commonly used in labs, test environments, and small setups. However, following Broadcom’s acquisition of VMware, the free ESXi license was officially discontinued as part of a broader move to subscription-only licensing. Since February 2024, there has been no way to legally obtain a new ESXi license without a paid subscription. This change affected not only ESXi but also other VMware offerings, effectively phasing out perpetual licenses and entry-level bundles like vSphere Essentials.
This means that new users in 2024 can no longer rely on free ESXi for lab use or small-scale virtualization projects. All new deployments, even single-host environments, require a valid paid subscription tied to an entitlement under Broadcom’s customer portal. Existing free ESXi installs technically continue to work, but they receive no updates or support unless the installation is moved to a paid subscription.
Local storage vs. shared storage – why you should always choose shared
When setting up VMware hosts, you have a critical decision to make about storage: will your VMs live on local storage (disks inside or directly attached to one server), or on shared storage that multiple hosts can access? Many beginners think they can get by with just local disks and still use all the cool vSphere features – but that’s a mistake. If you want to use capabilities like High Availability, DRS, or vMotion, you need shared storage. Let’s talk about why shared storage is so important, and what the differences are.
Local storage
Local storage means the VM data resides on a storage device that is local to the ESXi host itself. This could be internal hard drives or SSDs inside the server, or a direct-attached external disk pack connected via SAS, SATA, SCSI, USB, etc. Typically, the local disks are formatted with VMware’s VMFS file system to store the virtual machine files. Local storage is simple – it doesn’t require any special network for storage traffic, and you can improve performance or redundancy on a single host by using RAID arrays across multiple local disks. For example, you might have an ESXi host with a RAID 10 of SSDs as a local datastore. VMs running on that host will read and write to those internal disks.
However, local storage has major limitations. First, any local disk (or direct-attached unit) is a single point of failure for the VMs on that host – if the disk or the host fails, those VMs are down (no other host can run them, because no one else can see that storage). Second, local storage typically can only be accessed by that one host and not simultaneously shared with others (there’s no multi-initiator access in the case of internal disks). You also can’t use multiple network paths to a local disk; it’s literally tied to the one machine. And crucially, with purely local storage on each ESXi, you cannot share VMs across hosts – meaning features like vMotion or HA won’t work because those require two hosts to see the same storage.
The diagram below illustrates a simple setup with local storage: one ESXi host has its own VMFS datastore, with a couple of VMs stored on it:
No other host has access to those VMs. While this setup is straightforward and can be fine for non-critical or single-host environments, it doesn’t support the high-availability features that VMware is known for.
In short, local storage might be okay for a test lab or a single-host deployment, but it’s not recommended if you plan to have multiple hosts or want to use VMware’s clustering features. You might save on hardware initially by avoiding a shared storage system, but you lose out on reliability and flexibility. As the saying goes in VMware circles: if it’s not on shared storage, it’s not highly available.
Shared storage
A shared storage architecture means that the storage holding your VMs is accessible by multiple ESXi hosts at the same time. Traditionally, shared storage in VMware environments is provided by a SAN (Storage Area Network) or a NAS device. This could be, for example, an iSCSI SAN, a Fibre Channel SAN, or a NAS server exporting NFS shares. It could even be cloud-based storage in some setups. The key is that all the ESXi hosts connect to the same storage resource over a network, and thus all see the same datastore. That way, any host can run any VM because the VM’s virtual disks are available to all hosts. The figure below shows a basic shared storage scenario:
Two ESXi hypervisor servers are connected to a common storage system where all the virtual disks (VMDKs) of the VMs reside. With this setup, if one host fails or needs maintenance, the other host can access the same VM data and take over running those VMs (this is how VMware High Availability works). Likewise, vMotion can live-migrate a VM from Host A to Host B since both share the disk files. Shared storage is essential for enabling vSphere features like vMotion, HA, and DRS – without it, each host is an island.
Shared storage, when properly configured, is also more robust: you can have multiple paths from hosts to storage (for redundancy and load balancing), and the storage devices themselves often have advanced resiliency features. It does introduce the need for a storage network (like a dedicated iSCSI network or Fibre Channel fabric), but the benefits to uptime and flexibility are worth it.
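If you're not sure which of your datastores are actually shared, you can ask vCenter directly. The following sketch (reusing the pyVmomi connection from the glossary section) checks each datastore's `multipleHostAccess` flag and lists the hosts that mount it – treat it as an illustration rather than a finished tool:

```python
from pyVmomi import vim

def report_shared_datastores(content):
    """List each datastore, whether more than one host can access it, and which hosts mount it."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        hosts = [mount.key.name for mount in ds.host]   # hosts that have this datastore mounted
        shared = ds.summary.multipleHostAccess          # True when several hosts can reach it
        print(f"{ds.summary.name:<20} type={ds.summary.type:<6} "
              f"shared={shared} hosts={', '.join(hosts)}")

# report_shared_datastores(content)   # 'content' comes from the connection sketch above
```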
Now, a big consideration is that traditional shared storage hardware (SAN/NAS appliances) can be expensive. This is where the concept of software-defined storage (SDS) comes into play, and it’s arguably the smartest way to build shared storage nowadays. SDS solutions allow you to use ordinary servers and disks to create a shared storage pool, rather than relying on a proprietary external box. It’s far more flexible than physical shared storage and can be dramatically cheaper.
In VMware’s world, an example of SDS is VMware vSAN – an add-on that uses the local disks of your ESXi hosts and merges them into a distributed datastore. With vSAN, you don’t need an external SAN at all; the vSAN software takes several ESXi servers with their own SSDs/HDDs and creates a single shared datastore that all those hosts (and only those hosts) use. This can significantly reduce costs because you’re using standard server hardware (as long as it’s on VMware’s hardware compatibility list) instead of specialized SAN hardware.
There are also third-party SDS solutions available. For instance (surprise-surprise!), StarWind Virtual SAN is a software-defined storage product that can pool local server disks into a replicated shared datastore accessible by all nodes in a VMware cluster. StarWind Virtual SAN is often used as a cost-effective, high-performance alternative to a traditional SAN – it runs on your existing servers and ensures that each piece of data is mirrored between hosts, giving you a highly available storage backbone without needing expensive external arrays.
To sum up, shared storage (whether a physical SAN/NAS or a virtual SDS solution) is always recommended in multi-host VMware setups. It unlocks the full power of vSphere’s features.
The diagram below provides an example of an SDS approach using VMware vSAN:
In this illustration, multiple ESXi hosts are combined into a vSAN cluster; each host contributes some local SSDs and HDDs (shown at the bottom) to form a single shared datastore (the vSphere + Virtual SAN layer) for all the VMs on top. This kind of setup (and similarly, StarWind’s Virtual SAN or other SDS technologies) eliminates the single point of failure of local disks while leveraging the hardware you already have. The result is a reliable, shared storage pool that is accessible by all hosts simultaneously, enabling features like vMotion and HA, and it often comes at a lower cost than buying a new SAN appliance.
Thick provisioning vs. Thin provisioning – what do you expect from your virtual disk?
When creating a virtual machine, one of the decisions you’ll face is how to provision the VM’s virtual disk. In VMware, you typically have a choice between Thick Provisioning and Thin Provisioning for virtual disks (VMDKs). This choice determines how space is allocated on the datastore and can impact performance, storage efficiency, and even security. There’s no one-size-fits-all answer here – it really depends on what you need from the virtual disk: maximum performance? minimal storage usage? security against snooping old data? Let’s break down the differences so you can decide which provisioning type meets your needs.
Thick-Provisioned Virtual Disks
If you choose a Thick Provisioned disk, it means the full size of the virtual disk is allocated up front on the datastore at the time of creation. For example, if you create a 50 GB thick-provisioned VMDK, the system will immediately mark off 50 GB of space on the datastore for that VM, even if the VM’s OS only uses a small portion of it initially. Think of it as claiming your entire parking spot, whether your car is present or not. Thick provisioning has a couple of advantages: the space is reserved so you won’t unexpectedly run out of space for that VM later, and thick disks can be slightly faster to write to (no need to allocate space on the fly during writes). Thick disks are also quick to create since it’s mostly just marking space as used (unless you choose a type that zeroes out data, which we’ll discuss in a moment). However, a potential security concern with thick provisioning is that if the storage system doesn’t wipe the blocks when allocating them, any previously written data on those disk blocks could theoretically be read by the VM. That leads us to the two sub-types of thick provisioning that VMware offers, which handle this issue differently.
VMware actually provides two flavors of thick provisioning:
- Lazy-Zeroed Thick – The virtual disk’s space is allocated in full, but old data on the physical storage is not immediately erased. Instead, each block is “zeroed out” at the moment it’s first written to by the VM. This means creation of the disk is fast, but the very first write to any block might be a bit slower due to the need to clear that block (since it contains stale data until then). Lazy-zeroed thick disks are slightly less secure in the sense that until a block is overwritten, it could contain old residual data (though the guest OS wouldn’t normally see this data unless it tries to read unused blocks). In practice, lazy-zeroed is the default thick mode and is fine for many cases, but you should avoid it if you have strict security requirements about data sanitization. Performance-wise, after the first-time writes, it behaves like a regular thick disk.
- Eager-Zeroed Thick – This type takes it a step further by immediately zeroing out every block of the disk at creation time. That means it will take longer to create the VMDK (especially if it’s large, since it’s writing zeros to all the storage space), but once that’s done, the disk is as “clean” as it can be (no old data remnants) and every block is pre-zeroed, so writes don’t incur the clearing penalty later. Eager-zeroed thick disks are required if you want to use certain VMware features like Fault Tolerance (FT), because they guarantee no stale data. They also can have slightly better write performance for the VM from the get-go (since the space is pre-cleared, every write is just a write, no extra work needed). Essentially, you pay the cost upfront (time to create the disk) to get potentially better performance and security.
Both types of thick provisioning consume the full allocated space right away. For instance, imagine you have a 60 GB datastore and you create two thick disks of 30 GB each for two VMs. Even if each VM is only actually using 10 GB inside its OS, the datastore will show 60 GB used (completely full) because those 30+30 GB have been reserved for the VMs. The figure below illustrates this scenario:
On the right, the 60 GB datastore is entirely allocated by Disk1 and Disk2 (each 30 GB thick). Disk1 (green segment) is a lazy-zeroed thick disk and Disk2 (orange segment) is an eager-zeroed thick disk, but in both cases the space is fully taken. The difference is subtle: for Disk1, the blocks are allocated but will be zeroed lazily as the VM writes to them (meaning some old data might remain in unused areas until overwritten, represented conceptually by the shaded segments), whereas Disk2’s blocks were all proactively zeroed (hence no old data remains). The main takeaway is that with thick provisioning, you do not gain any space savings on your datastore – you trade storage efficiency for guaranteed space and possibly performance.
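If you ever need to check how an existing VM's disks were provisioned, each virtual disk's backing object carries a `thinProvisioned` and an `eagerlyScrub` flag. Here's a small sketch along those lines, again reusing the pyVmomi connection from earlier:

```python
from pyVmomi import vim

def report_disk_provisioning(vm):
    """Print the provisioning type and size of every virtual disk attached to a VM."""
    for dev in vm.config.hardware.device:
        if not isinstance(dev, vim.vm.device.VirtualDisk):
            continue
        backing = dev.backing                        # flat VMDK backings expose the flags below
        if getattr(backing, "thinProvisioned", False):
            kind = "thin"
        elif getattr(backing, "eagerlyScrub", False):
            kind = "thick (eager-zeroed)"
        else:
            kind = "thick (lazy-zeroed)"
        size_gb = dev.capacityInKB / (1024 * 1024)
        print(f"{vm.name}: {dev.deviceInfo.label} -> {kind}, {size_gb:.0f} GB")

# Example: walk every VM known to vCenter ('content' from the connection sketch above)
# view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
# for vm in view.view:
#     report_disk_provisioning(vm)
```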
Thin-Provisioned Virtual Disks
With Thin Provisioning, the idea is to allocate storage on-demand instead of upfront. When you create a thin-provisioned VMDK, you still specify a maximum size (say 30 GB), but the hypervisor will not immediately reserve 30 GB on the datastore. It might initially allocate only a few megabytes, just enough to start the VM. As the VM’s OS writes data to its disk, the VMDK file will grow in size up to the limit. In other words, the space is claimed only as needed. The obvious benefit of thin provisioning is storage efficiency – you can have a lot of VMs each with (thin) 50 GB disks on a 500 GB datastore, and as long as their actual usage stays low, you won’t consume all the space. For example, using that same 60 GB datastore scenario: if you create two 30 GB thin disks for your VMs and each VM only uses 10 GB of actual data, then only ~20 GB of the datastore will be used, leaving ~40 GB free. The diagram below shows how thin disks leave free space on the datastore:
You can see Disk1 and Disk2 (green and orange segments) occupy only 10 GB each (their actual data), and the rest of the 60 GB datastore is still free for growth. Thin provisioning is great for maximizing your storage utilization and avoiding wasted space from over-provisioning.
However, thin provisioning comes with trade-offs. One concern is performance: when new data is written and the thin disk needs to expand, the system has to find and allocate new blocks on the datastore, and if those blocks haven’t been used before, they may need to be zeroed (just like the lazy-zeroed thick case) before the write can proceed. This can make writes slower at the moments when expansion happens (usually this is not hugely noticeable unless your disk is growing rapidly or your storage is under heavy load).
Another issue is management: thin disks make it easier to overcommit storage – meaning you might allocate more virtual disk capacity across your VMs than the physical datastore actually has, betting that not all VMs will use their full allotment. If you’re not carefully monitoring usage, you could accidentally fill up a datastore, which is a dangerous situation (if a datastore runs out of space, VMs can pause or crash). Thin provisioning also doesn’t automatically reclaim space when the VM deletes data; the VMDK file can stay large unless you manually punch holes (using VMware Tools “disk shrink” or Storage vMotion, etc., in certain circumstances). VMware has improved this with features like automatic UNMAP (space reclamation) in newer vSphere versions, but it’s something to be aware of.
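One simple way to keep thin provisioning honest is to compare how much space has been promised to VMs against what the datastore can physically hold. The datastore summary exposes the numbers you need; in the sketch below the 150% warning threshold is just an arbitrary example, and `content` is the pyVmomi connection from the beginning of the post:

```python
from pyVmomi import vim

def report_overcommit(content, warn_ratio=1.5):
    """Warn when the space promised to virtual disks exceeds warn_ratio x datastore capacity."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        committed = s.capacity - s.freeSpace              # space already consumed
        provisioned = committed + (s.uncommitted or 0)    # plus space thin disks may still claim
        ratio = provisioned / s.capacity if s.capacity else 0
        flag = "  <-- overcommitted!" if ratio > warn_ratio else ""
        print(f"{s.name:<20} capacity={s.capacity / 2**30:7.0f} GiB  "
              f"provisioned={provisioned / 2**30:7.0f} GiB ({ratio:.0%}){flag}")

# report_overcommit(content)   # 'content' from the connection sketch above
```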
In summary, use thin provisioning for flexibility and better space efficiency, especially in dev/test or when you need to stretch your storage, but keep an eye on your capacities. And for any critical or high-performance VMs, you might prefer thick provisioning to ensure predictable performance. If you do go thin, monitor your datastores and set up alerts so you’re not caught off guard by running out of space. Thin provisioning is not a set-and-forget feature – it requires a bit of management discipline to avoid trouble. As a rule of thumb, never rely on thin provisioning to magically expand your storage – always know your actual usage and growth patterns.
Snapshots vs. Backups
Beginners often mix up snapshots with backups, but they are very different tools in the virtualization world. It’s crucial to understand this difference to protect your data properly.
A backup is an independent copy of your data (or VM) that can be stored elsewhere and used to restore the data in case the original is lost or damaged. For example, using backup software to save a VM’s disk to an external location means you could later recover that VM on a different host, or reconstruct it even if the original host dies. Backups are typically part of a long-term data protection strategy – they might be taken nightly, weekly, etc., and kept for weeks or months. The key is that a backup is separate from the original; if your VM or server disappears, the backup is still safe (assuming your backup storage is safe).
A snapshot, on the other hand, is a feature that captures the state of a VM at a point in time – kind of like a freeze frame. In VMware, when you take a snapshot of a VM, the hypervisor creates a delta disk file to record changes going forward, and optionally a memory state file if you snapshot the RAM. The original VMDK is preserved as it was, and any new writes go to the delta. This allows you to revert the VM to the snapshot state if needed. However, snapshots are not independent copies of the VM. They rely on the original disk and are stored with the VM on the same datastore. If the underlying storage is lost or the VM is deleted, the snapshots go with it. Think of a snapshot as a quick safety net before you do something risky on a VM: for example, take a snapshot, then apply an update or configuration change. If the change messes things up, you can rollback (revert) to the snapshot and the VM will be exactly how it was before. This is extremely useful, but it’s a short-term mechanism.
The important myth to debunk is “snapshots = backups.” Snapshots are not backups. You cannot use a snapshot as a reliable recovery point if the VM is gone. If the entire host or datastore fails, your snapshots won’t help, because they lived on the same storage. Also, snapshots are not meant to be kept long-term. VMware snapshots are known to grow in size and can even hurt performance if left for too long. They are a temporary convenience, not a backup solution.
So what should a beginner remember? Use regular backups (with proper backup software or VMware’s built-in backup tools) to protect your VMs and data. Backups will save your bacon when disaster strikes, allowing you to restore VMs on any infrastructure. Use snapshots sparingly – for instance, before making a system change or updating software, take a snapshot so you can quickly undo the change if needed. But once you verify everything is okay, delete the snapshot to merge changes and free up space. Never rely on a VMware snapshot alone as your disaster recovery plan. In fact, a best practice is to never leave snapshots running for more than a short period (hours or a few days at most). If you need a longer-term point-in-time backup, then take a real backup.
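To catch forgotten snapshots before they grow out of control, you can walk each VM's snapshot tree and flag anything older than a few days. A rough sketch, reusing the earlier pyVmomi connection (the three-day threshold is just an example):

```python
from datetime import datetime, timedelta, timezone
from pyVmomi import vim

def report_old_snapshots(content, max_age_days=3):
    """Print any snapshot older than max_age_days across all VMs in the inventory."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.snapshot is None:                      # VM has no snapshots at all
            continue
        stack = list(vm.snapshot.rootSnapshotList)   # walk the whole snapshot tree
        while stack:
            snap = stack.pop()
            if snap.createTime < cutoff:
                print(f"{vm.name}: snapshot '{snap.name}' from {snap.createTime:%Y-%m-%d} "
                      f"is older than {max_age_days} days")
            stack.extend(snap.childSnapshotList)

# report_old_snapshots(content)   # 'content' from the connection sketch above
```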
In summary: Backups are your true recovery plan, while snapshots are a handy temporary safety net. Both have their place, but they are not interchangeable. Make sure you implement a solid backup routine for your VMware environment (and test those backups!), and treat snapshots as a convenience for administrators, not as data protection.
Logs are important – make sure not to lose them
Logs are an administrator’s best friend when it comes to diagnosing problems in an ESXi environment. VMware ESXi and vCenter generate various log files (for example, vmkernel logs, VM logs, etc.) that record what’s happening under the hood. If something goes wrong – say a host crashes or a VM has an error – the logs are often the only clue to understanding the issue. For beginners, it might not be obvious how VMware handles logs, and there’s a particular gotcha if you run ESXi on certain types of media.
By default, ESXi will store its logs in a location called the scratch partition. On a typical server with local disks, the scratch partition resides on disk. However, many people (especially in lab or budget setups) install ESXi on a small USB stick or SD card in the server, instead of a full-fledged drive. When ESXi runs from flash media like that, it often doesn’t have a persistent scratch partition on the USB/SD (since those media are small and also VMware tries to minimize writes to them to prolong their life). In such cases, ESXi uses a RAM disk for scratch space. This means the logs are essentially being kept in memory (specifically, a 512 MB RAM disk for ESXi logs). The big caveat here is that if you reboot the host, those logs stored on the RAM disk are gone (wiped, since RAM doesn’t persist through reboots). So, if your ESXi is running from a USB stick and it crashes and reboots, the logs that might tell you why it crashed could be lost by the time it comes back up! Even on a normal installation, if you don’t configure log persistence or remote logging, you could accidentally lose logs that get rotated or cleared out.
To avoid this, make sure your logs are being saved to a persistent location. There are a couple of ways to do this. One is to configure ESXi to use a datastore for its scratch location (for example, point it to a local or shared datastore so logs get written to disk). Another convenient solution, especially if you have vCenter, is to use a central syslog server. VMware vCenter Server (the Appliance, and even Windows vCenter in older versions) includes a syslog collector service that can aggregate logs from your ESXi hosts. In vCenter Server 6.x and above, the syslog feature is built-in – you just need to configure your ESXi hosts to send their logs to the vCenter’s syslog IP or hostname. There are also third-party and open-source syslog servers you can use for this purpose. The key point is: don’t ignore your logs. Set up remote logging, or at least ensure logs go to a disk that isn’t ephemeral.
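You can also verify where each host is actually sending its logs by reading the `Syslog.global.logHost` advanced setting. The read-only sketch below reuses the pyVmomi connection from earlier; to change the value you can use the same advanced-settings mechanism, or run `esxcli system syslog config set --loghost='udp://your-syslog-host:514'` followed by `esxcli system syslog reload` on the host itself (the syslog hostname here is a placeholder):

```python
from pyVmomi import vim

def report_syslog_targets(content):
    """Show the remote syslog target configured on each ESXi host (empty means logs stay local)."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        try:
            opts = host.configManager.advancedOption.QueryOptions("Syslog.global.logHost")
            value = opts[0].value if opts else None
        except vim.fault.InvalidName:
            value = None
        print(f"{host.name}: Syslog.global.logHost = {value or '(not set)'}")

# report_syslog_targets(content)   # 'content' from the connection sketch above
```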
Additionally, never manually delete log files unless you really know what you’re doing. Some beginners, when trying to free up space, might be tempted to delete logs. This is generally not a good idea, as it can make troubleshooting impossible after the fact. VMware will rotate logs on its own (logs are rotated automatically once they reach a certain size). If logs are consuming a lot of space, it might indicate another issue (like something constantly erroring in a loop). Check with VMware support or forums before purging logs by hand.
In summary, configure your ESXi logging so that logs are retained either on stable storage or sent to vCenter/another syslog server. That way, when something goes wrong, you have the forensic evidence to understand it. It’s much harder to fix a problem when you have zero clues about what happened. Logging might not be the flashiest topic, but it’s definitely something to set up properly even in a small VMware deployment.
Always check the networking
Networking in a virtual environment is just as important as in physical, and vSphere offers a lot of flexibility in how you configure virtual networks. As a newcomer, it’s easy to make mistakes or overlook best practices with networking, which can lead to performance bottlenecks or even outages. So, a big thing you should always keep in mind is to plan and check your networking configuration.
One piece of advice is to create a network design (roadmap) for your VMware environment. This means think through the different types of network traffic you will have and separate them appropriately. Typical VMware setups will have at least a management network (for host management and vCenter/ESXi communication), a VM network (for the actual virtual machine traffic, which could be several different VLANs depending on your VMs), and possibly dedicated networks for vMotion, for storage (if using iSCSI/NFS or vSAN, for instance), and for fault tolerance logging or backup traffic. Each of these functions should ideally run on separate VLANs or even physically separate NICs if possible. This segregation prevents, say, a burst of VM traffic from overwhelming your vMotion or management traffic. It also adds security (your management network can be isolated from the VM networks, etc.).
When configuring virtual switches (vSwitches or distributed switches) and port groups in ESXi, use consistent naming and settings across all hosts. For example, if you have a VLAN for “Production VMs” and you create a port group for it on one host, make sure to create that same port group (with the same name and VLAN ID) on all hosts that will need to run those VMs. Consistency is key. If you don’t keep things identical, you might find that vMotion fails because the destination host doesn’t have a matching network, or a failover doesn’t work as expected. It can become a mess if each host has slightly different network setups. So document your network design and apply it uniformly.
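A quick way to spot configuration drift is to compare port group names and VLAN IDs on every host against a reference host. The sketch below assumes standard vSwitches (distributed switches keep port groups consistent by design) and reuses the pyVmomi connection from earlier:

```python
from pyVmomi import vim

def report_portgroup_drift(content):
    """Compare (port group name, VLAN ID) pairs on every host against the first host found."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    hosts = list(view.view)
    if not hosts:
        return

    def portgroups(host):
        return {(pg.spec.name, pg.spec.vlanId) for pg in host.config.network.portgroup}

    reference = portgroups(hosts[0])
    print(f"Reference host: {hosts[0].name} ({len(reference)} port groups)")
    for host in hosts[1:]:
        pgs = portgroups(host)
        missing, extra = reference - pgs, pgs - reference
        if missing or extra:
            print(f"{host.name}: missing {sorted(missing)}, extra {sorted(extra)}")
        else:
            print(f"{host.name}: matches the reference host")

# report_portgroup_drift(content)   # 'content' from the connection sketch above
```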
Pay attention to your physical NIC assignments as well. Most servers have multiple physical NIC ports – you’ll want to spread your networks across these for redundancy. For instance, you might team two NICs for your management and VM traffic VLANs, and team another two NICs for storage and vMotion (with VLANs separating the traffic). Ensure your switches are configured for the VLANs and trunking properly if you’re tagging VLANs in VMware. A common mistake is to have a VLAN mismatch or a trunk port not allowing the needed VLAN, causing a network to be unreachable and leaving you scratching your head. Always verify the physical switch port configurations match what the ESXi host expects.
Another networking tip: don’t change things on the fly without a plan. If you start renaming port groups, moving cables, or reassigning NICs haphazardly, you can easily lose track of what’s what. For example, swapping two network cables on the back of a host without updating your mapping documentation can mean the management network ends up on the wrong VLAN or a disconnected NIC, potentially cutting off access to that host. Changes should be made methodically and documented.
Ultimately, the goal is that if one of your ESXi hosts goes down or needs maintenance, the remaining hosts can seamlessly take over the VMs – and part of that means the networking for those VMs (and for vMotion/HA) is intact and consistent. If you’ve carefully set up separate networks for each purpose and kept them identical across hosts, you’ll be able to use VMware features effectively to mitigate issues (for example, vMotion VMs away before downtime, or HA restarting VMs on another host during a failure). But if the networking isn’t solid, those features might not work when you need them most.
In summary, plan your virtual network as if it were a physical network – isolate critical traffic, double-check all configurations, and keep everything documented and consistent. It will save you from unpredictable headaches down the road.
Choosing the right virtual network adapter for VMs
One more networking topic for beginners is the choice of virtual network adapter for your virtual machines. When you create a VM in VMware, you can choose what type of network interface the VM will have (e.g., E1000, VMXNET3, etc.). This isn’t a physical NIC, but an emulated or paravirtual device presented to the guest OS. The type of adapter can affect performance and compatibility. Here are the common options and when to use them:
- VMXNET3 – This is VMware’s optimized paravirtualized network adapter. It’s designed for high performance and low overhead. VMXNET3 supports advanced features like multiqueue, large receive offload, IPv6 offloads, and MSI/MSI-X interrupt delivery. It does not emulate a specific real-world NIC; instead, it requires the VMware Tools drivers inside the guest OS. VMXNET3 is the go-to choice for modern operating systems (Windows 7/Windows Server 2008 and later, Linux with modern kernels, etc.) once you have VMware Tools installed. If performance is a priority, you’ll want to use VMXNET3. Just remember to install VMware Tools (or open-vm-tools for Linux) in the guest, because without the proper driver the VM won’t recognize this adapter. Essentially all VM hardware versions since vSphere 4 support VMXNET3, so it’s available on any reasonably current setup.
- E1000/E1000e – These are emulated versions of Intel Ethernet adapters. The E1000 emulates an Intel 82545EM Gigabit Ethernet NIC (a fairly old but widely supported gigabit NIC), and the E1000e emulates a newer Intel 82574 Gigabit NIC. The big advantage of E1000/E1000e is that almost every operating system has built-in drivers for them (since the hardware they emulate is common). For example, when you install Windows or Linux, it will usually automatically detect an “Intel Pro/1000” network card and have a driver ready. This makes it easy to get networking running even before VMware Tools is installed. However, the E1000/E1000e adapters have more overhead than VMXNET3 because every packet has to be processed by the hypervisor to emulate the physical card. They also don’t support some of the offloading and advanced features that VMXNET3 does, which can result in higher CPU usage for the same network throughput. In practice, you might use E1000/e for older guest OSes that don’t support VMXNET3, or during OS installation if you can’t slipstream the VMXNET driver. For modern OSes, you’d typically switch to VMXNET3 after installing VMware Tools for better performance.
- Vlance (AMD 79C970) – This is an emulated AMD PCnet-II (PCnet32) 10/100 Mbps NIC. It’s a legacy virtual NIC that VMware includes mainly for compatibility with very old operating systems (think Windows NT, early Windows 2000, or some DOS/Novell NetWare type VMs). Most 32-bit operating systems from the early 2000s or late 90s will recognize this “AMD Lance” adapter out of the box. However, it’s limited to 100 Mbps and has poor performance by today’s standards. You generally would not use Vlance unless you have a guest that cannot use E1000 or VMXNET3. In modern usage, it’s almost never seen except in scenarios where you’re virtualizing something truly ancient that has no other NIC driver options. (VMware’s “Flexible” adapter mode in older versions could automatically present as Vlance until Tools were installed, then switch to VMXNET – but that’s historical.)
For most people starting out with VMware in 2024, your best bet is: use VMXNET3 for all new VMs (make sure to install VMware Tools in the guest to get the driver). It will give you the best network performance and capabilities. If you encounter a situation where the OS doesn’t yet have VMware Tools (like during a network install of an OS) and it doesn’t detect a network with VMXNET3, you might temporarily switch the VM’s adapter to E1000 so the installer has network access, then later switch to VMXNET3 after Tools is in place. But those cases are rare now, since VMware Tools can easily be installed after the initial OS setup even without network access.
In summary, VMXNET3 vs E1000 is the main decision. VMXNET3 is better performance-wise, E1000 is there for broad compatibility. And Vlance is basically only for legacy support. Choosing the right adapter ensures you get the most out of your VM’s networking and avoid any unnecessary bottlenecks.
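If you inherit an existing environment and want to know which adapter types are actually in use, every virtual NIC shows up as a typed device on the VM. A small inventory sketch, again assuming the pyVmomi connection from the beginning of the post:

```python
from pyVmomi import vim

ADAPTER_NAMES = {
    vim.vm.device.VirtualVmxnet3: "VMXNET3",
    vim.vm.device.VirtualE1000e: "E1000e",
    vim.vm.device.VirtualE1000: "E1000",
    vim.vm.device.VirtualPCNet32: "Vlance (PCNet32)",
}

def report_vm_nics(content):
    """List every VM's virtual NIC type so legacy E1000/Vlance adapters stand out."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        for dev in vm.config.hardware.device:
            if not isinstance(dev, vim.vm.device.VirtualEthernetCard):
                continue
            label = next((name for cls, name in ADAPTER_NAMES.items() if isinstance(dev, cls)),
                         type(dev).__name__)
            print(f"{vm.name}: {dev.deviceInfo.label} -> {label}")

# report_vm_nics(content)   # 'content' from the connection sketch above
```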
Conclusion
Starting with VMware can be a bit daunting, but it’s a rewarding journey. In this article, we covered a few fundamental areas that every beginner should know: understanding VMware’s terminology, the limitations of the free ESXi version, the importance of using shared storage for advanced features, the differences between thick and thin provisioning of virtual disks, why snapshots are not a substitute for backups, the need to preserve and manage your logs, and the importance of planning your network configuration (including picking the right virtual NICs for your VMs). These insights should give you a solid foundation and help you avoid some common pitfalls when building out your VMware environment.
Remember, everyone starts somewhere – even seasoned VMware engineers were beginners once and learned from experience (and the occasional mistake!). Don’t be afraid to experiment (in a lab environment), ask questions, and keep learning. With a solid grasp of these fundamentals, you’re well on your way to building a reliable and efficient VMware setup. Good luck with your VMware journey, and happy virtualizing!