I would strongly advise sticking with your bare-metal installation. So yes, it's possible, if you are willing to take money in your hands and sacrifice it to the hardware-upgrade daemons. I operate two of those boxes, but mine have E3 CPUs with 16 GB each and an LSI controller that I forward to my DSM 6.0.2 instances. So why bother with ESXi at all, especially if the only things you gain are disadvantages? Apparently your goal can't be to run concurrent VMs on the same host. SMART information: RDM drives do not provide SMART data because VMware only implements a subset of the SATA/SCSI command set, so no SMART information will be available for your drives. And without a VT-d capable CPU you won't be able to add an HBA controller and forward it to your XPE VM via DirectPath I/O. RAM: 6 GB of total RAM is not really much, considering that ESXi already wants a share of it for itself.
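To make the SMART point concrete, here is a rough sketch of what you'd see from inside the DSM guest using smartmontools' `smartctl` (device paths are examples, and the exact messages vary by version):

```shell
# Inside the DSM guest. On a physical-RDM disk, SMART queries usually
# fail because ESXi's virtual SCSI layer does not pass the required
# ATA commands through to the physical drive:
smartctl -A /dev/sda    # typically reports SMART as unavailable

# On a disk behind a VT-d passed-through HBA, the guest talks to the
# controller directly, so full SMART attributes come back:
smartctl -A /dev/sdb
```

This is why people who care about drive health monitoring under ESXi go the HBA-passthrough route instead of RDM.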
The ESXi installation only started once I put the correct Adaptec RAID driver into virtual storage 2 and followed the steps in the instructions.
LSI SAS9201-16i 16-port SAS/SATA controller (the card is set to passthrough in ESXi), XPEnology 6. I followed the instructions in 'How to install and configure an ESXi 6.5 host', but after installing, my system doesn't find the hard disk to boot from. 2x Xeon L5640 2.27 GHz, 64 GB 1333 MHz RAM, 2x 250 GB SanDisk NGFF SSDs. Unfortunately, this is where the fun starts. I am able to create the array using the MegaRAID BIOS, so I know that the firmware is functional. CPU: a Passmark score of roughly 2500 is not really much if you plan to run concurrent VMs next to your XPE installation. The SATA controller is set up as passthrough in ESXi 6.5. With the RAID key installed (and properly configured with the iMR BIOS from Nov-2010), ESXi identified the controller as an 'LSI Logic / Symbios Logic MegaRAID SKINNY Controller'. You would not want to lose your already limited resources, would you? Why would anyone want to migrate from a bare-metal installation to an ESXi installation with such limited resources? If I understood right, your setup has a Celeron G1610 with 6 GB RAM. NOK: RAID controller mode activated in the BIOS (disks not visible in the ESXi administration interface), compatible with my current settings (RAID controller activated) -> issue: no disks visible. Has somebody tried the same config (settings), with the RAID controller activated in the BIOS, and gotten the disks visible? Any advice?
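As a first sanity check before fighting with passthrough, you can confirm that the host actually sees the controller from the ESXi shell (standard `esxcli`; the grep pattern is just an example for an LSI card):

```shell
# On the ESXi host (SSH enabled): list PCI devices and look for the
# RAID/HBA controller by vendor name:
esxcli hardware pci list | grep -i -B 2 -A 8 lsi

# Passthrough itself is toggled per device in the Host Client UI
# (Manage > Hardware > PCI Devices) and requires a host reboot.
```

If the controller does not show up here at all, no amount of passthrough configuration will help; that points back at the BIOS mode or the card itself.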
OK: AHCI mode activated in the BIOS (disks visible in the administration interface), but not compatible with my current settings (RAID controller activated) -> issue: data not accessible by DSM (virtual). Tried several times to install ESXi (6.0 / 6.5), custom HP version (SD & USB key): OK, no issue, ESXi properly installed. ESXi installation -> can go on a USB key or the internal SD card. DSM disks (4 HDDs: 2 in RAID mode, 1 data, 1 SSD connected to the internal CD-ROM plug, not used until now) -> reuse of the disks without migration (RDM mode needed) + RAID controller activated in the BIOS (as today with the bare-metal DSM 6.0.2).
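For the "reuse of disks without migration" part, the usual approach is to create an RDM mapping file per disk on the ESXi host and attach it to the XPEnology VM as an existing disk. A rough sketch (device IDs, datastore, and folder names below are placeholders, not from this thread):

```shell
# On the ESXi host shell: list the physical disks to find their
# naa./t10. identifiers:
ls -l /vmfs/devices/disks/

# Create a physical-compatibility RDM mapping file for one disk
# (-z = physical mode; -r would create a virtual-mode RDM):
vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX \
  /vmfs/volumes/datastore1/xpenology/disk1-rdm.vmdk
```

Repeat per disk, then add each resulting `.vmdk` to the VM. Keep in mind the SMART limitation mentioned above: physical RDM still goes through ESXi's virtual SCSI layer.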
I would like to switch to a VM with ESXi, but taking into account several constraints/specifications:
I have my Gen8 ProLiant server (Celeron G1610, 6 GB RAM), which works fine with the new loader in a bare-metal config.