FreeNAS Mini XL, 8 bay Mini-ITX NAS

Catching up on email, I saw a newsletter from iXsystems announcing the FreeNAS Mini XL (the irony).  On the new FreeNAS Mini page it looks just like the FreeNAS Mini, but taller to accommodate 8 bays.

Available on Amazon starting at $1,500 with no drives.

Here’s the Quick Start Guide and Data Sheet.

The pictures show what appears to be the ASRock C2750D4I motherboard, which has an 8-core Atom / Avoton processor.  With the upcoming FreeNAS 9.10 (based on FreeBSD 10) it should be able to run the bhyve hypervisor as well (at least from the CLI; a bhyve GUI might have to wait until FreeNAS 10), meaning a nice all-in-one hypervisor with ZFS without the need for VT-d.  This may end up being a great successor to the HP Microserver for those wanting to upgrade with a little more capacity.

The case is the Ablecom CS-T80, so I imagine we'll start seeing it from Supermicro soon as well.  According to Ablecom it has 8 hotswap bays plus 2 x 2.5″ internal bays, and still manages to have room for a slim DVD/Blu-ray drive.

ablecom_cs_t80
It's really great to see an 8-bay Mini-ITX NAS case that's nicer than the existing options out there.  I hope the FreeNAS Mini XL will get an option for a more powerful motherboard, even if it means using up the PCI-E slot with an HBA.  I'm not really a fan of the Marvell SATA controllers on that board, and of course a Xeon D would be nice.

 

 

VMware vs bhyve Performance Comparison

Playing with bhyve

Here's a look at Gea's popular all-in-one design, which allows VMware to run on top of ZFS on a single box using a virtual 10GbE storage network.  The design requires an HBA and a CPU that supports VT-d so that the storage can be passed directly to a guest VM running a ZFS server (such as OmniOS or FreeNAS).  A virtual storage network is then used to share the storage back to VMware.

vmware_all_in_one_with_storage_network
VMware and ZFS: All-In-One Design

bhyve can simplify this design: since it runs under FreeBSD, the host is already a ZFS server.  This not only simplifies the design, but it could potentially allow a hypervisor to run on simpler, less expensive hardware.  The same design under bhyve eliminates the need for a dedicated HBA and a CPU that supports VT-d.

freebsd_bhyve
Simpler bhyve design
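Getting a guest running on FreeBSD 10.x is simple enough to sketch here.  This assumes the vmrun.sh example script that ships with the base system; the pool, image path, and interface names are examples, not anything from the benchmarks below.  Note that vmrun.sh uses bhyveload and so only boots FreeBSD guests; Linux guests (like the Ubuntu VM tested below) needed the sysutils/grub2-bhyve port at the time.

```
# Load the bhyve kernel module
kldload vmm

# Give the guest networking: a tap device bridged to the physical NIC
ifconfig tap0 create
ifconfig bridge0 create
ifconfig bridge0 addm igb0 addm tap0 up

# Boot a FreeBSD guest from a disk image on a ZFS dataset:
# -c vCPUs, -m memory, -t tap device, -d disk image, final arg is the VM name
sh /usr/share/examples/bhyve/vmrun.sh -c 4 -m 1024M -t tap0 \
    -d /tank/vms/guest.img guest0
```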

I've never understood the advantage of Type-1 hypervisors (such as VMware and Xen) over Type-2 hypervisors (like KVM and bhyve).  Type-1 proponents say the hypervisor runs on bare metal instead of an OS... I'm not sure how VMware isn't considered an OS, except that it's a purpose-built OS and probably smaller.  It seems you could take a Linux distribution running KVM and take away features until at some point it becomes a Type-1 hypervisor.  Which is all fine, but it could actually be a disadvantage if you wanted some of those features (like ZFS).  A Type-2 hypervisor that supports ZFS appears to have a clear advantage (at least theoretically) over a Type-1 for this type of setup.

In fact, FreeBSD may be the best virtualization / storage platform.  You get ZFS, bhyve, and also jails.  You really only need to run bhyve when virtualizing a different OS.
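As a quick illustration of how light jails are, here's a minimal /etc/jail.conf definition (the name, path, and addresses are made up):

```
# /etc/jail.conf -- a minimal jail; the root at /usr/jails/web could be a ZFS dataset
web {
    path = "/usr/jails/web";
    host.hostname = "web.local";
    interface = "igb0";
    ip4.addr = "192.168.1.50";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

Start it with jail -c web and you have an isolated FreeBSD environment with no hypervisor overhead at all.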

bhyve is still pretty young, but I thought I’d run some tests to see where it’s at…

Environments

This is running on my X10SDV-F Datacenter in a Box Build.

In all environments the following parameters were used:

  • Supermicro X10SDV-F
  • Xeon D-1540
  • 32GB ECC DDR4 memory
  • IBM ServerRaid M1015 flashed to IT mode.
  • 4 x HGST Ultrastar 7K3000 2TB enterprise drives in RAID-Z
  • One Intel DC S3700 100GB over-provisioned to 8GB used as the log device.
  • No L2ARC.
  • Compression = LZ4
  • Sync = standard (unless specified).
  • Guest (where tests are run): Ubuntu 14.04 LTS, 16GB disk, 4 cores, 1GB memory.
  • OS defaults are left as-is; I didn't try to tweak the number of NFS servers, sd.conf, etc.
  • My tests fit inside the ARC.  I ran each test 5 times on each platform to warm up the ARC, and the results are the average of the next 5 test runs (the sysbench invocations are sketched after this list).
  • I only tested an Ubuntu guest because it's the only distribution I run in quantity besides FreeBSD; a more thorough test would include other operating systems.
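For reference, the benchmarks below are sysbench runs along these lines.  This is a sketch: the thread counts varied per chart, and the MariaDB credentials and sizes shown here are placeholders, not the exact parameters I used.

```
# CPU: finding primes
sysbench --test=cpu --cpu-max-prime=20000 --num-threads=4 run

# OLTP against MariaDB (the sbtest database must exist; credentials are placeholders)
sysbench --test=oltp --mysql-user=root --mysql-password=secret \
    --oltp-table-size=1000000 prepare
sysbench --test=oltp --mysql-user=root --mysql-password=secret \
    --num-threads=4 run

# File I/O: rndrd, rndwr, rndrw, seqrd, seqwr, seqrewr map to the charts below
sysbench --test=fileio --file-total-size=4G prepare
sysbench --test=fileio --file-total-size=4G --file-test-mode=rndrd run
```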

The environments were setup as follows:

1 – VM under ESXi 6 using NFS storage from FreeNAS 9.3 VM via VT-d

  • FreeNAS 9.3 installed under ESXi.
  • FreeNAS is given 24GB memory.
  • HBA is passed to it via VT-d.
  • Storage shared to VMware via NFSv3, virtual storage network on VMXNET3.
  • Ubuntu guest given VMware paravirtual drivers.

2 – VM under ESXi 6 using NFS storage from OmniOS VM via VT-d

  • OmniOS r151014 LTS installed under ESXi.
  • OmniOS is given 24GB memory.
  • HBA is passed to it via VT-d.
  • Storage shared to VMware via NFSv3, virtual storage network on VMXNET3.
  • Ubuntu guest given VMware paravirtual drivers.

3 – VM under FreeBSD bhyve

  • bhyve running on FreeBSD 10.1-RELEASE
  • Guest storage is a file image on a ZFS dataset.

4 – VM under FreeBSD bhyve sync always

  • bhyve running on FreeBSD 10.1-RELEASE
  • Guest storage is a file image on a ZFS dataset.
  • Sync=always (see the setup sketch below).
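A sketch of how the bhyve guest storage was laid out in environments 3 and 4; the pool and dataset names are examples:

```
# Dataset to hold guest disk images, with the same LZ4 compression used everywhere
zfs create tank/vms
zfs set compression=lz4 tank/vms

# Raw file image used as the guest's disk
truncate -s 16G /tank/vms/ubuntu.img

# Environment 4 only: force every write to be synchronous
zfs set sync=always tank/vms
```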

Benchmark Results

MariaDB OLTP Load

This test is a mix of CPU and storage I/O.  bhyve (yellow) pulls ahead in the 2-thread test, probably because it doesn't have to issue a sync after each write.  However, it falls behind on the 4-thread test even with that advantage, probably because it isn't as efficient at handling CPU processing as VMware (see the next chart on finding primes).
sysbench_oltp

Finding Primes

Finding prime numbers with a VM under VMware is significantly faster than under bhyve.

sysbench_primes

Random Read

bhyve has an advantage, probably because it has direct access to ZFS.

sysbench_rndrd

Random Write

With sync=standard, bhyve has a clear advantage.  I'm not sure why VMware can outperform bhyve with sync=always.  I am merely speculating, but I wonder if VMware over NFS is coalescing smaller writes into larger blocks (maybe 64K or 128K) before sending them to the NFS server.

sysbench_rndwr

Random Read/Write

sysbench_rndrw

Sequential Read

Sequential reads are faster with bhyve’s direct storage access.

sysbench_seqrd

Sequential Write

What not having to sync every write will gain you...

sysbench_seqwr

Sequential Rewrite

sysbench_seqrewr

 

Summary

VMware is a very fine virtualization platform that's been well tuned.  All that overhead of VT-d, virtual 10GbE switches for the storage network, VM storage over NFS, etc. is not hurting its performance, except perhaps on sequential reads.

For as young as bhyve is, I'm happy with its performance compared to VMware, though it appears to be slower on the CPU-intensive tests.  I didn't intend to compare CPU performance, so I haven't run a wide enough variety of tests to see where the difference comes from, but VMware appears to have an advantage there.

One thing that is not clear to me is how safe running sync=standard is on bhyve.  The ideal scenario would be honoring fsync requests from the guest; however, I'm not sure bhyve has that kind of insight into the guest.  Probably the worst case under sync=standard is losing the last 5 seconds of writes, and even that risk can be mitigated with battery backup.  With standard sync there's a lot of performance to be gained over VMware with NFS.  Even if you run bhyve with sync=always it does not perform badly, and it even outperforms the VMware all-in-one design on some tests.

The upcoming FreeNAS 10 may be an interesting hypervisor + storage platform, especially if it provides a GUI to manage bhyve.

 

Supermicro X10SDV-F Build; Datacenter in a Box

I don’t have room for a couple of rackmount servers anymore so I was thinking of ways to reduce the footprint and noise from my servers.  I’ve been very happy with Supermicro hardware so here’s my Supermicro Mini-ITX Datacenter in a box build.

Supermicro Microtower

Supermicro X10SDV Motherboard

Unlike most processors, the Xeon D is an SoC (System on Chip), meaning the CPU is built into the motherboard.  Depending on your compute needs, you've got a lot of pricing / power flexibility with the Mini-ITX Supermicro X10SDV motherboards and their Xeon D SoC CPUs, ranging from a budget build of 2 cores to a ridiculous 16 cores rivaling high-end Xeon E5 class processors!

How many cores do you want?  CPU/Motherboard Options

x10sdv-4c-tln2f_spec
Supermicro board with fan

A few things to keep in mind when choosing a board.  Some come with a fan (normally indicated by a + after the core count) and some don't.  I suggest getting one with a fan unless you're putting some serious airflow through the heatsink (such as in a 1U server).  I got one without a fan and had to do a Noctua mod (below).

Many versions of this board are rated for a 7-year lifespan, which means they have components designed to last longer than most boards!  Usually computers go obsolete before they die anyway, but it's nice to have that option if you're looking for a permanent solution.  A VMware / NAS server that'll last you 7 years isn't bad at all!

In the last 5 characters of the model number you'll see two options, "-TLN2F" and "-TLN4F"; this refers to the number of Ethernet ports (N2 comes with 2 x gigabit ports, and N4 usually comes with 2 x gigabit plus 2 x 10 gigabit ports).  10GbE ports may come in handy for storage, and having 4 ports may be useful if you're going to run a router VM such as pfSense.

 

I bought the first model, known simply as the "X10SDV-F", which comes with 8 cores and 2 gigabit network ports.  This board looks like it's designed for high density computing; it's like cramming dual Xeon E5's into a Mini-ITX board.  The Xeon D-1540 will outperform the Xeon E3-1230 V3 in most tests and can handle up to 128GB memory, and the board has two NICs (it also comes in a model with two additional 10GbE ports for four NICs), IPMI, 6 SATA3 ports, a PCI-E slot, and an M.2 slot.

Supermicro X10SDV-F Motherboard
Supermicro X10SDV-F

IPMI / KVM Over-IP / Out of Band Management

One of the great features of these motherboards is that you will never need to plug in a keyboard, mouse, or monitor.  In addition to the 2 or 4 normal Ethernet ports, there is one port off to the side: the management port.  Unlike HP iLO, this is a free feature on Supermicro motherboards.  The IPMI interface will get a DHCP address; you can download the free IPMIView software from Supermicro, or use the Android app to scan your network for the IP address.  Login as ADMIN / ADMIN (be sure to change the password).
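If you prefer a command line, ipmitool works against these BMCs as well.  A quick sketch (the IP address is an example, and user ID 2 is where the ADMIN account usually sits on Supermicro boards):

```
# List the BMC's user accounts on channel 1
ipmitool -I lanplus -H 192.168.1.20 -U ADMIN -P ADMIN user list 1

# Change the default ADMIN password
ipmitool -I lanplus -H 192.168.1.20 -U ADMIN -P ADMIN \
    user set password 2 'NewStrongPassword'

# Power control works even when the OS is down
ipmitool -I lanplus -H 192.168.1.20 -U ADMIN -P 'NewStrongPassword' \
    chassis power status
```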

Supermicro IPMI KVM over IP

You can remotely reset or power off the server, and even if the power is off you can power it on remotely.

 

Supermicro KVM

And of course you also get KVM over IP, which is so low level you can get into the BIOS, and you can even load an ISO file from your workstation to boot from over the network!

When I first saw IPMI I made sure all my new servers would have it.  I hate messing around with keyboards, mice, and monitors, and I don't have room for a hardware-based KVM solution.  This out-of-band management port is the perfect answer, and the best part is the ability to manage your server remotely.  I have used this to power on servers and load ISO files in California from Idaho.

I should note that I would not expose the IPMI port to the internet; make sure it's behind a firewall, accessible only through VPN.

Cooling issue | heatsink not enough

The first boot was fine, but it crashed after about 5 minutes while I was in the BIOS setup... after a few resets I couldn't even get it to POST.  I finally realized the CPU was getting too hot.  Supermicro probably meant for this model to be in a 1U case with good airflow.  The X10SDV-TLN4F costs a little extra, but it comes with a CPU fan in addition to the 10GbE network adapters, so keep that in mind if you're trying to decide between the two boards.

Noctua to the Rescue

I couldn't find a CPU fan designed to fit this particular socket, so I bought a 60mm Noctua NF-A6x25.

60MM Noctua FAN on X10SDV-F
60MM Noctua Fan

This is my first Noctua fan and I think it’s the nicest fan I’ve ever owned.  It came packaged with screws, rubber leg things, an extension cord, a molex power adapter, and two noise reducer cables that slow the fan down a bit.  I actually can’t even hear the fan running at normal speed.

Noctua Fan on Xeon D-1540 X10SDV-F
Noctua Fan on Xeon D-1540

There's not really a good way to screw the fan and the heatsink into the motherboard together, but I took the four rubber things and sort of tucked them under the heatsink screws.  This is a surprisingly secure fit; it's not ideal, but the fan is not going anywhere.

Supermicro CSE-721TQ-250B

This is what you would expect from Supermicro: a quality server-grade case.  It comes with a 250 watt 80 Plus power supply and four 3.5″ hotswap bays; the trays are the same as you would find on a 16-bay enterprise chassis.  It also comes with labels numbered 0 through 4, so you can choose to start labeling at 0 (the right way) or 1.  It is designed to fit two fixed 2.5″ drives: one on the side of the HDD cage, and the other on top in place of an optical drive.

The case is roomy enough to work in; I had no trouble adding an IBM ServerRAID M1015 / LSI 9220-8i.

CS721

 

I took this shot just to note that if you could figure out a way to secure an extra drive, there is room to fit three drives, or perhaps two drives plus an optical drive; you'd have to use a Y-splitter to power them.  I should also note that you could use the M.2 slot to add another SSD.

supermicro_x10sdv-f_sc721_opened

The case is pretty quiet; I cannot hear it at all over my other computers running in the same room, so I'm not sure exactly how much noise it makes.

This case reminds me of the HP Microserver Gen8 and is probably about the same size and quality, but I think it's a little roomier, and with Supermicro the IPMI is free.

Compared to the Silverstone DS380, the Supermicro CS721 is more compact.  The DS380 has the advantage of being able to hold more drives: it can fit 8 x 3.5″ or 2.5″ drives in hotswap bays plus an additional four 2.5″ fixed in a cage.  Between the two cases I much prefer the Supermicro CS-721 even with less drive capacity.  The DS380 has vibration issues with all the drives populated, and it's also not as easy to work with.  The CS-721 looks and feels much higher quality.

Storage Capacity

cs721_open_door
I loaded mine with two Intel DC S3700 SSDs and 4 x 6TB drives in RAID-Z (RAID-5); configured that way the case provides up to 18TB of storage, which is a good amount for any data hoarder wanting to get started.

I think the Xeon D platform offers great value with a great range of power and pricing options.  The prices on the Xeon D motherboards are reasonable considering the motherboard and CPU are combined; if you went with a Xeon E3 or E5 platform you'd be paying about the same or more to purchase them separately.  You'll be paying anywhere from $350 to $2,500 depending on how many cores you want.

Core Count Recommendations

For a NAS-only box such as FreeNAS, OmniOS+Napp-It, NAS4Free, etc., or a VMware all-in-one with FreeNAS and one or two light guest VMs, I'd go with a simple 2C CPU.

For a bhyve or VMware + ZFS all-in-one, I think the 4C is a great starter board; it will probably handle a lot more than most people need for a home server running a handful of VMs, including the ability to transcode with a Plex or Emby server.

From there you can get 6C, 8C, 12C, or 16C.  As you add cores the clock frequency starts to go down, so you don't want to go overboard unless you really do need those cores.  Also consider that you may prefer two or three smaller boards to allow failover instead of one powerful server.

What Do I Run On My Server Under My Desk?

Other Thoughts

cs721_front
I'm pretty happy with the build; I really like how much power you can get into a microserver these days.  My build has 8 cores (16 threads) and 32GB memory (it can go up to 128GB!), and with 6TB drives in RAID-Z (RAID-5) I have 18TB of usable data (more with ZFS compression).  With VMware and ZFS you could run a small datacenter from a box under your desk.

 

NAS Server Build – ASRock C2750D4I & Silverstone DS380

The other day I got a little frustrated with my Gen 8 Microserver.  I was trying to upgrade ESXi to 5.5, but the virtual media feature kept disconnecting in the middle of the install due to not having an iLO 4 license.  I actually bought an iLO 4 enterprise license, but I have no idea where I put it!  What's the point of IPMI when you get stopped by licensing?  I hate having to physically plug in a USB key to upgrade VMware so much that I decided I'd just build a new server, which I honestly think is faster than messing around with getting an ISO image onto a USB stick.

Warning: I'm sorry to say that I cannot recommend the motherboard reviewed below.  I ended up having to RMA the board twice to get one that didn't crash, and the Marvell SATA controller was never stable long-term under load even after multiple RMAs, so I ran it without using those ports, which sort of defeated the reason I got the board in the first place.  Then in 2017 the board died shy of 3 years old, the shortest a motherboard has ever lasted me.  Generally I have been pretty happy with ASRock desktop boards, but this server board isn't stable enough for business or home use.  I have switched to Supermicro X10SDV motherboards for my home server builds.

Build List

ASRock C2750D4I Motherboard / CPU

C2750D4I-1(L)

Update 2014-05-11: Here's a great video review of the motherboard...

12 SATA ports!  This motherboard is perfect for ZFS, which loves having direct access to JBOD disks.  The Marvell SATA controllers did not show up in VMware initially; however, Andreas Peetz provides a package for adding unsupported drivers to VMware, and this worked perfectly.  It took me a couple of minutes to realize all you need to do is run three commands.
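The exact commands didn't survive in this copy of the post, but installing a community-supported VIB from Andreas's V-Front depot generally looked like this (the package URL is illustrative; grab the real one from the depot):

```
# Allow community-supported VIBs to be installed
esxcli software acceptance set --level=CommunitySupported

# Let the host fetch the package over HTTP
esxcli network firewall ruleset set -e true -r httpClient

# Install the driver package (illustrative URL -- see the V-Front depot for the actual VIB)
esxcli software vib install -v http://vibsdepot.v-front.de/vibs/sata-xahci/sata-xahci-1.10-1.x86_64.vib
```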

Update November 16, 2014: it turned out the issue below was caused by a faulty Marvell controller on the motherboard.  I ran FreeBSD (a supported OS) and the fault occurred there as well, so I RMAed the motherboard... I ended up getting a bad motherboard again, but after a second RMA everything is stable in VMware... so you can disregard the warning below.

Update March 12, 2015: My board continues to function okay, but some people are having issues with their drives working under VMware ESXi.  Read the comments for details.

Update August 23, 2014: ** WARNING: read this before you run those commands. **  I had stability issues after using the hack above to get the Marvell controllers to show up.  VMware started hanging as often as several times a day, requiring a system reboot, with this entry in the motherboard's event log: Critical Interrupt – I/O Channel Check NMI Asserted.  I swapped the Kingston memory out for Crucial memory on ASRock's HCL, but the issue persisted, so I can't recommend this board for VMware.  After heavy I/O tests ZFS also detected data corruption on two drives connected to the Marvell controllers.  I am pretty sure this is because VMware does not officially support these controllers, so the issue likely doesn't exist on operating systems that officially support the Marvell controller.

asrock_kvm_over_ip
IPMI (allows for KVM over IP).  After being spoiled by this on a Supermicro board, IPMI with KVM over IP is a must-have feature for me; I'll never plug a keyboard and monitor into a server again.

Avoton octa-core processor.  Normally I don't even look at Atom processors, but this is not your grandfather's Atom.  The Avoton supports VT-x, ECC memory, and AES instructions, and is a lot more powerful, at only 20W TDP.  This CPU Boss benchmark says it will probably perform similarly to the Xeon E3-1220L.  The Avoton can also go up to 64GB memory where the E3 series is limited to 32GB, making it a good option for VMware or for a high-performance ZFS NAS.  The Avoton does not support VT-d, however, so there is no passing devices directly to VMs.

My only two disappointments: there's no internal USB header on the board (I always install VMware on a USB stick, so right now there's a USB stick hanging off the back), and I wish they had used SFF-8087 mini-SAS connectors instead of individual SATA ports to cut down on the number of SATA cables.

Overall I am very impressed with this board and its server-grade features like IPMI.

Instead of going into more detail here, I'll just reference Patrick's review of the ASRock C2750D4I.

Alternative Avoton Boards

There are a few other options worth looking at.  The ASRock C2550D4I is the same board but quad-core instead of octa-core.  I almost bought this one, except I got the C2750 at a good price on SuperBiiz.

The Supermicro A1SAi-2750F (octa-core) and A1SAi-2550F (quad-core) are also good options if you don't need as many SATA ports or you're going to use a PCI-E SATA/SAS controller.  Supermicro's boards have the advantage of quad GbE ports and an internal USB header (not to mention USB 3.0), while sacrificing the number of SATA ports: only 2 SATA3 ports and 4 SATA2 ports.  These Supermicro boards use the smaller SO-DIMM memory.

 Silverstone DS-380: 8 hot-swap bay chassis

ds-380-and-asrock

 

The DS-380 has 8 hot-swap bays, plus room for four fixed 2.5″ drives for up to 12 drives.  As I started building this server I found the design was very well thought out.  Power button lockout (a necessity if you have kids), locking door, dust screens on fan intakes, etc.  The case is practical in that the designers cut costs where they could (like not painting the inside) but didn’t sacrifice anything of importance.

gen8_microserver_and_ds380_silverstone
HP Gen 8 Microserver (Left) next to Silverstone DS-380 (right)

A little larger than the HP Gen8 Microserver, but it can hold more than twice as many drives.  Also the Gen8 Microserver is a bit noisier.

ds-380-open

ds-380-tray-with-ssd
You'll notice above that from the top there is a set of two drives, then one drive by itself, then a set of five drives.  This struck me as odd at first, but it's actually that way by design: if you have a tall PCI card plugged into your motherboard (such as a video card), you can forfeit the 3rd drive from the top to make room for it.

The drive trays are plastic, obviously not as nice as a metal tray but not too bad either.  One nice feature is that screw holes on the bottom allow for mounting a 2.5″ drive such as an SSD.  That's well thought out!  Also, there's a clear plastic piece running along the left of each tray that carries the hard drive activity LED light to the front of the case (see video below).

Here’s the official Silverstone DS-380 site, and here’s a very detailed review of the DS-380 with lots of pictures by Lawrence Lee.

Storage

Using 4TB drives, the 8 bays would get you to 24TB with RAID-Z2 or RAID-6 (8 drives minus 2 for parity leaves 6 x 4TB = 24TB usable), and you'd still have the four fixed 2.5″ bays left for SSDs.

Virtual NAS

I run a virtualized ZFS server on OmniOS following Gea's Napp-in-one guide.  I deviate from his design slightly in that I run on top of VMDKs instead of passing the controllers to the guest VM (because the Avoton doesn't have VT-d).

ZIL – Seagate SSD Pro

A 120GB Seagate Pro SSD.  The ZIL (ZFS Intent Log) on a dedicated SLOG device is the real trick to high-performance random writes: because the SSD can land writes in capacitor-backed cache, it can safely acknowledge a write to the requesting application before the data is transferred out of RAM and onto spindles.
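Attaching the SSD as a dedicated log device is a one-liner; the pool name and OmniOS-style device name here are examples:

```
# Add the SSD as a dedicated SLOG device, then verify
zpool add tank log c2t1d0
zpool status tank
```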

So far…

I’m pretty happy with the custom build.  I think the Gen 8 HP Microserver looks more professional compared to the DS-380 which looks more like a DIY server.  But what matters is on the inside, and having access to IPMI when I need it without having to worry about licensing is worth something in my book.

M1015 HBA In the HP Gen8 Microserver

Here’s a quick overview on installing the IBM ServerRaid M1015 HBA (aka LSI SAS9220-8i) in the HP Gen8 Microserver.

ibm_serverraid_m1015_bracket

These cards can be bought for around $100 on eBay.  The HBA has two 6Gbps SAS ports (each port has 4 lanes, each lane is 6Gbps, giving a theoretical maximum of 24Gbps per port and 48Gbps using both ports).  A typical configuration for maximum performance is one lane to each drive using an SFF-8087 breakout cable; with two of these cables the card can run 8 drives.  You can run more drives with a SAS expander, but I haven't had a need to yet.  I typically flash it into IT (JBOD) mode.  This is a popular card for running ZFS, which is my use case.
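For reference, the IT-mode crossflash is usually done from a DOS or EFI boot disk with LSI's tools.  This is a rough sketch from memory, not a complete guide; follow a current walkthrough and note your card's SAS address (printed on a sticker) before you start:

```
# Wipe the IBM firmware (megarec), then reboot
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0

# Flash the LSI 9211-8i IT firmware (skipping -b mptsas2.rom leaves out the
# boot ROM, which speeds up POST if you don't boot from the HBA)
sas2flsh -o -f 2118it.bin

# Restore the card's original SAS address (placeholder shown)
sas2flsh -o -sasadd 500605bxxxxxxxxx
```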

gen8_hp_microserver_sas1

The picture above shows the original location of the 4-drive-bay SAS connector; you just need to move it to the HBA.  I didn't have to re-wire anything: there is plenty of slack in the cable, so I just pulled it over to the M1015 and plugged it in (below).

gen8_hp_microserver_sas2

On first boot all my drives were recognized, and VMware and all the guests booted up as normal.

hp_gen8_microserver_m1015_hba

Also, a few people have asked about mounting an extra drive in the ODD bay, here’s the power connection I think could be tapped into with a Y-splitter (below).

hp_microserver_odd_bay_power

Does this have an advantage over the Gen8 Microserver's B120i Smart Array controller?  For a lot of setups it probably offers no advantage.  I probably wouldn't do it in my environment except that I already have a couple of M1015s lying around.  Here's what you get with the M1015:

  • In IT mode drives are hot-swappable; no need to power down to swap out a bad drive.
  • The B120i only has two 6Gbps ports; the other two are 3Gbps.  The M1015 can run up to 8 lanes (10 if you also use the first two lanes from the B120i) at 6Gbps.  If you're using the server as a NAS you're more limited by the two 1Gbps NICs, so this shouldn't be an issue for most setups.
  • The M1015 is known to work with 4TB drives; the Microserver only supports up to 3TB.
  • VMware can be booted off a USB stick, but it needs at least one SATA drive to store the first VM's configuration, so whatever SATA controller that drive is on can't be used as a pass-through device.  So if you want to pass an HBA directly to a VM (which is typical for Napp-it all-in-one setups), you can pass the entire M1015 controller to a VM, giving it direct hardware access to the drives (requires a CPU with VT-d).

 

Installed Xeon E3-1230V2 in Gen8 HP Microserver

Gen8 HP Microserver
Thanks to a homeservershow forum member keeping track of prices, I ordered the HP Gen8 Microserver 1610T... of course, nobody wants to run VMware on a Celeron, so obviously the first thing to try is installing a Xeon processor.

Update: this post is about 3 years old, but HP hasn't updated their Microserver to support a newer generation of processors since then; I've moved to a Supermicro mini tower with an X10SDV motherboard.

Installing the Xeon E3-1230 V2 in the Microserver

Gen8 HP Microserver Motherboard
The HP Microserver's CPU is passively cooled, the heatsink is rated for a max TDP of 35W, and there's no port on the motherboard that I could find for an extra CPU fan.

The obvious option is the Xeon E3-1220L V2 at 17W but it’s expensive and hard to find, and only has 2 cores.

 

Xeon E3-1230v2 in the Microserver
I already have a Xeon E3-1230 V2 (69W), and for most people this is a better option because it's readily available and affordable (currently $234 at Amazon).  I figure worst case I could disable two of the four cores to bring it down toward 35W.

I've never applied my own thermal paste before so I'm not sure if that's the right amount, but that's how much I used.

First Boot…

Hey, it worked!

I thought it wise to at least go into the BIOS and disable 3.7GHz Turbo, so the max we’ll hit is 3.3GHz.

Boot screen

VMware ESXi booted just fine (I used the version provided by HP).  Now I've got hyper-threading and VT-d (DirectPath I/O) on a Gen8 Microserver!

VMWare Screenshot

And the temperature is doing just fine…

Temperature at idle

gen8_microserver_xeon_e3_cpu_load_test
CPU Load Test

10 minutes full load using “stress” on a VM.  All four cores clocking in at 3.292GHz.  You can see the temperatures bumped up but still within specifications.  Fan was still running at 51%.  Temperature inside my house is currently 84F so if it can survive a full load in this heat I’m not concerned about it running into problems. 
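For reference, the load was generated with something along these lines (the exact invocation isn't preserved here; the thread count matches the four cores):

```
# Install the stress tool on an Ubuntu guest
sudo apt-get install stress

# Peg all four cores for 10 minutes (600 seconds)
stress --cpu 4 --timeout 600
```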

Temperature at full load

 

Compatible Processors

(added July 28, 2013)

Here’s a list of processors I think would be good candidates.  I’ve excluded the Core i5 series because they don’t support ECC.

The stock processor is no different than the i3 except for clock speed and hyper-threading so I don’t think it’s worth the money to upgrade to an i3.

The main reasons to upgrade to a Xeon are the AES instruction set, VT-d, or more cores and a faster clock speed.  I think the best value currently is the Xeon E3-1230 V2.

Processor          GHz  TDP   Cores  HT   ECC  VT-x  VT-d  AES  Works
Xeon E3-1220 v2    3.1  69 W  4      No   Yes  Yes   Yes   Yes  Should
Xeon E3-1220L v2   2.3  17 W  2      Yes  Yes  Yes   Yes   Yes  Should
Xeon E3-1230 v2    3.3  69 W  4      Yes  Yes  Yes   Yes   Yes  Verified
Xeon E3-1265 v2    2.5  45 W  4      Yes  Yes  Yes   Yes   Yes  Should*
Core i3-3250T      3.0  35 W  2      Yes  No   Yes   No    No   Should
Core i3-3220T      2.8  35 W  2      Yes  No   Yes   No    No   Should
Celeron G1610T     2.3  35 W  2      No   Yes  Yes   No    No   Yes
Pentium G2020T     2.5  35 W  2      No   Yes  Yes   No    No   Yes

*Processors ending in a 5 have integrated HD graphics, I’m not sure if this will cause problems.

HP Microserver Gen8 Specs Released

hp_microserver_gen_8
Just saw on STH that the HP Microserver Gen8 specs have been released.

  • Intel Pentium G2020T (2 core, 2.5GHz, 3MB, 35W) OR Intel Celeron G1610T
  • 2GB ECC memory (up to 16GB, 2 slots)
  • One PCIe expansion slot.
  • Dual gigabit Ethernet ports (332i adapter)
  • Dynamic Smart Array B120i/ZM
  • 150W power supply
  • HP iLO4 remote management port
  • Tool-less maintenance
  • LED health-status light-bar
  • The RAID controller can handle hot-swap now?

My initial thoughts:

The Celeron G1610T or Pentium G2020T is a much-needed upgrade from the previous Microserver's AMD Turion II Neo N54L.  CPU Benchmark with all three processors: http://www.cpubenchmark.net/mid_range_cpus.html  See specs here: http://ark.intel.com/products/71070  Notice the lack of VT-d (unable to pass a PCI device directly to a VM), no AES-NI, and no Hyper-Threading.  If you really need those features, it is an LGA1155 socket, so if the CPU isn't soldered to the board there's the possibility of swapping it out for a Xeon E3 series processor... although you would need to be careful to run one cool enough.

16GB Memory is a nice upgrade from the previous Microserver’s 8GB, and it is very difficult to find a server this small that supports ECC memory (yes, you need ECC memory).  Unfortunately 8GB modules are fairly expensive.

Dual gigabit ports are a nice upgrade for a NAS, and I believe this particular adapter supports teaming, so it may be possible to get a 2-gigabit aggregated link with a proper switch.

b120i
Since I use ZFS I was concerned that JBOD mode is not listed as a mode for the B120i, but this HP support article (Dynamic Smart Array Driver Support for Solaris) indicates RAID mode can be disabled, which puts the card into HBA mode.

HP iLO 4 is the remote management feature; after being spoiled by Supermicro's IPMI, I'll never plug a keyboard or monitor into a server again.  Basically this lets you remotely manage your server: power it on/off, KVM, load media remotely.  It seems to me that HP is trying to sell a subscription with this service, so I'm hoping the important features are free.  Update 2013-06-19: Remote console and media require the purchase of an iLO 4 license, which will run ~$150 for a three-year license... a rather large disappointment.

The LED indicator could be a nice feature; it remains to be seen how ZFS can interface with it, or whether it can be controlled through some scripting.

hp_microserver_tuck_away
I've been a fan of the HP Microservers since using my N40Ls (read my Amazon review of the HP Microserver); they're small, stackable servers you can stick just about anywhere, and they run virtually silent.  Considering all these features, if you were trying to build a server with similar specs you couldn't do it for less than buying one of these.  This will handle running a lightweight server for a home NAS, VMware ESXi with a few VMs, or a small business.


hp_microserver_father_son_2
Start saving up, this makes for a great father-son project!  The G1610T model is $450 and the G2020T is $520... I believe the G2020T processor itself is only priced a few dollars more than the G1610T, so you would be better off purchasing the cheaper model and upgrading the processor yourself, but maybe there is something included in the more expensive model that's not listed on the specs... perhaps a battery-backed write cache for the RAID card.  (Update 2013-06-19: It appears this is not the case...)

More G8 Microserver updates and leaked pictures from Monsta…