Seagate NAS Pro DP-6 Network Attached Storage Review
Testing Methodology
| System Configuration | |
| --- | --- |
| Case | Cooler Master Cosmos II SE |
| CPU | Intel Core i7-4770K |
| Motherboard | MSI Z97M Gaming |
| RAM | 2 GB G.Skill F3-12800CL9Q DDR3-1600 |
| GPU | MSI GTX 970 OC |
| Hard Drives | Samsung 840 EVO 256 GB SSD; Western Digital Black 500 GB 7200 RPM HDD |
| Power Supply | NZXT Hale V2 1000 Watt |
Four Seagate 4 TB NAS 5900 RPM drives were installed and used in the NAS tests.
A dual port Intel network card was installed in the test system.
For comparison numbers against the Seagate DP-6, I used the Thecus N5550 and the QNAP TS-451. The Seagate DP-6 was tested with four drives and 2 GB of RAM; the other NAS devices use 4 GB of RAM and the same Seagate NAS hard drives.
Network Layout
For all tests the NAS was configured to use a single network interface. One CAT 6 cable ran from the NAS to the Cisco 2960, and a second CAT 6 cable ran from the switch to the workstation. Testing was done on the workstation with only one network card active. The switch was cleared of any configuration and left in an unconfigured state. Jumbo frames were not enabled, and no changes were made to the network interfaces.
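For anyone reproducing this layout, wiping a Catalyst 2960 back to an unconfigured state from a console session looks roughly like the following. This is a sketch, not a transcript from our bench; prompts and confirmation questions vary by IOS version.

```
! Erase the saved startup configuration
Switch# write erase
! Remove the VLAN database so the switch boots truly clean
Switch# delete flash:vlan.dat
! Reboot; the switch comes back up with no configuration
Switch# reload
```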
Software
All testing was done with a single client accessing the NAS.
To test NAS performance we use four applications: the Intel NAS Performance Toolkit, CrystalDiskMark, the ATTO Disk Benchmark, and Anvil's Storage Utilities.
The Intel NAS Performance Toolkit simulates various storage tasks, such as video streaming, copying files and folders to and from the NAS, and creating content directly on the NAS. To limit caching, a single 2 GB G.Skill memory module was used in all tests. All options in the Performance Toolkit were left at their defaults. The toolkit is free to download; you can pick up a copy for yourself here.
To run CrystalDiskMark and Anvil's Storage Utilities, a network share was mapped to a drive letter.
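Mapping the share can be done from a Windows command prompt with `net use`. The address and share name below are placeholders for illustration, not the DP-6's actual export path:

```
:: Map the NAS share to drive Z: (address and share name are placeholders)
net use Z: \\192.168.1.50\Public /persistent:no
```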
All tests were run a total of three times and then averaged to get the final result.
RAID 0, RAID 10, and RAID 5 are all tested.
Tests for RAID 5 were run after the array was fully synchronized.
RAID Information
Images courtesy of Wikipedia.
JBOD, or Just a Bunch Of Disks, is exactly what the name describes. The hard drives have no actual RAID functionality; they are simply spanned into one large volume, with no striping and no redundancy.
RAID 0 is a stripe set: data is written evenly across all of the disks. The advantages of RAID 0 are speed and increased capacity. There is no redundancy, however, so the failure of any single drive means the loss of the entire array.
RAID 1 is a mirrored set: data written to one drive is duplicated on the other. The advantage of RAID 1 is redundancy, as each piece of data exists on both disks. The disadvantage is that write speed is lower than RAID 0, because every write operation must be performed on both disks. RAID 1 capacity is that of the smallest disk.
RAID 10 combines the first two RAID levels: it is a stripe set built out of mirrored pairs. This allows for much of the speed of a RAID 0 array with the data integrity of RAID 1.
RAID 5 is a stripe set with parity and requires at least three disks. Data is striped across the disks, and parity blocks are distributed among all of them. RAID 5 allows the loss of one drive without losing data. The advantage of RAID 5 is that read speeds increase as drives are added; the disadvantage is that every write carries overhead, because the parity must be recalculated, and with software RAID 5 the performance hit is larger still.
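The parity calculation itself is just an XOR across the data blocks in each stripe, which is also what makes rebuilding a lost drive possible. A minimal Python sketch, with made-up byte values standing in for whole blocks:

```python
# RAID 5 parity demo: parity is the XOR of the data blocks in a stripe.
# The byte values are made up for illustration; real arrays XOR whole blocks.

drive_a = 0b10110010
drive_b = 0b01101100
drive_c = 0b11100001

# For this stripe, one drive holds the XOR of the others as parity.
parity = drive_a ^ drive_b ^ drive_c

# If drive B fails, XORing the survivors with the parity rebuilds its data.
recovered_b = drive_a ^ drive_c ^ parity
assert recovered_b == drive_b
print(f"parity = {parity:08b}, recovered B = {recovered_b:08b}")
```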
RAID 6 expands on RAID 5 by adding a second parity block, also distributed across all the disks. The second parity block lets the array survive two simultaneous drive failures, but it also means more overhead than RAID 5.
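To put the trade-offs in concrete terms, here is the rough usable capacity of this review's four 4 TB drives under each level. This is back-of-the-envelope math using the standard formulas; real formatted capacity comes out somewhat lower.

```python
# Usable capacity for four 4 TB drives under the RAID levels described above.

n, size_tb = 4, 4  # drive count and per-drive capacity in TB

print(f"RAID 0 : {n * size_tb} TB (no redundancy)")
print(f"RAID 10: {n * size_tb // 2} TB (half the drives hold mirror copies)")
print(f"RAID 5 : {(n - 1) * size_tb} TB (one drive's worth of parity)")
print(f"RAID 6 : {(n - 2) * size_tb} TB (two drives' worth of parity)")
```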
For a full breakdown of RAID levels, take a look at the Wikipedia article here.
RAID configurations are a highly debated topic. RAID has been around for a very long time; hard drives have changed, but the technology behind RAID really hasn't, so what may have been considered ideal a few years ago may not be ideal today. If you are relying solely on multiple hard drives as a safety measure against data loss, you are in for a disaster. Ideally you will use a multi-drive array for the increase in speed and lower access times, and keep a backup of your data elsewhere. I have seen arrays with hot spares lose multiple drives, and the data was gone.
Do yourself a favor and read up on the different types of RAID arrays and plan accordingly. Personally, I use a RAID 10 array with an automated backup to the cloud. I feel with that setup, I’ve done what I can to keep my data safe.
Hi Tom, I have this unit and was wondering if, in your opinion, the SO-DIMM memory module could be upgraded to a larger size? Also, do you know if the stock memory is ECC?
I have this NAS and use it for shared video-editing storage with two editing systems. I wanted to make it work a bit faster, so I bonded the two Ethernet ports into one “load balancing” 2 Gb/s connection.
I got a Netgear GSS116E – ProSAFE 16-port Gigabit Click Switch specifically because it was able to do port aggregation.
The whole experience turned into a massive pain. I plugged the NAS into two of the switch's ports and configured the switch to link those two ports together. It wouldn't work. Eventually, I just tried moving the cables over to NON-aggregated ports on the switch, and the NAS popped right up on the network. I don't know whether the NAS and the GSS116E have incompatible port aggregation or what, but it just didn't work. I do know that the GSS116E only does “static LAGs,” not LACP. According to the NAS's monitoring info, it is putting out over 200 MB/s (>1.6 Gbps) with two plain old, unlinked gigabit Ethernet ports.
So before you spend the money on a new network switch, try it out with what you have, just linking the ports in the NAS and plugging them into a dumb gig-e switch. That’s the only way I could get this to work.