When it comes to speed:
RAID0 - (two disks) slightly faster read & write than non-RAID. Lose one disk, lose ALL your data.
RAID1 - (two disks) faster read, no effect on write speed. Lose one disk, lose nothing.
RAID5 - multiple disks, striped data & parity. Read & write speed may be faster, tolerant of one drive failure.
RAID6 - multiple disks, striped data & parity. Read & write speed may be faster, tolerant of two drive failures.
UnRAID - multiple disks, data is contained entirely on each drive, parity is stored on one or two dedicated drives. Tolerant of one or two failures.
The big difference is that for 'real' RAID (0 through 6), all drives are treated as identical capacity - the array can only use the smallest member's size on every drive. UnRAID will use the full capacity of all drives, even of differing sizes, as long as the parity drive is the largest, because it does not stripe the data across the disk set.
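To make the capacity difference concrete, here's a minimal sketch (my own illustration, not taken from any RAID tool) of usable space for a mismatched set of drives under striped RAID versus an UnRAID-style layout:

```python
# Illustration only: usable capacity with mismatched drive sizes.
def raid_usable(sizes_tb, parity_drives):
    # Striped RAID treats every member as the size of the smallest drive.
    return min(sizes_tb) * (len(sizes_tb) - parity_drives)

def unraid_usable(sizes_tb, parity_drives):
    # UnRAID dedicates the largest drive(s) to parity; the rest are used whole.
    data = sorted(sizes_tb)[:-parity_drives] if parity_drives else sorted(sizes_tb)
    return sum(data)

drives = [4, 4, 8, 12]            # TB, deliberately mismatched
print(raid_usable(drives, 1))     # RAID5-style: 4 * 3 = 12 TB usable
print(unraid_usable(drives, 1))   # UnRAID, one parity drive: 4 + 4 + 8 = 16 TB usable
```

With identical drives the two come out the same; the gap only opens up when the sizes differ.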
BTW, Synology also make nice NAS boxes, though I have no personal experience with them.
Conkers, I'm afraid. Parity-striped RAIDs have abysmal write performance. Not just slow, abysmal. Even with a top-end SAS controller (well, once the cache fills up). It has to do at least 3 reads, wait for the data, compute the parity, then do at least 3 writes. Do not underestimate how massively badly this impacts performance - with good caching (not available on any consumer PC boards, or most cheapo NAS devices), you can get almost usable results if it's purely sequential, but very little is... ...and none is with shared storage.
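For a sense of why the parity update hurts, here's a minimal sketch (my own illustration, not any controller's actual code) of the read-modify-write cycle. With single parity that's two reads and two writes per small write; add the second parity block of RAID6 and it's at least three of each, and the application waits on the whole cycle:

```python
# Illustration of the parity read-modify-write penalty.
# Updating one data block can't just write it: the controller first
# reads the old data and old parity, recomputes, then writes both back.
def update_block(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    # new parity = old parity XOR old data XOR new data
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# Stripe of two data blocks plus one parity block:
a, b = b"\x0f\x0f", b"\xf0\x33"
parity = bytes(x ^ y for x, y in zip(a, b))

new_a = b"\x55\xaa"
new_parity = update_block(a, parity, new_a)   # read old data + old parity, write both back

# Sanity check: the updated parity still matches the rest of the stripe.
assert new_parity == bytes(x ^ y for x, y in zip(new_a, b))
```

Every one of those reads and writes is a full drive round-trip, which is why the cache on a proper controller matters so much.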
On most PC boards, even R1 is likely to be slower at both read and write than non-RAID, as the compute is done in the driver. Toy-level NAS devices suffer similarly. As does all the shit like FreeNAS etc. (made worse by its near-exclusive use of ZFS, and it's not a great port, and PC hardware is shite, and the ZIL becomes the bottleneck). A toy NAS only works with one client - give it two, performance dies, and you often get timeouts and data loss, hence it can't be recommended for data integrity. *NOTHING* is faster than local DAS, assuming the same drive technology when comparing. DAS is also more reliable, thus the best integrity.
As to SAS v SATA, they are leagues apart. NCQ helps with SATA (assuming the drive, controller and driver all support it - if software RAID, most PC shit will fall into this), but it doesn't even bring it into the same league as SAS.
I could prove this, as the 2 physical servers currently hosting the OOF VMs are on RAID10 on 2.5" SAS drives. The same physical servers also have a RAID10 on 2.5" SATA drives (granted, only 7200rpm spindle drives), used as cheap archival storage (2TB 10k rpm SAS drives are £600+ each, compared to about £70 for SATA). If I put the OOF primary database on the SATA array, I'd expect it to do the usual Linux thing when its resources can't keep up - start thread blocking, before eventually shitting its pants with a panic. But straight away, you'd see all the database queries to build a page taking a few seconds, rather than the 0.2s it takes now if not cached.
SAS SSD takes it to another level again, but I ain't rich enough for them. But if anybody wants to donate some, preferably HPE in the HPE hot-plug carriers (Gen8 or later type), >800GB, my address is:
TheBoy
Brackley