Automation Guy

One man's perpetual headache in the land of data centre automation

HOME LAB BUILD: PART 10 – CONFIGURING XPENOLOGY STORAGE

storage_5

The next step is to configure the storage within Synology DSM. To do this, log in to the web interface at http://<NAS IP>:5000 and go to Storage Manager from the Main Menu at the top left of the screen.

iSCSI LUNs -- File vs Block

Before I did this setup for real, I had been playing around in VMware Workstation with a pair of virtualised ESXi 5.5 machines and a virtual Synology NAS. When creating iSCSI LUNs on the Synology you get the option to choose either a Regular File LUN, a single Block Level LUN on a RAID array, or multiple Block Level LUNs on a RAID array. Block Level LUNs supposedly provide the best performance, while File Level LUNs give you advanced features like thin provisioning and VAAI support.

I tried setting up both to do a comparison. When you create block level iSCSI LUNs they don't need to be backed by a Volume; you just create them, point them at the Disk Group, specify the size you want and that's it. Once you've configured your iSCSI LUNs you then need to set up some targets, and that process is the same regardless of what type of LUN you've chosen. The problem I had is that I could not, for the life of me, get the ESXi hosts to discover the devices when the LUNs were set up as block devices. In fact, the rescan HBA process would actually hang the ESXi server and force a reboot.

I battled with this for a long time, until eventually someone on the XPEnology forums answered my question and confirmed that, as of Synology DSM 5.0, block level iSCSI wasn't working properly; it was a known issue.
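
For what it's worth, the rescan itself can also be triggered from a script rather than the vSphere client, which made repeated testing less painful. Below is a minimal pyVmomi sketch; the host name and credentials are placeholders for my lab, and it assumes a single datacenter with a single compute resource.

    # Minimal pyVmomi sketch: trigger an HBA rescan and list the SCSI
    # devices the host can see. Host name and credentials are lab
    # placeholders; assumes one datacenter with one compute resource.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    ctx = ssl._create_unverified_context()  # lab only: self-signed cert
    si = SmartConnect(host="esx01.lab.local", user="root",
                      pwd="password", sslContext=ctx)
    dc = si.content.rootFolder.childEntity[0]
    host = dc.hostFolder.childEntity[0].host[0]

    storage = host.configManager.storageSystem
    storage.RescanAllHba()   # equivalent of "Rescan All" in the client
    storage.RescanVmfs()

    # Block-level LUNs that failed to discover simply never showed up here.
    for lun in storage.storageDeviceInfo.scsiLun:
        print(lun.deviceName, lun.lunType)

    Disconnect(si)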

Create Disk Groups

I'm going to be creating two disk groups. One will contain four 3TB drives which I'll be using for Virtual Machine storage. The second will consist of a single 1TB disk and will be used for streaming media. Open Storage Manager, select Disk Group and click Create...

create-disk-group

Select all the disks you want to have as part of your disk group and click Next...

choose-raid-type

Now you have to select a RAID type to use for the disk group. I'm going with Synology Hybrid RAID (SHR). SHR gives you a bit more flexibility than traditional RAID: it allows you to grow the disk group by adding additional disks of equal or greater size at a later time. Depending on the number of disks in the disk group, SHR will present you with additional options to select how many disk failures to tolerate. You can create an SHR disk group with just a single disk and have no fault tolerance. If the group consists of two disks, it will effectively mirror them (RAID1). If you have three disks, you'll get striping with distributed parity (RAID5). With four disks, you are given the option to choose between tolerating one or two disk failures: selecting one again gets you RAID5, and selecting two I believe gets you RAID6.
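
For equal-sized disks the capacity arithmetic works out the same as the classic RAID levels. A quick sketch (SHR handles mixed disk sizes more cleverly than this, so treat it as a lower bound in that case):

    # Usable capacity for equal-sized disks: parity consumes the
    # equivalent of one disk per tolerated failure. SHR squeezes more
    # out of mixed-size disks, so this is a lower bound in that case.
    def usable_tb(disk_sizes_tb, fault_tolerance):
        n = len(disk_sizes_tb)
        assert n > fault_tolerance, "need more disks than failures tolerated"
        return (n - fault_tolerance) * min(disk_sizes_tb)

    print(usable_tb([3, 3, 3, 3], 1))  # SHR-1 (RAID5-like): 9 TB
    print(usable_tb([3, 3, 3, 3], 2))  # SHR-2 (RAID6-like): 6 TB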

shr-protection-level

For this disk group, I'm selecting 1 disk fault-tolerance. This will effectively give me a RAID5 disk group. The performance difference between SHR and RAID5 is minimal. Next you'll be asked if you want to perform a disk check while building the array. Selecting yes means the background verification process takes longer, but I did this anyway just to be safe.

perform-disk-check

When you're happy with the disk group configuration click Apply.

confirm-disk-group-settings

The disk group will initially show up as Creating, although the creation process itself took less than 30 seconds.

disk-group-creating

Once created, the disk group will begin a background verification process. While this does not impact functionality, it will impair performance. The process took a very long time to complete, presumably because of the size of the disk group combined with the fact that I asked it to perform a disk check. I left the machine running all night and it was still going in the morning; in total it probably took about 14 hours.

disk-group-verification

I repeated the process for the second disk group that will contain just a single 1TB disk. Again, I selected SHR as the RAID type just so that I can easily expand the disk group later on if required.

Create Volumes

As I'll be using iSCSI File Level LUNs, I next need to create some volumes to back them. In Storage Manager, select Volume.

synology-volumes

Click Create, then select Custom Mode.

volume-creation

Next, select Multiple Volumes on RAID, unless you want one giant volume filling up all the space on your disk group...

multipl-volumes-on-raid

Select the disk group upon which the volume will be created.

volume-select-dg

Next, specify the size for the volume then click Next and Apply.

vol-size

Repeat the above process for each volume you wish to create. In total I want seven volumes, which I'll be using for different things...

  • Volume 1 -- 20GB -- This is the volume I'll use for installing Synology Packages.
  • Volume 2 -- 350GB -- This will be a VMFS datastore used for ISO storage.
  • Volume 3, 4, 5 and 6 -- 2TB (each) -- Each of these four volumes will be a VMFS datastore used for Virtual Machine storage.
  • Volume 7 -- 1TB -- This will be used to store media such as music and films for streaming.
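
A quick sanity check that this plan fits the disk groups (decimal GB, ignoring RAID and filesystem overhead, so the real numbers come in a little lower):

    # Does the volume plan fit? Sizes in decimal GB; actual usable
    # space will be lower once base-2 units and overhead are counted.
    group1_usable = (4 - 1) * 3000                      # 4x3TB, SHR-1
    group1_volumes = [20, 350, 2000, 2000, 2000, 2000]  # Volumes 1-6
    group2_usable = 1000                                # single 1TB disk
    group2_volumes = [1000]                             # Volume 7

    print(sum(group1_volumes), "of", group1_usable, "GB used on group 1")
    print(sum(group2_volumes), "of", group2_usable, "GB used on group 2")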

I end up with something looking like this...

synology-volumes-complete

Create iSCSI LUNs

Now that we have our volumes configured, it's time to create the iSCSI LUNs that will be used by ESX. In Storage Manager, select iSCSI LUN and click Create.

iscsi-lun-type

Select iSCSI LUN (Regular Files) and click Next. I need to map each iSCSI LUN to the relevant volume. The volumes I'll be using for iSCSI storage are 2, 3, 4, 5 and 6. I won't create iSCSI Target Mappings at this point, as I'll do that as a separate step later.

iscsi-lun-props

Specify a name for the LUN, select the volume that will back it, and turn on thin provisioning and Advanced LUN features. Under iSCSI Target Mapping, select None. Click Next and Apply. Once all the iSCSI LUNs have been created, it looks like this...

iscsi-luns
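
As an aside, DSM does expose a Web API, so LUN creation could in principle be scripted for repeat builds. In the sketch below, the SYNO.API.Auth login is the documented entry point, but the SYNO.Core.ISCSI.LUN endpoint and its parameters are my assumption (I've only ever clicked through the UI), so verify them against your DSM version before trusting this.

    # Hypothetical sketch of creating a file-level LUN via the DSM Web
    # API. SYNO.API.Auth is documented; the SYNO.Core.ISCSI.LUN call
    # and its parameters are assumptions -- check your DSM version.
    import requests

    NAS = "http://192.168.0.10:5000/webapi"   # placeholder address

    # Log in and grab a session id.
    sid = requests.get(NAS + "/auth.cgi", params={
        "api": "SYNO.API.Auth", "version": "3", "method": "login",
        "account": "admin", "passwd": "password", "format": "sid",
    }).json()["data"]["sid"]

    # Create a 2TB thin-provisioned file-level LUN on volume 3
    # (endpoint and parameter names assumed, not documented).
    resp = requests.get(NAS + "/entry.cgi", params={
        "api": "SYNO.Core.ISCSI.LUN", "version": "1", "method": "create",
        "name": "vm-datastore-1", "location": "/volume3",
        "size": str(2 * 1024**4), "type": "file", "_sid": sid,
    })
    print(resp.json())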

Create iSCSI Targets

We now need to create an iSCSI target for each LUN we've set up. For target names, I'm going to use the same name as the LUN to avoid any confusion.

iscsi-tgt

Now map the target to the relevant LUN...

iscsi-tgt-mapping

Click Next and Apply. The iSCSI target should show up as "Ready" after a few seconds.

iscsi-tgt-ready

I have a pair of ESX servers, so these LUNs are going to be shared storage. As such, there is one final important step to complete. Select the iSCSI target, and click Edit. Then tick the box labelled "Allow multiple sessions from one or more iSCSI initiators".

iscsi-multi-initiators

Repeat the above steps to create and configure a target for each remaining iSCSI LUN.
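
The ESX side of this can also be scripted. Reusing the pyVmomi connection from the rescan sketch earlier, the following adds the NAS as a Send Targets (dynamic discovery) address on the software iSCSI adapter, which is the same step one of the comments below describes doing through the client. The NAS address is a placeholder.

    # Add the NAS as a dynamic discovery (Send Targets) address on the
    # software iSCSI adapter, then rescan. Reuses "host" and "storage"
    # from the earlier rescan sketch; the NAS address is a placeholder.
    from pyVmomi import vim

    iscsi_hba = next(
        hba for hba in storage.storageDeviceInfo.hostBusAdapter
        if isinstance(hba, vim.host.InternetScsiHba)
    )
    target = vim.host.InternetScsiHba.SendTarget(address="192.168.0.10",
                                                 port=3260)
    storage.AddInternetScsiSendTargets(iScsiHbaDevice=iscsi_hba.device,
                                       targets=[target])
    storage.RescanAllHba()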

Configure Temporary NFS Storage

I want to configure my ESX hosts to use a dedicated distributed virtual switch with multiple uplinks for iSCSI traffic, but I can't configure a dvSwitch until I've set up a vCenter server. This presents a bit of a chicken-and-egg situation. The answer is to present some of the storage temporarily over NFS. This allows you to configure a shared datastore for the ESX hosts, build the vCenter and supporting infrastructure on it, configure the vSphere networking, and then, once the iSCSI storage is set up, migrate the VMs to it.

Go to the Synology DSM Control Panel and select File Services...

dsm-file-services

Tick the "Enable NFS" box...

enable-nfs

Back on the Control Panel, select Shared Folder...

shared-folder

Click on Create...

create-shared-folder

Once you've specified a name and a description and selected a volume to back the shared folder, click OK. You'll be taken to the Permission tab. Select NFS Permissions...

nfs-perms

Click Create...

create-nfs-perms

I'm allowing anything on my local subnet to access the shares, by specifying the network address in CIDR format (192.168.0.0/24). The rest of the settings can be left as they are. Click OK. Make note of the Mount Path from the bottom left corner of the window...

nfs-mount-path
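
Without vCenter, the datastore then has to be mounted on each host individually, either through the vSphere client or scripted. A pyVmomi sketch, again reusing the connection from the rescan example earlier (the remote path is a placeholder for whatever Mount Path DSM shows you):

    # Mount the NFS share as a datastore on a host, using the Mount
    # Path noted above. Paths and names here are placeholders.
    from pyVmomi import vim

    spec = vim.host.NasVolume.Specification(
        remoteHost="192.168.0.10",        # the XPEnology box
        remotePath="/volume2/nfs-temp",   # the Mount Path from DSM
        localPath="nfs-temp",             # datastore name on the host
        accessMode="readWrite",
    )
    host.configManager.datastoreSystem.CreateNasDatastore(spec)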

Now that storage is configured, the next steps are to start building out the vSphere environment: specifically, the ESXi hosts, vCenter server and virtual networking configuration.

2 Comments

  1. I ran into the exact same problems as you. Most likely I will end up using OpenIndiana or FreeNAS.

  2. I had a similar problem with LUN discovery from XPEnology. vSphere found the iSCSI targets from my DS212j but not from XPEnology; however, once I added the XPEnology address manually (go to Host -> Configuration -> Storage Adapters -> iSCSI Software Adapter -> Properties -> Dynamic Discovery -> Add), it worked.
