Clustering iFolder using OES 2 SP1 and SLES 10 SP2 Linux Server
This AppNote provides a step-by-step procedure for setting up an iFolder server on an NCS cluster. It also describes how to configure NCS using iSCSI and how to configure an iFolder server on NSS for a high-availability configuration.
Table of Contents
- Hardware, software and other requirements
- Installing NCS and iFolder software on servers
- Setting up iSCSI shared storage
4.1 Configuring iSCSI target
4.2 Configuring iSCSI initiators
- Setting up NCS on OES2 SP1 Linux Servers
5.1 Creating a new NCS cluster
5.2 Adding another node to an existing cluster
- Configure iFolder for NCS
6.1 Creating NSS pool and volume on iSCSI shared disk
6.2 Setting up first NCS node for iFolder
6.3 Setting up second NCS node for iFolder
6.4 Configuring NCS-iFolder load and unload script using iManager
- Verifying iFolder server web interface with cluster
- References and links for detailed documentation
NCS: Novell Cluster Services is a server clustering system that ensures high availability and manageability of critical network resources including data, applications, and services. It is a multinode clustering product for Linux that is enabled for Novell eDirectory and supports failover, failback, and migration of individually managed cluster resources.
iFolder: Novell iFolder 3.7 is the next generation of iFolder, supporting multiple iFolders per user, user-controlled sharing, and a centralized network server for secured file storage and distribution. With iFolder, users’ local files automatically follow them everywhere—online, offline, all the time—across computers. Users can share files in multiple iFolders, and share each iFolder with a different group of users. Users control who can participate in an iFolder and their access rights to the files in it. Users can also participate in iFolders that others share with them.
iSCSI: iSCSI (for “Internet SCSI”) protocol allows clients (called initiators) to send SCSI commands to SCSI storage devices (targets) on remote servers. It is a popular Storage Area Network (SAN) protocol, allowing organizations to consolidate storage into data center storage arrays while providing hosts (such as database and web servers) with the illusion of locally-attached disks. Unlike Fibre Channel, which requires special-purpose cabling, iSCSI can be run over long distances using existing network infrastructure.
- Three server machines that meet the minimum hardware requirements of SLES 10 SP2 and OES 2 SP1.
- Two of these machines installed with SLES 10 SP2 and OES 2 SP1, both connected to the same eDirectory tree (machine-1 and machine-2).
- One machine with at least a base install of SLES 10 SP2 (machine-3).
- All machines updated with the latest patches.
- IP addresses for all of the above machines.
- Two extra IP addresses, one for the cluster and one for the shared pool resource.
- All machines connected on a high-speed network.
- To start installation of iFolder and NCS software on the OES2 SP1 servers, open a terminal window on machine-1 and launch “yast2 sw_single”.
- From the “filter” drop down list, select Patterns.
- From “OES Services” section, select “Novell Cluster Services (NCS)” and “Novell iFolder” for installation and click “Accept” button.
# yast2 sw_single
- Installation of the software will start and may take some time to complete.
- After the installation of the software is complete, YaST will ask whether to install more packages. Click “No” to complete and finish the installation.
- Repeat steps 1 to 5 above on the second OES2 SP1 server, i.e. machine-2, to install the NCS and iFolder software.
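If you prefer the command line over the YaST UI, the same software patterns can be installed non-interactively with zypper. This is only a sketch: pattern names vary between OES releases, so list them first; `novell-ncs` and `novell-ifolder3` below are assumed names to verify on your server.

```shell
# List the OES software patterns available on this server and pick the
# NCS and iFolder ones (names below are assumptions -- verify first).
zypper search -t pattern | grep -i -e cluster -e ifolder

# Install the patterns non-interactively (hypothetical pattern names).
zypper --non-interactive install -t pattern novell-ncs novell-ifolder3
```

Run this on both OES2 SP1 nodes, just as with the YaST procedure above.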
A typical iSCSI cluster looks like the figure below, which is also a graphical representation of our setup. For iSCSI, it is preferable to use two NIC cards in your setup: one for iSCSI traffic and one for general network access. This ensures that iSCSI data flow does not hog the general network and provides good performance.
4.1.1 Creating a new partition for iSCSI target configuration.
- On machine-3 with base install of SLES 10 SP2, login as root user and launch Yast2 and search for Partitioner. Click on the Partitioner icon.
- You can launch partitioner by typing “yast2 disk” on the terminal as well.
- Click Yes on the window showing warning message and reach the Expert Partitioner wizard.
- Click on the Create button to start creating a new partition.
- Select the Device where you wish to create a new partition and click OK.
- Choose to create a Primary Partition from Partition Type and click OK.
- Select Do not format, enter the size of the new partition in the box named End, and click OK.
- To complete creation of this new partition, click “Apply”.
- Make a note of the name/path of the device you just created; it will be needed during iSCSI target configuration (e.g. /dev/cciss/c0d1p1).
- Click the Apply button on the pop-up message to complete the task, and click “Quit” to exit the Expert Partitioner wizard.
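The same partition can be created from the command line with parted instead of the YaST Expert Partitioner. This is a sketch: /dev/sdb is an assumed device name; substitute the disk you intend to export over iSCSI, and be certain it holds no data.

```shell
# Create an msdos disk label, then one primary partition spanning the disk.
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 0% 100%

# Print the result and note the partition path (e.g. /dev/sdb1) for the
# iSCSI target configuration in the next section.
parted -s /dev/sdb print
```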
4.1.2 Configure iSCSI target.
- On machine-3 with base install of SLES-10 SP2, as root user launch iSCSI target configuration wizard by running “yast2 iscsi-server” in a terminal window of the machine.
- Click Continue on the Popup message to install iSCSI target package if it is not already installed.
- After the package is installed, you will reach iSCSI Target Overview page.
- Choose option “When Booting” in Service Start and click on “Targets” tab on the window.
- Click on the “Delete” button to delete the default entry and click “Continue” on the pop up message to confirm the deletion.
- Click on the Add button on the same on iSCSI Target Overview page to start adding a new iSCSI target.
- Modify the Identifier field with an appropriate name for easy identification. Set the Identifier to “ifolder_cluster” and click Add.
- Click on the Browse button and select the partition / device created above and click open and OK.
- Click the Next button to reach Modify iSCSI Target page.
- Click the Next button again if you are not using authentication for your iSCSI configuration; otherwise, enter the authentication details and finish the configuration.
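On SLES 10, the YaST iSCSI target module writes its configuration to /etc/ietd.conf, so you can sanity-check the result there after finishing the wizard. The entry should look roughly like the fragment below; the iqn date/domain portion is generated for your system, and the Lun path must match the partition created earlier.

```
Target iqn.2008-11.com.example:ifolder_cluster
        Lun 0 Path=/dev/cciss/c0d1p1,Type=fileio
```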
- Type “yast2 iscsi-client” in a terminal window of machines with OES2 SP1 installed to start the configuration wizard. (machine-1 and machine-2)
- Click continue to install the required package if it is not already installed.
- After the package is installed, you would be taken to “iSCSI Initiator Overview” page.
- Select the option “When booting” to start the service automatically every time you reboot the machine.
- Click “Connected Targets” tab on iSCSI Initiator Overview page
- Click on the Add button and you would be presented with iSCSI Initiator Discovery page.
- Enter the IP address of the iSCSI target server (machine-3) in the IP Address field and click Next.
- Select the entry for this connection on the page and click the “Toggle Start-Up” button once to change the start-up mode of the connection from “manual” to “automatic”. This ensures that the iSCSI initiator connects to the target automatically whenever the machine is rebooted.
Note: If the list of disks on the iSCSI target does not appear after target configuration, check the firewall settings on the iSCSI target machine (machine-3). To verify quickly, disable the firewall on the target server. If the connection then works, re-enable the firewall and make sure the “iSCSI Target” service is allowed through it. To do this, log in to the target server as root, bring up the firewall configuration wizard by typing “yast2 firewall” in a terminal, and click Allowed Services. From the “Service to Allow” drop-down menu, select iSCSI Target, click Add, then click Next and Accept to finish the configuration.
If you have manually selected a different port for iSCSI (the default is TCP 3260), make sure you open that port in the firewall.
- Click finish to complete the configuration.
- Repeat steps 1–9 on the second OES2 SP1 machine, i.e. machine-2, to configure the iSCSI initiator there as well.
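The initiator connections can also be verified from the command line on each OES2 node with open-iscsi's iscsiadm tool. The target IP placeholder is left for your environment; 3260 is the default iSCSI port.

```shell
# Discover the targets exported by machine-3 (sendtargets discovery).
iscsiadm -m discovery -t sendtargets -p <target IP>:3260

# Show the sessions this initiator currently has open; the identifier
# chosen earlier ("ifolder_cluster") should appear in the target name.
iscsiadm -m session
```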
- After configuring the iSCSI initiator on our OES2 SP1 nodes (machine-1 and machine-2), we need to initialize the shared disk and mark it shareable for clustering.
- On machine-1, open a terminal window, type nssmu and press enter.
- From devices option, select the shared disk corresponding to our above created iSCSI disk and press F3 to initialize the disk. Initialization will prepare the disk for our use. Press Y to confirm initialization and then press F6 to mark it shareable for clustering.
- To exit the NSS utility press Esc until you reach back to terminal prompt.
- On this terminal window type “yast2 ncs” to start NCS configuration wizard.
- Enter the password of tree admin user when asked during wizard and click ok.
- After authentication, you would be presented with main “NCS” configuration screen.
- On the main NCS configuration screen, select “New cluster” and enter FDN of the new cluster you wish to build in “cluster FDN” field.
e.g: In the screenshot below, ifolder_cluster is the name of cluster, ou=cluster represents the ou where the cluster object would be created in eDirectory.
- Enter an unused IP address in the “Cluster IP Address” field. This will be the IP of your cluster resource.
- From the next drop down list, select the shared media corresponding to your iSCSI shared disk and click next.
- Confirm in the next window that NCS configuration wizard has correctly identified your current node and click finish.
- The wizard will end after the configuration is finished.
- We now have a single node cluster. To add another node to the cluster, login to the second machine with OES2 SP1.
- On the second OES2 SP1 node (Machine-2), launch “yast2 ncs” from a terminal window.
- Wait for the wizard to launch and enter the eDirectory admin user password in the required field and click ok.
- You will be presented with an NCS configuration window similar to the one shown during configuration of NCS on our first node (machine-1).
- To add this machine to the already created cluster, on the main “Novell Cluster Services Configuration” window select “existing cluster” and enter the FDN detail of the cluster we wish to add this node to (cluster ifolder_cluster we created above). All other fields would get disabled.
- Click next and finish buttons to add this machine as node to the cluster.
- To verify that our NCS configuration is correct, open a terminal window on both NCS nodes (machine-1 and machine-2), one at a time, and execute the command “cluster view”. If you see information similar to the screenshot below on both machines, your configuration is correct.
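Beyond “cluster view”, a couple of other command-line checks are useful at this point. These are standard NCS commands on OES2; output varies per cluster, so none is shown here.

```shell
cluster view          # this node's name, cluster membership and epoch
cluster status        # state of each cluster resource (running/offline)
rcnovell-ncs status   # confirm the NCS service itself is up on this node
```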
- Login to first OES2 SP1 machine (machine-1) as root and in a terminal window type nssmu to launch NSS Management Utility.
- Select Pools from the Main menu and press Enter key.
- Press Insert key to create a new pool.
- Enter the Pool name, e.g: “ifl_pool” and press Enter key.
- Select the iSCSI shared device to create a new cluster pool.
- Specify the size of the pool and press Enter.
- Assign a free IP address to the newly created pool, select Apply, and press Enter to complete the creation of the pool on the shared device.
- Press Esc key to come back to main menu of NSS Management Utility.
- Select Volumes from the main menu and press Enter.
- Press Insert key.
- Enter the name for the volume e.g : “ifl_volume” .
- At the “Encrypt Volume?” prompt, type N to leave this volume unencrypted. You can type Y if you wish to enable encryption for this NSS volume.
- Press Esc multiple times to exit nssmu.
Login as root and open a terminal window on the OES2 SP1 machine which currently has the above created NSS pool and volume mounted on it (machine-1).
Because we are using an NSS volume for the iFolder data and configuration files, we need to grant the Apache user “wwwrun” rights on the NSS volume. Run the following command to do this.
# rights -f <mount path of NSS volume> -r all trustee wwwrun.<ou>.<o>.<tree name>
e.g : rights -f /media/nss/IFL_VOL -r all trustee wwwrun.cluster.novell.ifolder
- Type “yast2 novell-ifolder3” in the terminal window and press enter to start iFolder configuration wizard.
- Click “Continue” button on “LDAP configuration for open enterprise services already configured” message.
- Now you would see a “Novell iFolder system configuration options” window.
- Select all the three options i.e “iFolder Server”, “iFolder Web Admin” and “iFolder Web access” and click next.
- On the next screen, provide the details that identify this system to users. For the “path to server’s data files” and “recovery agent certificates” fields, use a folder on the NCS shared volume we created above (e.g. ifl_vol), and click Next.
- On the next configuration screen, provide a name for the iFolder server, enter the IP address of the NCS pool that holds the shared volume for both the public and private host IP addresses, and click the Next button to reach the next configuration window.
- On the “Novell LDAP iFolder configuration” window, select an eDirectory server in the tree you wish to use for this configuration. You can leave the default selection as it is and click Next.
- On the next screen, you would be asked to enter the password of eDirectory admin user. Enter the password and click ok button to get next window of the configuration wizard.
- Enter the details of the user you wish to use as the iFolder administrator, and that user’s password, in the provided fields. You can also change the LDAP proxy user and its password here; for this setup I am keeping the same user and changing only the password.
- Add the desired context in the “LDAP search contexts” using the add button. iFolder will search for users in these contexts to authenticate them when they login to iFolder.
- On the “Novell iFolder web access configuration” window, enter the path a user should type in the browser to access the iFolder web interface. “/ifolder” means the user would enter "http://<pool IP address>/ifolder" to access the iFolder web interface.
- In the “host or IP address of the iFolder server that will be used by iFolder web access application” field, change the default entry (the node’s IP address) to the clustered pool’s IP address and click the Next button.
- On the next screen, the “Novell iFolder web admin configuration” screen, you can accept the default entries except for the IP address field, which should be changed to the clustered pool’s IP; then click Next. The “Apache alias” field creates an Apache alias for the iFolder web admin application. For example, if you keep the default entry of “/admin”, you will be able to access the iFolder web admin page by loading "http://<cluster_pool IP address>/admin" in your browser.
- Click next and Yes when asked to restart Apache server and after finishing off the wizard, you would be brought back to terminal prompt.
- iFolder runs on the Mono runtime. After completing the wizard, some stale Mono processes may still be running, which can interfere with the proper working of the iFolder server. Run the following commands in the terminal, one by one, to restart the Apache server and remove any stale Mono processes.
# rcapache2 stop
# pgrep mono
# pkill mono
# rcapache2 start
- You can avoid part of the iFolder server reconfiguration on machine-2 by copying “simias.conf” from the just-configured machine-1 to machine-2. To do this, log in to machine-1 as root and execute the following command in a terminal window to scp the file to machine-2.
# scp /etc/apache2/conf.d/simias.conf root@<other cluster node’s i.p>:/etc/apache2/conf.d
Note: This copies the config file to the other node in the cluster and avoids part of the reconfiguration on the second cluster node.
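A quick way to confirm the copy succeeded is to compare checksums of the file on both nodes; the node address placeholder is the same one used in the scp command above.

```shell
# On machine-1: checksum the local copy, then the remote one over ssh.
md5sum /etc/apache2/conf.d/simias.conf
ssh root@<other cluster node's i.p> md5sum /etc/apache2/conf.d/simias.conf
# The two hashes should be identical.
```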
- On machine-2 type “yast2 novell-ifolder3” as root on a terminal window to start the configuration wizard.
- Check “iFolder web admin” and “iFolder web access” from the three options and click next.
- Accept the default parameters for the fields, changing the “apache aliases” fields to the same values you used while configuring the first NCS node (machine-1). After the wizard ends, execute the following commands in the terminal to restart Apache and remove any stale Mono processes.
# rcapache2 stop
# pgrep mono
# pkill mono
# rcapache2 start
- Your iFolder configuration is now complete. You can repeat steps 1–5 of section 6.3 on the other nodes in the cluster to configure them as well.
- Launch iManager and log in to the tree using tree-admin credentials.
Note: You can launch iManager by entering "https://<IP address of server>/nps" in your browser window. Use the IP address of a server where iManager is installed.
- Click on the “Clusters” link and then click “Cluster Manager”.
- Use browse button to locate the cluster object in the tree, select it and click ok to load the “cluster State page” for the cluster.
- From the list of cluster resources, click on the pool you are using for your iFolder and NCS configuration.
- From the “cluster pool properties page”, click on scripts tab and you would be displayed the load script for the pool.
Modify the load script by adding the line “exit_on_error /etc/init.d/apache2 graceful” just before the last line (“exit 0”), and click Save to save the script. A message window will be displayed; click “OK” to close it.
Sample load script with modification marked in blue.
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
exit_on_error nss /poolact=BCC_IFL_182_238
exit_on_error ncpcon mount V_182_238=244
exit_on_error add_secondary_ipaddress 18.104.22.168
exit_on_error ncpcon bind --ncpservername=CLUSTER_EGYPT_BCC_IFL_182_238_SERVER --ipaddress=22.214.171.124
exit_on_error /etc/init.d/apache2 graceful
exit 0
- On the “Scripts tab”, click on the “unload script” link and modify it to include the following 3 lines just after the second line in the script.
ignore_error mod-mono-server --filename /tmp/mod_mono_server_simias10 --terminate
ignore_error mod-mono-server --filename /tmp/mod_mono_server_admin --terminate
ignore_error mod-mono-server --filename /tmp/mod_mono_server_ifolder --terminate
Sample unload script with required modification marked in blue.
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
ignore_error mod-mono-server --filename /tmp/mod_mono_server_simias10 --terminate
ignore_error mod-mono-server --filename /tmp/mod_mono_server_admin --terminate
ignore_error mod-mono-server --filename /tmp/mod_mono_server_ifolder --terminate
ignore_error ncpcon unbind --ncpservername=CLUSTER_EGYPT_BCC_IFL_182_238_SERVER --ipaddress=126.96.36.199
ignore_error del_secondary_ipaddress 188.8.131.52
ignore_error nss /pooldeact=BCC_IFL_182_238
exit 0
- Click on save to save the script.
- Launch your web browser and open
http://<i.p address of ifolder pool>/ifolder
- If you have configured iFolder and cluster scripts correctly, you should see the ifolder web access page. Login with your username and password to start using iFolder.
- Now use iManager to manually failover the iFolder clustered pool from one server to another. After the failover has happened, launch the iFolder web access page again as we did in point 1 above. If you see the login page correctly and can login to the server again, your iFolder server is clustered correctly for high availability.
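To watch the outage window during the manual failover, you can poll the web access URL in a loop with curl from any workstation. This is a sketch; fill in the pool IP placeholder before running it.

```shell
# Print a timestamped HTTP status code every two seconds; during the
# failover you should see errors or timeouts briefly, then success again.
while true; do
    code=$(curl -s -m 5 -o /dev/null -w '%{http_code}' "http://<pool IP>/ifolder")
    echo "$(date '+%T') HTTP $code"
    sleep 2
done
```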
- Test the iFolder web admin in the same way as steps 1–3 above by launching "http://<IP address of iFolder pool>/admin" (or the alias you selected during the iFolder YaST configuration). You can use this page to administer your iFolder server.
- You can install the iFolder client on your desktops to log in and use iFolder features from an easy-to-use interface, and use the iFolder web interface when the client is not installed on your desktop.
- Refer to iFolder admin guide for detailed configuration of service and troubleshooting.
To read more about Novell Cluster Services, visit the link below.
To read more about Novell iFolder 3, visit the link below.