Cluster Setup Guide
This page describes how to set up bigdata on a Fedora 10 minimum install. It presumes that you have root privileges and will install bigdata to run as root (the latter is not necessary, but that is what is shown here). See the ClusterGuide for more general information on a bigdata cluster install.
Install the following packages. Some of these are optional (telnet, emacs, nfs-utils, ntp).
yum -y install man         # man page support.
yum -y install mlocate     # optional (used to locate procmail's lockfile, which is at /usr/bin/lockfile).
yum -y install emacs-nox   # optional.
yum -y install screen      # optional job control utility.
yum -y install telnet      # optional (useful for testing services and firewall settings).
yum -y install rpcbind     # optional (used by NFS).
yum -y install nfs-utils   # optional (used iff you will use NFS for the shared volume).
yum -y install sysstat     # used to collect performance counters from the OS and services.
yum -y install ntp         # optional, but highly recommended.
yum -y install subversion  # used to checkout bigdata from its SVN repository (only necessary for the main server).
yum -y install ant         # used to build bigdata from the source code (only necessary for the main server).
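If you install ntp, you will typically also want to start the daemon and enable it at boot so that clocks stay in sync across the cluster. This step is not shown in the original instructions; a minimal sketch for Fedora 10 would be:

/etc/init.d/ntpd start   # start the NTP daemon now
chkconfig ntpd on        # start it automatically at boot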
Linux, like many other operating systems, has a very aggressive posture towards free memory: by default (vm.swappiness=60) it will swap application pages out in favor of the filesystem cache well before physical RAM is exhausted. You can fix this by turning the swappiness parameter down to ZERO.
sysctl -w vm.swappiness=0
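Note that sysctl -w only changes the running kernel; the setting is lost on reboot. To make it persistent you can add it to /etc/sysctl.conf, for example:

echo "vm.swappiness = 0" >> /etc/sysctl.conf
sysctl -p   # re-apply the settings from /etc/sysctl.conf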
You MUST be able to resolve the hostnames in the cluster using DNS. Normally someone is administering DNS so you don't have to worry about this. If that is not the case, the easy fix is to edit /etc/hosts so that each host in the cluster knows the name and IP address of every other host in the cluster.
Here is a sample /etc/hosts file. Your file must reflect the IP addresses and host names in your cluster.
127.0.0.1   localhost localhost.localdomain
x.y.z.129   BigData0
x.y.z.130   BigData1
x.y.z.131   BigData2
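You can verify that each name resolves from every host, for example (using the host names from the sample file above):

getent hosts BigData0   # should print the address from /etc/hosts (or DNS)
ping -c 1 BigData1      # should reach the host by name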
VNC (optional, "main" host only)
VNC can be used to remotely log in to the X-Windows desktop on the machines in the cluster. This can be very useful, and it can be done securely using an ssh tunnel. The following installs X-Windows, the KDE desktop, and the VNC server.
# install X and KDE
yum -y install xorg*
yum -y install xfce*
yum -y update            # required to get around kdebase-wallpapers conflict for fc10.
yum -y install kde*
It appears that NetworkManager (the network-manager package) can cause a conflict if you are using static IPs, in which case it should be removed. See http://ubuntuforums.org/showthread.php?t=253221 and https://bugs.launchpad.net/ubuntu/+source/knetworkmanager/+bug/280762.
rpm -qa | grep -i network | egrep -i 'manager|management'
NetworkManager-0.7.1-1.fc10.x86_64
kde-plasma-networkmanagement-0.1-0.12.20090519svn.fc10.x86_64
NetworkManager-glib-0.7.1-1.fc10.x86_64
NetworkManager-vpnc-0.7.0.99-1.fc10.x86_64
kde-plasma-networkmanagement-openvpn-0.1-0.12.20090519svn.fc10.x86_64
NetworkManager-glib-devel-0.7.1-1.fc10.x86_64
kde-plasma-networkmanagement-vpnc-0.1-0.12.20090519svn.fc10.x86_64
kde-plasma-networkmanagement-devel-0.1-0.12.20090519svn.fc10.x86_64
NetworkManager-openvpn-0.7.0.99-1.fc10.x86_64
NetworkManager-devel-0.7.1-1.fc10.x86_64
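The removal command itself is not shown here; one way to remove everything matched by the query above (package names and versions may differ on your system, so check the output first) is:

yum -y remove 'NetworkManager*' 'kde-plasma-networkmanagement*'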
Once you have removed those packages, continue with the VNC install.
yum -y install vnc-server #(0:4.1.3-1.fc10)
Set the VNC password.
Edit /etc/sysconfig/vncservers. You must define at least one vncserver here. Choose your own display resolution. Use the "-localhost" option to restrict connections to SSH tunnels. Your local machine should then forward its local port 5901 to localhost:5901 on the remote host and connect using "localhost:1".
VNCSERVERS="1:root" VNCSERVERARGS="-geometry 1280x1024 -nolisten tcp -nohttpd -localhost"
Specify KDE as the display manager by editing /etc/sysconfig/display. This only takes effect when you start vncserver; it will not affect a session that is already running.
Start vncserver and configure the vncserver runlevels.
/etc/init.d/vncserver start
chkconfig vncserver on
When vncserver is first started for a user it creates ~/.vnc/xstartup. Edit that file and uncomment the two lines indicated so that the full desktop session is started:

# Uncomment the following two lines for normal desktop:
unset SESSION_MANAGER
exec /etc/X11/xinit/xinitrc
See the notes above on how to connect using an ssh tunnel.
NFS (optional, done differently for the NFS server and the clients)
Bigdata requires a shared volume to hold the JARs, configuration files, and similar things. This volume must be mounted by each host in the cluster. One way to do this is to use NFS. This section shows you how to set up NFS while leaving iptables enabled.
Note: Most of these steps are performed only on the node that will provide the NFS service. Once you have everything set up, you can mount that NFS share from the other nodes as described at the end of this section.
Edit /etc/sysconfig/nfs to specify the ports that will be used by the services required to support NFS. These port choices are arbitrary, but the same ports MUST be opened in the iptables firewall in the next step below.
LOCKD_TCPPORT=48620
LOCKD_UDPPORT=48620
MOUNTD_PORT=48621
STATD_PORT=48622
RQUOTAD=no
RQUOTAD_PORT=48623
Modify iptables to open your firewall for NFS on the ports configured in /etc/sysconfig/nfs above.
/sbin/iptables -I INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
/sbin/iptables -I INPUT -m state --state NEW -m tcp -p tcp --dport 2049 -j ACCEPT
/sbin/iptables -I INPUT -m state --state NEW -m tcp -p tcp --dport 48620 -j ACCEPT
/sbin/iptables -I INPUT -m state --state NEW -m tcp -p tcp --dport 48621 -j ACCEPT
/sbin/iptables -I INPUT -m state --state NEW -m tcp -p tcp --dport 48622 -j ACCEPT
/sbin/iptables -I INPUT -m state --state NEW -m tcp -p tcp --dport 48623 -j ACCEPT
/sbin/iptables -I INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
/sbin/iptables -I INPUT -m state --state NEW -m udp -p udp --dport 2049 -j ACCEPT
/sbin/iptables -I INPUT -m state --state NEW -m udp -p udp --dport 48620 -j ACCEPT
/sbin/iptables -I INPUT -m state --state NEW -m udp -p udp --dport 48621 -j ACCEPT
/sbin/iptables -I INPUT -m state --state NEW -m udp -p udp --dport 48622 -j ACCEPT
/sbin/iptables -I INPUT -m state --state NEW -m udp -p udp --dport 48623 -j ACCEPT
# save the changes to iptables
/etc/init.d/iptables save
Next, edit /etc/hosts.allow and /etc/hosts.deny to restrict access to the NFS services. The /etc/hosts.allow file only needs to be modified on the host that is actually providing the NFS share. The other hosts will be clients, so they do not need to allow anything.
This example explicitly enumerates the IP addresses that are allowed to access the NFS services, but you can specify these constraints in a variety of ways. See the hosts.allow man page for more details.
In /etc/hosts.allow:

portmap: localhost, x.y.z.129, x.y.z.130, x.y.z.131
lockd:   localhost, x.y.z.129, x.y.z.130, x.y.z.131
rquotad: localhost, x.y.z.129, x.y.z.130, x.y.z.131
mountd:  localhost, x.y.z.129, x.y.z.130, x.y.z.131
statd:   localhost, x.y.z.129, x.y.z.130, x.y.z.131
In /etc/hosts.deny:

portmap: ALL
lockd:   ALL
mountd:  ALL
rquotad: ALL
statd:   ALL
Edit /etc/exports on the NFS server. You need to either enumerate all of the IP addresses which can access the NFS share or use a combination of a network address and a bitmask, etc. to accomplish the same ends.
/nas x.y.z.130(rw) x.y.z.131(rw)
Create the directory that you want to export (/nas in the example above).
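A sketch for the /nas path used in the exports example above (adjust to your own shared directory):

mkdir -p /nas
chown root.wheel /nas   # same ownership used for the client-side mount point below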
Start NFS on the server.
/etc/init.d/rpcbind start
/etc/init.d/nfslock start
/etc/init.d/nfs start
Set the run levels for NFS (rpcbind and nfslock should already be running).
chkconfig nfs on
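Before moving on to the clients, you can sanity-check the export from the server itself, for example:

exportfs -v              # list the directories the server is exporting
showmount -e localhost   # query the export list through mountd/rpcbind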
NFS is now running on the server. The next steps need to be done on each node in the cluster that will mount that NFS share. Note that you have the option to either mount the NFS share by hand or have it mounted automatically at boot via /etc/fstab. However, if you choose to automount the NFS share, the client can hang (for example at boot) if there is a problem with the NFS server or with network connectivity.
You will need to install the set of packages listed at the top of this page; those packages include rpcbind and nfs-utils, which are required to start NFS on the client.
# Start NFS on the clients (if not running, then also do "chkconfig <service> on").
/etc/init.d/rpcbind start
/etc/init.d/nfslock start
# Ensure the mount point exists.
[ ! -d /nas ] && mkdir /nas
# Make the files on that mount point visible to root on other hosts in the cluster.
chown -R root.wheel /nas
# Either mount the shared volume by hand (not restart safe).
# Here x.y.z.129 is the host exporting /nas in this example.
mount -t nfs x.y.z.129:/nas /nas
# -or- edit /etc/fstab and add this line (automount, but will hang if the NFS
# server is not available):
x.y.z.129:/nas   /nas   nfs   rw,addr=x.y.z.129   0 0
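Either way, you can confirm that the share is visible on the client, for example:

df -h /nas          # should show the NFS server as the filesystem source
mount | grep nfs    # should list the /nas mount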
Open up the iptables firewall for log4j, zookeeper and jini
You only need to open these ports on the hosts that will be running these services. However, it is often easier to administer each machine in the same way so that you can reconfigure the service locations without changing your firewalls again. Also, ALL bigdata services export jini proxies, so ALL hosts on which you will run bigdata services MUST open the ports for jini so that other clients can access those proxies.
#
# Open up the firewall for log4j, jini, and zookeeper.
#
# Note: This uses commonly specified ports for these services. If you
# want to use different ports, you must make the changes here and in
# build.properties and the bigdata cluster configuration file.

# Open the firewall for log4j's SimpleSocketServer (port must agree
# with build.properties).
iptables -I INPUT -m state --state NEW -p tcp --destination-port 4445 -j ACCEPT

# Open the firewall for jini (for each host running a jini registrar).
# Jini registrar discovery uses tcp/udp 4160. In order to go all the
# way with integrating jini into the firewall you need to be a lot
# smarter, especially about the ports which services will allocate
# when they expose a service endpoint. See
# http://www.ivoa.net/internal/IVOA/Registry19032003/jini_firewall.pdf

# Open jini LUS discovery.
iptables -I INPUT -m state --state NEW -p tcp --destination-port 4160 -j ACCEPT
iptables -I INPUT -m state --state NEW -p udp --destination-port 4160 -j ACCEPT

# Open up tcp for all machines running jini services. This can also
# be expressed as x.y.z.128/29, which allows the addresses
# x.y.z.129 - x.y.z.134 (the usable hosts in that /29) to establish
# tcp connections. See http://www.subnet-calculator.com/ and the man
# page for iptables.
iptables -I INPUT -m state --state NEW -s x.y.z.129 -p tcp -j ACCEPT
iptables -I INPUT -m state --state NEW -s x.y.z.130 -p tcp -j ACCEPT
iptables -I INPUT -m state --state NEW -s x.y.z.131 -p tcp -j ACCEPT

# Open the firewall for zookeeper on each host running a zookeeper
# instance. clientPort=2181; leader port=2888, leader election
# port=3888.
iptables -I INPUT -m state --state NEW -p tcp --destination-port 2181 -j ACCEPT
iptables -I INPUT -m state --state NEW -p tcp --destination-port 2888 -j ACCEPT
iptables -I INPUT -m state --state NEW -p tcp --destination-port 3888 -j ACCEPT

# Save changes to the firewall.
/etc/init.d/iptables save
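You can confirm that the rules are in place (and were saved) with, for example:

iptables -L INPUT -n --line-numbers   # list the active INPUT rules
cat /etc/sysconfig/iptables           # the saved rule set loaded at boot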
Install the JDK on each node in the cluster. The JDK must be installed into the same location on each machine. If you like, you can install it on the shared volume instead. Java 7 is required to run Blazegraph.
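Once the JDK is installed, it is worth verifying that every node sees the same version at the same path, for example:

which java
java -version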
Checkout, configure and install bigdata
Now that you have the cluster nodes prepped, please see the ClusterGuide for details on how to checkout, configure and install bigdata.