CPSC 441, Fall 2014
Lab 8: NFS, DHCP

In this lab, you will reconfigure your virtual network with the following changes: Some user home directories will be on an NFS network file system, so that users can log in on any of the VMs and see the same home directory. Virtual machines will get their IP addresses by DHCP instead of static configuration. I got much of my information for this lab from these two web sites:

         https://help.ubuntu.com/community/SettingUpNFSHowTo
         https://help.ubuntu.com/community/isc-dhcp-server

To complete the lab, you need to finish parts 1 through 5 and show me that it is working. Hopefully you can do that in lab. You also need to complete part 6 and turn in the paragraph that it asks you to write next week.

Preliminaries

First of all, if you have not demonstrated your working configuration from Lab 7, you should do so before making any changes. (If you want to save work from that lab, you might want to snapshot your VMs before making any changes for this lab.)

You will be installing some software in your VMs. It looks like an update of the software lists might be necessary before the installations will work. You should give the following command on each of your VMs before installing any software on that VM:

sudo apt-get update

Without this step, you might get errors when you try to do the installation.

(By the way, if you are tired of using sudo, you can use the command "sudo -i" to get a "root prompt" where you can give commands as root (that is, administrator) without saying "sudo". You can also activate the root user so that you can log in as root in the first place just by setting a password for root. To do that, use the command "sudo passwd root". You can then log in with username root.)

You will probably be happiest in this lab if you "ssh -X" to your virtual machine so that you can edit files using nedit instead of vim.

Part 1: NFS Server

NFS is a common network file system. It allows directories that are "exported" from an NFS server machine to be accessed by client machines. NFS is not very secure, and we are going to use it in its most insecure mode. However, this is appropriate for the small private network that you are simulating. In fact, the setup that you will use is typical for a high performance computing cluster in which all the nodes in the cluster are on a private network.

The server software that you need is in the package nfs-kernel-server. Install the server with the command

sudo apt-get install nfs-kernel-server

Before the server will start, you need to configure it by telling it what directories it should export. First, you should create the directory that will be exported. The name and location of the directory is up to you, but you will be using the directory to store user home directories, so I suggest using /nfshome for the exported directory. To create it, use the command

sudo mkdir /nfshome

To export the directory via NFS, you need to edit the file /etc/exports, which is the configuration file that specifies NFS export directories. A line in this file specifies a directory to be exported, the IP addresses that are allowed to access that directory, and options for the export. Here is the line from my file:

/nfshome  10.0.0.0/8(rw,insecure,no_subtree_check,async)

Edit the file /etc/exports (using sudo), and add a similar line. Use the options shown, but use the network number appropriate for your own internal network. Note that there is no space between the network number and the options.

You will need to restart the NFS server to get it to use the new configuration. To do that, use the command

sudo service nfs-kernel-server restart

It should start without error.
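
If you want to double check what the server is actually exporting, the exportfs command that comes with the NFS server can list the active exports. For example (the options shown will depend on your /etc/exports line):

sudo exportfs -v

You should see /nfshome listed, along with the network number and options from your /etc/exports file.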

Part 2: NFS Clients

You will configure your internal VMs as NFS clients. Start up your client VMs! You first need to install the NFS client software with the command

sudo apt-get install nfs-common

The clients will "mount" the exported directory from the server in their own file system. To do that, you need a directory to serve as the "mount point." The contents of the server directory will appear in the mount point directory, as if they are part of the local file system. You should use the same name for the mount point directory as for the exported directory. (This is not required by NFS, but we want the user home directories to appear at the same location on every machine.) So, create the same directory that you created on the NFS server. For example,

sudo mkdir /nfshome

You want the NFS export to be mounted every time the computer boots. For that to happen you need an entry in the file /etc/fstab (fstab stands for "file system table"). Here is the line from my fstab:

10.0.0.1:/nfshome  /nfshome  nfs  auto  0  0

Edit /etc/fstab and add a similar line. Use the IP address for your own server in place of 10.0.0.1. (And use the correct directory name if you did not name your directories /nfshome.)
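
For reference, here is the same line again with the fields labeled. The IP address and directory names are just the example values from above; use your own. (In fstab, lines starting with # are comments.)

# <server>:<exported dir>   <mount point>   <type>   <options>   <dump>   <pass>
10.0.0.1:/nfshome           /nfshome        nfs      auto        0        0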

After editing the file, you can test the configuration by mounting the file system by hand with the command

sudo mount /nfshome

This seems to take an unreasonable amount of time (about 15 seconds), but it should complete without giving an error message.
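
To confirm that the mount actually worked, you can ask for the disk usage of the mount point; a mounted NFS export shows up with the server's address and directory in the first column. For example (adjust the name if you did not use /nfshome):

df -h /nfshome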

You should repeat the client configuration on your second VM.


If you go back to the server and create a file inside /nfshome, you should also see that file in /nfshome on the clients.

You can't create a file there as root on a client, because the special privileges of the root user don't extend to the directory mounted by NFS. This is called "root squash." You can change this behavior, but it's not a good idea. Users other than root will have the same privileges on the NFS mount that they do on the local file system. In fact, you should try that. Go back to the server and give commands such as the following, using your own username instead of "someuser":

cd /nfshome
sudo mkdir someuser
sudo chown someuser.users someuser

This creates a directory named someuser and changes the ownership of that folder to the user named someuser and the group named users. That user will be able to access the directory from any computer that mounts the NFS export.
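
To see that in action, if "someuser" is the account you are already logged in as on the server, you can create a file in the new directory yourself:

cd /nfshome/someuser
touch hello.txt

The file hello.txt should then show up in /nfshome/someuser on the clients as well. (The file name is just an example; any file will do.)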

Part 3: Create a Network User

The next problem is to create users for the private network who will have home directories in the NFS file system. There should be some automatic way to create a user on the server machine and automatically make it possible for that user to use any machine on the private network. In fact, there is a way to do that, but you won't use it, at least for now. Instead, you will create the user on each machine by hand.

A user can be created with the adduser command. By default, it creates a home directory for the user in the directory /home, with a name equal to the user's username. You need to tell adduser to use a different home directory when creating the user. For example:

sudo  adduser  --home  /nfshome/username  username

For username substitute whatever name you want for the user. Use this command on the server VM to create one or two users. The users also need to exist on the client VMs. When you add the user on a client, you do not want the adduser command to create a new home directory for the user, since the home directory already exists. Use commands such as the following on each of the client VMs:

sudo  adduser  --home  /nfshome/username  --no-create-home  username

You should then be able to log into any of the VMs as that user, and see the same files in the home directory.
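
One thing worth checking: NFS identifies users by their numeric user ID, not by name, so the new user should end up with the same UID on the server and on the clients. (That normally happens automatically here, since adduser hands out UIDs in order and you are creating the users in the same order on each VM.) You can verify it with

id username

on each machine and compare the uid= values.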

Basic user information is stored in the file /etc/passwd, including the user's home directory. You can edit that file to change the user's home directory. You could, for example, change your own home directory to be on the network file system.
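
Each line of /etc/passwd describes one user, with the fields separated by colons; the home directory is the next-to-last field. Here is what a typical entry might look like (the username, UID/GID, and full name are made-up example values):

username:x:1001:1001:Some User,,,:/nfshome/username:/bin/bash

If you prefer not to edit the file by hand, the usermod command can make the same change, for example "sudo usermod -d /nfshome/username username". (Add the -m option if you also want the existing home directory's files moved to the new location.)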

Part 4: DHCP Server

Your next job is to configure a DHCP server, to assign IP addresses to client VMs instead of configuring static IP addresses for the clients. The current DHCP server software for Ubuntu Linux is isc-dhcp-server. Install it on your server VM with the command

sudo apt-get install isc-dhcp-server

Fortunately, the server doesn't start until you have configured it. You will need to edit two files. Edit the file /etc/default/isc-dhcp-server and change the line INTERFACES="" to read

INTERFACES="eth1"

It is very important that the interface that you list here is the interface on the private network. It would be very bad for your server to respond to DHCP requests on the interface connected to the HWS network.
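
If you are not sure which interface is which, you can list the addresses assigned to each interface and check which one has your private network address:

ifconfig

The interface that shows your private, static IP address (10.0.0.1 in my example) is the one that belongs in INTERFACES.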

The other configuration file is /etc/dhcp/dhcpd.conf. This is the main configuration file that specifies such things as the range of IP addresses that the server has available for clients. A DHCP server can also tell clients things like the gateway and DNS servers that they should use, and that information is in the configuration file too. Near the top of the file, you will find a place where you can set the DNS server and domain name. I set mine to use the HWS servers:

# option definitions common to all supported networks...
option domain-name "hws.edu";
option domain-name-servers 172.30.0.101, 172.30.0.110;

You also need to define a "subnet" configuration by adding a subnet definition to the file. Here is what I added to the configuration on my server:

subnet 10.0.0.0 netmask 255.0.0.0 {
  range 10.0.1.1 10.0.1.254;
  option routers 10.0.0.1;
  option subnet-mask 255.0.0.0;
}

The "range" gives the start and the end of a range of addresses that can be given to clients. The "routers" specifies the default gateway to be used by the clients, and the "subnet-mask" is the network mask that the clients should use. All of this information will be different for your network. Remember that you have set up IP forwarding and NAT to work with certain network and IP numbers, and you need to use something compatible here. Edit the file to set up the configuration for your server. After editing, you can test whether the syntax of the file is correct with the command

dhcpd -t

It took me a couple tries to get it correct. Once you have the configuration ready, you can start the server with

sudo service isc-dhcp-server start
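
If the start command reports a failure, the error messages from dhcpd go to the system log, which is often the quickest way to find out what is wrong. For example:

grep dhcpd /var/log/syslog | tail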

Part 5: DHCP Clients

You want to change the network configuration of your client VMs so that they use DHCP instead of static configuration. (Note that the server must still use static configuration, since it's working as the router for the network and must be at a fixed address.) Essentially, you want to reset the configuration to its original state, before you gave the VM a static IP address. Edit the file /etc/network/interfaces and change the network configuration for eth0 so that it reads simply

auto eth0
iface eth0 inet dhcp

Remove the extra lines about gateway, DNS server, etc.

Reboot the client. This is the moment of truth. If everything is done correctly, the client will have an IP address assigned by DHCP. Check it with the ifconfig command. Back on the server, you can look at the file /var/lib/dhcp/dhcpd.leases, which contains information about clients that have been configured through the server.
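
If you ever want the client to pick up a new address without rebooting, one option is to release its lease and request a new one by hand (this assumes the private interface is eth0, as above):

sudo dhclient -r eth0
sudo dhclient eth0

Then check the result with ifconfig, just as after the reboot.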

The NFS export should also still be working. You should see the exported files on the client. And you should still be able to access the HWS network from the client.

You should show me that you have this all working!

(Note that when you want to restart your system, you need to start up the server VM first and give it time to boot before starting the clients, since its DHCP and NFS services must be available when the client boots.)

Part 6: DNS

You might already have a DNS server running on your server VM from Lab 4. If it's not working, you might have to go back and review that lab. You might need to review the lab in any case to finish this assignment.

For a final exercise, I would like you to make your client VMs use the DNS server that is running on the server VM. The server VM should be the default, local DNS server for the client VMs. The server will, in turn, be getting all of its information from the HWS servers.

Try to make this idea work. Write a paragraph explaining exactly what you've done, and turn it in next week.