What is the PVFS2 filesystem?
The PVFS2 filesystem, now called OrangeFS, is a parallel filesystem used to store data temporarily and should be treated as scratch space. There are no backups. Data is striped across multiple disks on different nodes in the cluster, which increases effective bandwidth and improves access speeds. This design also carries some volatility: a single I/O node failure can potentially destroy the entire filesystem. While such failures are rare, they have happened in the past, and users should move important data to more permanent storage as soon as possible. All of the I/O server nodes are dual purpose, acting as PVFS2 servers as well as general compute nodes.
Binary executables should NOT be run from this filesystem. They will most likely fail to start, and even if they do start, erroneous execution is virtually guaranteed.
Serial jobs should NOT use this filesystem. Performance will be worse than using local scratch space or an NFS-mounted common filesystem. Performance gains are achieved only when reading and writing from parallel code, such as through the MPI-IO (ROMIO) calls, or when each process in a parallel job needs to perform its own I/O.
There are two ways to access files on this filesystem. The first is the kernel interface, which translates standard Unix system calls into PVFS2 operations. This is by far the easiest method of access, but it comes at the cost of speed, since each transaction must be translated into native PVFS2 requests.
The second method is to use the native PVFS2 calls and utilities. These contact each of the I/O server nodes directly, so the file blocks can be accessed without going through the kernel translation layer. They resemble the familiar Unix commands but carry a "pvfs2-" prefix (for example, pvfs2-cp and pvfs2-ls), and they are the preferred method of access.
System specific notes
adroit: All nodes act as I/O servers, with no hardware data protection.
della: There are 16 I/O server nodes with mirrored RAID 1 disks; the PVFS2 storage simply shares a partition of each node's system disk.
woodhen: There are 32 I/O nodes, each with a mirrored RAID 1 disk configuration to provide redundancy against single-disk failures.