Campus network connectivity is provided and maintained by OIT. Currently, there are dual paths between campus and the Internet, providing redundancy and load balancing; each link provides at least 1 Gbps connectivity. A separate 1 Gbps connection provides access to Internet2. There is also a 1 Gbps connection to ESnet, although accessibility is restricted to certain internal networks. More information concerning these connections can be found on the OIT web site, http://www.net.princeton.edu/internet.html. For current information on the internal campus network, please see http://www.net.princeton.edu/network-architecture.html.
Head nodes are all connected to the campus network with a 1 Gbps connection; the machines tigressdata and della4 are connected with 10 Gbps links. The internal network configuration varies from cluster to cluster. All clusters use a 1 Gbps private network for local communication, while the NFS servers are now connected using InfiniBand. A high-performance, low-latency InfiniBand network is also available for MPI parallel communication, as in the sketch below.
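To make the MPI role of the InfiniBand network concrete, here is a minimal sketch of point-to-point communication using mpi4py. When the two ranks are placed on different nodes, the exchange travels over the InfiniBand fabric. The use of mpi4py and the srun launch line are assumptions for illustration, not a statement of what is installed on any particular cluster.

    # ping.py -- a minimal two-rank MPI exchange (a sketch, assuming mpi4py
    # is available; launched with something like: srun -N 2 -n 2 python ping.py)
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        comm.send("ping", dest=1, tag=0)      # rank 0 sends to rank 1
        reply = comm.recv(source=1, tag=1)    # then waits for the reply
        print("rank 0 got:", reply)
    elif rank == 1:
        msg = comm.recv(source=0, tag=0)      # rank 1 receives the message
        comm.send("pong", dest=0, tag=1)      # and answers rank 0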
All private networks connect to the central GPFS storage (/tigress) over the InfiniBand network. This is also a private network, with fibre connections between the data center and the Lewis Library, where some machines are currently housed. This private network is also used for /tigress backups.
Globus Data Transfer
Globus is an infrastructure for transferring large amounts of data between Princeton and any remote system that also participates in the Globus system. Research Computing supports Globus data transfer to and from the GPFS-based /tigress file system and the scratch disk space attached to the Research Computing cluster systems. For information about using Globus at Princeton, see the Globus Data Transfer at Princeton web page. A sketch of driving a transfer programmatically follows.
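For users who prefer to script transfers rather than use the web interface, the sketch below uses the globus-sdk Python package to submit one asynchronous transfer. The client ID, the two endpoint UUIDs, and the file paths are placeholders, not real Princeton values; consult the Globus Data Transfer at Princeton page for the actual endpoint names.

    # A minimal sketch of submitting a Globus transfer with the globus-sdk
    # package (pip install globus-sdk). All IDs and paths below are
    # placeholders for illustration only.
    import globus_sdk

    CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"      # placeholder: register your own app
    SRC_ENDPOINT = "SOURCE-ENDPOINT-UUID"        # placeholder: e.g. a campus endpoint
    DST_ENDPOINT = "DESTINATION-ENDPOINT-UUID"   # placeholder: the remote system

    # Interactive native-app login: prints a URL, then waits for the auth code.
    auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
    auth_client.oauth2_start_flow()
    print("Log in at:", auth_client.oauth2_get_authorize_url())
    code = input("Enter the authorization code: ").strip()
    tokens = auth_client.oauth2_exchange_code_for_tokens(code)
    transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

    # Build a transfer client and queue one file from /tigress.
    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
    )
    task = globus_sdk.TransferData(
        source_endpoint=SRC_ENDPOINT, destination_endpoint=DST_ENDPOINT
    )
    task.add_item("/tigress/username/results.tar", "/home/username/results.tar")

    # Globus runs the transfer asynchronously; the task_id lets you check status.
    result = tc.submit_transfer(task)
    print("Submitted transfer, task_id:", result["task_id"])

Because the transfer runs server-side, the script can exit immediately after submission; Globus retries failed files and reports completion through its own task monitoring.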