XRootD is a high-performance storage service providing robust data access to files of any format. Its scalability makes it an excellent solution for data access, even at the petabyte scale.
XRootD was originally developed by SLAC/SCCS for the BaBar collaboration in order to access ROOT files; it has since become part of the standard ROOT distribution. That said, XRootD also provides performant access to other data formats.
The so-called “generic” XRootD instance at CC-IN2P3 provides transparent read access to files stored on the HPSS mass storage system, sparing you from using RFIO commands on the client side. The XRootD servers in this instance act as a disk cache, which reduces access latency to HPSS.
Accessing HPSS data through this XRootD service is preferable to using the native rfio commands for two reasons:
- HPSS access performance is optimized through an internal scheduling mechanism;
- Data already present (cached) on the XRootD servers can be read directly, without a request to the HPSS service.
Example of retrieving data:
% ccenv xrootd
% xrdcp root://ccxroot.in2p3.fr:1999//hpss/in2p3.fr/<path> <local_file_name>
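Before copying, it can be useful to check that a file is visible to the service (for example, to verify the path). A possible sketch using the standard xrdfs client shipped with XRootD is shown below; the host and port are taken from the example above, and `<path>` must be replaced with your actual HPSS path. The exact output format depends on your XRootD client version.

```shell
# Load the XRootD client environment at CC-IN2P3
ccenv xrootd

# Query file metadata (size, flags, modification time) on the generic instance;
# replace <path> with your actual HPSS path
xrdfs root://ccxroot.in2p3.fr:1999 stat /hpss/in2p3.fr/<path>

# Then copy the file locally as shown above
xrdcp root://ccxroot.in2p3.fr:1999//hpss/in2p3.fr/<path> <local_file_name>
```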
You can delete files stored in the XRootD disk cache using the xrdRemoteClean script, available on the CC-IN2P3 interactive nodes:
% /usr/bin/xrdRemoteClean /hpss/in2p3.fr/<path>
The ALICE experiment bases its data management model on the XRootD protocol and benefits from two native XRootD instances at CC-IN2P3:
- a pure disk storage instance;
- an instance serving as a cache in front of the HPSS tape system.
These two instances are used as part of the LHC computing grid, and correspond to the Storage Elements ALICE::CCIN2P3::SE and ALICE::CCIN2P3::TAPE, respectively.