Managing Jupyter kernels
Jupyter notebook servers run via kernels, which define the execution environment (Python or C++ version, required libraries and dependencies, GPU access, …).
On this platform, two types of kernels are available:
default provided kernels;
custom kernels, which you can create according to your needs.
If you are a beginner or have no specific requirements, use the provided kernels.
Provided kernels
The provided kernels are available directly from the login interface via the dropdown menu [CC_Provided_kernels]. They allow you to run various predefined environments without having to set them up yourself: simply select the desired environment to make it available in your notebook server.
These environments are suitable for quick exploration in a validated but fixed setup (and not necessarily a reproducible one). They are maintained in a controlled manner, and their versions evolve slowly to ensure stability.
Three types of environments are available:
| Usage | Description | Available modules |
|---|---|---|
| scientific | Scientific computing and Machine Learning | |
| quantum-computing | Quantum computing based on Qiskit (IBM) | Full |
| RAG/LLM | Chatbot / LLM | |
A collection of ready-to-use examples categorized by usage
Two Python environments are provided by default (grayed-out checkboxes). The other kernels must be selected to be added to your session.
The list and versions of the main modules available in a given environment can be viewed by hovering over the information icon next to the kernel name.
GPU compatibility (depending on the model) is also available in the table (dropdown menu). Refer to the information tooltip to check the CUDA version when applicable, as well as the available GPU frameworks.
Add a custom kernel
Creating a custom kernel is recommended in the following cases:
use of specific libraries;
need for reproducibility (and portability);
use of GPUs or specialized frameworks (such as JAX or PyTorch), or specific versions.
Two (main) approaches are possible:
- Simple Python environment based on venv, for quick usage
- Managed environment based on uv, conda or pixi, for robust and reproducible projects (the latter two allow the creation of complex environments, including those relying on non-Python software)
Simple environment based on venv
This method is particularly suitable if you already have a working environment. It relies on creating a Python environment using venv. It is a simple procedure, but not very robust and not reproducible. The full procedure is as follows:
% python -m venv <my env>
% source <my env>/bin/activate
% pip install ipykernel pyzmq
Once the required modules are installed in the environment, you can register the new environment as a Jupyter kernel as follows:
% python -m ipykernel install --user --name <my env> --display-name "Python (<my env>)"
The display_name is the name you want to give to your custom kernel and will be used to identify it in the JupyterLab interface.
This command creates a kernel.json file placed in the user's HOME directory, under $HOME/.local/share/jupyter/kernels/<my env> (the name passed via --name). When no name is given, the default directory name is python3; any name can be used as long as the directory contains a kernel.json file. This is the file that Jupyter looks for to build the list of available kernels, which then become directly accessible in Jupyter.
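Registered kernels can be listed with `jupyter kernelspec list`. As a minimal stdlib sketch of what Jupyter does when it scans this directory (the `list_kernels` helper below is hypothetical, for illustration only):

```python
from pathlib import Path

def list_kernels(base: Path) -> dict:
    """Map kernel name -> kernel.json path under a kernels directory."""
    return {p.parent.name: p for p in base.glob("*/kernel.json")}

# Default location used by `ipykernel install --user`
user_kernels = Path.home() / ".local/share/jupyter/kernels"
print(list_kernels(user_kernels))
```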
You can use an environment management tool to define dependencies declaratively, making the environment more robust and, above all, reproducible. We will consider two cases:
- a Python-only environment, in which case you can use uv;
- an environment built using a package manager, based on the conda or pixi CLI.
uv (official documentation) is a Python tool that enables fine-grained and reproducible environment management. It is preferably installed within its own Python installation.
% python -m pip install uv
uv allows you to create a Python project and maintain an exact list of dependencies (including their versions).
% uv init <my project>
% cd <my project>
% uv add ipykernel pyzmq ipywidgets
This will generate the Python project configuration file, pyproject.toml, which can be used to recreate the environment identically.
Example of a pyproject.toml
[project]
name = "<my project>"
version = "0.1.0"
dependencies = [
"ipykernel",
"pyzmq",
"ipywidgets",
"ipympl",
"plotly"
]
Once the environment is created, you can register the corresponding Jupyter kernel (i.e. generate the kernel.json file) as described previously. With uv, run the command inside the project environment:
% uv run python -m ipykernel install --user --name <name> --display-name "Python (<name>)"
You can use software package managers based on the CLI tools conda or pixi to create and manage your environments. They provide a complete system environment (not just a Python environment). They can be particularly useful when the operating system does not provide the required Python dependencies (or provides outdated versions). As the Python ecosystem evolves rapidly, some modules require recent system libraries (for example, GPU environments or those used to run Large Language Models).
conda is a CLI used in the Anaconda, Miniconda, and Micromamba distributions. These distributions tend to consume significant disk space and do not always handle application dependencies very well. Additionally, some software channels are subject to licensing restrictions; please refer to the usage recommendations from Anaconda.
Note
micromamba actually uses the mamba CLI, which is a C++ implementation of conda. It is therefore faster and more efficient than the conda CLI.
In general, pixi is preferred. It is a Rust-based CLI similar to conda. It is more robust and faster, allows the creation of fully reproducible environments, and, like the Anaconda variants, provides access to and management of software packages from the conda-forge repository, which contains most of the packages required for scientific computing.
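As an illustration, a pixi-based workflow mirroring the venv procedure above might look like this (the project name is a placeholder; refer to the pixi documentation for the exact command set):

```
% pixi init <my project>
% cd <my project>
% pixi add python ipykernel pyzmq
% pixi run python -m ipykernel install --user --name <my project> --display-name "Python (<my project>)"
```

Running the registration command through `pixi run` ensures it uses the Python interpreter of the pixi environment rather than the system one.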
To ensure the environment functions correctly, it may be necessary to modify several environment variables. This can be done via the env field in the kernel.json file. Alternatively, a helper script can be used to allow finer control over the environment configuration, especially when the environment depends on additional software components that need to be set up.
Dependencies
In order for the kernel to be instantiated and communicate with the Jupyter interface, the following two modules (and their dependencies) are essential:
ipykernel
pyzmq
Important
These two modules must be available in your environment; otherwise, communication between the kernel and Jupyter will not be possible (the kernel will then appear in the Unknown state).
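A quick check, runnable in a notebook cell or from the command line, to verify that these prerequisites are importable (note that pyzmq is imported under the name zmq):

```python
import importlib.util

def missing_modules(mods):
    """Return the subset of mods that cannot be imported."""
    return [m for m in mods if importlib.util.find_spec(m) is None]

# pyzmq is imported as "zmq"
print("Missing:", missing_modules(["ipykernel", "zmq"]) or "none")
```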
To enable interactivity when visualizing data (particularly for matplotlib), the following modules are required:
ipywidgets
ipympl
plotly
Note
ipywidgets and ipympl are two modules that provide various widgets, enabling interaction with plots created using matplotlib. plotly is a more general JavaScript-based layer and can therefore provide widgets for languages other than Python, such as Julia.
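For instance, with ipywidgets and ipympl installed, interactive matplotlib figures are enabled by placing the %matplotlib widget cell magic at the top of a notebook cell; a minimal sketch:

```python
# In a notebook, first enable the ipympl backend with the cell magic:
#   %matplotlib widget
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 200)
ax.plot(x, np.sin(x), label="sin(x)")
ax.set_xlabel("x")
ax.legend()
```

With the widget backend active, the figure gains pan/zoom controls directly in the notebook output.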
For code quality (optional):
black
isort
yapf
Creation of the kernel.json file
This kernel.json file, which is required to instantiate a Python kernel, will be placed, by default, in $HOME/.local/share/jupyter/kernels/python3 and will look like this:
{
  "argv": [
    "/pbs/software/redhat-9-x86_64/jnp/venvs/py3.13.11/bin/python",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}"
  ],
  "display_name": "Python (<my env>)",
  "language": "python"
}
To avoid conflicts and make this environment more robust, it is possible to enforce the PYTHONPATH variable as follows (this will prevent Python from searching for modules in other locations):
{
  "argv": [
    "/pbs/software/redhat-9-x86_64/jnp/venvs/py3.13.11/bin/python",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}"
  ],
  "display_name": "Python 3.13.11 - Scientific",
  "language": "python",
  "env": {
    "PYTHONPATH": "/pbs/software/redhat-9-x86_64/jnp/venvs/py3.13.11/lib/python3.13/site-packages"
  }
}
Note
In this env field, it is also possible to redefine any environment variable.
Warning
An incorrect configuration of PYTHONPATH (or any other environment variable) may lead to various issues when running the kernel, such as compatibility problems (wrong module version or module not found).
Note
It is entirely possible to create a new directory in $HOME/.local/share/jupyter/kernels/, for example python3.9, and place a kernel.json file in it that follows the format shown above. Strict adherence to JSON syntax is essential for Jupyter to correctly read the corresponding kernel configuration.
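One simple way to guard against syntax mistakes is to round-trip the spec through Python's json module (the interpreter path and display name below are illustrative placeholders):

```python
import json

# Illustrative kernel spec; adapt argv and display_name to your environment
spec = {
    "argv": ["/path/to/env/bin/python", "-m", "ipykernel_launcher",
             "-f", "{connection_file}"],
    "display_name": "Python (<my env>)",
    "language": "python",
}

text = json.dumps(spec, indent=2)
json.loads(text)  # raises ValueError if the JSON is malformed
print(text)
```

An existing file can be checked the same way, e.g. with `python -m json.tool kernel.json`.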
Usage of the jupyter-helper.sh script
In order to use a helper script jupyter-helper.sh, the kernel.json file must be modified as follows:
{
  "display_name": "conda3",
  "language": "python",
  "argv": [
    "<PATH TO THE SCRIPT>/jupyter-helper.sh",
    "-f",
    "{connection_file}"
  ]
}
This jupyter-helper.sh script provides all the details required to set up the environment of the custom kernel. It will look as follows:
#!/bin/bash
source /usr/share/Modules/init/bash

# Use of Environment Modules to set up conda and activate the correct env
module load Programming_Languages/anaconda/3.9
conda activate <your_env>

# Handle the environment variables as required
unset PYTHONPATH
export PYTHONPATH=/path/to/your_env:$PYTHONPATH

# Instantiate the kernel within the set-up environment
exec python -m ipykernel_launcher "$@"
The first part configures the environment (here we rely on an Anaconda environment), and the last line enables the kernel instantiation. In order to activate this kernel, the script must be executable:
% chmod +x jupyter-helper.sh
This mechanism is particularly useful when the environment configuration is complex:
- the environment depends on other software or system modules (configured using module load),
- multiple environment variables need to be configured.
Note
For some software, such as ROOT/C++, CC-IN2P3 provides ready-to-use kernels. They are located in the following software directory: /pbs/software/redhat-9-x86_64/jupyter_kernels.
If you need help writing the jupyter-helper.sh and kernel.json files to set up your custom kernels, please contact user support.
GPU kernels
A GPU environment has specific characteristics:
the version of the CUDA library that can be used depends on the GPU model (and the installed drivers),
compatibility between the CUDA version and the desired GPU framework is required.
CC-IN2P3 maintains CUDA drivers up to date and only provides versions that are compatible with all available GPUs. However, it is still necessary to verify compatibility between these drivers, the CUDA libraries, and the version of the framework to be installed using those libraries.
There are two main GPU frameworks: PyTorch and JAX. These frameworks are provided by default in some kernels.
It is therefore necessary to install the correct version of the framework depending on the available CUDA libraries, and ensure compatibility with the installed drivers (maintained by CC-IN2P3). To do so, it is recommended to use pip as specified in the installation instructions of their respective official documentation.
Below are examples of installations for JAX on Nvidia GPUs and PyTorch on Nvidia GPUs. Note that in both cases the required CUDA library version is specified, which also allows downloading the corresponding version if needed, ensuring compatibility between the framework and the CUDA libraries.
% pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
% pip install --upgrade pip
# NVIDIA CUDA 13 installation
# Note: wheels only available on linux.
% pip install -U "jax[cuda13]"
To ensure compatibility with the drivers, and therefore the correct use of the GPU, it is necessary to run a simple computation. Below is an example of matrix multiplication using JAX, which can be used to verify that the GPU is properly detected, that JIT compilation via XLA is effective, and that the computation is indeed executed on the GPU.
import jax
import jax.numpy as jnp
# Print available devices
print("Devices:", jax.devices())
# Create test matrices
key = jax.random.PRNGKey(0)
key_a, key_b = jax.random.split(key)  # distinct keys so A and B differ
A = jax.random.normal(key_a, (2000, 2000))
B = jax.random.normal(key_b, (2000, 2000))
# JIT-compiled matrix multiplication
@jax.jit
def matmul(a, b):
return a @ b
# Run once (triggers compilation)
C = matmul(A, B)
# Force computation
C.block_until_ready()
print("Done.")
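A similar sanity check can be written for PyTorch, assuming it was installed as shown above (this sketch falls back to the CPU when no GPU is detected):

```python
import torch

# Check whether a GPU is visible to PyTorch
print("CUDA available:", torch.cuda.is_available())
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device)

# Small matrix multiplication to confirm the computation runs on the device
a = torch.randn(512, 512, device=device)
b = torch.randn(512, 512, device=device)
c = a @ b
if device.type == "cuda":
    torch.cuda.synchronize()  # wait for the GPU computation to finish
print("Result shape:", tuple(c.shape))
```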
Troubleshooting
The kernel does not start
Check that the ipykernel and pyzmq modules are properly installed in the active environment.
Missing modules
Check the active environment; typically, modules are installed under <PATH/TO/PYTHON/ENV>/lib/python3.11/site-packages/ (where python3.11 corresponds to the Python version installed and available in this environment).
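From a notebook cell, you can confirm which interpreter and module search path the kernel actually uses (the site-packages entries should point into your environment):

```python
import sys

# The interpreter path identifies the environment the kernel runs in
print("Interpreter:", sys.executable)

# Modules are searched along sys.path; installed packages normally live
# under a .../lib/pythonX.Y/site-packages entry of the environment
site_dirs = [p for p in sys.path if "site-packages" in p]
print("Site-packages entries:", site_dirs)
```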
GPU issues
from a Jupyter terminal, verify that you have access to a GPU using the nvidia-smi command;
check compatibility between GPU drivers, CUDA libraries, and the installed framework.