From 1835600c8513862886dbe0b9ba6d23483cab8a3c Mon Sep 17 00:00:00 2001
From: Masaki Yatsu
Date: Fri, 21 Nov 2025 00:39:48 +0900
Subject: [PATCH] docs: write about GPU support

---
 README.md                      |  7 +++++++
 nvidia-device-plugin/README.md | 24 ------------------------
 2 files changed, 7 insertions(+), 24 deletions(-)

diff --git a/README.md b/README.md
index 7f5f193..d111730 100644
--- a/README.md
+++ b/README.md
@@ -38,6 +38,12 @@ A remotely accessible Kubernetes home lab with OIDC authentication. Build a mode
 - **[Longhorn](https://longhorn.io/)**: Distributed block storage
 - **[MinIO](https://min.io/)**: S3-compatible object storage
 
+### GPU Support (Optional)
+
+- **[NVIDIA Device Plugin](https://github.com/NVIDIA/k8s-device-plugin)**: GPU resource management for Kubernetes
+  - Exposes NVIDIA GPUs to Kubernetes as schedulable resources
+  - Required for GPU-accelerated workloads in JupyterHub and other applications
+
 ### Data & Analytics (Optional)
 
 - **[JupyterHub](https://jupyter.org/hub)**: Interactive computing with collaborative notebooks
@@ -178,6 +184,7 @@ Multi-user platform for interactive computing:
 - **Keycloak Authentication**: OAuth2 integration with SSO
 - **Persistent Storage**: User notebooks stored in Longhorn volumes
 - **Collaborative**: Shared computing environment for teams
+- **GPU Support**: CUDA-enabled notebooks with nvidia-device-plugin integration
 
 [📖 See JupyterHub Documentation](./jupyterhub/README.md)
 
diff --git a/nvidia-device-plugin/README.md b/nvidia-device-plugin/README.md
index c3ddfa7..cbd5bfc 100644
--- a/nvidia-device-plugin/README.md
+++ b/nvidia-device-plugin/README.md
@@ -193,30 +193,6 @@ spec:
           nvidia.com/gpu: 1
 ```
 
-### Using GPUs in JupyterHub
-
-Configure JupyterHub to allow GPU access for notebook servers:
-
-```yaml
-# jupyterhub values.yaml
-singleuser:
-  runtimeClassName: nvidia
-  extraResource:
-    limits:
-      nvidia.com/gpu: "1"
-```
-
-After deploying JupyterHub with this configuration, users can access GPUs in their notebooks:
-
-```python
-import torch
-
-# Check GPU availability
-print(torch.cuda.is_available()) # True
-print(torch.cuda.device_count()) # 1
-print(torch.cuda.get_device_name(0)) # NVIDIA GeForce RTX 4070 Ti
-```
-
 ### Multiple GPUs
 
 To request multiple GPUs: