docs: write about GPU support

Masaki Yatsu
2025-11-21 00:39:48 +09:00
parent 585c0f5ba3
commit 1835600c85
2 changed files with 7 additions and 24 deletions


@@ -38,6 +38,12 @@ A remotely accessible Kubernetes home lab with OIDC authentication. Build a mode
- **[Longhorn](https://longhorn.io/)**: Distributed block storage
- **[MinIO](https://min.io/)**: S3-compatible object storage
### GPU Support (Optional)
- **[NVIDIA Device Plugin](https://github.com/NVIDIA/k8s-device-plugin)**: GPU resource management for Kubernetes
  - Exposes NVIDIA GPUs to Kubernetes as schedulable resources
  - Required for GPU-accelerated workloads in JupyterHub and other applications
### Data & Analytics (Optional)
- **[JupyterHub](https://jupyter.org/hub)**: Interactive computing with collaborative notebooks
@@ -178,6 +184,7 @@ Multi-user platform for interactive computing:
- **Keycloak Authentication**: OAuth2 integration with SSO
- **Persistent Storage**: User notebooks stored in Longhorn volumes
- **Collaborative**: Shared computing environment for teams
- **GPU Support**: CUDA-enabled notebooks with nvidia-device-plugin integration
[📖 See JupyterHub Documentation](./jupyterhub/README.md)


@@ -193,30 +193,6 @@ spec:
      nvidia.com/gpu: 1
```
### Using GPUs in JupyterHub
Configure JupyterHub to allow GPU access for notebook servers:
```yaml
# jupyterhub values.yaml
singleuser:
  runtimeClassName: nvidia
  extraResource:
    limits:
      nvidia.com/gpu: "1"
```
After deploying JupyterHub with this configuration, users can access GPUs in their notebooks:
```python
import torch
# Check GPU availability
print(torch.cuda.is_available()) # True
print(torch.cuda.device_count()) # 1
print(torch.cuda.get_device_name(0)) # NVIDIA GeForce RTX 4070 Ti
```
### Multiple GPUs
To request multiple GPUs:
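The example that followed was removed by this commit; a minimal sketch of such a pod spec, assuming the NVIDIA device plugin is deployed (the pod name and CUDA image tag here are illustrative, not from the original doc):

```yaml
# Sketch: a pod requesting two GPUs via the device plugin's
# nvidia.com/gpu extended resource (names/image are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: multi-gpu-job
spec:
  runtimeClassName: nvidia
  containers:
    - name: cuda
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 2
```

Note that `nvidia.com/gpu` is an extended resource: the value must be a whole number (no fractional GPUs by default), and Kubernetes treats the limit as the request for scheduling.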