docs: write about GPU support
@@ -38,6 +38,12 @@ A remotely accessible Kubernetes home lab with OIDC authentication. Build a mode
- **[Longhorn](https://longhorn.io/)**: Distributed block storage
- **[MinIO](https://min.io/)**: S3-compatible object storage

### GPU Support (Optional)

- **[NVIDIA Device Plugin](https://github.com/NVIDIA/k8s-device-plugin)**: GPU resource management for Kubernetes
  - Exposes NVIDIA GPUs to Kubernetes as schedulable resources
  - Required for GPU-accelerated workloads in JupyterHub and other applications

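Once the device plugin is running, a pod consumes a GPU through its resource limits. A minimal sketch (the pod name, container image, and runtime class below are illustrative assumptions, not taken from this repo):

```yaml
# Sketch: a pod requesting one GPU via the device plugin.
# Pod name and container image are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  runtimeClassName: nvidia   # assumes the NVIDIA runtime class is installed
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # schedulable resource exposed by the device plugin
```

The scheduler will only place such a pod on a node where the device plugin advertises a free `nvidia.com/gpu`.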
### Data & Analytics (Optional)

- **[JupyterHub](https://jupyter.org/hub)**: Interactive computing with collaborative notebooks

@@ -178,6 +184,7 @@ Multi-user platform for interactive computing:

- **Keycloak Authentication**: OAuth2 integration with SSO
- **Persistent Storage**: User notebooks stored in Longhorn volumes
- **Collaborative**: Shared computing environment for teams
- **GPU Support**: CUDA-enabled notebooks with nvidia-device-plugin integration

[📖 See JupyterHub Documentation](./jupyterhub/README.md)

@@ -193,6 +193,30 @@ spec:
          nvidia.com/gpu: 1
```

### Using GPUs in JupyterHub

Configure JupyterHub to allow GPU access for notebook servers:

```yaml
# jupyterhub values.yaml
singleuser:
  runtimeClassName: nvidia
  extraResource:
    limits:
      nvidia.com/gpu: "1"
```

After deploying JupyterHub with this configuration, users can access GPUs in their notebooks:

```python
import torch

# Check GPU availability
print(torch.cuda.is_available())      # True
print(torch.cuda.device_count())      # 1
print(torch.cuda.get_device_name(0))  # NVIDIA GeForce RTX 4070 Ti
```

### Multiple GPUs

To request multiple GPUs:
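A minimal sketch following the single-GPU values.yaml above: raise the `nvidia.com/gpu` count (assuming the same chart keys; the node must actually have that many free GPUs for a user pod to schedule):

```yaml
# jupyterhub values.yaml (sketch, same keys as the single-GPU example above)
singleuser:
  runtimeClassName: nvidia
  extraResource:
    limits:
      nvidia.com/gpu: "2"  # two GPUs per user server
```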