
Integrations — JupyterLab

Integration overview

The tdp-jupyter chart integrates with Apache Spark through its tdpSparkIntegration values block and the bundled tdp-spark subchart.

Spark

tdpSparkIntegration:
  enabled: true
  configMap:
    sparkConfig:
      "spark.executor.instances": "2"
      "spark.executor.memory": "4g"
      "spark.executor.cores": "3"

tdp-spark:
  spark:
    worker:
      replicaCount: 2
      resources:
        limits:
          cpu: 4
          memory: 6Gi

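Both stanzas above belong in a single values file. As a quick sketch, you can write them out and sanity-check the keys before installing (the file name my-values.yaml matches the -f flag used in the install command; the heredoc is only a convenience, not a required step):

```shell
# Sketch: write both values stanzas into one file for helm -f.
cat > my-values.yaml <<'EOF'
tdpSparkIntegration:
  enabled: true
  configMap:
    sparkConfig:
      "spark.executor.instances": "2"
      "spark.executor.memory": "4g"
      "spark.executor.cores": "3"

tdp-spark:
  spark:
    worker:
      replicaCount: 2
      resources:
        limits:
          cpu: 4
          memory: 6Gi
EOF

# Quick sanity check that the expected top-level keys are present.
grep -E 'tdpSparkIntegration:|tdp-spark:|replicaCount:' my-values.yaml
```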
Terminal input
helm upgrade --install <release> \
oci://registry.tecnisys.com.br/tdp/charts/tdp-jupyter \
-n <namespace> \
-f my-values.yaml

Verification

Confirm that the integration ConfigMap was created and that the Spark worker pods are running:

Terminal input
kubectl get configmap tdp-spark-jupyter-integration -n <namespace> -o yaml
kubectl get pods -n <namespace> -l app.kubernetes.io/name=spark
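To check the individual Spark settings rather than reading the full manifest, you can save the ConfigMap and filter it. The sample manifest below is only an illustration of the expected shape; the actual data key and layout depend on how the chart renders sparkConfig:

```shell
# Hypothetical sample of what the rendered ConfigMap data may look like;
# the real key names depend on the chart's templates.
cat > cm-sample.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: tdp-spark-jupyter-integration
data:
  spark-defaults.conf: |
    spark.executor.instances 2
    spark.executor.memory 4g
    spark.executor.cores 3
EOF

# Against a live cluster you would filter the real object instead:
#   kubectl get configmap tdp-spark-jupyter-integration -n <namespace> -o yaml \
#     | grep 'spark.executor'
grep 'spark.executor' cm-sample.yaml
```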

Cleanup

Uninstall the release and remove the integration ConfigMap:

Terminal input
helm uninstall <release> -n <namespace>
kubectl delete configmap tdp-spark-jupyter-integration -n <namespace>

Configuring an Iceberg catalog or a Hive Metastore inside this chart's sparkConfig is outside the scope of this documentation; refer to the Spark and Iceberg charts and the documentation for your environment.