Compare commits

47 commits

SHA1 | Message | Date
9851a131d3 yt-dlp-bot: deploy update to b9088d9 2026-02-28 21:42:39 +00:00
20473263ae authentik: enable AUTHENTIK_POSTGRESQL__DISABLE_SERVER_SIDE_CURSORS
Per https://docs.goauthentik.io/install-config/configuration/#using-a-postgresql-connection-pooler
2026-02-26 09:20:16 -05:00
cb3528b17b Merge pull request 'chore(deps): update helm release gitea to v12.5.6' (#90) from renovate/gitea-12.x into main
Reviewed-on: #90
2026-02-26 02:07:26 +00:00
1a25c3fcf3 chore(deps): update helm release gitea to v12.5.6 2026-02-26 02:00:13 +00:00
b2d6545070 authentik: add files support 2026-02-25 18:52:17 -05:00
053c36a877 authentik: increase replicas to 3 after stability testing 2026-02-25 18:45:51 -05:00
968ef8d621 authentik: use dedicated pooler 2026-02-25 18:39:10 -05:00
4215d89d0b Revert "authentik: raise workers to 3"
This reverts commit 3b38a2c3a9.
2026-02-24 22:38:15 -05:00
3b38a2c3a9 authentik: raise workers to 3 2026-02-24 22:34:06 -05:00
42fd1e5a92 authentik: adjust probes from default 2026-02-24 22:26:36 -05:00
8d6b3eb6b6 authentik: try upgrade again with only 1 replica 2026-02-24 22:08:19 -05:00
442ba532cd Revert "chore(deps): update helm release authentik to v2026"
Reverting Authentik update as pods have been crashing
2026-02-24 21:52:28 -05:00
c71e4765e1 Merge pull request 'chore(deps): update helm release authentik to v2026' (#88) from renovate/authentik-2026.x into main
Reviewed-on: #88
2026-02-25 02:38:47 +00:00
b1e62ed191 chore(deps): update helm release authentik to v2026 2026-02-24 22:00:15 +00:00
5855b78976 yt-dlp-bot: deploy update to f688ee0 2026-02-21 22:27:54 +00:00
d849c4ca19 gitea-runner: ram scratch space 2026-02-19 18:06:15 -05:00
101be3512a attic: enable S3 support 2026-02-18 19:29:12 -05:00
893f10a45c gitea-runner: secure with rootless 2026-02-18 19:28:58 -05:00
11f881c24b attic: update tag to c4ffb5e86e928572e867bd3f81545293313e0a08 2026-02-17 21:08:37 -05:00
f59574bda1 Merge pull request 'chore(deps): update helm release authentik to v2025.12.4' (#87) from renovate/authentik-2025.x into main
Reviewed-on: #87
2026-02-14 11:28:56 +00:00
a2273c4336 chore(deps): update helm release authentik to v2025.12.4 2026-02-12 17:00:10 +00:00
f0ac9bbd6d gitea-runner: create configmap for custom config, enable host networking within dind 2026-02-08 12:33:48 -05:00
68026b743c Merge pull request 'chore(deps): update helm release gitea to v12.5.5' (#86) from renovate/gitea-12.x into main
Reviewed-on: #86
2026-02-08 13:03:51 +00:00
980420d1cd chore(deps): update helm release gitea to v12.5.5 2026-02-08 04:00:08 +00:00
392e56b6ba gitea-runner: disable host networking as it breaks connectivity to buildkitd 2026-02-07 22:49:54 -05:00
ae1fc4ca71 Revert "integrate buildkitd in runner containers"
This reverts commit 6c953ba4f3.
2026-02-07 22:48:39 -05:00
7d39f524de Revert "gitea-runner: use link-local port for buildkitd"
This reverts commit 8f39602c94.
2026-02-07 22:48:34 -05:00
8f39602c94 gitea-runner: use link-local port for buildkitd 2026-02-07 22:42:30 -05:00
6c953ba4f3 integrate buildkitd in runner containers 2026-02-07 22:35:57 -05:00
e5609b6503 gitea-runner: use externalsecret token 2026-02-07 22:10:32 -05:00
92dc80f873 gitea-runner: enable host networking to prevent double-NAT timeouts 2026-02-07 22:02:05 -05:00
b9eeadee05 add README 2026-02-07 12:20:35 -05:00
ac9b7d3f67 rm inactive projects 2026-02-07 12:20:28 -05:00
fffddc9a39 gitea-runner: integrate buildkit, migrate runner to statefulset 2026-02-07 11:50:50 -05:00
61a12bdab2 yt-dlp-bot: deploy update to d7ad90a 2026-02-07 02:48:18 +00:00
68c91de84e yt-dlp-bot: deploy update to ac5abff 2026-02-05 00:45:06 +00:00
a4929cd9fd yt-dlp-bot: deploy update to bef0a4d 2026-02-04 02:04:44 +00:00
1cddb5abef Merge pull request 'chore(deps): update helm release authentik to v2025.12.3' (#85) from renovate/authentik-2025.x into main
Reviewed-on: #85
2026-02-03 18:31:24 +00:00
7d593772a3 chore(deps): update helm release authentik to v2025.12.3 2026-02-02 19:00:09 +00:00
7cb320981e Merge pull request 'chore(deps): update helm release gitea to v12.5.4' (#84) from renovate/gitea-12.x into main
Reviewed-on: #84
2026-02-01 23:06:06 +00:00
1d905eebe7 chore(deps): update helm release gitea to v12.5.4 2026-02-01 23:00:09 +00:00
41a3556c50 yt-dlp-bot: deploy update to 2709346 2026-02-01 15:01:45 +00:00
7377757e96 yt-dlp-bot: deploy update to 70d7275 2026-01-31 19:17:52 +00:00
28ba9d64b7 Merge pull request 'chore(deps): update helm release authentik to v2025.12.2' (#83) from renovate/authentik-2025.x into main
Reviewed-on: #83
2026-01-31 02:36:42 +00:00
2110ffd473 Merge pull request 'chore(deps): update helm release grafana to v10.5.15' (#82) from renovate/grafana-10.x into main
Reviewed-on: #82
2026-01-31 02:36:20 +00:00
7c761c6de6 chore(deps): update helm release authentik to v2025.12.2 2026-01-30 19:00:09 +00:00
1cf343eeab chore(deps): update helm release grafana to v10.5.15 2026-01-30 08:00:10 +00:00
36 changed files with 328 additions and 749 deletions

README.md (new file, 29 lines)

@@ -0,0 +1,29 @@
# Core Apps
**Production-grade application deployments for my Kubernetes homelab**
This repository contains the core applications deployed to my Kubernetes homelab. Applications are deployed using either Kubernetes manifests or Helm charts (with upstream subcharts and custom values).
**Why Helm?** I prefer Helm charts when upstream versions exist because Renovate can automatically track new chart versions, whereas image tags in raw manifests aren't always semantically versioned.
**GitOps Workflow:** This repository is monitored by ArgoCD and serves as the source of truth for deployments. Each top-level directory is its own ArgoCD Application, with subdirectories representing components within that application.
**Automated Commits:** Apps that I wrote and maintain directly (such as yt-dlp-bot and zap2xml) have their manifests updated automatically by an Actions workflow in their respective repositories.
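The automated-commit flow described above can be sketched as a Gitea Actions workflow that bumps the image tag in this repository and pushes a commit. This is a hypothetical illustration only: the repository name, secret name, file path, and sed pattern are assumptions, not taken from the actual app repositories.

```yaml
# Hypothetical sketch of a workflow producing "yt-dlp-bot: deploy update
# to <sha>" commits; names, paths, and secrets are illustrative.
name: deploy-manifest-update
on:
  push:
    branches: [main]
jobs:
  bump-manifest:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the manifests repo
        uses: actions/checkout@v4
        with:
          repository: williamp/core-apps        # assumed repo name
          token: ${{ secrets.DEPLOY_TOKEN }}    # assumed PAT secret
      - name: Point the Deployment at the new image tag
        run: |
          SHORT_SHA="${GITHUB_SHA::7}"
          # Replace the 7-character tag on the image line with the new SHA
          sed -i "s|\(yt-dlp-bot:\)[0-9a-f]\{7\}|\1${SHORT_SHA}|" \
            yt-dlp-bot/deployment.yaml
          git config user.name actions-bot
          git config user.email actions-bot@example.invalid
          git commit -am "yt-dlp-bot: deploy update to ${SHORT_SHA}"
          git push
```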
- `arr-stack/` - Arr Stack (manifests)
  - `flaresolverr/` - FlareSolverr (captcha solver proxy)
  - `prowlarr/` - Prowlarr (indexer manager)
  - `radarr/` - Radarr (movie media manager)
  - `sonarr/` - Sonarr (TV series media manager)
  - `tunnel/` - Custom SSH tunnel to my seedbox for secure communication with Deluge
- `attic/` - Attic Nix cache server (manifests)
- `authentik/` - [Authentik](https://auth.dubyatp.xyz) SSO server (Helm chart)
- `gitea/` - [Gitea](https://git.dubyatp.xyz) Git server (Helm chart)
- `gitea-runner/` - Gitea Runner (manifests)
  - `buildkitd/` - BuildKit (`buildkitd`) build environment
- `grafana/` - [Grafana](https://grafana.dubyatp.xyz) observability dashboard (Helm chart)
- `jellyfin/` - [Jellyfin](https://jellyfin.dubyatp.xyz) media server (Helm chart)
- `renovate/` - [Renovate](https://git.dubyatp.xyz/renovate-bot) automated dependency manager (manifests)
- `vaultwarden/` - [Vaultwarden](https://vaultwarden.dubyatp.xyz) password manager (manifests)
- `whatismyip/` - [Simple "what is my IP" HTTP service](https://whatismyip.dubyatp.xyz) (manifests)
- `yt-dlp-bot/` - [yt-dlp bot](https://git.dubyatp.xyz/williamp/yt-dlp-bot) (manifests); a custom Discord bot I created for downloading and storing YouTube videos ad hoc
- `zap2xml/` - [kube-zap2xml](https://git.dubyatp.xyz/williamp/kube-zap2xml) (manifests); a modified version of zap2xml (a zap2it TV listings scraper) that runs as Kubernetes Jobs and uploads the resulting XMLTV output to a Rook-Ceph S3 bucket
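The "each top-level directory is its own ArgoCD Application" layout described in the README maps to roughly this kind of manifest. This is a hedged sketch, not the repo's actual Application definition: the repo URL, project, and sync policy are assumptions.

```yaml
# Hypothetical ArgoCD Application for the attic/ directory; repoURL,
# project, and syncPolicy are placeholders, not taken from this repo.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: attic
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/williamp/core-apps.git  # assumed
    targetRevision: main
    path: attic            # one top-level directory = one Application
  destination:
    server: https://kubernetes.default.svc
    namespace: attic
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```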

attic/bucket.yaml (new file, 10 lines)

@@ -0,0 +1,10 @@
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: attic-bucket
  namespace: attic
spec:
  additionalConfig:
    maxSize: 100Gi
  bucketName: attic-bucket
  storageClassName: weyma-s3-bucket

attic/config.yaml (new file, 36 lines)

@@ -0,0 +1,36 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: attic-config
data:
  server.toml: |
    listen = "[::]:8080"
    allowed-hosts = []
    #api-endpoint = "https://nix-cache.dubyatp.xyz/"
    [database]
    url = "sqlite:///var/empty/.local/share/attic/server.db"
    [storage]
    path = "/data/.local/share/attic/storage"
    type = "local"
    #region = "us-east-1"
    #bucket = "attic-bucket"
    #endpoint = "https://weyma-s3.infra.dubyatp.xyz"
    [chunking]
    nar-size-threshold = 65536
    min-size = 16384
    avg-size = 65536
    max-size = 262144
    [compression]
    type = "zstd"
    [garbage-collection]
    interval = "12 hours"
    [jwt]
    [jwt.signing]

Changed: attic Deployment

@@ -3,6 +3,7 @@
 kind: Deployment
 metadata:
   name: attic
 spec:
+  replicas: 1
   selector:
     matchLabels:
       app: attic
@@ -13,17 +14,24 @@ spec:
     spec:
       containers:
       - name: attic
-        image: ghcr.io/zhaofengli/attic:ff8a897d1f4408ebbf4d45fa9049c06b3e1e3f4e
+        image: ghcr.io/zhaofengli/attic:c4ffb5e86e928572e867bd3f81545293313e0a08
         envFrom:
         - secretRef:
             name: attic-secret
+        - secretRef:
+            name: attic-bucket
         volumeMounts:
         - name: attic-pvc
-          mountPath: /var/empty
+          mountPath: /var/empty/
         resources:
           limits:
             memory: "2Gi"
             cpu: "500m"
+      - name: multitool
+        image: wbitt/network-multitool
+        volumeMounts:
+        - name: attic-pvc
+          mountPath: /var/empty/
       volumes:
       - name: attic-pvc
         persistentVolumeClaim:

Changed: authentik Chart.yaml

@@ -24,5 +24,5 @@ appVersion: "1.0"
 dependencies:
 - name: authentik
-  version: 2025.12.1
+  version: 2026.2.0
   repository: https://charts.goauthentik.io

Changed: authentik Helm values

@@ -15,6 +15,35 @@ authentik:
     service:
       labels:
         metrics_enabled: "true"
+    livenessProbe:
+      failureThreshold: 3
+      initialDelaySeconds: 5
+      periodSeconds: 10
+      successThreshold: 1
+      timeoutSeconds: 10
+      httpGet:
+        path: "{{ .Values.authentik.web.path }}-/health/live/"
+        port: http
+    readinessProbe:
+      failureThreshold: 3
+      initialDelaySeconds: 5
+      periodSeconds: 10
+      successThreshold: 1
+      timeoutSeconds: 10
+      httpGet:
+        path: "{{ .Values.authentik.web.path }}-/health/ready/"
+        port: http
+    startupProbe:
+      failureThreshold: 60
+      initialDelaySeconds: 5
+      periodSeconds: 10
+      successThreshold: 1
+      timeoutSeconds: 10
+      httpGet:
+        path: "{{ .Values.authentik.web.path }}-/health/live/"
+        port: http
   worker:
     replicas: 3
     volumeMounts:
@@ -32,8 +61,10 @@ authentik:
         secretKeyRef:
           name: authentik-credentials
           key: authentik-secret-key
+    - name: AUTHENTIK_POSTGRESQL__DISABLE_SERVER_SIDE_CURSORS
+      value: "true"
     - name: AUTHENTIK_POSTGRESQL__HOST
-      value: pooler-weyma-rw.cloudnativepg.svc.cluster.local
+      value: pooler-weyma-rw-authentik.cloudnativepg.svc.cluster.local
     - name: AUTHENTIK_POSTGRESQL__NAME
       value: authentik
     - name: AUTHENTIK_POSTGRESQL__USER
@@ -58,6 +89,22 @@ authentik:
           key: smtp-password
     - name: AUTHENTIK_EMAIL__TIMEOUT
       value: "30"
+    - name: AUTHENTIK_STORAGE__BACKEND
+      value: "s3"
+    - name: AUTHENTIK_STORAGE__S3__ENDPOINT
+      value: "https://weyma-s3.infra.dubyatp.xyz"
+    - name: AUTHENTIK_STORAGE__S3__BUCKET_NAME
+      value: "authentik-files"
+    - name: AUTHENTIK_STORAGE__S3__ACCESS_KEY
+      valueFrom:
+        secretKeyRef:
+          name: authentik-files
+          key: AWS_ACCESS_KEY_ID
+    - name: AUTHENTIK_STORAGE__S3__SECRET_KEY
+      valueFrom:
+        secretKeyRef:
+          name: authentik-files
+          key: AWS_SECRET_ACCESS_KEY
   additionalObjects:
   - apiVersion: networking.k8s.io/v1
     kind: Ingress
@@ -146,3 +193,12 @@ authentik:
       creationPolicy: Owner
       deletionPolicy: Retain
       name: authentik-db-auth
+  - apiVersion: objectbucket.io/v1alpha1
+    kind: ObjectBucketClaim
+    metadata:
+      name: authentik-files
+    spec:
+      additionalConfig:
+        maxSize: 20Gi
+      bucketName: authentik-files
+      storageClassName: weyma-s3-bucket

Deleted: dispatcharr Deployment

@@ -1,61 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dispatcharr
spec:
  selector:
    matchLabels:
      app: dispatcharr
  template:
    metadata:
      labels:
        app: dispatcharr
      annotations:
        backup.velero.io/backup-volumes: data
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: extensions.talos.dev/i915
                operator: Exists
      nodeSelector:
        kubernetes.io/hostname: weyma-talos-testw04
      containers:
      - name: dispatcharr
        image: ghcr.io/dispatcharr/dispatcharr:0.8.0-amd64
        env:
        - name: DISPATCHARR_ENV
          value: aio
        - name: REDIS_HOST
          value: localhost
        - name: CELERY_BROKER_URL
          value: redis://localhost:6379/0
        - name: DISPATCHARR_LOG_LEVEL
          value: info
        - name: UWSGI_NICE_LEVEL
          value: "-5"
        - name: CELERY_NICE_LEVEL
          value: "-5"
        volumeMounts:
        - name: dispatcharr-data
          mountPath: /data
        - name: dev-dri
          mountPath: /dev/dri
        resources:
          limits:
            memory: "3Gi"
            cpu: "1"
          requests:
            memory: "256Mi"
            cpu: "500m"
        securityContext:
          privileged: true
      volumes:
      - name: dispatcharr-data
        persistentVolumeClaim:
          claimName: dispatcharr
      - name: dev-dri
        hostPath:
          path: /dev/dri

Deleted: dispatcharr Ingress

@@ -1,18 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dispatcharr
  labels:
    app.kubernetes.io/name: dispatcharr
spec:
  rules:
  - host: dispatcharr.dubyatp.xyz
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: dispatcharr-svc
            port:
              number: 9191

Deleted: dispatcharr PersistentVolumeClaim

@@ -1,11 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dispatcharr
spec:
  resources:
    requests:
      storage: 20Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany

Deleted: dispatcharr Service

@@ -1,10 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: dispatcharr-svc
spec:
  selector:
    app: dispatcharr
  ports:
  - port: 9191
    targetPort: 9191

Added: buildkitd Deployment

@@ -0,0 +1,40 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: buildkitd
  namespace: gitea-runner
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: buildkitd
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: buildkitd
    spec:
      containers:
      - args:
        - --addr
        - tcp://0.0.0.0:1234
        image: moby/buildkit:v0.27.1
        imagePullPolicy: Always
        name: buildkitd
        ports:
        - containerPort: 1234
          protocol: TCP
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30

Added: buildkitd Service

@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
  name: buildkitd
  namespace: gitea-runner
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 1234
  selector:
    app: buildkitd

gitea-runner/config.yaml (new file, 41 lines)

@@ -0,0 +1,41 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: runner-config
data:
  config.yaml: |-
    log:
      level: info
    runner:
      file: /data/.runner
      capacity: 1
      env_file: .env
      timeout: 3h
      shutdown_timeout: 0s
      insecure: false
      fetch_timeout: 5s
      fetch_interval: 2s
      labels:
      - "ubuntu-latest:docker://docker.gitea.com/runner-images:ubuntu-latest"
      - "ubuntu-22.04:docker://docker.gitea.com/runner-images:ubuntu-22.04"
      - "ubuntu-20.04:docker://docker.gitea.com/runner-images:ubuntu-20.04"
    cache:
      enabled: true
      dir: ""
      host: ""
      port: 0
      external_server: ""
    container:
      network: "host"
      privileged: false
      options:
      workdir_parent: /scratch
      valid_volumes:
      - /scratch/**
      docker_host: ""
      force_pull: true
      force_rebuild: false
      require_docker: false
      docker_timeout: 0s
    host:
      workdir_parent:

Deleted: act-runner Deployment

@@ -1,79 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  labels:
    app: act-runner
  name: act-runner
  namespace: gitea-runner
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: act-runner
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: act-runner
    spec:
      containers:
      - command:
        - sh
        - -c
        - while ! nc -z localhost 2376 </dev/null; do echo 'waiting for docker daemon...'; sleep 5; done; /sbin/tini -- run.sh
        env:
        - name: DOCKER_HOST
          value: tcp://localhost:2376
        - name: DOCKER_CERT_PATH
          value: /certs/client
        - name: DOCKER_TLS_VERIFY
          value: "1"
        - name: GITEA_INSTANCE_URL
          value: https://git.dubyatp.xyz
        - name: GITEA_RUNNER_REGISTRATION_TOKEN
          valueFrom:
            secretKeyRef:
              key: token
              name: runner-secret
        image: gitea/act_runner:nightly
        imagePullPolicy: Always
        name: runner
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /certs
          name: docker-certs
        - mountPath: /data
          name: runner-data
      - env:
        - name: DOCKER_TLS_CERTDIR
          value: /certs
        image: docker:23.0.6-dind
        imagePullPolicy: IfNotPresent
        name: daemon
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /certs
          name: docker-certs
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      volumes:
      - name: docker-certs
      - name: runner-data
        persistentVolumeClaim:
          claimName: act-runner-vol

Deleted: gitea-runner PersistentVolumeClaim

@@ -1,12 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-runner-pvc
spec:
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  storageClassName: weyma-shared

Added: act-runner StatefulSet

@@ -0,0 +1,86 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: act-runner
  namespace: gitea-runner
  labels:
    app: act-runner
spec:
  serviceName: ""
  selector:
    matchLabels:
      app: act-runner
  replicas: 3
  template:
    metadata:
      labels:
        app: act-runner
    spec:
      initContainers:
      - name: sysctl
        image: busybox
        securityContext:
          privileged: true
        command:
        - sh
        - -c
        - echo 28633 > /proc/sys/user/max_user_namespaces
      - name: chown-data
        image: busybox
        securityContext:
          runAsUser: 0
        command:
        - sh
        - -c
        - chown -R 1000:1000 /data
        volumeMounts:
        - name: runner-data
          mountPath: /data
      containers:
      - name: runner
        image: gitea/act_runner:nightly-dind-rootless
        imagePullPolicy: Always
        env:
        - name: CONFIG_FILE
          value: /config/config.yaml
        - name: DOCKER_HOST
          value: unix:///run/user/1000/docker.sock
        - name: GITEA_INSTANCE_URL
          value: https://git.dubyatp.xyz
        - name: GITEA_RUNNER_REGISTRATION_TOKEN
          valueFrom:
            secretKeyRef:
              key: registration-token
              name: gitea-runner-token
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - name: runner-config
          mountPath: /config
        - name: runner-data
          mountPath: /data
        - name: runner-scratch
          mountPath: /scratch
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      volumes:
      - name: runner-scratch
        emptyDir:
          medium: Memory
          sizeLimit: 5Gi
      - name: runner-config
        configMap:
          name: runner-config
  volumeClaimTemplates:
  - metadata:
      name: runner-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: weyma-shared
      resources:
        requests:
          storage: 32Gi

Changed: gitea Chart.yaml

@@ -24,5 +24,5 @@ appVersion: "1.0"
 dependencies:
 - name: gitea
-  version: 12.5.3
+  version: 12.5.6
   repository: https://weyma-s3.infra.dubyatp.xyz/helm-bucket-ea34bc44-ef19-480d-a16a-1e583991f123/charts/

Changed: grafana Chart.yaml

@@ -24,5 +24,5 @@ appVersion: "1.0"
 dependencies:
 - name: grafana
-  version: 10.5.14
+  version: 10.5.15
   repository: https://grafana.github.io/helm-charts

Deleted: immich ConfigMap

@@ -1,9 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: immich-config
data:
  immich-config.yaml: |
    trash:
      enabled: true
      days: 30

Deleted: immich Ingress

@@ -1,22 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: immich
  labels:
    name: immich
spec:
  rules:
  - host: immich.dubyatp.xyz
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: immich
            port:
              number: 2283
  tls:
  - secretName: cert-dubyatp-xyz
    hosts:
    - immich.dubyatp.xyz

Deleted: immich-library PersistentVolumeClaim

@@ -1,11 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: immich-library
spec:
  resources:
    requests:
      storage: 50Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany

Deleted: immich-ml Deployment

@@ -1,94 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: immich-ml
spec:
  selector:
    matchLabels:
      app: immich-ml
  template:
    metadata:
      labels:
        app: immich-ml
    spec:
      containers:
      - name: immich-ml
        image: ghcr.io/immich-app/immich-machine-learning:v1.134.0
        volumeMounts:
        - name: model-cache
          mountPath: /cache
        - name: config
          mountPath: /config/immich-config.yaml
        - name: dev-dri
          mountPath: /dev/dri
        env:
        - name: DB_HOSTNAME
          value: "immich-rw.cloudnativepg.svc.cluster.local"
        - name: DB_DATABASE_NAME
          value: "immich"
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              key: username
              name: postgres-credentials
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: postgres-credentials
        - name: REDIS_HOSTNAME
          value: redis
        - name: REDIS_PORT
          value: "6379"
        - name: IMMICH_PORT
          value: "3003"
        livenessProbe:
          httpGet:
            path: /ping
            port: 3003
          initialDelaySeconds: 0
          periodSeconds: 10
          timeoutSeconds: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /ping
            port: 3003
          initialDelaySeconds: 0
          periodSeconds: 10
          timeoutSeconds: 1
          failureThreshold: 3
        startupProbe:
          httpGet:
            path: /ping
            port: 3003
          initialDelaySeconds: 0
          periodSeconds: 10
          timeoutSeconds: 1
          failureThreshold: 30
        securityContext:
          privileged: true
        resources:
          limits:
            memory: "8Gi"
            cpu: "2"
          requests:
            memory: "2Gi"
            cpu: "500m"
      volumes:
      - name: model-cache
        emptyDir:
          sizeLimit: 10Gi
      - name: config
        configMap:
          name: immich-config
      - name: dev-dri
        hostPath:
          path: /dev/dri
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: extensions.talos.dev/i915
                operator: Exists

Deleted: immich ExternalSecret

@@ -1,25 +0,0 @@
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: postgres-credentials
spec:
  data:
  - remoteRef:
      conversionStrategy: Default
      decodingStrategy: None
      key: cloudnativepg
      metadataPolicy: None
      property: immich_pw
    secretKey: password
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: weyma-vault
  target:
    template:
      data:
        username: immich
        password: "{{ .password }}"
    creationPolicy: Owner
    deletionPolicy: Retain
    name: postgres-credentials

Deleted: immich-server Deployment

@@ -1,94 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: immich-server
spec:
  selector:
    matchLabels:
      app: immich-server
  template:
    metadata:
      labels:
        app: immich-server
    spec:
      containers:
      - name: immich-server
        image: ghcr.io/immich-app/immich-server:v1.134.0
        volumeMounts:
        - name: library
          mountPath: /usr/src/app/upload
        - name: config
          mountPath: /config/immich-config.yaml
        - name: dev-dri
          mountPath: /dev/dri
        env:
        - name: DB_HOSTNAME
          value: "immich-rw.cloudnativepg.svc.cluster.local"
        - name: DB_DATABASE_NAME
          value: "immich"
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              key: username
              name: postgres-credentials
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: postgres-credentials
        - name: REDIS_HOSTNAME
          value: redis
        - name: REDIS_PORT
          value: "6379"
        - name: IMMICH_PORT
          value: "2283"
        livenessProbe:
          httpGet:
            path: /api/server/ping
            port: 2283
          initialDelaySeconds: 0
          periodSeconds: 10
          timeoutSeconds: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /api/server/ping
            port: 2283
          initialDelaySeconds: 0
          periodSeconds: 10
          timeoutSeconds: 1
          failureThreshold: 3
        startupProbe:
          httpGet:
            path: /api/server/ping
            port: 2283
          initialDelaySeconds: 0
          periodSeconds: 10
          timeoutSeconds: 1
          failureThreshold: 30
        securityContext:
          privileged: true
        resources:
          limits:
            memory: "8Gi"
            cpu: "2"
          requests:
            memory: "2Gi"
            cpu: "500m"
      volumes:
      - name: library
        persistentVolumeClaim:
          claimName: immich-library
      - name: config
        configMap:
          name: immich-config
      - name: dev-dri
        hostPath:
          path: /dev/dri
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: extensions.talos.dev/i915
                operator: Exists

Deleted: immich Services

@@ -1,23 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: immich
spec:
  selector:
    app: immich-server
  ports:
  - port: 2283
    targetPort: 2283
    name: http
---
apiVersion: v1
kind: Service
metadata:
  name: immich-ml
spec:
  selector:
    app: immich-ml
  ports:
  - port: 3003
    targetPort: 3003
    name: http

Deleted: redis StatefulSet

@@ -1,38 +0,0 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  serviceName: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:latest
        command: ["redis-server"]
        args:
        - "--port"
        - "6379"
        - "--dir"
        - "/data"
        - "--appendonly"
        - "yes"
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: rook-ceph-block
      resources:
        requests:
          storage: 10Gi
    metadata:
      name: data

Deleted: redis Service

@@ -1,10 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379

Deleted: peertube ObjectBucketClaim

@@ -1,10 +0,0 @@
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: peertube-bucket
  namespace: peertube
spec:
  generateBucketName: peertube
  storageClassName: weyma-s3-bucket
  additionalConfig:
    maxSize: "100Gi"

Deleted: peertube ConfigMap

@@ -1,35 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: peertube-config
data:
  PEERTUBE_INSTANCE_NAME: "dubyatp peertube"
  PEERTUBE_INSTANCE_DESCRIPTION: "duby's peertube instance"
  POSTGRES_USER: peertube
  POSTGRES_DB: peertube
  PEERTUBE_DB_USERNAME: peertube
  PEERTUBE_DB_HOSTNAME: pooler-weyma-rw.cloudnativepg.svc.cluster.local
  PEERTUBE_DB_PORT: "5432"
  PEERTUBE_WEBSERVER_HOSTNAME: "tube.dubyatp.xyz"
  PEERTUBE_TRUST_PROXY: '["127.0.0.1", "loopback", "172.18.0.0/16"]'
  PEERTUBE_SMTP_USERNAME: "peertube_dubyatp"
  PEERTUBE_SMTP_HOSTNAME: "mail.smtp2go.com"
  PEERTUBE_SMTP_PORT: "465"
  PEERTUBE_SMTP_TLS: "true"
  PEERTUBE_SMTP_FROM: "peertube@em924671.dubyatp.xyz"
  PEERTUBE_ADMIN_EMAIL: "me@williamtpeebles.com"
  #PEERTUBE_OBJECT_STORAGE_ENABLED: "true"
  #PEERTUBE_OBJECT_STORAGE_ENDPOINT: "https://weyma-s3.infra.dubyatp.xyz"
  #PEERTUBE_OBJECT_STORAGE_REGION: ""
  #PEERTUBE_OBJECT_STORAGE_STREAMING_PLAYLISTS_BUCKET_NAME: "peertube-953221d2-7649-48b2-b79f-5a9e59daedbb"
  #PEERTUBE_OBJECT_STORAGE_STREAMING_PLAYLISTS_PREFIX: "streaming/"
  #PEERTUBE_OBJECT_STORAGE_WEB_VIDEOS_BUCKET_NAME: "peertube-953221d2-7649-48b2-b79f-5a9e59daedbb"
  #PEERTUBE_OBJECT_STORAGE_WEB_VIDEOS_PREFIX: "videos/"
  #PEERTUBE_OBJECT_STORAGE_USER_EXPORTS_BUCKET_NAME: "peertube-953221d2-7649-48b2-b79f-5a9e59daedbb"
  #PEERTUBE_OBJECT_STORAGE_USER_EXPORTS_PREFIX: "exports/"
  #PEERTUBE_OBJECT_STORAGE_ORIGINAL_VIDEO_FILES_BUCKET_NAME: "peertube-953221d2-7649-48b2-b79f-5a9e59daedbb"
  #PEERTUBE_OBJECT_STORAGE_ORIGINAL_VIDEO_FILES_PREFIX: "original-videos/"
  #PEERTUBE_OBJECT_STORAGE_CAPTIONS_BUCKET_NAME: "peertube-953221d2-7649-48b2-b79f-5a9e59daedbb"
  #PEERTUBE_OBJECT_STORAGE_CAPTIONS_PREFIX: "captions/"
  #PEERTUBE_OBJECT_STORAGE_UPLOAD_ACL_PUBLIC: "public-read"
  #PEERTUBE_OBJECT_STORAGE_UPLOAD_ACL_PRIVATE: "private"

Deleted: peertube Deployment

@@ -1,69 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: peertube
  labels:
    app: peertube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: peertube
  template:
    metadata:
      labels:
        app: peertube
    spec:
      containers:
      - name: peertube
        image: chocobozzz/peertube:v7.2.3-bookworm
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: http
        - containerPort: 443
          name: https
        - containerPort: 9000
          name: peertube
        - containerPort: 1935
          name: rtmp
        envFrom:
        - secretRef:
            name: peertube-secret
        - secretRef:
            name: peertube-bucket
        - configMapRef:
            name: peertube-config
        env:
        - name: PEERTUBE_REDIS_HOSTNAME
          value: "localhost"
        - name: PEERTUBE_REDIS_AUTH
          value: ""
        volumeMounts:
        - name: peertube-data
          mountPath: /data
        resources:
          requests:
            cpu: "0.5"
            memory: 1Gi
          limits:
            cpu: "1"
            memory: 2Gi
      - name: redis
        image: redis:8.2.1-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
          name: redis
        resources:
          requests:
            cpu: "0.2"
            memory: 256Mi
          limits:
            cpu: "0.5"
            memory: 1Gi
      volumes:
      - name: peertube-data
        persistentVolumeClaim:
          claimName: peertube-data

Deleted: peertube Ingress

@@ -1,18 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: peertube
  labels:
    app.kubernetes.io/name: peertube
spec:
  rules:
  - host: tube.dubyatp.xyz
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: peertube
            port:
              number: 9000

Deleted: peertube PersistentVolumeClaim

@@ -1,10 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: peertube-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi

Deleted: peertube ExternalSecret

@@ -1,42 +0,0 @@
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: peertube-secret
spec:
  data:
  - remoteRef:
      conversionStrategy: Default
      decodingStrategy: None
      key: peertube
      metadataPolicy: None
      property: PEERTUBE_SECRET
    secretKey: PEERTUBE_SECRET
  - remoteRef:
      conversionStrategy: Default
      decodingStrategy: None
      key: peertube
      metadataPolicy: None
      property: PEERTUBE_DB_PASSWORD
    secretKey: PEERTUBE_DB_PASSWORD
  - remoteRef:
      conversionStrategy: Default
      decodingStrategy: None
      key: peertube
      metadataPolicy: None
      property: PEERTUBE_SMTP_PASSWORD
    secretKey: PEERTUBE_SMTP_PASSWORD
  - remoteRef:
      conversionStrategy: Default
      decodingStrategy: None
      key: peertube
      metadataPolicy: None
      property: POSTGRES_PASSWORD
    secretKey: POSTGRES_PASSWORD
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: weyma-vault
  target:
    creationPolicy: Owner
    deletionPolicy: Retain
    name: peertube-secret

Deleted: peertube Service

@@ -1,24 +0,0 @@
kind: Service
apiVersion: v1
metadata:
  name: peertube
spec:
  selector:
    app: peertube
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    name: http
  - protocol: TCP
    port: 25
    targetPort: 25
    name: smtp
  - protocol: TCP
    port: 9000
    targetPort: 9000
    name: peertube
  - protocol: TCP
    name: rtmp
    port: 1935
    targetPort: 1935

Deleted: peertube-kv Valkey

@@ -1,16 +0,0 @@
apiVersion: hyperspike.io/v1
kind: Valkey
metadata:
  name: peertube-kv
  labels:
    app.kubernetes.io/instance: peertube
spec:
  anonymousAuth: true
  certIssuerType: ClusterIssuer
  clusterDomain: cluster.local
  clusterPreferredEndpointType: ip
  nodes: 1
  prometheus: false
  replicas: 3
  tls: false
  volumePermissions: true

Changed: yt-dlp-bot Deployment

@@ -14,7 +14,7 @@ spec:
     spec:
       containers:
       - name: yt-dlp-bot
-        image: 'git.dubyatp.xyz/williamp/yt-dlp-bot:b496d14'
+        image: 'git.dubyatp.xyz/williamp/yt-dlp-bot:b9088d9'
         env:
         - name: OUT_PATH
           value: /data/youtube-vids