I'm one of the maintainers of SigNoz. For the past few months I've been building a tool called Foundry and I would appreciate feedback on it.
The problem
Self-hosting an observability stack means deploying and configuring multiple services. SigNoz needs ClickHouse, ClickHouse Keeper, PostgreSQL, an OpenTelemetry Collector, and the SigNoz server. That's five services, each with its own config.
If you go with the Grafana stack, you're setting up Loki, Tempo, Mimir, and Grafana separately, each with its own deployment and config. Uptrace requires you to install ClickHouse, PostgreSQL, and Redis before the server can start. HyperDX has a single docker run for local testing, but for production you're back to managing Docker Compose configs manually.
And if you want to move from Docker to bare metal or to a cloud platform, you're mostly starting over.
What did I build?
Foundry is a CLI tool (foundryctl) that takes one YAML file and deploys the entire SigNoz stack.
Minimal config:
    apiVersion: v1alpha1
    metadata:
      name: signoz
    spec:
      deployment:
        mode: docker
        flavor: compose
Then:
foundryctl cast -f casting.yaml
It checks your system for prerequisites, generates all the deployment and config files into a pours/ directory, and runs the deployment.
It currently supports three deployment targets: Docker Compose, systemd (bare metal), and Render.
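In principle, switching targets should just mean changing the deployment block. Here's a hypothetical sketch of what a bare-metal config might look like — the `mode: systemd` value is my guess based on the target names, not confirmed schema, so check the docs for the real field values:

```yaml
# Hypothetical casting.yaml for a bare-metal (systemd) deployment.
# The mode value below is an assumption, not confirmed schema.
apiVersion: v1alpha1
metadata:
  name: signoz
spec:
  deployment:
    mode: systemd
```

The cast command would be the same: foundryctl cast -f casting.yaml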
I've been working on this for a while and I know it can save people time, but I haven't been able to get much feedback so far. If you self-host observability tools or have tried to in the past, I would really appreciate it if you gave this a look.
- Does the config format make sense?
- What deployment targets do you want? (K8s is on the roadmap.)
- Did you try it? Did it break? Tell me how.
I want to make this better. Your feedback will help me a lot.
P.S. The naming (casting, moldings, forging, pours) comes from my background in industrial engineering. I left the field, but the metalworking metaphor stuck :)