Why I built my own deployment platform — and what it taught me about infrastructure, developer experience, and knowing when to stop.
Every time I started a new project, the same ritual awaited. Write the application code — the fun part — then spend hours configuring deployment. Set up a server. Configure a reverse proxy. Generate SSL certificates. Wire up Docker. Debug networking. Set up CI/CD. Configure DNS.
The code was the easy part. The infrastructure was the tax.
I'd used Vercel for frontends, Railway for quick deploys, and bare VPS instances for anything that didn't fit the mold. Each solved part of the problem. None handled the full picture — a modular application with a .NET backend, React frontend, PostgreSQL database, and multiple services that all needed to talk to each other.
So I built the thing I wished existed.
One command to deploy anything. Not just a frontend. Not just a single container. A full application — multiple services, automatic SSL, custom domains, dynamic routing — all managed from the terminal.
compozerr deploy
That's it. The platform figures out the rest.
But getting to that single command required building three interconnected systems, each handling a different layer of the problem.
The CLI is built with Bun and TypeScript. It's the developer's interface — log in with GitHub, create projects, add modules, deploy. Four commands cover 90% of what you need:
compozerr new project my-app
compozerr new module auth
compozerr deploy
compozerr ssh
The CLI talks to the compozerr web API for anything that requires persistence or coordination. But it also handles local work: scaffolding projects, managing git submodules for modules, running local development environments.
I chose Bun over Node.js for the runtime. TypeScript without a transpilation step. Faster startup. And Ink for the few places where terminal UI actually matters — like watching deployment logs stream in real time.
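For the log streaming, a minimal sketch of the idea looks like this, assuming a hypothetical streamDeployLogs helper rather than the real compozerr client; Ink re-renders the terminal as each line arrives:

import React, { useEffect, useState } from "react";
import { render, Text } from "ink";

// Hypothetical stand-in for the real API client, which would read the
// deployment log stream from the compozerr web API.
async function* streamDeployLogs(deployId: string): AsyncGenerator<string> {
  for (const line of ["Building image...", "Starting containers...", "Deployed."]) {
    await new Promise((resolve) => setTimeout(resolve, 300));
    yield line;
  }
}

function DeployLogs({ deployId }: { deployId: string }) {
  const [lines, setLines] = useState<string[]>([]);

  useEffect(() => {
    (async () => {
      for await (const line of streamDeployLogs(deployId)) {
        setLines((prev) => [...prev, line]);
      }
    })();
  }, [deployId]);

  return (
    <>
      {lines.map((line, i) => (
        <Text key={i}>{line}</Text>
      ))}
    </>
  );
}

render(<DeployLogs deployId="demo" />);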
The web platform pairs a .NET 9 backend with a React 19 frontend. The interesting decision here was the modular architecture — every feature implements an IFeature interface that's discovered through reflection at startup:
public class AuthFeature : IFeature
{
    public bool IsEnabled => true;

    void IFeature.ConfigureServices(
        IServiceCollection services,
        IConfiguration config)
    {
        services.AddDbContext<AuthDbContext>();
        services.AddScoped<IAuthService, AuthService>();
    }
}
Authentication, billing, project management, deployment orchestration — each is a self-contained module. No registration files. No startup configuration ceremony. Drop in a class, and the system finds it.
The frontend generates TypeScript types from the backend's OpenAPI spec. Change a C# model, regenerate, and the compiler catches every place the frontend needs updating. Type safety from database to browser — the kind of guarantee you don't appreciate until you've chased a typo through three layers of an application.
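As an illustration of what that buys you (the Project type and endpoint below are stand-ins for the generated code, not the actual compozerr API surface):

// Illustrative only: in the real project this type comes out of the
// OpenAPI generator rather than being written by hand.
interface Project {
  id: string;
  name: string;
  modules: string[];
}

async function getProject(id: string): Promise<Project> {
  const res = await fetch(`/api/projects/${id}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return (await res.json()) as Project;
}

// Rename `modules` on the C# model and regenerate, and this call site
// stops compiling until the frontend catches up.
const project = await getProject("123");
console.log(project.modules.length);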
This is where the complexity lives — and where the platform earns its keep.
A Node.js manager service runs on the hosting machine. When a deployment arrives, it orchestrates the full sequence: provision the VM, build and start the Docker containers, update the reverse proxy routes, and request SSL certificates.
The developer sees "Deployed." The platform handled VM provisioning, Docker orchestration, reverse proxy configuration, and certificate management behind a single word.
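In TypeScript terms, the manager's job is roughly the sketch below; the step names are assumptions, and in the real service each one shells out to the VM provider, Docker, and the Traefik config writer:

interface Deployment {
  projectId: string;
  domain: string;
  port: number;
}

// Stubbed steps, standing in for the real provisioning and Docker calls.
const provisionVm = async (projectId: string) => console.log(`vm ready for ${projectId}`);
const startContainers = async (d: Deployment) => console.log(`containers up for ${d.projectId}`);
const updateRouting = async (d: Deployment) => console.log(`routing ${d.domain} -> :${d.port}`);
const requestCertificate = async (d: Deployment) => console.log(`tls issued for ${d.domain}`);

async function handleDeployment(d: Deployment): Promise<void> {
  await provisionVm(d.projectId);
  await startContainers(d);
  await updateRouting(d);
  await requestCertificate(d);
  // Only after every step succeeds does the CLI report "Deployed."
}

await handleDeployment({ projectId: "my-app", domain: "myapp.com", port: 3000 });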
Nginx is excellent for serving static files. But for dynamic, multi-service routing with automatic SSL? Traefik was the clear choice. Its file-based configuration means the manager can update routing without restarting anything:
http:
  routers:
    my-app-frontend:
      rule: "Host(`myapp.com`)"
      service: "my-app-3000"
      tls:
        certResolver: "leresolver"
  services:
    my-app-3000:
      loadBalancer:
        servers:
          - url: "http://192.168.1.100:3000"
Add a domain, update a YAML file, and Traefik picks it up. No restarts. No downtime. Each project gets its own routing rules — frontend, backend, database — all wired through the same proxy with automatic HTTPS.
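A minimal sketch of how the manager can emit such a file (the directory and file naming here are assumptions, not the real compozerr layout):

import { writeFile } from "node:fs/promises";

// Assumed directory watched by Traefik's file provider.
const DYNAMIC_DIR = "/etc/traefik/dynamic";

async function writeRoute(project: string, domain: string, upstream: string) {
  const config = `
http:
  routers:
    ${project}-frontend:
      rule: "Host(\`${domain}\`)"
      service: "${project}-service"
      tls:
        certResolver: "leresolver"
  services:
    ${project}-service:
      loadBalancer:
        servers:
          - url: "${upstream}"
`;
  // Traefik's file provider picks up the change and reloads routes
  // without a restart.
  await writeFile(`${DYNAMIC_DIR}/${project}.yml`, config.trimStart());
}

await writeRoute("my-app", "myapp.com", "http://192.168.1.100:3000");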
One deployment at a time. This sounds like a limitation, but it's deliberate. Parallel deployments compete for CPU, memory, and network bandwidth on the hosting machine. Sequential processing means predictable build times and linear, debuggable logs.
When something fails at 3 AM, you want a single timeline of events to read through — not interleaved output from three concurrent builds.
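The queue itself doesn't need to be clever. Something in this spirit (an illustrative sketch, not the manager's actual code) is enough to serialize builds:

type Job = () => Promise<void>;

class DeployQueue {
  private tail: Promise<void> = Promise.resolve();

  enqueue(job: Job): Promise<void> {
    // Chain each job onto the tail so deployments run strictly one at a
    // time, and a failed job doesn't block the ones behind it.
    const next = this.tail.then(job, job);
    this.tail = next.catch(() => {});
    return next;
  }
}

const queue = new DeployQueue();
queue.enqueue(async () => console.log("deploying project A"));
queue.enqueue(async () => console.log("deploying project B"));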
Each module in a compozerr project is its own git repository, added as a submodule. This means modules can be shared across projects, versioned independently, and contributed to by different teams. A database module used in ten projects gets updated once and pulled everywhere.
Git submodules have a reputation for being painful. They are. But the alternative — a monorepo with manual dependency management — is worse at scale. The CLI abstracts away the submodule commands, so developers rarely need to think about the underlying machinery.
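For illustration, compozerr new module auth can reduce to a couple of git commands the developer never types themselves; the repo URL and paths below are hypothetical:

import { $ } from "bun";

// Hypothetical sketch of what the CLI might run under the hood.
async function addModule(name: string) {
  const repo = `https://github.com/compozerr-modules/${name}.git`;
  await $`git submodule add ${repo} modules/${name}`;
  await $`git commit -m ${"Add module: " + name}`;
  console.log(`Added module ${name}`);
}

await addModule("auth");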
Every backend operation goes through MediatR — commands for writes, queries for reads. It's verbose. But every operation is independently testable, consistently validated through FluentValidation, and easy to trace through the codebase. When a deployment fails, the stack trace tells you exactly which handler broke and why.
Building a deployment platform teaches you things that building applications never does.
Infrastructure is state. Applications are mostly stateless — deploy a new version, the old one disappears. Infrastructure accumulates. VMs need cleanup. SSL certificates expire. Traefik configs grow. Disk space fills. Managing state over time is a fundamentally different challenge than serving a request.
Reliability beats features. Nobody cares about a beautiful dashboard if deployments fail randomly. The boring work — retry logic, health checks, graceful error handling, sequential queues — is what makes a platform trustworthy. I spent more time on error recovery than on any single feature.
Developer experience is the product. The technical architecture matters, but the CLI UX determines adoption. Fast feedback, clear error messages, and sensible defaults do more than any architectural diagram. A deploy command that takes 30 seconds to respond — even with a spinner — feels broken. One that streams logs in real time feels alive.
You will rewrite the deployment pipeline. Three times. The first version was bash scripts. The second was a Node.js orchestrator with too many edge cases. The third — the current one — finally understood that deployments are a state machine, not a script. Each state has clear success and failure conditions. Each transition is logged and reversible.
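Sketched in TypeScript, with states and transitions that are illustrative rather than the exact ones compozerr uses, the point is that nothing moves between states except through an explicit, logged transition:

type DeployState =
  | "queued"
  | "provisioning"
  | "building"
  | "routing"
  | "deployed"
  | "failed";

// Allowed transitions; anything else is a bug, not a silent state change.
const transitions: Record<DeployState, DeployState[]> = {
  queued: ["provisioning", "failed"],
  provisioning: ["building", "failed"],
  building: ["routing", "failed"],
  routing: ["deployed", "failed"],
  deployed: [],
  failed: [],
};

function transition(from: DeployState, to: DeployState): DeployState {
  if (!transitions[from].includes(to)) {
    throw new Error(`Illegal transition: ${from} -> ${to}`);
  }
  console.log(`[deploy] ${from} -> ${to}`); // every transition is logged
  return to;
}

let state: DeployState = "queued";
state = transition(state, "provisioning");
state = transition(state, "building");
state = transition(state, "routing");
state = transition(state, "deployed");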
The best infrastructure is the kind you don't think about. compozerr isn't trying to be innovative — it's trying to be invisible. Write your code, run one command, and move on to the next problem.
That was always the vision. It took three repositories, two runtime languages, a reverse proxy, a VM orchestrator, and a lot of 3 AM debugging sessions to get there. But now, compozerr deploy does what it promises.
Everything else is plumbing.