You do not need [tool] for simple gitops (most times)
My starting title was “You do not need ansible for simple gitops”, but while writing it I realized I was using ansible as a straw man, and what I really wanted to rant about was the general complexity of our professional delivery pipelines.
There’s a big gap between ssh-ing into remote servers and issuing commands by hand (considered bad) and having an [over]engineered pipeline that depends on a variety of random tooling (the eidetic perfection of a platonic ▲).
If your deploys are simple enough to fit in an executable script and you are only managing a handful of servers, a bare git repo and some bash are all you need.
Step 1: create a git user on a server.
You can call this server your git server, your manager server, ozymandias, or whatever. Set up public and private keys accordingly so you do not need to type passwords every time.
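A minimal sketch of that setup, assuming a Debian-flavored server you can reach as root and an ed25519 key on your machine (user name, server name and paths are placeholders):
# on the server: create a git user with no password login
sudo adduser --disabled-password --gecos "" git
sudo -u git mkdir -p -m 700 /home/git/.ssh
# from your machine: append your public key to the git user's authorized_keys
cat ~/.ssh/id_ed25519.pub | ssh root@some-server \
  "cat >> /home/git/.ssh/authorized_keys \
   && chown git:git /home/git/.ssh/authorized_keys \
   && chmod 600 /home/git/.ssh/authorized_keys"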
Step 2: create a bare repo to push to.
You decide how complex you want to make this. Maybe this repo will have its own codebase related to deploying. Or maybe this repo will just be a mirror of another repo that lives in github.
git init --bare mytool.git
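If you go the mirror route, a periodic fetch on the server (from cron, say) is enough to keep the bare repo in sync with the upstream; the github URL below is a placeholder:
cd /home/git/mytool.git
git remote add github git@github.com:someuser/mytool.git   # one-time
git fetch github '+refs/heads/*:refs/heads/*' --prune      # the recurring bit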
Step 3: add an executable post-receive hook.
This script will be called every time you issue a push on this remote.
The following scaffolding script gives you $oldrev, $newrev and $refname values that you can use if you want to deploy a particular version. Never trust deploy scripts that use only master.
#!/usr/bin/env bash
# /path/to/mytool.git/hooks/post-receive
# git feeds one "<oldrev> <newrev> <refname>" line per updated ref on stdin
while read oldrev newrev refname
do
    echo "Deploying $newrev"
    /some/path/to/a/deploy/script
    echo "Done"
    # or maybe you need to deploy to other servers
    ssh -T someuser@someserver << EOF
pushd /some/path/somewhere
./deploy $newrev
popd
EOF
    ssh -T someuser@anotherserver << EOF
pushd /some/path/somewhere
./deploy $newrev
popd
EOF
done
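Remember to set the executable bit on the hook, otherwise git will ignore it:
chmod +x /path/to/mytool.git/hooks/post-receive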
Step 4: Write a deploy script
What does a simple deploy script look like? In general, very similar to the steps you punch in by hand every time you deploy.
#!/usr/bin/env bash
ref=$1
# stop some services maybe
supervisorctl -c supervisord.conf stop all
# install something
pip install mytool@$ref --upgrade
# start some services again
supervisorctl -c supervisord.conf start all
pushd some/other/path
./run-a-bunch
./of-more-scripts
# or maybe you have some docker stuff around
docker compose pull
docker compose up -d  # plain restart would keep running the old images
# you get my point
popd
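Nothing magical: you can run the same script by hand over ssh to sanity-check it before wiring it into the hook (server, path and ref are placeholders again):
ssh someuser@someserver "cd /some/path/somewhere && ./deploy $(git rev-parse HEAD)"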
Step 5: Add your deploy server as a remote
git remote add ozymandias git@some-server:mytool.git
Step 6: Deploy any time you want by just pushing to that remote and rejoice.
# upgrade
git push ozymandias master
# downgrade (needs an explicit refspec and a force, since it is not a fast-forward)
git push --force ozymandias master~1:master
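And since the hook deploys whatever $newrev it receives, nothing forces you to push master; any branch or commit will do (the branch name here is made up):
# deploy a feature branch instead of master
git push --force ozymandias my-feature-branch:master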
What if?
What if your git server gets compromised and a bad actor issues deploys for you?
Too bad. If your git server gets compromised, you can assume all your systems are compromised too. Try to look at the bright side of it: now you have a bad actor issuing deploys for you. Less work!
What if you want an auditable track of your deploys / multiple users with roles and permissions / productivity graphs?
Use something else, like jenkins. Or start investing time in a neverending rabbit hole of external services poorly attached together with duct tape and yaml.
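Although, if an audit trail is really all you are after, one more line in the post-receive hook takes you surprisingly far (the log path is whatever you like):
echo "$(date -Is) deployed $newrev from ${SSH_CLIENT%% *}" >> /home/git/deploys.log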
What if my setup is more complex than this, my servers are not pets but cattle and flock, etc?
I would say you are already knee-deep in the mud and drowning in complexity. There’s nothing I can say that will help you escape your fate.
Turn all your microservices into little helm charts, use terraform to bring your infrastructure up and down, remote build your docker images, push everything into private registries, write ci actions where every step runs on a docker-in-docker containerized alpine inside an ubuntu-latest container, do a blue-green deploy on your kubernetes clusters, write thousands of yaml lines and put them into the next shiny tool we have decided will bring us one step closer to the eidetic perfection of a platonic ▲.