Running things in Docker is a serious anti-pattern to me. You lose the advantages of shared libraries; you make system upgrades more complicated (because you now have to upgrade N Docker images); everything takes considerably more memory because common code isn't shared between processes; and you lose nice process management. People deploying Docker images are doing it out of laziness: it's easier to shove a static build with fixed dependencies into a container than to deal with--gasp--dependencies. The only useful case I've ever come across for Docker is short-lived containers for test builds (e.g. continuous integration).
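For example, a throwaway build container (the image and build commands here are just illustrative):

    # --rm throws the container away as soon as the build finishes
    docker run --rm -v "$PWD:/src" -w /src debian:stable \
        sh -c 'apt-get update && apt-get install -y build-essential && make check'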
It *doesn't* unify it: the vast majority of Docker images contain outdated software and libraries with security vulnerabilities, because nothing *inside* the image gets updated other than the one binary the developer cares about.
@jagerman42 I use Gentoo on all my PCs, and when I try to install lokid I don't want to deal with OS compatibility issues. So I containerize the lokid daemon; that was my initial reason for running lokid within Docker.
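Roughly, the Dockerfile is just something like this (a sketch only: the base image and binary location are placeholders, not the official loki packaging):

    # Dockerfile -- sketch; base image and paths are placeholders
    FROM ubuntu:22.04
    # assumes a prebuilt lokid binary sitting next to the Dockerfile
    COPY lokid /usr/local/bin/lokid
    ENTRYPOINT ["/usr/local/bin/lokid"]

    # build and run:
    #   docker build -t lokid .
    #   docker run -d --name lokid lokid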
I think you would probably be better off running *each* of lokid/loki-ss/lokinet in its own Docker container. But then the problem is that they have to talk to each other, and Docker makes *that* difficult; see the sketch below for the plumbing involved.
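For what it's worth, the usual approach is a shared user-defined network (the image names below are placeholders), and you still have to wire up every address between the services yourself:

    docker network create loki-net
    # containers on the same user-defined network can reach each other by name
    docker run -d --network loki-net --name lokid   lokid-image
    docker run -d --network loki-net --name loki-ss loki-ss-image
    docker run -d --network loki-net --name lokinet lokinet-image
    # e.g. loki-ss would then point its config at lokid's RPC via the hostname 'lokid'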
When you need an alternative Linux environment, LXC is perfect for that purpose. But deploying LXC is not as easy as deploying a Docker image: once the Docker image is built, you can just upload it to an HTTP server and run

    curl -s http://example.com/path-to-image.tbz | docker load && docker run --name xxx image-id
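The .tbz itself comes straight out of docker save on the build machine, e.g. (the image name is a placeholder):

    # serialize and compress the image; docker load transparently decompresses bzip2
    docker save my-lokid-image | bzip2 > path-to-image.tbz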