- It might be a card grabber.
- Don’t put in real card details, of course.
For me, the value of podman is how easily it works without root: just install and run, no need for sudo or adding myself to the docker group.
I use it for testing and dev work, not for running any services.
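If you haven’t tried it, the whole flow is basically this, run as an ordinary user (the image here is just an example):

```sh
# No sudo, no docker group membership needed:
podman run --rm -it docker.io/library/alpine:latest sh
```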
1 a : to propel oneself in water by natural means (such as movements of the limbs, fins, or tail)
  b : to play in the water (as at a beach or swimming pool)
2 : to move with a motion like that of swimming : glide ("a cloud swam slowly across the moon")
3 a : to float on a liquid : not sink
  b : to surmount difficulties : not go under ("sink or swim, live or die, survive or perish" — Daniel Webster)
4 : to become immersed in or flooded with or as if with a liquid ("potatoes swimming in gravy")
5 : to have a floating or reeling appearance or sensation
https://www.merriam-webster.com/dictionary/swim
Apparently, swimming inherently requires a liquid.
The TPM seals the encryption key against the secure boot state. That way, if an attacker disables or alters secure boot, the TPM won’t unseal the key. I use clevis to decrypt the drive.
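For reference, the binding step looks roughly like this, assuming a LUKS partition at /dev/nvme0n1p3 (device path hypothetical); PCR 7 is the TPM register that tracks the secure boot state:

```sh
# Seal a new LUKS key slot to the TPM, bound to PCR 7 (secure boot state).
# If secure boot is disabled or its keys change, PCR 7 changes and the TPM
# refuses to unseal the key.
sudo clevis luks bind -d /dev/nvme0n1p3 tpm2 '{"pcr_ids":"7"}'
```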
Thank you… I had to learn kubernetes for work, which was around 2 weeks of time investment, and then I figured out I could use it to fix my docker-compose pains at home.
If you run a lot of services, I can attest that kubernetes is definitely not overkill; it is a good tool for managing complexity. I have 8 services on a single-node kubernetes cluster, and I like how I can manage each service’s configuration independently of the others and of the underlying infrastructure.
Don’t create one network with Gitlab, Redmine, and OpenLDAP - do two: one with Gitlab and OpenLDAP, and one with Redmine and OpenLDAP.
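Sketched with plain docker network commands (assuming the three containers already exist under these names):

```sh
# Two networks instead of one shared one:
docker network create gitlab-ldap
docker network create redmine-ldap

# OpenLDAP joins both; Gitlab and Redmine each see only their own network.
docker network connect gitlab-ldap openldap
docker network connect gitlab-ldap gitlab
docker network connect redmine-ldap openldap
docker network connect redmine-ldap redmine
```

That way Gitlab and Redmine never share a DNS namespace with each other, only with OpenLDAP.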
This was the setup I had, but I am on kubernetes now with no intention of switching back.
I was writing my own compose files, but see my response to a sibling comment for the issue I had.
If one service needed to connect to another, I had to add a shared network between them. The services on that network then essentially shared a common DNS namespace, and DNS resolution would routinely leak from one service to another and cause outages. E.g., if I connected the Gitlab and Redmine containers to the OpenLDAP container, sometimes Redmine’s nginx container would reach the Gitlab container instead of the Redmine container, and the Gitlab container would reach Redmine’s DB instead of its own.
I maintained some workarounds; e.g., starting Gitlab after Redmine would work fine, but starting them the other way round would trigger the issue. Switching to Kubernetes and replacing the cross-service connections with network policies solved it for me.
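The kubernetes replacement is a sketch like the following, assuming each service sits in its own namespace (names and the LDAP port are illustrative):

```sh
# Let Redmine reach OpenLDAP without putting them on a shared network/DNS namespace.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-redmine-to-openldap
  namespace: openldap
spec:
  podSelector:
    matchLabels:
      app: openldap
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: redmine
      ports:
        - protocol: TCP
          port: 389
EOF
```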
Nothing will ever top “Galaxy Note 7”. Super fun in planes, especially if they’re flying.
As someone who has been operating kubernetes on my home server for 2 years, I find containers much more maintainable than installing everything directly on the server.
I tried using docker-compose first to manage my services. It works well for 2-3 services, but as the number of services grew, they started to interfere with each other, and at that point I switched to kubernetes.
I am glad that at least it ends there. It could have been worse.
I just uploaded it to GitHub: https://github.com/akashrawal/nsd4k
I only made it for myself, so expect very rough edges in there.
I run a crude automation on top of an OpenSSL CA. It checks for certain labels attached to kubernetes services and, based on those, creates kubernetes secrets containing the generated certificates.
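Not the actual code (that’s in the repo above), but one iteration boils down to something like this; the label, service name, and CA file paths are placeholders:

```sh
# Find services carrying the marker label (label name made up for illustration):
kubectl get services -A -l cert-me=true

# For each match, issue a certificate from the local CA...
SVC=myservice
NS=default

# ...key + CSR for the service's cluster DNS name:
openssl req -new -newkey rsa:2048 -nodes \
  -keyout tls.key -out tls.csr \
  -subj "/CN=${SVC}.${NS}.svc"

# ...sign it with the home-grown CA:
openssl x509 -req -in tls.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -out tls.crt

# ...and publish the result as a TLS secret next to the service:
kubectl -n "${NS}" create secret tls "${SVC}-tls" \
  --cert=tls.crt --key=tls.key
```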
It might be a failing fan. I have an Intel NUC whose fan started sounding like an air raid siren, so I took the fan out, drilled a hole into its bearing, and added coconut oil. It has been working fine to this day, but buying a new fan is probably better.
It is for a challenge: the goal is to build a cloud where the workload is decoupled from the servers, and the servers are decoupled from the users who deploy the workload, with redundant network and storage and no single choke point for network traffic, and I am trying to achieve this on a small budget.
The level1 video shows Thunderbolt networking, though. It is an interesting concept, but it requires nodes with at least 2 Thunderbolt ports in order to have more than 2 nodes.
If redundant everything is important, then you need to change your planning toward proper rack servers and switches.
I ain’t got that budget man.
Yes, the entire network is supposed to be redundant and load-balanced, except for some clients that can only connect to one switch (but if a switch fails, it should be trivial to just move them to another switch).
I am choosing Dell OptiPlex boxes because they are the smallest x86 nodes I can get my hands on. There is no PCIe slot in them other than the M.2 slot, which will be used for the SSD.
I plan to have 2 switches.
Of course, if a switch fails, client devices connected only to that switch would drop out, but any computer connected to both switches should have link redundancy.
There would be some quality-of-life improvements, like being able to replace a switch without pulling down the entire cluster, but it is mostly for a challenge.
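By link redundancy I mean something like Linux active-backup bonding, one NIC cabled to each switch; a rough sketch (interface names are examples):

```sh
# Bond two NICs in active-backup mode; needs no switch-side support (no LACP),
# so the two switches can be completely independent.
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
```

Active-backup rather than LACP, because the two switches aren’t stacked and can’t present a single LAG.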