• 0 Posts
  • 37 Comments
Joined 2 years ago
Cake day: July 7th, 2023

  • I work at an Infrastructure Cloud company. I design and implement API and Database schemas, I plan out backend workflows and then implement the code to perform the incremental steps of each workflow. That’s lots of code, and a little openapi and other documentation. I dig into bugs or other incidents. That’s spent deep in Linux and Kubernetes environments. I hopefully build monitors or dashboards for better visibility into issues. That’s spent clicking around observability tooling, and then exporting things I want to keep into our gitops repo. Occasionally, I’ll update our internal WebUI for a new feature that needs to be exposed to internal users. That’s react and CSS coding. Our external facing UI and API is handled by a dedicated team.

    When it comes to learning, I'd say find a problem you have and try to build something to solve it. Building a home lab is a great way to give yourself lots of problems. Ultimately, it's about being goal oriented in a way where your goal isn't just “finish this class”.



  • ramielrowe@lemmy.world to Technology@lemmy.world · datacenter liquid cooling solution · edited · 3 months ago

    Yea, it’s the combo of the chiller and cooling tower that is analogous to a swamp cooler. The cooling tower provides the evaporative cooling. The difference is that rather than directly cooling the environment around the cooling tower, the chiller allows indirect cooling of the DC via heat exchange. An isolated chiller providing heat exchange is why humidity inside the DC isn’t impacted by the evaporative cooling. And sure, humidity differs between the hot and cold aisles, but that is just a function of temperature and relative humidity. No moisture is exchanged into the DC to cool it.

    Edit: Turns out I’m a bit misinformed. Apparently in dry environments that can deal with the added moisture, DCs are built that indeed use simple direct evaporative cooling.


  • Practically all even semi-modern DCs are built for servers themselves to be air cooled. The air itself is cooled via a heat exchanger with a separate and isolated chiller and cooling tower. The isolated chiller is essentially the swamp cooler, but it’s isolated from the servers.

    There are cases where servers are directly liquid cooled, but it’s mostly just the recent Nvidia GPUs and niche things like high-frequency-trading and crypto ASICs.

    All this said… For the longest time I water cooled my home lab’s compute server because I thought it was necessary to reduce noise. But, with proper airflow and a good tower cooler, you can get basically just as quiet. All without the maintenance and risk of water, pumps, tubing, etc.






  • ramielrowe@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 7 months ago

    Honestly, I don’t mind them adding ads. They’ve got a business to support. But, calling them “quests” and treating them as “rewards” for their users is just so tone-deaf and disingenuous. Likewise, if I’ve boosted even a single server, I shouldn’t see this crap anywhere, let alone on the server I’ve boosted.





  • In general, on bare-metal, I mount below /mnt. For a long time, I just mounted in from pre-setup host mounts. But, I use Kubernetes, and you can directly specify an NFS mount. So, I eventually migrated everything to that as I made other updates. I don’t think it’s horrible to mount from the host, but if docker-compose supports directly defining an NFS volume, that’s one less thing to set up if you need to re-provision your docker host.
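    For reference, docker-compose does support this via the local volume driver's NFS options. A minimal sketch (the server address, export path, and service are examples, not anything from a real setup):

```yaml
services:
  app:
    image: nginx:alpine
    volumes:
      - appdata:/usr/share/nginx/html

volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      # addr is the NFS server; adjust nfsvers to match what your server exports
      o: "addr=192.168.1.50,rw,nfsvers=4.1"
      # leading ":" before the export path is required by the local driver
      device: ":/exports/appdata"
```

    With this, the mount is created by Docker itself, so re-provisioning the host doesn't require recreating fstab entries or host-side mounts.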

    (quick edit) I don’t think docker compose reads and re-reads compose files. They’re read when you invoke docker compose but that’s it. So…

    If you’re simply invoking docker compose to interact with things, then I’d say store the compose files wherever makes the most sense for your process. Maybe think about setting up a specific directory on your NFS share and mount that to your docker host(s). I would also consider version controlling your compose files. If you’re concerned about secrets, store them in encrypted env files. Something like SOPS can help with this.

    As long as the user invoking docker compose can read the compose files, you’re good. When it comes to mounting data into containers from NFS… yes permissions will matter and it might be a pain as it depends on how flexible the container you’re using is in terms of user and filesystem permissions.
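    A lot of that permissions pain comes down to UID/GID mapping on the NFS server. One hedged option, assuming a Linux NFS server (the export path and subnet below are just examples), is to squash all client users to a single owner in /etc/exports:

```
# /etc/exports on the NFS server
# all_squash maps every client UID/GID to anonuid/anongid, so containers
# running as arbitrary users still read and write as one predictable owner.
/exports/appdata 192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=1000,anongid=1000)
```

    This trades per-user ownership for predictability, which is usually fine for single-admin home lab shares.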



  • In general, container root filesystems and the images backing them will not function on NFS. When deploying containers, you should be mounting data volumes into the containers rather than storing things on the container root filesystems. Hopefully you are already doing that, otherwise you’re going to need to manually copy data out of the containers. Personally, if all you’re talking about is 32 gigs max, I would just stop all of the containers, copy everything to the new NFS locations, and then re-create the containers to point at the new NFS locations.

    All this said though, some applications really don’t like their data stored on NFS. I know Plex really doesn’t function well when its database is on NFS. But, the Plex media directories are fine to host from NFS.
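    The stop-copy-recreate approach above can be sketched as follows. The paths are stand-ins (/tmp directories simulate the old bind-mount dir and the new NFS mount so the copy step is demonstrable anywhere), and the docker commands are left as comments since they depend on your actual compose project:

```shell
# Stand-in paths: in a real migration OLD would be your existing data dir
# and NEW would be the mounted NFS share.
OLD=/tmp/appdata
NEW=/tmp/nfs/appdata
mkdir -p "$OLD/plex" "$NEW"
echo "library.db" > "$OLD/plex/library.db"

# 1. Stop the containers first so no files change mid-copy:
#      docker compose down
# 2. Archive-copy preserves ownership, permissions, and timestamps
#    (rsync -a works just as well and can resume an interrupted copy):
cp -a "$OLD/." "$NEW/"
# 3. Update the compose volume paths to point at the NFS location, then:
#      docker compose up -d
ls "$NEW/plex/library.db"
```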





  • This isn’t about social platforms or using the newest-hottest tech. It’s about following industry standard practices. You act like source control is such a pain in the ass, some huge burden, and that I just don’t understand. Getting started with git is simple, and setting up an account with a repo host is a one-time thing. I find it hard to believe you don’t already have SSH keys set up, too. What I find more concerning is your ho-hum opinion on automated testing, and your belief that “most software doesn’t do it”. You’re writing software that you expect people to not only run on their infra, but also expose to the public internet. Not only that, it also needs to protect the traffic between the server on public infra and the client on private infra. There is a much higher expectation of good practices being in place, and it is clear that you are willingly disregarding them.