Do you guys expose the docker socket to any of your containers or is that a strict no-no? What are your thoughts behind it if you don’t? How do you justify this decision from a security standpoint if you do?

I am still fairly new to Docker, but I like the idea of something like Watchtower. I'm not a fan of auto-updates and probably wouldn't use that feature, but I would still find it useful to get a notification when a container has an update available. However, Watchtower needs access to the Docker socket to do its work, and I have read in a lot of places that this is a bad idea because it can result in root access on your host filesystem from within a container.
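To make it concrete what that socket access buys a tool like this, here is a minimal sketch of the "just tell me when there's an update" idea, written with the Python Docker SDK (docker-py) rather than Watchtower itself. The package, the pull-and-compare approach, and the image names are my own assumptions for illustration, not how Watchtower is actually implemented.

```python
# Rough sketch of a notify-only update check using the Python Docker SDK
# (docker-py, `pip install docker`). All it needs is access to the Docker
# socket -- which is exactly the access Watchtower asks for.
import docker

client = docker.from_env()  # connects to /var/run/docker.sock by default

for container in client.containers.list():
    if not container.image.tags:
        continue  # started from a bare image ID, nothing to compare against
    ref = container.image.tags[0]        # e.g. "nginx:1.27"
    repo, _, tag = ref.rpartition(":")
    latest = client.images.pull(repo, tag=tag)
    if latest.id != container.image.id:
        print(f"{container.name}: newer image available for {ref}")
```

The point is that anything that can reach the socket can list, pull, start, and stop containers, which is why the question of whether to mount it matters.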

There are probably other containers as well, especially in the whole monitoring and maintenance category, that need this privilege, so I wanted to ask how other people handle this situation.

Cheers!

  • 5ymm3trY@discuss.tchncs.deOP · 3 days ago
    There are lots of articles out there that say the opposite. Not about Watchtower per se, but giving a container access to the socket is generally considered to be a bad idea from a security point of view.

    • i_am_not_a_robot@discuss.tchncs.de · 3 days ago

      Giving a container access to the Docker socket allows container escapes (see the sketch at the end of this comment), but if you are doing it deliberately, with a service designed for that purpose, that is not a problem in itself. Either you trust Watchtower to manage the other containers on your system or you don't. Whether it manages them through a socket mounted into its own container or runs on the host with direct access to the socket makes no difference to security.

      I don’t know if anybody seriously uses Watchtower, but I wouldn’t be surprised. I know that companies use tools like Argo CD, which has a larger attack surface and a similar level of system access via its Kubernetes service account.
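      To make the escape point concrete, here is a rough sketch, again with the Python Docker SDK (docker-py), of what any process with access to the mounted socket can do; the image and file path are just examples. The daemon runs as root on the host, so asking it to bind-mount the host's root filesystem into a throwaway container is enough:

      ```python
      # Why socket access is effectively root on the host: the Docker daemon
      # runs as root, so anything that can talk to the socket can ask it to
      # start a container with the host's root filesystem mounted inside.
      # Sketch only; run at your own risk.
      import docker

      client = docker.from_env()  # the mounted /var/run/docker.sock is enough

      # Read a root-only file from the host. Swapping "ro" for "rw" would
      # allow writing to the host filesystem as well.
      output = client.containers.run(
          "alpine",
          command=["cat", "/host/etc/shadow"],
          volumes={"/": {"bind": "/host", "mode": "ro"}},
          remove=True,
      )
      print(output.decode())
      ```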