210 Matching Annotations
  1. Aug 2022
    1. The following command connects an already-running my-nginx container to an already-existing my-net network

      Connect a running container to an existing network.
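
      The command referred to (not captured in the highlight) is presumably Docker's network connect subcommand, using the names from the quote:

      docker network connect my-net my-nginx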

  2. Jul 2022
    1. By default, this function reads template files in /etc/nginx/templates/*.template and outputs the result of executing envsubst to /etc/nginx/conf.d.

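      A minimal sketch of how this template mechanism is typically used (the file name, variable, and upstream address are illustrative):

      # /etc/nginx/templates/default.conf.template
      server {
          listen ${NGINX_PORT};
          location / {
              proxy_pass http://backend:8080;
          }
      }

      # docker run -e NGINX_PORT=8080 ... nginx
      # the entrypoint runs envsubst and writes /etc/nginx/conf.d/default.conf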

    1. It is "guaranteed" as long as you are on the default network. 172.17.0.1 is no magic trick, but simply the gateway of the network bridge, which happens to be the host. All containers will be connected to bridge unless specified otherwise.
    2. For example, I use docker on windows, using docker-toolbox (OG) so that it has fewer conflicts with the rest of my setup and I don't need Hyper-V.
    1. # This ensures that the pid namespace is shared between the host
       # and the container. It's not necessary to be able to run spring
       # commands, but it is necessary for "spring status" and "spring stop"
       # to work properly.
       pid: host
    1. Change the mapped ports of a running Docker container.

      (1) Stop the services

      Stop the container:

      docker stop <container id>

      Stop the Docker service (Linux):

      systemctl stop docker

      (2) Edit the configuration

      Look up the container's full id hash:

      docker inspect <container_name>

      C:\Users\xxj87>docker inspect b61792d860f2 [ { "Id": "b61792d860f24c7ba47f4e270e211736a1a88546375e97380884c577d31dab66", "Created": "2022-07-01T07:46:03.516440885Z", "Path": "/bin/sh",

      Go to the container's configuration directory:

      (Linux) cd /var/lib/docker/containers/4fd7/

      Edit the PortBindings entry in hostconfig.json:

      vim hostconfig.json

      "PortBindings":{"2222/tcp":[{"HostIp":"","HostPort":"2222"}],"5000/tcp":[{"HostIp":"","HostPort":"5000"}],"80/tcp":[{"HostIp":"","HostPort":"40001"}],"8070/tcp":[{"HostIp":"","HostPort":"8070"}],"8081/tcp":[{"HostIp":"","HostPort":"8081"}]},

      "80/tcp":[{"HostIp":"","HostPort":"40001"}] (here 80 is the port inside the container and 40001 is the host port it is mapped to)

      Edit the ExposedPorts entry in config.v2.json:

      vi config.v2.json "ExposedPorts":{"2222/tcp":{},"5000/tcp":{},"80/tcp":{},"8081/tcp":{},"8070/tcp":{}},

      (3) Restart the services

      systemctl start docker

      Start the container:

      docker start <container id>

      Verify the change:

      docker ps -a

  3. Jun 2022
    1. This is a neat Docker trick for those who have an ARM development machine (Apple M1), but sometimes need to build x86/amd64 images locally to push up to a registry.

      Since Apple M1 is based on the ARM architecture, it is still possible to build images based on Linux x86/amd64 architecture using docker buildx:

      docker buildx build -f Dockerfile --platform linux/amd64 .

      However, building such images can be really slow, so we can create a builder profile (see the paragraphs below / my other annotation to this article).

    2. So, we can create this builder on our local machine. The nice part about this creation is that it is idempotent, so you can run this command many times without changing the result. All we have to do is to create a builder profile and in this case I have named it amd64_builder.

      Example of creating a Docker buildx builder profile on the Apple M1 machine. This makes builds for the amd64 architecture (pushed to a registry) around 10x faster than building under amd64 emulation on the M1 chip.
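
      A sketch of the two steps (the builder name follows the annotation; the image tag and registry are illustrative):

      # create the named builder profile once, then select it
      docker buildx create --name amd64_builder
      docker buildx use amd64_builder

      # build for linux/amd64 and push straight to a registry
      docker buildx build --platform linux/amd64 -t registry.example.com/my-app:latest --push .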

    1. You now know the difference between a container and a virtual machine, and you have seen the differences between heavyweight and lightweight virtualization. A container must stay lightweight; don't add superfluous content to it, so that it starts quickly, but it provides weaker isolation. Conversely, virtual machines offer very good isolation, but they are generally slower and much heavier.
  4. May 2022
    1. A normal Makefile for building projects with Docker would look more or less like this:

      Sample of a Makefile for building and tagging Docker images
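
      A hedged sketch of what the build and push targets of such a Makefile typically run (registry and image name are illustrative):

      GIT_SHA=$(git rev-parse --short HEAD)
      docker build -t registry.example.com/my-app:"$GIT_SHA" -t registry.example.com/my-app:latest .
      docker push registry.example.com/my-app:"$GIT_SHA"
      docker push registry.example.com/my-app:latest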

    2. One of the main benefits of tagging your container images with the corresponding commit hash is that it's easy to trace back to a specific point in time, know how the application looked and behaved at that specific point in history and, most importantly, blame the people who broke the application ;)

      Why tagging Docker images with SHA is useful

    1. Software Bill Of Materials (SBOM) is analogous to a packing list for a shipment; it’s all the components that make up the software, or were used to build it. For container images, this includes the operating system packages that are installed (e.g.: ca-certificates) along with language specific packages that the software depends on (e.g.: log4j). The SBOM could include only some of this information or even more details, like the versions of components and where they came from.

      Software Bill Of Materials (SBOM)

    2. Included in Docker Desktop 4.7.0 is a new, experimental docker sbom CLI command that displays the SBOM (Software Bill Of Materials) of any Docker image.

      New docker sbom CLI command
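
      Minimal usage sketch (the image is just an example):

      # print the SBOM of any image (Docker Desktop 4.7.0+)
      docker sbom nginx:latest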

    1. As of today, the Docker Engine is to be intended as an open source software for Linux, while Docker Desktop is to be intended as the freemium product of the Docker, Inc. company for Mac and Windows platforms. From Docker's product page: "Docker Desktop includes Docker Engine, Docker CLI client, Docker Build/BuildKit, Docker Compose, Docker Content Trust, Kubernetes, Docker Scan, and Credential Helper".

      About Docker Engine and Docker Desktop

    2. The diagram below tries to summarise the situation as of today, and most importantly to clarify the relationships between the various moving parts.

      Containers (the backend):

    1. Without accounting for what we install or add inside, the base python:3.8.6-buster weighs 882MB vs 113MB for the slim version. Of course it's at the expense of many tools such as build toolchains, but you probably don't need them in your production image. Your ops teams should be happier with these lighter images: less attack surface, less code that can break, less transfer time, less disk space used, ... And our Dockerfile is still readable so it should be easy to maintain.

      See sample Dockerfile above this annotation (below there is a version tweaked even further)
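
      Since that Dockerfile isn't captured here, a rough sketch of the slim-based variant being discussed (application details are illustrative):

      FROM python:3.8.6-slim-buster
      WORKDIR /app
      COPY requirements.txt .
      RUN pip install --no-cache-dir -r requirements.txt
      COPY . .
      CMD ["python", "main.py"]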

    2. scratch is a special empty image with no operating system.
  5. Mar 2022
    1. Have you ever built an image only to realize that you actually need it on a user account other than root, requiring you to rebuild the image again in rootless mode? Or have you built an image on one machine but run containers on the image using multiple different machines? Now you need to set up an account on a registry, push the image to the registry, Secure Shell (SSH) to each device you want to run the image on, and then pull the image. The podman image scp command solves both of these annoying scenarios as quickly as they occur.

      Podman 4.0 can transfer container images without a registry.

      For example:

      • You can copy a root image to a non-root account:

        $ podman image scp root@localhost::IMAGE USER@localhost::

      • Or copy an image from one machine to another with this command:

        $ podman image scp me@192.168.68.122::IMAGE you@192.168.68.128::

    1. But the problem with Poetry is arguably down to the way Docker’s build works: Dockerfiles are essentially glorified shell scripts, and the build system semantic units are files and complete command runs. There is no way in a normal Docker build to access the actually relevant semantic information: in a better build system, you’d only re-install the changed dependencies, not reinstall all dependencies anytime the list changed. Hopefully someday a better build system will eventually replace the Docker default. Until then, it’s square pegs into round holes.

      Problem with Poetry/Docker
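
      A common mitigation (my sketch, not from the article) is to copy only the dependency manifests first, so the expensive install layer stays cached until they actually change:

      FROM python:3.9-slim-bullseye
      WORKDIR /app
      # only the files that define dependencies...
      COPY pyproject.toml poetry.lock ./
      RUN pip install --no-cache-dir poetry && poetry install --no-root
      # ...then the rest of the source, which changes far more often
      COPY . .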

    2. Third, you can use poetry-dynamic-versioning, a plug-in for Poetry that uses Git tags instead of pyproject.toml to set your application’s version. That way you won’t have to edit pyproject.toml to update the version. This seems appealing until you realize you now need to copy .git into your Docker build, which has its own downsides, like larger images unless you’re using multi-stage builds.

      Approach of using poetry-dynamic-versioning plugin

    3. But if you’re doing some sort of continuous deployment process where you’re continuously updating the version field, your Docker builds are going to be slow.

      Be careful when updating the version field of pyproject.toml around Docker

    1. "21 containers running .. docker compose 👍 Create a docker user for easy permissions."

      Reply: "Could you give us more info on that? I have some weird permissions issues sometimes that I can't figure out how to resolve. And how do you use compose on a Synology NAS? Thanks!"

      Reply: "Most of the containers from linuxserver on dockerhub have a PGID and PUID env; if you fill in the created docker user, it can access the volumes you want. Just make sure to add the created docker user to Synology permissions on the dirs."
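
      A minimal compose sketch of the PUID/PGID pattern described above (image, IDs, and paths are illustrative):

      services:
        app:
          image: lscr.io/linuxserver/syncthing
          environment:
            - PUID=1026   # UID of the dedicated docker user
            - PGID=100    # its primary group
          volumes:
            - /volume1/docker/app:/config
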
    1. The ENTRYPOINT specifies a command that will always be executed when the container starts. The CMD specifies arguments that will be fed to the ENTRYPOINT.

      Another great comparison of ENTRYPOINT and CMD command
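
      A tiny illustrative Dockerfile (my example, not from the answer):

      FROM debian:bullseye
      ENTRYPOINT ["echo", "Hello"]
      CMD ["world"]

      # docker run <image>        -> Hello world
      # docker run <image> Docker -> Hello Docker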

  6. Feb 2022
    1. LXC, is a serious contender to virtual machines. So, if you are developing a Linux application or working with servers, and need a real Linux environment, LXC should be your go-to. Docker is a complete solution to distribute applications and is particularly loved by developers. Docker solved the local developer configuration tantrum and became a key component in the CI/CD pipeline because it provides isolation between the workload and reproducible environment.

      LXC vs Docker

    1. If you're torn between using CMD and ENTRYPOINT as the starting point of your container, ask yourself the following question: must my command always be executed? If the answer is yes, use ENTRYPOINT. Moreover, if you need to pass additional parameters that can be overridden when the container is started, also use the CMD instruction.

      How to simply decide if to use CMD or ENTRYPOINT in a Dockerfile

  7. Jan 2022
    1. I was seeing this same issue. Updating the values hub: containerSecurityContext: privileged: true seems to have been the fix for me. At least things are a lot more stable now. I changed it based on the explanation for --privileged in the README.


    1. This basic example compiles a simple Go program. The naive way on the left results in a 961 MB image. When using a multi-stage build, we copy just the compiled binary which results in a 7 MB image.
      # Image size: 7 MB
      
      FROM golang:1.17.5 as builder
      
      WORKDIR /workspace
      COPY . .
      ENV CGO_ENABLED=0
      RUN go get && go build -o main .
      
      FROM scratch
      WORKDIR /workspace
      COPY --from=builder \
           /workspace/main \
           /workspace/main
      
      CMD ["/workspace/main"]
      
    2. Docker introduced multi-stage builds starting from Docker Engine v17.05. This allows us to perform all preparations steps as before, but then copy only the essential files or output from these steps.

      Multi-stage builds are great for Dockerfile steps that aren't used at runtime

    3. Making a small change to a file or moving it will create an entire copy of the file. Deleting a file will only hide it from the final image, but it will still exist in its original layer, taking up space. This is all a result of how images are structured as a series of read-only layers. This provides reusability of layers and efficiencies with regards to how images are stored and executed. But this also means we need to be aware of the underlying structure and take it into account when we create our Dockerfile.

      Summary of file duplication topic in Docker images

    4. In this example, we created 3 copies of our file throughout different layers of the image. Despite removing the file in the last layer, the image still contains the file in other layers which contributes to the overall size of the image.
      FROM debian:bullseye
      
      COPY somefile.txt . #1
      
      # Small change but entire file is copied
      RUN echo "more data" >> somefile.txt #2
      
      # File moved but layer now contains an entire copy of the file
      RUN mv somefile.txt somefile2.txt #3
      
      # File won't exist in this layer,
      # but it still takes up space in the previous ones.
      RUN rm somefile2.txt
      
    5. We’re just chmod'ing an existing file, but Docker can’t change the file in its original layer, so that results in a new layer where the file is copied in its entirety with the new permissions.In newer versions of Docker, this can now be written as the following to avoid this issue using Docker’s BuildKit:

      Instead of this:

      FROM debian:bullseye
      
      COPY somefile.txt .
      
      RUN chmod 777 somefile.txt
      

      Try to use this:

      FROM debian:bullseye
      
      COPY --chmod=777 somefile.txt .
      
    6. when you make changes to files that come from previous layers, they’re copied into the new layer you’re creating.
    7. Many processes will create temporary files, caches, and other files that have no benefit to your specific use case. For example, running apt-get update will update internal files that you don’t need to persist because you’ve already installed all the packages you need. So we can add rm -rf /var/lib/apt/lists/* as part of the same layer to remove those (removing them with a separate RUN will keep them in the original layer, see “Avoid duplicating files”). Docker recognize this is an issue and went as far as adding apt-get clean automatically for their official Debian and Ubuntu images.

      Removing cache
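
      Illustrative sketch of the pattern (the package is arbitrary):

      RUN apt-get update \
       && apt-get install -y --no-install-recommends curl \
       && rm -rf /var/lib/apt/lists/*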

    8. An important way to ensure you’re not bringing in unintended files is to define a .dockerignore file.

      .dockerignore sample:

      # Ignore git and caches
      .git
      .cache
      
      # Ignore logs
      logs
      
      # Ignore secrets
      .env
      
      # Ignore installed dependencies
      node_modules
      
      ...
      
    9. You can save any local image as a tar archive and then inspect its contents.

      Example of inspecting docker image:

      bash-3.2$ docker save <image-digest> -o image.tar
      bash-3.2$ mkdir image && tar -xf image.tar -C image
      bash-3.2$ cd image
      bash-3.2$ tar -xf <layer-digest>/layer.tar
      bash-3.2$ ls
      

      One can also use Dive or Contains.dev

  8. Dec 2021
    1. docker scan elastic/logstash:7.13.3 | grep 'Arbitrary Code Execution'

      Example of scanning docker image for a log4j vulnerability

  9. Nov 2021
    1. I’d probably choose the official Docker Python image (python:3.9-slim-bullseye) just to ensure the latest bugfixes are always available.

      python:3.9-slim-bullseye may be the sweet spot for a Python Docker image

    2. So which should you use? If you’re a RedHat shop, you’ll want to use their image. If you want the absolute latest bugfix version of Python, or a wide variety of versions, the official Docker Python image is your best bet. If you care about performance, Debian 11 or Ubuntu 20.04 will give you one of the fastest builds of Python; Ubuntu does better on point releases, but will have slightly larger images (see above). The difference is at most 10% though, and many applications are not bottlenecked on Python performance.

      Choosing the best Python base Docker image depends on different factors.

    3. There are three major operating systems that roughly meet the above criteria: Debian “Bullseye” 11, Ubuntu 20.04 LTS, and RedHat Enterprise Linux 8.

      3 candidates for the best Python base Docker image

  10. Oct 2021
    1. You probably shouldn't use Alpine for Python projects, instead use the slim Docker image versions.

      (have a look below this highlight for a full reasoning)

  11. Sep 2021
    1. Simply put: alias docker=podman.
    2. What is Podman? Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System.
    1. You can attach Visual Studio Code to this container by right-clicking on it and choosing the Attach Visual Studio Code option. It will open a new window and ask you which folder to open.

      It seems like VS Code offers a better way to manage Docker containers

    2. You don’t have to download them manually, as a docker-compose.yml will do that for you. Here’s the code, so you can copy it to your machine:

      Sample docker-compose.yml file to download both: Kafka and Zookeeper containers

    1. In 2020, 35% of respondents said they used Docker. In 2021, 48.85% said they used Docker. If you look at estimates for the total number of developers, they range from 10 to 25 million. That's 1.4 to 3 million new users this year.

      Rapidly growing popularity of Docker (2020 - 2021)

    1. kind, microk8s, or k3s are replacements for Docker Desktop. False. Minikube is the only drop-in replacement. The other tools require a Linux distribution, which makes them a non-starter on macOS or Windows. Running any of these in a VM misses the point – you don't want to be managing the Kubernetes lifecycle and a virtual machine lifecycle. Minikube abstracts all of this.

      At the current moment the best approach is to use minikube with a preferred backend (Docker Engine and Podman are already there), and you can simply run one command to configure Docker CLI to use the engine from the cluster.
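
      The "one command" referred to is presumably minikube's docker-env helper:

      # point the local Docker CLI at the Docker engine inside the minikube cluster
      eval $(minikube docker-env)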

  12. Aug 2021
    1. There are multiple tools for running Kubernetes on your local machine, but it basically boils down to two approaches on how it is done

      We can run Kubernetes locally as a:

      1. binary package
      2. container using dind
  13. Jul 2021
    1. there is a drawback: docker-compose runs on a single node, which makes scaling hard, manual and very limited. To be able to scale services across multiple hosts/nodes, orchestrators like docker-swarm or kubernetes come into play.
      • docker-compose runs on a single node (hard to scale)
      • docker-swarm or kubernetes run on multiple nodes
    2. We had to build the image for our Python API every time we changed the code, we had to run each container separately, and we had to manually ensure that our database container was running first. Moreover, we had to create a network beforehand so that we could connect the containers; we added these containers to that network and called it mynet back then. With docker-compose we can forget about all of that.

      Things being resolved by a docker-compose

    1. Even though Kubernetes is moving away from Docker, it will always support the OCI and Docker image formats. Kubernetes doesn't pull and run images itself; instead the Kubelet relies on container engines like CRI-O and containerd to pull and run the images. These are the two main container engines used with Kubernetes, and they both support the Docker and OCI image formats, so no worries on this one.

      Reason why one should not be worried about k8s depreciating Docker

    1. We comment out the failed line, and the Dockerfile now looks like this:

      To test a failing Dockerfile step, it is best to comment it out, successfully build the image, and then re-run the failing command manually from inside a container started from that image.
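
      Sketch of that workflow (the image name is arbitrary):

      # build the image with the failing line commented out
      docker build -t debug-image .

      # open an interactive shell in it and re-run the failing command by hand
      docker run -it debug-image /bin/bash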

    1. When you have one layer that downloads a large temporary file and you delete it in another layer, that has the result of leaving the file in the first layer, where it gets sent over the network and stored on disk, even when it's not visible inside your container. Changing permissions on a file also results in the file being copied to the current layer with the new permissions, doubling the disk space and network bandwidth for that file.

      Things to watch out for in Dockerfile operations

    2. making sure the longest RUN commands come first and in their own layer (again, to be cached), instead of being chained with other RUN commands: if one of those fails, the long command will have to be re-executed. If that long command is isolated in its own (Dockerfile line)/layer, it will be cached.

      Optimising Dockerfile is not always as simple as MIN(layers). Sometimes, it is worth keeping more than a single RUN layer

    1. Docker has a default entrypoint which is /bin/sh -c but does not have a default command.

      This StackOverflow answer is a good explanation of the purpose behind the ENTRYPOINT and CMD command

    1. Spotify has a CLI that helps users build Docker images for Kubeflow Pipelines components. Users rarely need to write Docker files.

      Spotify approach towards writing Dockerfiles for Kubeflow Pipelines

  14. Jun 2021
    1. local physical subnet

      How is this physical subnet defined or decided? For example, my docker container's inet address on eth0 is 172.17.0.2. How is 172.17.0 defined?
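
      One way to see where it comes from (my note, not from the thread): the subnet belongs to Docker's default bridge network and is configurable in the daemon settings:

      # show the default bridge network; its IPAM config contains the subnet
      # (typically 172.17.0.0/16, hence addresses like 172.17.0.2)
      docker network inspect bridge

      # /etc/docker/daemon.json can override it, e.g. via the "bip" option:
      # { "bip": "10.10.0.1/24" }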

    1. It basically takes any command line arguments passed to entrypoint.sh and execs them as a command. The intention is basically "Do everything in this .sh script, then in the same shell run the command the user passes in on the command line".

      What is the use of this part in a Docker entry point:

      #!/bin/bash
      set -e
      
      ... code ...
      
      exec "$@"
      
    1. Docker container can call out to a secrets manager for its secrets. But, a secrets manager is an extra dependency. Often you need to run a secrets manager server and hit an API. And even with a secrets manager, you may still need Bash to shuttle the secret into your target application.

      Secrets manager in Docker is not a bad option but adds more dependencies

  15. May 2021
  16. Apr 2021
    1. Note: Building a container image using docker build on-cluster is very unsafe and is shown here only as a demonstration. Use kaniko instead.

      Why?

  17. Feb 2021
    1. It was actually very common to have one Dockerfile to use for development (which contained everything needed to build your application), and a slimmed-down one to use for production, which only contained your application and exactly what was needed to run it. This has been referred to as the “builder pattern”.

      Builder pattern - maintaining two Dockerfiles: 1st for development, 2nd for production. It's not an ideal solution and we shall aim for multi-stage builds.

      Multi-stage build - uses multiple FROM commands in the same Dockerfile. The end result is the same tiny production image as before, with a significant reduction in complexity. You don’t need to create any intermediate images and you don’t need to extract any artifacts to your local system at all

    1. volumes are often a better choice than persisting data in a container’s writable layer, because a volume does not increase the size of the containers using it, and the volume’s contents exist outside the lifecycle of a given container.

      Aim for using volumes instead of bind mounts in Docker. Also, if your container generates non-persistent data, consider using a tmpfs mount to avoid storing the data permanently

      One case where it is appropriate to use bind mounts is during development, when you may want to mount your source directory or a binary you just built into your container. For production, use a volume instead, mounting it into the same location as you mounted a bind mount during development.
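
      A small sketch contrasting the two (names and paths are illustrative):

      # production: a named volume managed by Docker
      docker volume create app-data
      docker run -v app-data:/var/lib/app my-image

      # development: bind-mount the source tree into the same location
      docker run -v "$(pwd)/src":/var/lib/app my-image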

  18. Jan 2021
    1. We recommend the Alpine image as it is tightly controlled and small in size (currently under 5 MB), while still being a full Linux distribution. This is fine advice for Go, but bad advice for Python, leading to slower builds, larger images, and obscure bugs.

      Alpine Linux isn't the most convenient OS for Python, but it's fine for Go

    2. If a service can run without privileges, use USER to change to a non-root user. This is excellent advice. Running as root exposes you to much larger security risks, e.g. a CVE in February 2019 that allowed escalating to root on the host was preventable by running as a non-root user. Insecure: However, the official documentation also says: … you should use the common, traditional port for your application. For example, an image containing the Apache web server would use EXPOSE 80. In order to listen on port 80 you need to run as root. You don’t want to run as root, and given pretty much every Docker runtime can map ports, binding to ports <1024 is completely unnecessary: you can always map port 80 in your external port. So don’t bind to ports <1024, and do run as a non-privileged user.

      Due to security reasons, if you don't need the root privileges, bind to ports >=1024
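
      A minimal Dockerfile sketch of both recommendations (user name and port are illustrative):

      FROM python:3.9-slim-bullseye
      RUN useradd --create-home appuser
      USER appuser
      # listen on an unprivileged port inside the container...
      EXPOSE 8000
      CMD ["python", "-m", "http.server", "8000"]

      # ...and map it to 80 outside: docker run -p 80:8000 <image>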

    3. Multi-stage builds allow you to drastically reduce the size of your final image, without struggling to reduce the number of intermediate layers and files. This is true, and for all but the simplest of builds you will benefit from using them. Bad: However, in the most likely image build scenario, naively using multi-stage builds also breaks caching. That means you’ll have much slower builds.

      Multi-stage builds claim to reduce image size but it can also break caching

    4. layer caching is great: it allows for faster builds and in some cases for smaller images. Insecure: However, it also means you won’t get security updates for system packages. So you also need to regularly rebuild the image from scratch.

      Layer caching is great for speeding up the processes, but it can bring some security issues

    1. So, what I've discovered in a meanwhile. It was an ubuntu-docker issue. Recently I upgraded my ubuntu from 16.04 to 18.04. This change seems to be incompatible with the docker version I had, 1.11.0.
    1. Running all that manually (more than 100 scripts across all devices) is an awful job for a human. I want to set them up once and more or less forget about it, only checking now and then.

      My ideals for all of my regular processes and servers:

      • Centralized configuration and control - I want to go into a folder and configure everything I'm running everywhere.
      • Configuration file has the steps needed to set up from scratch - so I can just back up the configuration and data folders and not worry about backing up the programs.
      • Control multiple machines from the central location. Dictate where tasks can run.
      • [nice to have] Allow certain tasks to running externally, e.g. in AWS ECS or Lambda or similar
      • Command-line access for management (web is great for monitoring)
      • Flexible scheduling (from strict every minute to ~daily)
      • Support for daemons, pseudo-daemons (just run repeatedly with small delays), and periodic tasks.
      • Smart alerts - some processes can fail occasionally, but needs to run at least once per day - some processes should never fail. A repeating inaccurate alert is usually just as bad as no alert at all.
      • Error code respect (configurable)
      • Logs - store the program output, organize it, keep it probably in a date-based structure
      • Health checks - if it's a web server, is it still responding to requests? Has it logged something recently? Touched a database file? If not, it's probably dead.
      • Alerts support in Telegram and email
      • Monitor details about the run - how long did it take? How much CPU did it use? Has it gotten slower over time?
      • Dashboard - top-level stats, browse detailed run stats and logs

      So much of the configuration/control stuff screams containers, so more and more I'm using Docker for my scripts, even simpler ones.

      I'm pretty sure a lot of this is accomplished by existing Docker orchestration tools. Been delaying that rabbit hole for a long time.

      I think the key thing that makes this not just a "cron" problem for me, is I want something that monitors and manages both itself and the tasks I want to run, including creating/setting up if not already. I also want to ideally focus my mental energy into a single controller that handles my "keep this running" things all together, be they servers or infrequent tasks.

      Doesn't have to be a single project. Might be multiple pieces glued together somehow.

  19. Dec 2020
  20. Nov 2020
    1. docker stack deploy -c docker-compose.yml dev

      I wasn't able to get the mysql image to build right unless I ran this from within the same directory containing docker-compose.yml.

    2. apache/vhosts.conf

      The crucial thing here is making sure the directory paths in your Apache vhosts file take into account the local to container mapping you made earlier.

    1. Unfortunately, this image was built months ago. No one has the build any more. We are left with a descendant image that has all the original content but on lower layers.
    2. docker build --cache-from=base_image:2.2.1,custom-gource:0.1 . -t custom-gource:0.2
    3. This is addressing a security issue; and the associated threat model is "as an attacker, I know that you are going to do FROM ubuntu and then RUN apt-get update in your build, so I'm going to trick you into pulling an image that _pretends_ to be the result of ubuntu + apt-get update so that next time you build, you will end up using my fake image as a cache, instead of the legit one." With that in mind, we can start thinking about an alternate solution that doesn't compromise security.
    1. On a user-defined bridge network, containers can resolve each other by name or alias. But containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy.
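
      Quick illustration (container, network, and image names are arbitrary):

      docker network create my-net
      docker run -d --name db --network my-net -e POSTGRES_PASSWORD=secret postgres:13
      docker run -d --name app --network my-net my-app   # "my-app" can reach the database simply as "db"
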
  21. Oct 2020
    1. I debugged docker-compose and docker-py and figured out that you should either use environment variables or options on the command line. You should not mix these. If you specify even --tls on the command line, then you will have to specify all the options, because the TLSConfig object is then created completely from the command options and overrides the TLSConfig object created from the environment variables.
    1. To be clear: this setup works great with just docker daemon, but something about -compose is amiss.
    2. Using the docker client I have good success accessing the remote docker server. We call the remote server up to a hundred thousand times a day with good success. Attempting to use docker-compose, installed either via curl OR pip install --upgrade with python 2.7, we get an SSL error:
    1. docker --tlsverify ps executes just fine, while docker-compose --tlsverify up -d --force-recreate gives me an error: SSL error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
    2. I only have one set of certs. And I can't see how they can be different because docker commands work using the endpoint. It's just the docker-compose command that fails
    3. With the docker-compose command you cannot mix environment variables and command options. You can specify the settings in environment variables and then just use docker-compose ps. The connection will be secured with the TLS protocol if the DOCKER_TLS_VERIFY variable is set.
    4. You don't need to pass the --tls or --tlsverify option in the docker-config path, as the task already sets the DOCKER_TLS_VERIFY environment variable. I debugged docker-compose and the docker-py library and verified that if you pass any --tls or --tlsverify flag, it tries to create the TLSConfig object from the options and not from the environment
    1. you'll run into the error you've run into if your remote Docker host has a certificate signed by something other than the ca.pem that you've got at that location.
  22. Aug 2020
    1. An Image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime

      Official definition of a container image

  23. Jul 2020
  24. Jun 2020
  25. May 2020
    1. if [ -z "${DOCKER_HOST:-}" ]; then
           if _should_tls || [ -n "${DOCKER_TLS_VERIFY:-}" ]; then
               export DOCKER_HOST='tcp://docker:2376'
           else
               export DOCKER_HOST='tcp://docker:2375'
           fi
       fi
    1. Image consumers can enable DCT to ensure that images they use were signed. If a consumer enables DCT, they can only pull, run, or build with trusted images. Enabling DCT is a bit like applying a “filter” to your registry. Consumers “see” only signed image tags and the less desirable, unsigned image tags are “invisible” to them.
    1. Authors of third-party tools should prefix each label key with the reverse DNS notation of a domain they own, such as com.example.some-label.
    1. Note: If you have 2-Factor Authentication enabled on your account, you need to pass a personal access token instead of your password in order to log in to GitLab's Container Registry.
  26. Apr 2020
    1. AinD launches Android apps in Docker, by nesting Anbox containers inside Docker.

      AinD - useful tool when we need to run an Android app 24/7 in the cloud.

      Unlike the alternatives, AinD is not VM-based, but IaaS-based

    1. To use Gunicorn as your web server, it must be included in the requirements.txt file as an app dependency. It does not need to be installed in your virtual environment/host machine.
    1. docker-compose rm -f -s -v yourService

      useful commands for launching a single service in a docker-compose file without running it in the background so you can see the logs:

      docker-compose rm -fsv service
      docker-compose up service
      
  27. Mar 2020
    1. from Docker Compose on a single machine, to Heroku and similar systems, to something like Snakemake for computational pipelines.

      Other alternatives to Kubernetes:

      • Docker Compose on a single machine
      • Heroku and similar systems
      • Snakemake for computational pipelines
    1. That makes sense, the new file gets created in the upper directory.

      If you add a new file, such as with:

      $ echo "new file" > merged/new_file

      It will be created in the upper directory

    2. Combining the upper and lower directories is pretty easy: we can just do it with mount!

      Combining lower and upper directories using mount:

      $ sudo mount -t overlay overlay -o lowerdir=/home/bork/test/lower,upperdir=/home/bork/test/upper,workdir=/home/bork/test/work /home/bork/test/merged

    3. Overlay filesystems, also known as “union filesystems” or “union mounts” let you mount a filesystem using 2 directories: a “lower” directory, and an “upper” directory.

      Docker doesn't make copies of images, but instead uses an overlay.

      Overlay filesystems, let you mount a system using 2 directories:

      • the lower directory (read-only)
      • the upper directory (read and write).

      When a process:

      • reads a file, the overlayfs filesystem driver looks into the upper directory and if it's not present, it looks into the lower one
      • writes a file, overlayfs will just use the upper directory
  28. Feb 2020
    1. when we ran it natively on the source machine (i.e. not Dockerized, which reduces performance for all the tools by ~40%)
    1. docker-compose up -d

      Error for me here...

      ➜ hello-world docker-compose up -d
      zsh: command not found: docker-compose

  29. Jan 2020