- Sep 2024
-
github.com
-
A Docker Registry can contain multiple repositories; each repository can contain multiple tags; and each tag corresponds to one image. Typically, a repository holds images of different versions of the same piece of software, with tags commonly used to denote those versions. We can specify a particular version's image with the `<repository>:<tag>` format. If no tag is given, `latest` is used as the default.
-
A container should not write any data into its storage layer; the container storage layer should remain stateless. All file writes should go to volumes or bind-mounted host directories; reads and writes in those locations bypass the container storage layer and operate directly on the host (or network storage), with better performance and stability.
-
What is Docker? Docker is a wrapper around Linux containers that provides a simple, easy-to-use interface for working with them. It is currently the most popular Linux container solution. Docker packages an application together with its dependencies into a single file; running that file spins up a virtual container.
- More efficient use of system resources
- Faster startup times
- Consistent runtime environments
- Continuous delivery and deployment
- Easier migration
- Easier maintenance and scaling
-
- Jun 2024
-
www.kenmuse.com
-
Sample `.devcontainer/devcontainer.json`:

```json
{
  "name": "Global",
  "build": {
    "context": "..",
    "dockerfile": "Dockerfile"
  },
  "containerEnv": {
    "PYTHONPATH": "."
  },
  "customizations": {
    "vscode": {
      "settings": {
        "extensions.verifySignature": false
      },
      "extensions": [
        "GitHub.copilot",
        "ms-python.vscode-pylance",
        "ms-python.python",
        "eamodio.gitlens"
      ]
    }
  },
  "initializeCommand": "/bin/bash -c '[[ -d ${HOME}/.aws ]] || { echo \"Error: ${HOME}/.aws directory not found.\"; exit 1; }; [[ -f ${HOME}/.netrc ]] || { echo \"Error: ${HOME}/.netrc file not found.\"; exit 1; }; [[ -d ${HOME}/.ssh ]] || { echo \"Error: ${HOME}/.ssh directory not found.\"; exit 1; }; echo \"\n> All required mounts found on the host machine.\"'",
  "onCreateCommand": {
    "hadolint": "apt-get update && apt-get install wget -y && wget -O /usr/bin/hadolint https://github.com/hadolint/hadolint/releases/download/v2.12.0/hadolint-Linux-x86_64 && chmod u+x /usr/bin/hadolint",
    "precommit": "pip install pre-commit"
  },
  "updateContentCommand": "/bin/bash -c 'if grep -A 2 \"machine gitlab.com\" ~/.netrc | grep -q \"password\" && GITLAB_TOKEN=$(grep -A 2 \"machine gitlab.com\" ~/.netrc | grep -oP \"(?<=password ).*\" | tr -d \"\\n\") && [ -n \"$GITLAB_TOKEN\" ]; then echo \"\n> Token found in ~/.netrc\"; else read -sp \"\n> Enter your GitLab token: \" GITLAB_TOKEN && echo; fi; echo \"export GITLAB_TOKEN=$GITLAB_TOKEN\" >> ~/.bashrc && . ~/.bashrc && poetry config http-basic.abc __token__ $GITLAB_TOKEN'",
  "postCreateCommand": ". ~/.bashrc && curl -s --location 'https://gitlab.com/api/v4/projects/12345/repository/files/.pre-commit-config.yaml/raw?ref=main' --header \"PRIVATE-TOKEN: $GITLAB_TOKEN\" -o .pre-commit-config.yaml",
  "postAttachCommand": "/bin/bash -c '. ~/.bashrc && read -p \"\n> Do you want to update the content of devcontainer.json? (y/n): \" response; if [[ \"$response\" == \"y\" ]]; then curl -s --location \"https://gitlab.com/api/v4/projects/12345/repository/files/devcontainer.json/raw?ref=main\" --header \"PRIVATE-TOKEN: $GITLAB_TOKEN\" -o .devcontainer/devcontainer.json; else echo \"\n> Skipping update of devcontainer.json\"; fi'",
  "mounts": [
    "source=${localEnv:HOME}/.aws/,target=/root/.aws/,type=bind,readonly",
    "source=${localEnv:HOME}/.netrc,target=/root/.netrc,type=bind,readonly",
    "source=${localEnv:HOME}/.ssh/,target=/root/.ssh/,type=bind,readonly"
  ]
}
```
-
Some more of my recent learning with `devcontainer.json` (its Dev Container metadata):
- Interactive commands (those waiting for user input, like `read`) do not display the input request in (at least) the `onCreateCommand` and `postCreateCommand` sections, so it is better to keep them in `updateContentCommand` or `postAttachCommand`.
- If there are 2 `read` commands in a single section, like `updateContentCommand`, only the 1st one is displayed to the user, and the 2nd one is ignored.
- When I put a `read` command within a dictionary (with at least 2 key/values) of `postAttachCommand`, the interactive command wasn't being displayed.
- We need to use `/bin/bash -c` to be able to use `read -s` (the `-s` flag), which allows for securely passing the password so that it does not stay in the VS Code console. Also, I had trouble with interactive commands and `if` statements without it.
- Using `"GITLAB_TOKEN": "${localEnv:GITLAB_TOKEN}"` does not easily work, as it looks for a `GITLAB_TOKEN` env variable set locally on our host computers, and I believe no one does that.
- The dictionary seems to execute its scripts in parallel; therefore, it is not easily possible to break down long lines that have to execute in a chronological sequence.
- JSON does not allow for human-readable line breaks; therefore, it indeed seems impossible to improve the long one-liners.
- The files/folders mentioned within `mounts` need to exist locally (otherwise, the Docker container build fails). They are mounted before any other section. Technically, we can protect ourselves with the following command to surface an extra message in the VS Code container logs:

```json
"initializeCommand": "/bin/bash -c '[[ -d ${HOME}/.aws ]] || { echo \"Error: ${HOME}/.aws directory not found.\"; exit 1; }; [[ -f ${HOME}/.netrc ]] || { echo \"Error: ${HOME}/.netrc file not found.\"; exit 1; }; [[ -d ${HOME}/.ssh ]] || { echo \"Error: ${HOME}/.ssh directory not found.\"; exit 1; }'",
```

Another option is to get rid of the error completely, but this creates files on the host machine, so it is not an ideal solution:

```json
"initializeCommand": "mkdir -p ~/.ssh ~/.aws && touch ~/.netrc",
```
-
["bash", "-i", "-c", "read -p 'Type a message: ' -t 10 && echo Attach $REPLY"],
I would also simply put the following:
```bash
/bin/bash -c 'read -p "Type a message: " -t 10 && echo Attach $REPLY'
```
-
I also found that `updateContentCommand` allows for user interaction (it displays the interactive command in the VS Code console). Consequently, it's one of the only commands that consistently allows interactions with users.
-
There are six available lifecycle script hooks
Explanation of the 6 available `devcontainer.json` (Dev Container in VS Code) hooks.
-
-
github.com
-
How can I wait for container X before starting Y? This is a common problem, and in earlier versions of docker-compose it required the use of additional tools and scripts such as wait-for-it and dockerize. Using the healthcheck parameter, these additional tools and scripts are often no longer necessary.
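A minimal compose sketch of this pattern (the service names and the pg_isready check are my assumptions, not from the linked thread):

```yaml
services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy   # web starts only once db reports healthy
```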
-
-
github.com
-
docker inspect --format='{{.State.Health.Status}}'
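A hedged usage sketch for that command (the container name is just an example):

```bash
# block until a container's healthcheck reports "healthy"
until [ "$(docker inspect --format='{{.State.Health.Status}}' my-db)" = "healthy" ]; do
  sleep 2
done
```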
-
-
www.howtogeek.com
-
Running Docker inside Docker lets you build images and start containers within an already containerized environment.
-
If your use case means you absolutely require dind, there is a safer way to deploy it. The modern Sysbox project is a dedicated container runtime that can nest other runtimes without using privileged mode. Sysbox containers become VM-like so they're able to support software that's usually run bare-metal on a physical or virtual machine. This includes Docker and Kubernetes without any special configuration.
-
Bind mounting your host's daemon socket is safer, more flexible, and just as feature-complete as starting a dind container.
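For instance, a minimal sketch of the socket bind-mount approach (the image tag is an assumption):

```bash
# run a CLI-only Docker image with the host's daemon socket bind-mounted;
# the inner `docker ps` talks to the host daemon, so it lists host containers
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps
```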
-
Docker-in-Docker via dind has historically been widely used in CI environments. It means the "inner" containers have a layer of isolation from the host. A single CI runner container supports every pipeline container without polluting the host's Docker daemon.
-
While it often works, this is fraught with side effects and not the intended use case for dind. It was added to ease the development of Docker itself, not provide end user support for nested Docker installations.
-
This means containers created by the inner Docker will reside on your host system, alongside the Docker container itself. All containers will exist as siblings, even if it feels like the nested Docker is a child of the parent.
-
- May 2024
-
gitlab.com
-
```go
return &container.HostConfig{
	DNS:           e.Config.Docker.DNS,
	DNSSearch:     e.Config.Docker.DNSSearch,
	RestartPolicy: neverRestartPolicy,
	ExtraHosts:    e.Config.Docker.ExtraHosts,
	Privileged:    e.Config.Docker.Privileged,
	NetworkMode:   e.networkMode,
	Binds:         e.volumesManager.Binds(),
	ShmSize:       e.Config.Docker.ShmSize,
	Tmpfs:         e.Config.Docker.ServicesTmpfs,
	LogConfig: container.LogConfig{
		Type: "json-file",
	},
```
-
-
stackoverflow.com
-
For Linux systems, you can – starting from major version 20.10 of the Docker engine – now also communicate with the host via host.docker.internal. This won't work automatically; you need to provide the following run flag: --add-host=host.docker.internal:host-gateway
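A quick sketch of that flag in action (in Compose, the equivalent would be an `extra_hosts` entry):

```bash
# resolve host.docker.internal to the bridge gateway, i.e. the host
docker run --rm --add-host=host.docker.internal:host-gateway alpine \
  ping -c 1 host.docker.internal
```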
-
-
developers.redhat.com
-
Podman commands are the same as Docker's. When building Podman, the goal was to make sure that Docker users could easily adapt. So all the commands you are familiar with also exist in Podman. In fact, the claim is made that if you have existing scripts that run Docker, you can create a docker alias for podman and all your scripts should work (alias docker=podman). Try it.
-
This article does not get into the detailed pros and cons of the Docker daemon process. There is much to be said in favor of this approach, and I can see why, in the early days of Docker, it made a lot of sense. Suffice it to say that there were several reasons why Docker users were concerned about this approach as usage went up. To list a few:
- A single process could be a single point of failure.
- This process owned all the child processes (the running containers).
- If a failure occurred, then there were orphaned processes.
- Building containers led to security vulnerabilities.
- All Docker operations had to be conducted by a user (or users) with the same full root authority.
-
- Apr 2024
-
qiita.com
-
When using docker compose, you can enable this setting by setting the stdin_open and tty options to true in docker-compose.yml.
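A minimal sketch (the service and image names are examples):

```yaml
services:
  app:
    image: ubuntu        # image name is just an example
    stdin_open: true     # equivalent of docker run -i
    tty: true            # equivalent of docker run -t
```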
-
- Feb 2024
-
-
docker init will scan your project and ask you to confirm and choose the template that best suits your application. Once you select the template, docker init asks you for some project-specific information, automatically generating the necessary Docker resources for your project.
docker init
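A sketch of the flow (the directory name is an example):

```bash
cd my-project/
docker init
# after answering the prompts, files such as Dockerfile, .dockerignore,
# and compose.yaml are generated in the project root
```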
-
-
-
The result? Our runtime image just got 6x smaller! Six times! From > 1.1 GB to 170 MB.
See (above this annotation) the most optimized & CI friendly Python Docker build with Poetry (until this issue gets resolved)
-
This final trick is not known to many as it's rather newer compared to the other features I presented. It leverages Buildkit cache mounts, which basically instruct Buildkit to mount and manage a folder for caching reasons. The interesting thing is that such cache will persist across builds! By plugging this feature with Poetry cache (now you understand why I did want to keep caching?) we basically get a dependency cache that is re-used every time we build our project. The result we obtain is a fast dependency build phase when building the same image multiple times on the same environment.
Combining Buildkit cache and Poetry cache
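A sketch of the combined pattern (base image and cache path are my assumptions, not the article's exact Dockerfile):

```dockerfile
FROM python:3.11-slim
ENV POETRY_CACHE_DIR=/opt/.cache
RUN pip install poetry
WORKDIR /app
COPY pyproject.toml poetry.lock ./
# the BuildKit cache mount persists Poetry's cache across builds on the same builder
RUN --mount=type=cache,target=/opt/.cache poetry install --without dev --no-root
```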
-
-
marvelousmlops.substack.com
-
We’ve (painstakingly) manually reviewed 310 live MLOps positions, advertised across various platforms in Q4 this year
They went through 310 role descriptions and, even though role descriptions may vary significantly, they found 3 core skills that a large percentage of MLOps roles required:
📦 Docker and Kubernetes 🐍 Python 🌥 Cloud
-
- Jan 2024
-
github.com
-
docker-spacemacs from Harmonic Software Systems
-
- Dec 2023
-
docs.docker.com
-
Docker overview: Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker's methodologies for shipping, testing, and deploying code, you can significantly reduce the delay between writing code and running it in production.
short info for Docker
-
-
pythonspeed.com
-
Rebuild your images regularly
If you want both the benefits of caching, and to get security updates within a reasonable amount of time, you will need two build processes:
- The normal image build process that happens whenever you release new code.
- Once a week, or every night, rebuild your Docker image from scratch using
docker build --pull --no-cache
to ensure you have security updates.
-
Disabling caching
That suggests that sometimes you’re going to want to bypass the caching. You can do so by passing two arguments to docker build:
- --pull: This pulls the latest version of the base Docker image, instead of using the locally cached one.
- --no-cache: This ensures all additional layers in the Dockerfile get rebuilt from scratch, instead of relying on the layer cache.
If you add those arguments to docker build you will be ensured that the new image has the latest (system-level) packages and security updates.
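Concretely, the two build processes could look like this (the image name is an example):

```bash
# normal build (uses the layer cache)
docker build -t myapp:latest .

# scheduled weekly/nightly rebuild (bypasses caches to pick up security updates)
docker build --pull --no-cache -t myapp:latest .
```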
-
As long as you’re relying on caching, you’ll still get the old, insecure packages distributed in your images
-
caching means no updates
-
caching can lead to insecure images.
-
- Nov 2023
-
-
RUN poetry install --without dev && rm -rf $POETRY_CACHE_DIR
The ideal way of running `poetry install` within a `Dockerfile`, omitting a bunch of cache that would otherwise take up a lot of space (which we could discover with tools like dive)
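For context, a sketch of how that line could sit in a Dockerfile (the base image and paths are my assumptions):

```dockerfile
FROM python:3.11-slim
ENV POETRY_CACHE_DIR=/tmp/poetry_cache
RUN pip install poetry
WORKDIR /app
COPY pyproject.toml poetry.lock ./
# install runtime deps, then drop Poetry's cache in the same layer
RUN poetry install --without dev && rm -rf $POETRY_CACHE_DIR
```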
-
-
docs.docker.com
-
Rosetta is now Generally Available for all users on macOS 13 or later. It provides faster emulation of Intel-based images on Apple Silicon. To use Rosetta, see Settings. Rosetta is enabled by default on macOS 14.1 and later.
Tested it on my side, and `poetry install` of one Python project took 44 seconds instead of 2 minutes 53 seconds, so it's nearly a 4x speed increase!
-
- Oct 2023
-
wasmlabs.dev
-
PHP would serve WordPress when it's run as a standalone Wasm application.
`php.wasm` can essentially run in:
1. A Wasm application (runtime)
2. A Docker+Wasm container
3. Any app that embeds a Wasm runtime (e.g. Apache HTTPD)
4. A web browser
-
WebAssembly brings true portability to the picture. You can build a binary once and run it everywhere.
-
However, on top of the big image size, traditional containers are also bound to the architecture of the platform on which they run.
-
Wasm container images are much smaller than the traditional ones. Even the alpine version of the php container is bigger than the Wasm one.
`php` (166 MB), `php-alpine` (30.1 MB), `php-wasm` (5.35 MB)
-
With WASI SDK we can build a Wasm module out of PHP's codebase, written in C. After that, it takes a very simple Dockerfile based on scratch for us to make an OCI image that can be run with Docker+Wasm.
Building a WASM container that can be run with Docker+Wasm
-
Docker Desktop now includes support for WebAssembly. It is implemented with a containerd shim that can run Wasm applications using a Wasm runtime called WasmEdge. This means that instead of the typical Windows or Linux containers which would run a separate process from a binary in the container image, you can now run a Wasm application in the WasmEdge runtime, mimicking a container. As a result, the container image does not need to contain OS or runtime context for the running application - a single Wasm binary suffices.
Docker Desktop can run Wasm applications (binaries) instead of OS (Linux/Windows)
-
If WASM+WASI existed in 2008, we wouldn't have needed to create Docker. That's how important it is. WebAssembly on the server is the future of computing.
Quote from one of the co-founders of Docker
-
-
collabnix.com
-
the new Docker+Wasm integration allows you to run a Wasm application alongside your Linux containers at much faster speed.
```bash
time docker run hello-world
...
0.07s user 0.05s system 1% cpu 8.912 total

time docker run --runtime=io.containerd.wasmedge.v1 --platform=wasi/wasm32 ajeetraina/hello-wasm-docker
...
0.05s user 0.03s system 19% cpu 0.393 total
```
-
Docker Desktop and CLI can now manage both Linux containers and Wasm containers side by side.
-
- Sep 2023
- Jun 2023
-
github.com
-
What is the Docker host filesystem owner matching problem?
-
-
alexwlchan.net
-
This is the script, which I’ve named docker and put before the real Docker CLI in my PATH
Script to automatically start Docker if it's not running when we trigger a `docker` command
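A sketch of what such a wrapper could look like (not the author's exact script; the real-CLI path and macOS `open -a Docker` are assumptions):

```bash
#!/usr/bin/env bash
# saved as `docker` earlier in PATH than the real CLI
set -euo pipefail

REAL_DOCKER=/usr/local/bin/docker

if ! "$REAL_DOCKER" info >/dev/null 2>&1; then
  open -a Docker   # start Docker Desktop on macOS
  until "$REAL_DOCKER" info >/dev/null 2>&1; do sleep 1; done
fi

exec "$REAL_DOCKER" "$@"
```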
-
-
blog.devgenius.io
-
```bash
# save to tar file
docker save nodeversion > nodeversion.tar
# load from tar file
docker load < nodeversion.tar
```
Saving and loading Docker images locally
-
- May 2023
-
stackoverflow.com
-
Host machine: `docker run -it -p 8888:8888 image:version`
Inside the container: `jupyter notebook --ip 0.0.0.0 --no-browser --allow-root`
From the host machine, access this URL: `localhost:8888/tree`
3 ways of running `jupyter notebook` in a container
-
- Apr 2023
-
pythonspeed.com
-
If you install a package with pip’s --user option, all its files will be installed in the .local directory of the current user’s home directory.
One of the recommendations for Python multi-stage Docker builds. Thanks to `pip install --user`, the packages won't be spread across 3 different paths.
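A sketch of that multi-stage pattern (the entrypoint script is an assumption):

```dockerfile
# build stage: install dependencies into /root/.local via pip --user
FROM python:3.9-slim AS build
COPY requirements.txt .
RUN pip install --user -r requirements.txt

# runtime stage: copy only the installed packages
FROM python:3.9-slim
COPY --from=build /root/.local /root/.local
ENV PATH=/root/.local/bin:$PATH
COPY . /app
WORKDIR /app
CMD ["python", "main.py"]
```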
-
- Mar 2023
-
-
Using pex in combination with S3 for storing the pex files, we built a system where the fast path avoids the overhead of building and launching Docker images. Our system works like this: when you commit code to GitHub, the GitHub action either does a full build or a fast build depending on if your dependencies have changed since the previous deploy. We keep track of the set of dependencies specified in setup.py and requirements.txt. For a full build, we build your project dependencies into a deps.pex file and your code into a source.pex file. Both are uploaded to Dagster cloud. For a fast build we only build and upload the source.pex file. In Dagster Cloud, we may reuse an existing container or provision a new container as the code server. We download the deps.pex and source.pex files onto this code server and use them to run your code in an isolated environment.
Fast vs full deployments
-
-
github.com
-
This could be because the size can be misleading: there is on-disk size, push/pull payload size, and the sum of ungzipped tars. The size of the ungzipped tars is often used to represent the size of the image in Docker, but the actual size on disk depends on the graph driver. From the registry perspective, the sum of the gzipped layers is most important because it represents what the registry is storing and what needs to be transferred.
Docker image size on a local drive will be different
-
-
pythonspeed.com
-
depending on how smart the framework is, you might find yourself installing Conda packages over and over again on every run. This is inefficient, even when using a faster installer like Mamba.
-
there’s the bootstrapping problem: depending on the framework you’re using, you might need to install Conda and the framework driver before you can get anything going. A Docker image would come prepackaged with both, in addition to your code and its dependencies. So even if your framework supports Conda directly, you might want to use Docker anyway.
-
Mlflow supports both Conda and Docker-based projects.
-
The only thing that will depend on the host operating system is glibc, pretty much everything else will be packaged by Conda. So a pinned environment.yml or conda-lock.yml file is a reasonable alternative to a Docker image as far as having consistent dependencies.
Conda can be a sufficient alternative to Docker
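For illustration, a minimal pinned environment.yml could look like this (the pins are made-up examples):

```yaml
name: myproject
channels:
  - conda-forge
dependencies:
  - python=3.10.8
  - numpy=1.23.4
  - pandas=1.5.2
```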
-
To summarize, for the use case of Python development environments, here’s how you might choose alternatives to Docker:
(see table below)
-
Conda packages everything but the standard C library, from C libraries to the Python interpreter to command-line tools to compilers.
-
- Jan 2023
-
pythonspeed.com
-
Solution #3: Switch to Conda-Forge
Yet another possible solution for M1 users, but you need to use conda and expect fewer packages than on PyPI
-
In general, the environment variable is too heavy-handed and should be avoided, since it will impact all images you build or run. Given the speed impact, you don’t for example want to run your postgres image with emulation, to no benefit.
Which options to avoid
-
The problem with this approach is that the emulation slows down your runtime. How much slower is it? One benchmark I ran was only 6% of the speed of the host machine!
Speed is the core problem with emulation
-
The other option is to run x86_64 Docker images on your ARM64 Mac machine, using emulation. Docker is packaged with software that will translate or emulate x86_64 machine code into ARM64 machine code on the fly; it’s slow, but the code will run.
Another possible solution for M1 users (see snippets below)
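A quick sketch of forcing emulation (the image choice is an example):

```bash
# force the x86_64 variant of an image on an ARM64 Mac (runs under emulation)
docker run --rm --platform linux/amd64 python:3.9-slim \
  python -c "import platform; print(platform.machine())"   # prints x86_64
```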
-
Third, you can pre-compile wheels, store them somewhere, and install those directly instead of downloading the packages from PyPI.
Third possible solution for M1 users
-
If you have a compiler installed in your Docker image and any required native libraries and development headers, you can compile a native package from the source code. Basically, you add a RUN apt-get upgrade && apt-get install -y gcc and iterate until the package compiles successfully.
Second possible solution for M1 users
-
First, it’s possible the relevant wheels are available in newer versions of the libraries.
First possible solution for M1 users
-
When you pip install murmurhash==1.0.6 on a M1/M2 Mac inside Docker, again it looks at the available files
Other possible steps that `pip` will take when trying to install a Python package without a relevant CPU instruction set
-
When you pip install filprofiler==2022.05.0 on a M1/M2 Mac inside Docker, pip will look at the available files for that version, and then
3 steps that `pip` will take when trying to install a Python package without a relevant CPU instruction set
-
In either case, pure Python will Just Work, because it’s interpreted at runtime: there’s no CPU-specific machine code, it’s just text that the Python interpreter knows how to run. The problems start when we start using compiled Python extensions. These are machine code, and therefore you need a version that is specific to your particular CPU instruction set.
M1 Python issues
-
on an Intel/AMD PC or Mac, docker pull will pull the linux/amd64 image. On a newer Mac using M1/M2/Silicon chips, docker pull will pull the linux/arm64/v8 image.
Reason of all the M1 Docker issues
-
In order to meet its build-once-run-everywhere promise, Docker typically runs on Linux. Since macOS is not Linux, on macOS this is done by running a virtual machine in the background, and then the Docker images run inside the virtual machine. So whether you’re on a Mac, Linux, or Windows, typically you’ll be running linux Docker images.
-
- Dec 2022
-
www.docker.com
-
Docker Desktop is an easy-to-install application and includes Docker Engine, Docker CLI client, Docker Compose, Docker Content Trust, Kubernetes, and Credential Helper
-
Docker Hub is to Docker what GitHub is to Git
-
A container packages code and all its dependencies into a single unit, thus letting an application run quickly and reliably from one computing environment to another
-
-
docs.docker.com
-
Compose V2 has been re-written in Go, which improves integration with other Docker command-line features, and allows it to run natively on macOS on Apple silicon, Windows, and Linux, without dependencies such as Python.
-
Introduction of the Compose specification makes a clean distinction between the Compose YAML file model and the docker-compose implementation.
-
-
github.com
-
```yaml
# Due to lack of "expressivity" in Compose, we define our own couple of service
# "pseudo-types":
#
# - image-only services (name: *-image)
#
#   The only goal of these is to build images. No other services build images.
#
#   These have entrypoint overridden to exit immediately.
#
# - base services (name: *-base)
#
#   These contain common configuration and are intended to be extended.
#
#   Their command (not entrypoint, to keep the original one) is overridden to
#   exit immediately. Service must support a command to exit immediately.
#
# - task services (name: *-task)
#
#   These are intended for running one-off commands.
#
#   Their default command is overridden to exit immediately. Service must
#   support a command to exit immediately.
#
# - "real" services
#
#   These are actual services that stay up and running.
```
Tags
Annotators
URL
-
-
jasonkayzk.github.io
-
Deploying a single-node ELK Stack with Docker-Compose
-
-
pytimer.github.io
-
Troubleshooting why a Docker container cannot connect to the external network
-
-
www.zhihu.com
-
Has the Java crowd started shifting its attention from Spring Cloud to k8s, or even k8s+istio?
-
-
www.zhihu.com
-
Is it true that Java 8 performs poorly inside Docker?
-
-
www.zhihu.com
-
Why are game companies reluctant to move their servers to microservices?
-
-
www.simplilearn.com
-
Docker Desktop is a free, easy-to-install, downstream application for a Mac or Windows environment. The application lets you build and share containerized applications and microservices. Docker consists of Docker Engine, Docker Compose, Docker CLI client, Docker Content Trust, Kubernetes, and Credential Helper.
-
virtual environment has a hypervisor layer, whereas Docker has a Docker engine layer
-
-
betterprogramming.pub
-
A container is a runtime instance of an image
-
-
www.freecodecamp.org
-
Docker engine is the layer on which Docker runs. It’s a lightweight runtime and tooling that manages containers, images, builds, and more
-
A Dockerfile is where you write the instructions to build a Docker image
-
-
www.digitalocean.com
-
While a full dive into container orchestration is beyond the scope of this article, two prominent players are Docker, with Docker Compose and Docker Swarm mode, and Kubernetes. In rough order of complexity: Docker Compose is a container orchestration solution that deals with multi-container deployments on a single host. When there are multiple hosts involved, Docker Swarm mode is required.
-
- Nov 2022
-
tech.oeru.org
-
Dave Lane at OERu where they have been running an instance for a few years at https://mastodon.oeru.org/ – he has some Docker stuff written - he is super generous / helpful
ᔥ cogdog in How About A Fediverse Space? - Feature Requests - Reclaim Hosting Community Forums (11/11/2022 11:32:46)
-
-
www.freecodecamp.org
-
Docker is an open-source project based on Linux containers. It uses Linux Kernel features like namespaces and control groups to create containers on top of an operating system.
-
it’s important to understand some of the fundamental concepts around what a “container” is and how it compares to a Virtual Machine (VM)
-
- Oct 2022
-
github.com
-
Requirements for rpi_gpio not found
-
-
github.com
-
passenger-docker images contain an Ubuntu 20.04 operating system. You may want to update this OS from time to time, for example to pull in the latest security updates. OpenSSL is a notorious example. Vulnerabilities are discovered in OpenSSL on a regular basis, so you should keep OpenSSL up-to-date as much as you can. While we release passenger-docker images with the latest OS updates from time to time, you do not have to rely on us. You can update the OS inside passenger-docker images yourself, and it is recommended that you do this instead of waiting for us.
-
- Sep 2022
-
stackoverflow.com
-
the problem with docker builds is the made-up concept of "context". Dockerfiles are not sufficient to define a build, unless they are placed under a strategic directory (aka context), i.e. "/" as an extreme, so you can access any path (note that that's not the right thing to do in a sane project either..., plus it makes docker builds very slow because docker scans the entire context at start).
-
I would not change the project structure to accommodate Docker (or any build tools).
-
- Aug 2022
-
www.jianshu.com
-
Setting up kong and konga with docker-compose, ready to use out of the box
kong
-
-
docs.docker.com
-
The following command connects an already-running my-nginx container to an already-existing my-net network
Connecting an already-running container to an existing network
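The command from the linked docs page:

```bash
docker network connect my-net my-nginx
```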
-
-
blog.terrynow.com
-
Copy /opt/rabbitmq_delayed_message_exchange-3.10.2.ez into the container:
docker
-
-
stackoverflow.com
-
docker cp c:\path\to\local\file container_name:/path/to/target/dir/
docker
-
- Jul 2022
-
github.com
-
By default, this function reads template files in /etc/nginx/templates/*.template and outputs the result of executing envsubst to /etc/nginx/conf.d.
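A sketch of that template mechanism in use (the NGINX_PORT variable follows the image docs' example; the exact template content is an assumption):

```bash
# /etc/nginx/templates/default.conf.template (in the image or bind-mounted):
#   server { listen ${NGINX_PORT}; ... }
# at startup, envsubst renders it to /etc/nginx/conf.d/default.conf
docker run -d -p 8080:8080 -e NGINX_PORT=8080 nginx
```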
-
-
stackoverflow.com
-
It is "guaranteed" as long as you are on the default network. 172.17.0.1 is no magic trick, but simply the gateway of the network bridge, which happens to be the host. All containers will be connected to bridge unless specified otherwise.
-
For example I use docker on windows, using docker-toolbox (OG) so that it has less conflicts with the rest of my setup and I don't need HyperV.
-
-
github.com
-
```yaml
# This ensures that the pid namespace is shared between the host
# and the container. It's not necessary to be able to run spring
# commands, but it is necessary for "spring status" and "spring stop"
# to work properly.
pid: host
```
-
-
www.cnblogs.com
-
Modify the mapped ports of a running Docker container.

(1) Stop the services
- Stop the container: `docker stop <container id>`
- Stop the Docker service (Linux): `systemctl stop docker`

(2) Modify the configuration
- Find the container's id hash: `docker inspect <container_name>`

```
C:\Users\xxj87>docker inspect b61792d860f2
[
    {
        "Id": "b61792d860f24c7ba47f4e270e211736a1a88546375e97380884c577d31dab66",
        "Created": "2022-07-01T07:46:03.516440885Z",
        "Path": "/bin/sh",
```

- Go to the container's config directory (Linux): `cd /var/lib/docker/containers/4fd7/`
- Edit `PortBindings` in hostconfig.json (`vim hostconfig.json`):

```json
"PortBindings":{"2222/tcp":[{"HostIp":"","HostPort":"2222"}],"5000/tcp":[{"HostIp":"","HostPort":"5000"}],"80/tcp":[{"HostIp":"","HostPort":"40001"}],"8070/tcp":[{"HostIp":"","HostPort":"8070"}],"8081/tcp":[{"HostIp":"","HostPort":"8081"}]},
```

In `"80/tcp":[{"HostIp":"","HostPort":"40001"}]`, 80 is the port inside the container and 40001 is the externally mapped port.

- Edit `ExposedPorts` in config.v2.json (`vi config.v2.json`):

```json
"ExposedPorts":{"2222/tcp":{},"5000/tcp":{},"80/tcp":{},"8081/tcp":{},"8070/tcp":{}},
```

(3) Restart and verify
- Restart the Docker service: `systemctl start docker`
- Start the container: `docker start <container id>`
- Verify the change: `docker ps -a`
-
- Jun 2022
-
blog.driftingruby.com
-
This is a neat Docker trick for those who have an ARM development machine (Apple M1), but sometimes need to build x86/amd64 images locally to push up to a registry.
Since Apple M1 is based on the ARM architecture, it is still possible to build images based on Linux x86/amd64 architecture using docker buildx:
docker buildx build -f Dockerfile --platform linux/amd64 .
However, building such images can be really slow, so we can create a builder profile (see the paragraphs below / my other annotation to this article).
-
So, we can create this builder on our local machine. The nice part about this creation is that it is idempotent, so you can run this command many times without changing the result. All we have to do is to create a builder profile and in this case I have named it amd64_builder.
Example of creating a Docker buildx builder profile on the Apple M1 machine. This will allow for around 10x faster builds on the amd64 architecture pushed to a registry, than on the amd64 emulation on the M1 chip.
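My guess at the shape of that setup (the remote endpoint and names are assumptions; a purely local builder would still run under emulation):

```bash
# a Docker context pointing at a remote amd64 host over ssh (endpoint is an assumption)
docker context create amd64_host --docker "host=ssh://user@amd64-host"
# idempotent: create (or reuse) a builder backed by that context
docker buildx create --name amd64_builder amd64_host
# build natively on amd64 and push straight to a registry
docker buildx build --builder amd64_builder --platform linux/amd64 \
  -t myorg/myimage:latest --push .
```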
-
-
openclassrooms.com
-
You now know the difference between a container and a virtual machine, and you have seen the differences between heavyweight and lightweight virtualization. A container must stay lean; you should not add superfluous content to it, so that it starts quickly, but it provides weaker isolation. Conversely, virtual machines offer very good isolation, but they are generally slower and much heavier.
-
- May 2022
-
blog.container-solutions.com
-
A normal Makefile for building projects with Docker would look more or less like this:
Sample of a Makefile for building and tagging Docker images
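A guess at the shape of such a Makefile (the image name is an assumption, not the article's exact code):

```makefile
GIT_SHA := $(shell git rev-parse --short HEAD)
IMAGE   := myorg/myapp

.PHONY: build push

build:
	docker build -t $(IMAGE):$(GIT_SHA) .

push: build
	docker push $(IMAGE):$(GIT_SHA)
```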
-
One of the main benefits of tagging your container images with the corresponding commit hash is that it's easy to trace back to a specific point in time, to know how the application looked and behaved at that specific point in history and, most importantly, to blame the people that broke the application ;)
Why tagging Docker images with SHA is useful
-
-
www.docker.com
-
Software Bill Of Materials (SBOM) is analogous to a packing list for a shipment; it’s all the components that make up the software, or were used to build it. For container images, this includes the operating system packages that are installed (e.g.: ca-certificates) along with language specific packages that the software depends on (e.g.: log4j). The SBOM could include only some of this information or even more details, like the versions of components and where they came from.
Software Bill Of Materials (SBOM)
-
Included in Docker Desktop 4.7.0 is a new, experimental docker sbom CLI command that displays the SBOM (Software Bill Of Materials) of any Docker image.
New docker sbom CLI command
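For example:

```bash
# print the Software Bill Of Materials of an image
docker sbom nginx:latest
```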
-
-
sarusso.github.io
-
As of today, the Docker Engine is to be intended as an open source software for Linux, while Docker Desktop is to be intended as the freemium product of the Docker, Inc. company for Mac and Windows platforms. From Docker's product page: "Docker Desktop includes Docker Engine, Docker CLI client, Docker Build/BuildKit, Docker Compose, Docker Content Trust, Kubernetes, Docker Scan, and Credential Helper".
About Docker Engine and Docker Desktop
-
The diagram below tries to summarise the situation as of today, and most importantly to clarify the relationships between the various moving parts.
[Diagram: Containers (the backend)]
-
-
-
Without accounting for what we install or add inside, the base python:3.8.6-buster weighs 882MB vs 113MB for the slim version. Of course it's at the expense of many tools such as build toolchains, but you probably don't need them in your production image. Your ops teams should be happier with these lighter images: less attack surface, less code that can break, less transfer time, less disk space used, ... And our Dockerfile is still readable so it should be easy to maintain.
See sample Dockerfile above this annotation (below there is a version tweaked even further)
-
scratch is a special empty image with no operating system.
FROM scratch
-
- Mar 2022
-
-
Have you ever built an image only to realize that you actually need it on a user account other than root, requiring you to rebuild the image again in rootless mode? Or have you built an image on one machine but run containers on the image using multiple different machines? Now you need to set up an account on a registry, push the image to the registry, Secure Shell (SSH) to each device you want to run the image on, and then pull the image. The podman image scp command solves both of these annoying scenarios as quickly as they occur.
Podman 4.0 can transfer container images without a registry.
For example:
- You can copy a root image to a non-root account:
  `$ podman image scp root@localhost::IMAGE USER@localhost::`
- Or copy an image from one machine to another with this command:
  `$ podman image scp me@192.168.68.122::IMAGE you@192.168.68.128::`
-
-
pythonspeed.com
-
But the problem with Poetry is arguably down to the way Docker’s build works: Dockerfiles are essentially glorified shell scripts, and the build system semantic units are files and complete command runs. There is no way in a normal Docker build to access the actually relevant semantic information: in a better build system, you’d only re-install the changed dependencies, not reinstall all dependencies anytime the list changed. Hopefully someday a better build system will eventually replace the Docker default. Until then, it’s square pegs into round holes.
Problem with Poetry/Docker
-
Third, you can use poetry-dynamic-versioning, a plug-in for Poetry that uses Git tags instead of pyproject.toml to set your application’s version. That way you won’t have to edit pyproject.toml to update the version. This seems appealing until you realize you now need to copy .git into your Docker build, which has its own downsides, like larger images unless you’re using multi-stage builds.
Approach of using poetry-dynamic-versioning plugin
-
But if you’re doing some sort of continuous deployment process where you’re continuously updating the version field, your Docker builds are going to be slow.
Be careful when updating the `version` field of `pyproject.toml` around Docker
-
-
www.reddit.com
-
21 containers running .. docker compose 👍 Create a docker user for easy permissions.

Reiep: Could you give us more info on that? I have some weird permissions issues sometimes, that I can't figure out how to resolve. And how do you use compose on a Synology NAS, also. Thanks!

[deleted]: Most of the containers from linuxserver on dockerhub have a PGID and PUID env, if you fill in the created docker user, it can access the volumes you want. Just make sure to add the created docker user to synology permissions on the dirs.
-
-
stackoverflow.com
-
The ENTRYPOINT specifies a command that will always be executed when the container starts. The CMD specifies arguments that will be fed to the ENTRYPOINT.
Another great comparison of the `ENTRYPOINT` and `CMD` commands
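The classic illustration of that relationship, as a minimal sketch:

```dockerfile
# ENTRYPOINT fixes the executable; CMD supplies default arguments
# that `docker run <image> ...` can override
ENTRYPOINT ["ping"]
CMD ["localhost"]
# docker run <image>          ->  ping localhost
# docker run <image> 8.8.8.8  ->  ping 8.8.8.8
```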
-
- Feb 2022
-
-
LXC is a serious contender to virtual machines. So, if you are developing a Linux application or working with servers, and need a real Linux environment, LXC should be your go-to. Docker is a complete solution to distribute applications and is particularly loved by developers. Docker solved the local developer configuration tantrum and became a key component in the CI/CD pipeline because it provides isolation between the workload and a reproducible environment.
LXC vs Docker
-
-
szkoladockera.pl
-
If you face the dilemma of whether to use CMD or ENTRYPOINT as the starting point of your container, answer the following question: must my command always execute? If the answer is yes, use ENTRYPOINT. Moreover, if you need to pass additional parameters that may be overridden when the container starts, use the CMD instruction as well.
How to simply decide if to use CMD or ENTRYPOINT in a Dockerfile
-
- Jan 2022
-
-
I was seeing this same issue. Updating values for:

```yaml
hub:
  containerSecurityContext:
    privileged: true
```

Seems to have been the fix for me. At least things are a lot more stable now. I changed it based on the explanation for --privileged in the README.
-
-
contains.dev
-
This basic example compiles a simple Go program. The naive way on the left results in a 961 MB image. When using a multi-stage build, we copy just the compiled binary which results in a 7 MB image.
```dockerfile
# Image size: 7 MB
FROM golang:1.17.5 as builder
WORKDIR /workspace
COPY . .
ENV CGO_ENABLED=0
RUN go get && go build -o main .

FROM scratch
WORKDIR /workspace
COPY --from=builder \
    /workspace/main \
    /workspace/main
CMD ["/workspace/main"]
```
-
Docker introduced multi-stage builds starting from Docker Engine v17.05. This allows us to perform all preparations steps as before, but then copy only the essential files or output from these steps.
Multi-stage builds are great for Dockerfile steps that aren't used at runtime
-
Making a small change to a file or moving it will create an entire copy of the file. Deleting a file will only hide it from the final image, but it will still exist in its original layer, taking up space. This is all a result of how images are structured as a series of read-only layers. This provides reusability of layers and efficiencies with regards to how images are stored and executed. But this also means we need to be aware of the underlying structure and take it into account when we create our Dockerfile.
Summary of file duplication topic in Docker images
-
In this example, we created 3 copies of our file throughout different layers of the image. Despite removing the file in the last layer, the image still contains the file in other layers which contributes to the overall size of the image.
```dockerfile
FROM debian:bullseye
COPY somefile.txt . #1
# Small change but entire file is copied
RUN echo "more data" >> somefile.txt #2
# File moved but layer now contains an entire copy of the file
RUN mv somefile.txt somefile2.txt #3
# File won't exist in this layer,
# but it still takes up space in the previous ones.
RUN rm somefile2.txt
```
-
We’re just chmod'ing an existing file, but Docker can’t change the file in its original layer, so that results in a new layer where the file is copied in its entirety with the new permissions.In newer versions of Docker, this can now be written as the following to avoid this issue using Docker’s BuildKit:
Instead of this:

```dockerfile
FROM debian:bullseye
COPY somefile.txt .
RUN chmod 777 somefile.txt
```

Try to use this:

```dockerfile
FROM debian:bullseye
COPY --chmod=777 somefile.txt .
```
-
when you make changes to files that come from previous layers, they’re copied into the new layer you’re creating.
-
Many processes will create temporary files, caches, and other files that have no benefit to your specific use case. For example, running apt-get update will update internal files that you don’t need to persist because you’ve already installed all the packages you need. So we can add rm -rf /var/lib/apt/lists/* as part of the same layer to remove those (removing them with a separate RUN will keep them in the original layer, see “Avoid duplicating files”). Docker recognize this is an issue and went as far as adding apt-get clean automatically for their official Debian and Ubuntu images.
Removing cache
-
An important way to ensure you’re not bringing in unintended files is to define a .dockerignore file.
.dockerignore sample:
```
# Ignore git and caches
.git
.cache
# Ignore logs
logs
# Ignore secrets
.env
# Ignore installed dependencies
node_modules
...
```
-
You can save any local image as a tar archive and then inspect its contents.
Example of inspecting docker image:
```bash
bash-3.2$ docker save <image-digest> -o image.tar
bash-3.2$ tar -xf image.tar -C image
bash-3.2$ cd image
bash-3.2$ tar -xf <layer-digest>/layer.tar
bash-3.2$ ls
```
One can also use Dive or Contains.dev
-
-
www.jianshu.com
-
Communication between different Docker containers on the same host
-
- Dec 2021
-
www.docker.com
-
docker scan elastic/logstash:7.13.3 | grep 'Arbitrary Code Execution'
Example of scanning docker image for a log4j vulnerability
-
- Nov 2021
-
pythonspeed.com
-
I’d probably choose the official Docker Python image (python:3.9-slim-bullseye) just to ensure the latest bugfixes are always available.
python:3.9-slim-bullseye may be the sweet spot for a Python Docker image
-
So which should you use? If you’re a RedHat shop, you’ll want to use their image. If you want the absolute latest bugfix version of Python, or a wide variety of versions, the official Docker Python image is your best bet. If you care about performance, Debian 11 or Ubuntu 20.04 will give you one of the fastest builds of Python; Ubuntu does better on point releases, but will have slightly larger images (see above). The difference is at most 10% though, and many applications are not bottlenecked on Python performance.
Choosing the best Python base Docker image depends on different factors.
-
There are three major operating systems that roughly meet the above criteria: Debian “Bullseye” 11, Ubuntu 20.04 LTS, and RedHat Enterprise Linux 8.
3 candidates for the best Python base Docker image
-
- Oct 2021
-
github.com
-
You probably shouldn't use Alpine for Python projects, instead use the slim Docker image versions.
(have a look below this highlight for a full reasoning)
-
-
opensourcelibs.com
-
vincent.bernat.ch
-
- Sep 2021
-
www.opensourceflare.com
-
Running containers without Docker is possible with Podman.
first sighting: podman
-
-
podman.io
-
daemonless container engine
-
-
-
Simply put: alias docker=podman.
-
What is Podman? Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System.
-
-
betterdatascience.com
-
You can attach Visual Studio Code to this container by right-clicking on it and choosing the Attach Visual Studio Code option. It will open a new window and ask you which folder to open.
It seems like VS Code offers a better way to manage Docker containers
-
You don’t have to download them manually, as a docker-compose.yml will do that for you. Here’s the code, so you can copy it to your machine:
Sample `docker-compose.yml` file to download both the Kafka and Zookeeper containers
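The article's exact compose file isn't captured in this note; a sketch of what such a file commonly looks like (images, ports, and env vars are my assumptions):

```yaml
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper
```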
-
-
matt-rickard.com
-
In 2020, 35% of respondents said they used Docker. In 2021, 48.85% said they used Docker. If you look at estimates for the total number of developers, they range from 10 to 25 million. That's 1.4 to 3 million new users this year.
Rapidly growing popularity of Docker (2020 - 2021)
-
-
matt-rickard.com
-
kind, microk8s, or k3s are replacements for Docker Desktop. False. Minikube is the only drop-in replacement. The other tools require a Linux distribution, which makes them a non-starter on macOS or Windows. Running any of these in a VM misses the point – you don't want to be managing the Kubernetes lifecycle and a virtual machine lifecycle. Minikube abstracts all of this.
At the current moment the best approach is to use minikube with a preferred backend (Docker Engine and Podman are already there), and you can simply run one command to configure Docker CLI to use the engine from the cluster.
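That one command being:

```bash
# point the local docker CLI at the Docker engine inside minikube
eval $(minikube docker-env)
```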
-
-
jaceklaskowski.gitbooks.io
- Aug 2021
-
docs.docker.com
-
docker compose up
Deploy to AWS with docker compose.
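For reference, the flow looked roughly like this (the context name is an example, and Docker has since retired this ECS integration):

```bash
docker context create ecs myecs
docker context use myecs
docker compose up
```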
-
-
yankee.dev
-
There are multiple tools for running Kubernetes on your local machine, but it basically boils down to two approaches on how it is done
We can run Kubernetes locally as a:
- binary package
- container using dind
-
- Jul 2021
-
medium.com
-
there is a drawback: docker-compose runs on a single node, which makes scaling hard, manual and very limited. To be able to scale services across multiple hosts/nodes, orchestrators like docker-swarm or kubernetes come into play.
- docker-compose runs on a single node (hard to scale)
- docker-swarm or kubernetes run on multiple nodes
-
We had to build the image for our python API every time we changed the code, we had to run each container separately and manually ensure that our database container was running first. Moreover, we had to create a network beforehand so that we could connect the containers, we had to add these containers to that network, and we called it mynet back then. With docker-compose we can forget about all of that.
Things that docker-compose resolves for us
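A minimal sketch of what replaces those manual steps (service names, ports, and the mongo image are my assumptions):

```yaml
services:
  api:
    build: .            # the API image is rebuilt on `docker-compose up --build`
    ports:
      - "5000:5000"
    depends_on:
      - db              # start the database container first
  db:
    image: mongo
# compose also creates a default network, so `api` reaches `db` by service name
```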
-
-
www.openshift.com
-
Even though Kubernetes is moving away from Docker, it will always support the OCI and Docker image formats. Kubernetes doesn't pull and run images itself; instead the Kubelet relies on container engines like CRI-O and containerd to pull and run the images. These are the two main container engines used with the CRI, and they both support the Docker and OCI image formats, so no worries on this one.
Reason why one should not be worried about k8s depreciating Docker
-
-
pythonspeed.com
-
We comment out the failed line, and the Dockerfile now looks like this:
To debug a failing Dockerfile step, it is best to comment it out, build the image successfully, and then run the failing command manually inside a container started from that image
-
github.com
-
Some options (you will have to use your own judgment, based on your use case)
4 different options to install Poetry through a Dockerfile
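One of those options, sketched (the exact version pin is an assumption):

```dockerfile
# pin Poetry itself so builds stay reproducible
RUN pip install poetry==1.8.2
```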
-
-
stackoverflow.com
-
When you have one layer that downloads a large temporary file and you delete it in another layer, that has the result of leaving the file in the first layer, where it gets sent over the network and stored on disk, even when it's not visible inside your container. Changing permissions on a file also results in the file being copied to the current layer with the new permissions, doubling the disk space and network bandwidth for that file.
Things to watch out for in Dockerfile operations
-
making sure the longest RUN commands come first and in their own layer (again, to be cached), instead of being chained with other RUN commands: if one of those fails, the long command will have to be re-executed. If that long command is isolated in its own layer (Dockerfile line), it will be cached.
Optimising a Dockerfile is not always as simple as minimising the number of layers. Sometimes it is worth keeping more than a single RUN layer
-