155 Matching Annotations
  1. Jul 2021
    1. there is a drawback: docker-compose runs on a single node, which makes scaling hard, manual and very limited. To be able to scale services across multiple hosts/nodes, orchestrators like docker-swarm or kubernetes come into play.
      • docker-compose runs on a single node (hard to scale)
      • docker-swarm or kubernetes run on multiple nodes
    2. We had to build the image for our python API every time we changed the code, we had to run each container separately and manually, ensuring that our database container was running first. Moreover, we had to create a network beforehand so that we could connect the containers, and we had to add these containers to that network, which we called mynet back then. With docker-compose we can forget about all of that.

      Problems that docker-compose resolves
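
      A rough sketch of what that looks like (service and image names here are made up for illustration):

      # docker-compose.yml
      version: "3.8"
      services:
        db:
          image: postgres:13
        api:
          build: .            # image is rebuilt from the Dockerfile when needed
          depends_on:
            - db              # start order: db comes up first
          ports:
            - "8000:8000"
      # all services share an automatically created default network,
      # so `api` can reach the database at the hostname `db` - no manual `mynet` needed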

    1. Even though Kubernetes is moving away from Docker, it will always support the OCI and Docker image formats. Kubernetes doesn’t pull and run images itself, instead the Kubelet relies on container engines like CRI-O and containerd to pull and run the images. These are the two main container engines used with Kubernetes and they both support the Docker and OCI image formats, so no worries on this one.

      Reason why one should not be worried about k8s deprecating Docker

    1. We comment out the failed line, and the Dockerfile now looks like this:

      To debug a failing Dockerfile step, it is best to comment it out, build the image successfully, and then run the failing command by hand inside a container started from that image
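
      For example (image tag and failing command below are placeholders):

      # in the Dockerfile, comment out the failing step:
      #   RUN some-failing-command
      docker build -t debug-image .
      docker run -it debug-image /bin/sh
      # inside the container, run `some-failing-command` by hand to see exactly why it fails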

    1. When you have one layer that downloads a large temporary file and you delete it in another layer, that has the result of leaving the file in the first layer, where it gets sent over the network and stored on disk, even when it's not visible inside your container. Changing permissions on a file also results in the file being copied to the current layer with the new permissions, doubling the disk space and network bandwidth for that file.

      Things to watch out for in Dockerfile operations
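
      For instance (URL and paths below are made up), keeping the download, extraction and cleanup in a single RUN avoids baking the temporary file into a layer:

      # BAD: the archive stays in the first layer even though it is "deleted" later
      RUN curl -fsSL -o /tmp/big.tar.gz https://example.com/big.tar.gz
      RUN tar -xzf /tmp/big.tar.gz -C /opt && rm /tmp/big.tar.gz

      # BETTER: download, extract and delete in one layer
      RUN curl -fsSL -o /tmp/big.tar.gz https://example.com/big.tar.gz \
          && tar -xzf /tmp/big.tar.gz -C /opt \
          && rm /tmp/big.tar.gz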

    2. making sure the longest RUN commands come first and in their own layer (again, to be cached), instead of being chained with other RUN commands: if one of those fails, the long command will have to be re-executed. If that long command is isolated in its own (Dockerfile line)/layer, it will be cached.

      Optimising a Dockerfile is not always as simple as MIN(layers). Sometimes it is worth keeping more than a single RUN layer
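
      A sketch of that idea (the commands are only illustrative):

      # slow, rarely-changing step isolated in its own layer so it stays cached
      RUN apt-get update && apt-get install -y build-essential

      # if this later, cheaper step fails or changes, the layer above is not re-executed
      RUN pip install -r requirements.txt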

    1. Docker has a default entrypoint which is /bin/sh -c but does not have a default command.

      This StackOverflow answer is a good explanation of the purpose behind the ENTRYPOINT and CMD instructions
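
      A minimal illustration of how the two interact (image name is made up):

      FROM alpine
      ENTRYPOINT ["echo", "hello"]
      CMD ["world"]

      # docker build -t demo .
      # docker run demo          -> prints "hello world"
      # docker run demo there    -> prints "hello there" (CMD is overridden by the run arguments)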

    1. Spotify has a CLI that helps users build Docker images for Kubeflow Pipelines components. Users rarely need to write Docker files.

      Spotify approach towards writing Dockerfiles for Kubeflow Pipelines

  2. Jun 2021
    1. local physical subnet

      How is this physical subnet defined or decided? For example, my Docker container's inet address on eth0 is 172.17.0.2. How is 172.17.0 determined?

    1. It basically takes any command line arguments passed to entrypoint.sh and execs them as a command. The intention is basically "Do everything in this .sh script, then in the same shell run the command the user passes in on the command line".

      What is the use of this part in a Docker entry point:

      #!/bin/bash
      set -e
      
      ... code ...
      
      exec "$@"
      
    1. Docker container can call out to a secrets manager for its secrets. But, a secrets manager is an extra dependency. Often you need to run a secrets manager server and hit an API. And even with a secrets manager, you may still need Bash to shuttle the secret into your target application.

      Secrets manager in Docker is not a bad option but adds more dependencies

  3. May 2021
  4. Apr 2021
    1. Note: Building a container image using docker build on-cluster is very unsafe and is shown here only as a demonstration. Use kaniko instead.

      Why?

  5. Feb 2021
    1. The Open Container Initiative is an open governance structure for the express purpose of creating open industry standards around container formats and runtimes
    1. Docker Hub is the world's easiest way to create, manage, and deliver your teams' container applications
    1. It was actually very common to have one Dockerfile to use for development (which contained everything needed to build your application), and a slimmed-down one to use for production, which only contained your application and exactly what was needed to run it. This has been referred to as the “builder pattern”.

      Builder pattern - maintaining two Dockerfiles: the 1st for development, the 2nd for production. It's not an ideal solution, and we should aim for multi-stage builds instead.

      Multi-stage build - uses multiple FROM commands in the same Dockerfile. The end result is the same tiny production image as before, with a significant reduction in complexity. You don’t need to create any intermediate images and you don’t need to extract any artifacts to your local system at all
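
      A small multi-stage sketch (Go is used here purely as an example):

      # build stage: has the full toolchain
      FROM golang:1.16 AS build
      WORKDIR /src
      COPY . .
      RUN CGO_ENABLED=0 go build -o /app .

      # final stage: only the compiled binary is copied in
      FROM alpine
      COPY --from=build /app /app
      ENTRYPOINT ["/app"]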

    1. volumes are often a better choice than persisting data in a container’s writable layer, because a volume does not increase the size of the containers using it, and the volume’s contents exist outside the lifecycle of a given container.

      Aim for using volumes instead of bind mounts in Docker. Also, if your container generates non-persistent data, consider using a tmpfs mount to avoid storing the data permanently

      One case where it is appropriate to use bind mounts is during development, when you may want to mount your source directory or a binary you just built into your container. For production, use a volume instead, mounting it into the same location as you mounted a bind mount during development.
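
      Roughly, the three mount types look like this on the command line (names and paths are illustrative):

      # named volume: managed by Docker, survives container removal
      docker run -d --mount source=app-data,target=/var/lib/app my-image

      # bind mount: handy in development for live source code
      docker run -d --mount type=bind,source="$(pwd)",target=/src my-image

      # tmpfs: non-persistent scratch data kept in memory only
      docker run -d --mount type=tmpfs,target=/tmp/cache my-image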

  6. Jan 2021
    1. We recommend the Alpine image as it is tightly controlled and small in size (currently under 5 MB), while still being a full Linux distribution. This is fine advice for Go, but bad advice for Python, leading to slower builds, larger images, and obscure bugs.

      Alpine Linux isn't the most convenient OS for Python, but fine for Go

    2. If a service can run without privileges, use USER to change to a non-root user. This is excellent advice. Running as root exposes you to much larger security risks, e.g. a CVE in February 2019 that allowed escalating to root on the host was preventable by running as a non-root user. Insecure: However, the official documentation also says: … you should use the common, traditional port for your application. For example, an image containing the Apache web server would use EXPOSE 80. In order to listen on port 80 you need to run as root. You don’t want to run as root, and given pretty much every Docker runtime can map ports, binding to ports <1024 is completely unnecessary: you can always map port 80 in your external port. So don’t bind to ports <1024, and do run as a non-privileged user.

      For security reasons, if you don't need root privileges, bind to ports >= 1024
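
      A sketch of that advice (base image, user name and port are arbitrary; useradd assumes a Debian-based image):

      FROM python:3.9-slim
      RUN useradd --create-home appuser
      USER appuser
      EXPOSE 8080
      # at run time, map the traditional port onto the unprivileged one:
      #   docker run -p 80:8080 my-image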

    3. Multi-stage builds allow you to drastically reduce the size of your final image, without struggling to reduce the number of intermediate layers and files. This is true, and for all but the simplest of builds you will benefit from using them. Bad: However, in the most likely image build scenario, naively using multi-stage builds also breaks caching. That means you’ll have much slower builds.

      Multi-stage builds reduce image size, but they can also break caching

    4. layer caching is great: it allows for faster builds and in some cases for smaller images. Insecure: However, it also means you won’t get security updates for system packages. So you also need to regularly rebuild the image from scratch.

      Layer caching is great for speeding up builds, but it can introduce security issues
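
      One common mitigation is to rebuild periodically while bypassing both the local cache and any stale base image (image name below is a placeholder):

      docker build --pull --no-cache -t my-image:latest .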

    1. So, here's what I discovered in the meantime. It was an Ubuntu/Docker issue. I recently upgraded my Ubuntu from 16.04 to 18.04. This change seems to be incompatible with the Docker version I had, 1.11.0.
    1. Running all that manually (more than 100 scripts across all devices) is an awful job for a human. I want to set them up once and more or less forget about it, only checking now and then.

      My ideals for all of my regular processes and servers:

      • Centralized configuration and control - I want to go into a folder and configure everything I'm running everywhere.
      • Configuration file has the steps needed to set up from scratch - so I can just back up the configuration and data folders and not worry about backing up the programs.
      • Control multiple machines from the central location. Dictate where tasks can run.
      • [nice to have] Allow certain tasks to run externally, e.g. in AWS ECS or Lambda or similar
      • Command-line access for management (web is great for monitoring)
      • Flexible scheduling (from strict every minute to ~daily)
      • Support for daemons, pseudo-daemons (just run repeatedly with small delays), and periodic tasks.
      • Smart alerts - some processes can fail occasionally but need to run at least once per day; some processes should never fail. A repeating inaccurate alert is usually just as bad as no alert at all.
      • Error code respect (configurable)
      • Logs - store the program output, organize it, keep it probably in a date-based structure
      • Health checks - if it's a web server, is it still responding to requests? Has it logged something recently? Touched a database file? If not, it's probably dead.
      • Alerts support in Telegram and email
      • Monitor details about the run - how long did it take? How much CPU did it use? Has it gotten slower over time?
      • Dashboard - top-level stats, browse detailed run stats and logs

      So much of the configuration/control stuff screams containers, so more and more I'm using Docker for my scripts, even simpler ones.

      I'm pretty sure a lot of this is accomplished by existing Docker orchestration tools. Been delaying that rabbit hole for a long time.

      I think the key thing that makes this not just a "cron" problem for me, is I want something that monitors and manages both itself and the tasks I want to run, including creating/setting up if not already. I also want to ideally focus my mental energy into a single controller that handles my "keep this running" things all together, be they servers or infrequent tasks.

      Doesn't have to be a single project. Might be multiple pieces glued together somehow.

  7. Dec 2020
  8. Nov 2020
    1. docker stack deploy -c docker-compose.yml dev

      I wasn't able to get the mysql image to build right unless I ran this from within the same directory containing docker-compose.yml.

    2. apache/vhosts.conf

      The crucial thing here is making sure the directory paths in your Apache vhosts file take into account the local to container mapping you made earlier.

    1. Unfortunately, this image was built months ago. No one has the build any more. We are left with a descendant image that has all the original content but on lower layers.
    2. docker build --cache-from=base_image:2.2.1,custom-gource:0.1 . -t custom-gource:0.2
    3. This is addressing a security issue; and the associated threat model is "as an attacker, I know that you are going to do FROM ubuntu and then RUN apt-get update in your build, so I'm going to trick you into pulling an image that _pretends_ to be the result of ubuntu + apt-get update so that next time you build, you will end up using my fake image as a cache, instead of the legit one." With that in mind, we can start thinking about an alternate solution that doesn't compromise security.
    1. On a user-defined bridge network, containers can resolve each other by name or alias. But the containers on the default bridge network can only access each other by IP address, unless you use the --link option, which is considered legacy.
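
      For example (network, container and image names are made up):

      docker network create mynet
      docker run -d --network mynet --name db postgres:13
      docker run -d --network mynet --name web my-web-image
      # inside `web`, the database is reachable simply as the hostname `db`
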
  9. Oct 2020
    1. I debugged docker-compose and docker-py and figured out that you should either use environment variables or command-line options. You should not mix these. If you specify even --tls on the command line, then you will have to specify all the options, as the TLSConfig object is now created completely from the command options and overrides the TLSConfig object created from the environment variables.
    1. To be clear: this setup works great with just docker daemon, but something about -compose is amiss.
    2. Using the docker client I have good success accessing the remote docker server. We call the remote server up to a hundred thousand times a day with good success. Attempting to use docker-compose, installed either via curl OR pip install --upgrade with python 2.7, we get an SSL error:
    1. docker --tlsverify ps executes just fine, while docker-compose --tlsverify up -d --force-recreate gives me an error: SSL error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
    2. I only have one set of certs. And I can't see how they can be different because docker commands work using the endpoint. It's just the docker-compose command that fails
    3. With the docker-compose command you cannot mix environment variables and command options. You can specify the settings in environment variables and then just use docker-compose ps. The connection will be secured with TLS if the DOCKER_TLS_VERIFY variable is set.
    4. You don't need to pass the --tls or --tlsverify option in the docker-config path, as the task already sets the DOCKER_TLS_VERIFY environment variable. I debugged the docker-compose and docker-py libraries and verified that if you pass either the --tls or --tlsverify flag, it tries to create the TLSConfig object from the options and not from the environment
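
      In other words, configure everything through the environment and keep the docker-compose invocation flag-free (host name and cert path below are illustrative):

      export DOCKER_HOST=tcp://my-remote-host:2376
      export DOCKER_TLS_VERIFY=1
      export DOCKER_CERT_PATH=~/.docker/remote-certs   # must contain ca.pem, cert.pem, key.pem
      docker-compose ps                                # no --tls / --tlsverify flags here
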
    1. you'll run into the error you've run into if your remote Docker host has a certificate signed by something other than the ca.pem that you've got at that location.
  10. Aug 2020
    1. An Image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime

      Official definition of a container image

  11. Jul 2020
  12. Jun 2020
  13. May 2020
    1. if [ -z "${DOCKER_HOST:-}" ]; then
         if _should_tls || [ -n "${DOCKER_TLS_VERIFY:-}" ]; then
           export DOCKER_HOST='tcp://docker:2376'
         else
           export DOCKER_HOST='tcp://docker:2375'
         fi
       fi
    1. Image consumers can enable DCT to ensure that images they use were signed. If a consumer enables DCT, they can only pull, run, or build with trusted images. Enabling DCT is a bit like applying a “filter” to your registry. Consumers “see” only signed image tags and the less desirable, unsigned image tags are “invisible” to them.
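
      Enabling DCT is just an environment variable on the client side:

      export DOCKER_CONTENT_TRUST=1
      docker pull alpine:3.12    # succeeds only if the tag is signed; unsigned tags are rejected
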
    1. Authors of third-party tools should prefix each label key with the reverse DNS notation of a domain they own, such as com.example.some-label.
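
      In a Dockerfile that convention looks like:

      LABEL com.example.some-label="some value"
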
    1. Note: If you have 2 Factor Authentication enabled in your account, you need to pass a personal access token instead of your password in order to log in to GitLab's Container Registry.
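
      For example (the registry host shown is for gitlab.com; replace the placeholders with your own username and token):

      docker login registry.gitlab.com -u <username> -p <personal_access_token>
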
  14. Apr 2020
    1. AinD launches Android apps in Docker, by nesting Anbox containers inside Docker.

      AinD - useful tool when we need to run an Android app 24/7 in the cloud.

      Unlike the alternatives, AinD is not VM-based but IaaS-based

    1. To use Gunicorn as your web server, it must be included in the requirements.txt file as an app dependency. It does not need to be installed in your virtual environment/host machine.
    1. docker-compose rm -f -s -v yourService

      Useful commands for launching a single service from a docker-compose file without running it in the background, so you can see the logs:

      docker-compose rm -fsv service
      docker-compose up service
      
  15. Mar 2020
    1. from Docker Compose on a single machine, to Heroku and similar systems, to something like Snakemake for computational pipelines.

      Other alternatives to Kubernetes:

      • Docker Compose on a single machine
      • Heroku and similar systems
      • Snakemake for computational pipelines
    1. That makes sense, the new file gets created in the upper directory.

      If you add a new file, such as with:

      $ echo "new file" > merged/new_file

      It will be created in the upper directory

    2. Combining the upper and lower directories is pretty easy: we can just do it with mount!

      Combining lower and upper directories using mount:

      $ sudo mount -t overlay overlay -o lowerdir=/home/bork/test/lower,upperdir=/home/bork/test/upper,workdir=/home/bork/test/work /home/bork/test/merged

    3. Overlay filesystems, also known as “union filesystems” or “union mounts” let you mount a filesystem using 2 directories: a “lower” directory, and an “upper” directory.

      Docker doesn't make copies of images, but instead uses an overlay.

      Overlay filesystems let you mount a filesystem using 2 directories:

      • the lower directory (read-only)
      • the upper directory (read and write).

      When a process:

      • reads a file, the overlayfs filesystem driver looks into the upper directory and if it's not present, it looks into the lower one
      • writes a file, overlayfs will just use the upper directory
  16. Feb 2020
    1. when we ran it natively on the source machine (i.e. not Dockerized, which reduces performance for all the tools by ~40%)
    1. docker-compose up -d

      Error for me here...

      ➜ hello-world docker-compose up -d
      zsh: command not found: docker-compose

  17. Jan 2020
    1. For a port to be accessible to containers or non-Docker hosts on different networks, that port must be published using the -p or --publish flag.
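
      E.g.:

      docker run -d -p 8080:80 nginx    # host port 8080 -> container port 80
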
    1. But the reason is that if your host system does not have vm.overcommit_memory=1 enabled, you will not be able to switch it inside the container.

      Fixed redis issue on harbor: "Can't save in background: fork: Cannot allocate memory"

      Added on /root/harbor/docker-compose.yml:

      command: sh -c 'echo 1 > /proc/sys/vm/overcommit_memory'

      Also executed command: sh -c 'echo 1 > /proc/sys/vm/overcommit_memory' on the main server harbor (not only on the container)

  18. Nov 2019
  19. Oct 2019
  20. Sep 2019
    1. use the REPOSITORY:TAG combination rather than IMAGE ID

      Error response from daemon: conflict: unable to delete c565603bc87f (cannot be forced) - image has dependent child images

      I really feel like this should be the accepted answer here but it does depend on the root cause of the problem. When you create a tag it creates a dependency and thus you have to delete the tag and the image in that order. If you delete the image by using the tag rather than the id then you are effectively doing just that.
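
      I.e. prefer deleting by tag (repository and tag below are placeholders):

      docker rmi myrepo/myimage:mytag     # instead of: docker rmi c565603bc87f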

  21. Aug 2019
  22. Jul 2019
    1. Docker

      Docker is a set of coupled software-as-a-service and platform-as-a-service products that use operating-system-level virtualization to develop and deliver software in packages called containers.


  23. May 2019
    1. Allan Moraes - Automatizando o Monitoramento de Infraestrutura

      Docker, Grafana, and Ansible are part of Allan's talk and are also topics covered by the Linux Professional Institute DevOps Tools exam.

      705.1 IT Operations and Monitoring (weight: 4)

    2. Tiago Roberto Lammers - Nossa jornada DevOps na Delivery Much para microserviços e o que aprendemos

      Microservices is one of the topics covered by the Linux Professional Institute DevOps Tools certification, and it is also a decisive subject when choosing the tools for a DevOps professional's utility belt. Take the opportunity to talk with Tiago about his experience using Docker, a subject that also appears on the exam.

      Topics (among others):

      701.1 Modern Software Development (weight: 6)
      701.4 Continuous Integration and Continuous Delivery (weight: 5)
      702.1 Container Usage (weight: 7)

  24. Apr 2019
    1. The general process of pulling an image is as follows:
      1. Fetch the manifest (the image configuration file) from the registry
      2. Read the digest of the manifest configuration file; this is the image ID
      3. Check locally whether an image with the same ID already exists
      4. If not, send a request to the registry to fetch the image's configuration file
      5. Check whether each layer already exists locally
      6. If a layer does not exist, fetch the corresponding layer from the server
      7. Once all the layers have been downloaded, the image download is complete
    1. If this is a production situation, and security and stability are important, then just "convenience" is likely not the best deciding factor (any more than leaving your house unlocked all the time might be "convenient").

      If this is a production situation and security and stability are important, then "convenience" alone is likely not the best deciding factor (any more than leaving your house unlocked all the time might be "convenient").

      • You could consider versioning every push to the registry in some form (after all, you are publishing a new version of your code and making it accessible to others).
      • :latest is comparable to the master branch in a Git repository. Is every push to master considered ready for production?
      • Releases will (usually) go through a validation process (CI/QA/acceptance/etc.). Should changes in master be validated first, and only deployed to production after being validated (and tagged)?
      • Releases carry a version; this can be an explicit version (a tag) or an implicit one (an immutable reference: the image's digest)

      Explicit version: image tag. Implicit version: immutable reference (image digest).

    2. This is now a problem, because different instances of the same service now run different versions of the application; this can lead to hard-to-find issues, such as:

      Now this is a problem, because different instances of the same service are running different versions of the application; this can lead to hard-to-find issues, such as:

      • Depending on which node (or instance) a visitor ends up on, they may be served different content
      • A security update was applied to the service, but some instances are still running the old version
      • A bug was fixed, but for "some reason" some nodes still expose it
      • The latest update contains a bug, but it goes unnoticed because most instances are still running the previous version
    3. Doing so would revert to the old behavior, where images are just pulled on each node. This used to cause quite some issues and was intended as a stopgap solution at the time (until pinning by digest was implemented). This section illustrates some of the problems with this approach.

      Using --resolve-image=never with docker stack deploy is not recommended. Doing so reverts to the old behavior, where images are simply pulled on each node. That used to cause quite a few issues and was intended as a stopgap solution at the time (until pinning by digest was implemented). This section illustrates some of the problems with that approach.

    4. However, there is not a 1:1 relation of digests to tags, so when pulling an image by digest, only the digest is known. If you happen to have an image pulled (manually) with a tag that matches that digest, the tag is shown, but not otherwise

      However, there is no 1:1 relation between digests and tags, so when pulling an image by digest, only the digest is known. If you happen to have (manually) pulled an image with a tag that matches that digest, the tag is shown, but not otherwise.
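
      A quick way to see the tag/digest relationship locally (the image name nginx is just an example):

      docker pull nginx:latest
      docker inspect --format '{{index .RepoDigests 0}}' nginx:latest
      # prints something like nginx@sha256:<digest>, which can then be pulled or pinned directly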

  25. Mar 2019
    1. Usando Traefik para automatizar o proxy reverso de seus containers docker

      Even though this topic is not covered directly on the exam, these are tools that should be part of a good DevOps professional's utility belt. Search for "container" in our topics, at this link, and you will discover how important it is to know this subject well.

    2. Pipeline de CI/CD no Kubernetes usando Jenkins e Spinnaker

      Wow! Many LPI DevOps exam topics are explored in this talk. Keep an eye on topic 702 Container Management.

  26. Jan 2019
    1. this is in /srv/www/ on the host.

      This site actually gives somewhat clear instructions about which directories to run the commands from. I think where I went wrong before was using various directories that in the end did not match the actual installations.

  27. Dec 2018
  28. Jun 2018
  29. May 2018
    1. You can pull the image on a computer that has access to the internet:

      sudo docker pull ubuntu

      Then you can save this image to a file:

      sudo docker save -o ubuntu_image.docker ubuntu

      Transfer the file to the offline computer (USB/CD/whatever) and load the image from the file:

      sudo docker load -i ubuntu_image.docker

  30. Apr 2018
  31. Dec 2017
  32. Nov 2017
  34. Sep 2017
    1. NixOS is a Linux distribution with a unique approach to package and configuration management.

      This is another approach to systems management and software as a service. I don't really understand in detail the difference between NixOS and Docker, but googling NixOS vs Docker shows that it's a topic that is ripe for a bunfight.

    1. Singularity containers can be used to package entire scientific workflows, software and libraries, and even data.

      Very interesting, basically Singularity allows containers to run in HPC environments, so that code running in the container can take advantage of the HPC tools, like massive scale and message passing, while at the same time keeping the stuff in the container safer.

    1. Note: When mapping ports in the HOST:CONTAINER format, you may experience erroneous results when using a container port lower than 60, because YAML will parse numbers in the format xx:yy as sexagesimal (base 60). For this reason, we recommend always explicitly specifying your port mappings as strings.

      Cool feature
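
      I.e. always quote the mapping so YAML treats it as a string:

      ports:
        - "22:22"     # quoted: parsed as the string "22:22"
        # - 22:22     # unquoted: YAML 1.1 can read xx:yy as a base-60 number (here 1342)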

  35. Aug 2017
  36. Jun 2017
  37. Mar 2017
    1. Prophet: a forecasting tool for time-series data, open-sourced by Facebook and written in R and Python.

      Open-source Python statistics/forecasting tool; R can also be used