80 Matching Annotations
  1. Last 7 days
    1. In contrast, a defining feature of ML-powered applications is that they are directly exposed to a large amount of messy, real-world data which is too complex to be understood and modeled by hand.

      One of the best ways to picture the difference between DevOps and MLOps

  2. Sep 2021
    1. Upcoming Trends in DevOps and SRE in 2021: DevOps and SRE are domains with rapid growth and frequent innovation. This blog explores the latest trends in DevOps and SRE so you can stay ahead of the curve.

      Top 2021 trends for DevOps and SRE.

  3. Jul 2021
    1. This means that an event-driven system focuses on addressable event sources while a message-driven system concentrates on addressable recipients. A message can contain an encoded event as its payload.

      Event-Driven vs Message-Driven

    1. we want systems that are Responsive, Resilient, Elastic and Message Driven. We call these Reactive Systems.

      Reactive Systems:

      • responsive - responds in a timely manner
      • resilient - stays responsive in the face of failure
      • elastic - system stays responsive under varying workload
      • message driven - asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation and location transparency

      as a result, they are:

      • flexible
      • loosely-coupled
      • scalable
      • easy to develop and change
      • more tolerant of failure
      • highly responsive with interactive feedback
    2. Resilience is achieved by replication, containment, isolation and delegation.

      Components of resilience

    3. Today applications are deployed on everything from mobile devices to cloud-based clusters running thousands of multi-core processors. Users expect millisecond response times and 100% uptime. Data is measured in Petabytes.

      Today's demands from users

  4. Jun 2021
    1. It takes any command-line arguments passed to entrypoint.sh and execs them as a command. The intention is: "Do everything in this .sh script, then, in the same shell, run the command the user passes in on the command line."

      What is the use of this part in a Docker entry point:

      #!/bin/bash
      set -e
      
      ... code ...
      
      exec "$@"
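To make the mechanism concrete, here is a self-contained sketch you can run anywhere; the /tmp path and the echoed "setup" step are invented for the example:

```shell
# Illustrative sketch of a complete entrypoint like the one above.
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/bash
set -e              # abort immediately if any setup step fails
echo "setup: run migrations, template config files, etc."
exec "$@"           # replace this shell with the user's command
EOF
chmod +x /tmp/entrypoint.sh

# In a Dockerfile this would be ENTRYPOINT ["/entrypoint.sh"] plus a default
# CMD; `docker run image some-cmd` makes "$@" = some-cmd.
/tmp/entrypoint.sh echo "hello from the user's command"
```

Because `exec` replaces the shell instead of forking, the user's command becomes the container's main process (PID 1) and receives stop signals directly.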
      
    1. We should think about the number of simultaneous connections (peak and average) and the message rate/payload size. I think the threshold to start thinking about AnyCable (instead of just Action Cable) is somewhere between 500 and 1000 connections on average, or 5k-10k during peak hours.
      • number of simultaneous connections (peak and average)

      • the message rate/payload size.

    1. As it stands, sudo -i is the most practical, clean way to gain a root environment. On the other hand, those using sudo -s will find they can gain a root shell without the ability to touch the root environment, something that has added security benefits.

      Which sudo command to use:

      • sudo -i <--- most practical, clean way to gain a root environment
      • sudo -s <--- a more secure way that doesn't touch the root environment
    2. Much like sudo su, the -i flag allows a user to get a root environment without having to know the root account password. sudo -i is also very similar to using sudo su in that it’ll read all of the environmental files (.profile, etc.) and set the environment inside the shell with it.

      sudo -i vs sudo su. Simply, sudo -i is a much cleaner way of gaining root and a root environment without directly interacting with the root user

    3. This means that unlike a command like sudo -i or sudo su, the system will not read any environmental files. This means that when a user tells the shell to run sudo -s, it gains root but will not change the user or the user environment. Your home will not be the root home, etc. This command is best used when the user doesn’t want to touch root at all and just wants a root shell for easy command execution.

      sudo -s vs sudo -i and sudo su. Simply, sudo -s is good for security reasons

    4. Though there isn’t very much difference from “su,” sudo su is still a very useful command for one important reason: When a user is running “su” to gain root access on a system, they must know the root password. With sudo su, root is given by requesting the current user’s password. This makes it possible to gain root without the root password, which increases security.

      Crucial difference between sudo su and su: the way password is provided

    5. “su” is best used when a user wants direct access to the root account on the system. It doesn’t go through sudo or anything like that. Instead, the root user’s password has to be known and used to log in with.

      The su command is used to get direct access to the root account
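The root switch itself cannot be demonstrated without root privileges, but the login vs non-login shell distinction that separates sudo -i from sudo -s can be mimicked locally. A sketch, with the GREETING variable and the fake profile file invented for the example:

```shell
# The -i / -s / su distinction is largely about whether a *login* shell is
# started (profile files read, HOME and environment reset). Observe the same
# login vs non-login difference without root:
cat > /tmp/fake_profile <<'EOF'
export GREETING="loaded-from-profile"
EOF

# Like `sudo -s`: a plain shell, no profile read, environment inherited.
nonlogin=$(env -u GREETING bash -c 'echo "${GREETING:-unset}"')

# Like `sudo -i` / `sudo su -`: the target user's profile is sourced first.
login=$(env -u GREETING bash -c 'source /tmp/fake_profile; echo "${GREETING:-unset}"')

echo "non-login (sudo -s style): $nonlogin"
echo "login (sudo -i style):     $login"
```

sudo -i behaves like the "login" case (root's profile read, HOME reset to root's); sudo -s behaves like the "non-login" case (your own environment kept).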

  5. Mar 2021
  6. Jan 2021
    1. Different data sources are better suited for different types of data transformations and provide access to different data quantities at different levels of freshness

      Comparison of data sources

      • Data warehouses / lakes (such as Snowflake or Redshift) tend to hold a lot of information but with low data freshness (hours or days). They can be a gold mine, but are most useful for large-scale batch aggregations with low freshness requirements, such as “number of lifetime transactions per user.”
      • Transactional data sources (such as MongoDB or MySQL) usually store less data at a higher freshness and are not built to process large analytical transformations. They’re better suited for small-scale aggregations over limited time horizons, like the number of orders placed by a user in the past 24 hrs.
      • Data streams (such as Kafka) store high-velocity events and provide them in near real-time (within milliseconds). In common setups, they retain 1-7 days of historical data. They are well-suited for aggregations over short time-windows and simple transformations with high freshness requirements, like calculating that “trailing count over the last 30 minutes” feature described above.
      • Prediction request data is raw event data that originates in real-time right before an ML prediction is made, e.g. the query a user just entered into the search box. While the data is limited, it’s often as “fresh” as can be and contains a very predictive signal. This data is provided with the prediction request and can be used for real-time calculations like finding the similarity score between a user’s search query and documents in a search corpus.
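As a toy illustration of the short-window aggregation a stream is good at, here is the "trailing count over the last 30 minutes" computed over a timestamped event log; the timestamps, the log path, and the fixed "now" are all invented:

```shell
# Count one user's events whose unix timestamp falls in the last 30 minutes.
now=1700000000
cat > /tmp/events.log <<'EOF'
1699900000 user_42 click
1699997000 user_42 click
1699998500 user_42 click
1699999900 user_42 purchase
EOF

window=$((30 * 60))   # 30 minutes in seconds
count=$(awk -v now="$now" -v w="$window" \
  '$1 >= now - w { n++ } END { print n + 0 }' /tmp/events.log)
echo "events in trailing 30 minutes: $count"
```

A stream like Kafka can serve this feature with near-real-time freshness; computing the same count from a warehouse would typically be hours stale.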
    2. MLOps platforms like Sagemaker and Kubeflow are heading in the right direction of helping companies productionize ML. They require a fairly significant upfront investment to set up, but once properly integrated, can empower data scientists to train, manage, and deploy ML models. 

      Two popular MLOps platforms: Sagemaker and Kubeflow

    3. …Well, deploying ML is still slow and painful

      What a typical ML production pipeline may look like:

      Unfortunately, it ties the hands of data scientists: experimenting and eventually shipping the results to production takes a lot of time

    1. DevOps Services

      If you want to find DevOps consulting services, I suggest checking Cleveroad.

  7. Nov 2020
    1. Automation suggests that a sysadmin has invented a system to cause a computer to do something that would normally have to be done manually. In automation, the sysadmin has already made most of the decisions on what needs to be done, and all the computer must do is execute a "recipe" of tasks. Orchestration suggests that a sysadmin has set up a system to do something on its own based on a set of rules, parameters, and observations. In orchestration, the sysadmin knows the desired end result but leaves it up to the computer to decide what to do.

      Most intuitive difference between automation and orchestration

    2. For instance, automation usually involves scripting, often in Bash or Python or similar, and it often suggests scheduling something to happen at either a precise time or upon a specific event. However, orchestration often begins with an application that's purpose-built for a set of tasks that may happen irregularly, on demand, or as a result of any number of trigger events, and the exact results may even depend on a variety of conditions.

      Automation is like a subset of orchestration.

      Orchestration suggests many moving parts, while automation usually refers to a single task or a small number of strongly related tasks.
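A toy sketch of the contrast, with all names invented: automation would run a fixed recipe of steps, while an orchestrator observes the current state and acts only when it diverges from the desired state:

```shell
# Toy "orchestrator": converge a file's contents toward a desired state.
# Re-running it is safe (idempotent), unlike replaying a fixed recipe.
state_file=$(mktemp)
desired="nginx: running"

converge() {
  if [ "$(cat "$state_file")" != "$desired" ]; then
    echo "$desired" > "$state_file"   # corrective action chosen at runtime
    echo "converged"
  else
    echo "already in desired state"
  fi
}

converge   # first run: state diverges, so it acts
converge   # second run: nothing to do
```

Real orchestrators apply the same converge-on-divergence loop at much larger scale, based on rules, parameters, and observations rather than a fixed task list.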

  8. Oct 2020
    1. Tabular Comparison Between All Deployment Methods:

      Tabular comparison of 4 deployment options:

      1. Travis-CI/Circle-CI
      2. Cloud + Jenkins
      3. Bitbucket Pipelines/Github Actions
      4. Automated Cloud Platforms
  9. May 2020
    1. more developers are becoming DevOps-skilled, and the distinction between being a software developer and a hardware engineer is blurring
    1. Secrets management is one of the most sensitive and critical disciplines in all of DevOps and is becoming increasingly important as we move toward a fully continuous deployment world. AWS Keys, deploy keys, ssh keys are often the key attack vector for a bad actor or insider threat, and thus all users and customers are concerned about robust secrets management.
    1. In some contexts, "ops" refers to operators. Operators were the counterparts to Developers represented in the original coining of the term DevOps.

      I have always believed that "Ops" was short for Operations, not Operators.

      https://en.wikipedia.org/wiki/DevOps even confirms that belief.

    1. Continuous Delivery or Deployment is about running checks as thorough as you can to catch issues in your code. Completeness of the checks is the most important factor. It is usually measured in terms of the code coverage or functional coverage of your tests. Catching errors early prevents broken code from being deployed to any environment and saves your test team's precious time.

      Continuous Delivery or Deployment (quick summary)

    2. Continuous Integration is a trade-off between the speed of the feedback loop to developers and the relevance of the checks you perform (build and test). No code that would impede the team's progress should make it to the main branch.

      Continuous Integration (quick summary)

    3. A good CD build:

      • Ensures that as many features as possible are working properly
      • The faster the better, but it is not a matter of speed; a 30-60 minute build is OK

      Good CD build

    4. A good CI build:

      • Ensures that no code that breaks basic functionality or blocks other team members is introduced to the main branch
      • Is fast enough to give developers feedback within minutes, preventing context switching between tasks

      Good CI build

    5. The idea of Continuous Delivery is to prepare artefacts as close as possible to what you want to run in your environment. These can be jar or war files if you are working with Java, or executables if you are working with .NET. These can also be folders of transpiled JS code or even Docker containers: whatever makes the deploy shorter (i.e. you have pre-built as much as you can in advance).

      Idea of Continuous Delivery

    6. Continuous Delivery is about being able to deploy any version of your code at all times. In practice it means the latest or next-to-last version of your code.

      Continuous Delivery

    7. Continuous Integration is not about tools. It is about working in small chunks, integrating your new code into the main branch, and pulling frequently.

      Continuous Integration is not about tools

    8. The app should build and start. Most critical features should be functional at all times (user signup/login journey and key business features). Common layers of the application that all the developers rely on should be stable; this means unit tests on those parts.

      Things to be checked by Continuous Integration

    9. Continuous Integration is all about preventing the main branch from being broken so your team is not stuck. That’s it. It is not about having all your tests green all the time and the main branch deployable to production at every commit.

      Continuous Integration prevents other team members from wasting time by pulling faulty code

  10. Apr 2020
    1. While talking about DevOps, three things are important: continuous integration, continuous delivery, and continuous deployment.

      DevOps process

      • Continuous Integration - code gets integrated several times a day (checked by an automated pipeline/server)
      • Continuous Delivery - changes are introduced with every commit, making the code ready for production
      • Continuous Deployment - deployment to production is automatic, without explicit approval from a developer
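A sketch of how the three stages chain together on a pipeline server; every command here is an echo stand-in, and the auto_deploy flag marks the only difference between continuous delivery and continuous deployment:

```shell
# Each stage runs only if the previous one succeeded (set -e behavior is
# implied by chaining inside one function call per run).
pipeline() {
  auto_deploy=$1
  echo "CI: integrate and run build/tests on every push"
  echo "Delivery: package a production-ready artifact"
  if [ "$auto_deploy" = true ]; then
    echo "Deployment: released to production with no manual approval"
  else
    echo "Waiting for a human to press the deploy button"
  fi
}

pipeline true    # continuous deployment
pipeline false   # continuous delivery
```

Flipping auto_deploy to false turns the same pipeline into continuous delivery: the artifact is ready, but a human triggers the release.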


    2. Basic prerequisites to learn DevOps

      Basic prerequisites to learn DevOps:

      • Basic understanding of Linux/Unix system concepts and administration
      • Familiarity with command-line interface
      • Knowing how the build and deployment process works
      • Familiarity with text editor
      • Setting up a home lab environment with VirtualBox
      • Networking in VirtualBox
      • Setting up multiple VMs in VirtualBox
      • Basics of Vagrant
      • Linux networking basics
      • Good to know basic scripting
      • Basics of applications - Java, NodeJS, Python
      • Web servers - Apache HTTPD, Gunicorn, PM2
      • Databases - MySQL, MongoDB
    3. DevOps benefits

      DevOps benefits:

      • Improves deployment frequency
      • Helps with faster time to market
      • Lowers the failure rate of new releases
      • Increases code quality
      • Improves collaboration between teams and departments
      • Shortens lead times between fixes
      • Improves the mean time to recovery
    4. Operations in the software industry include administrative processes and support for both hardware and software for clients as well as internal to the company. Infrastructure management, quality assurance, and monitoring are the basic roles for operations.

      Operations (1/2 of DevOps):

      • administrative processes
      • support for both hardware and software for clients, as well as internal to the company
      • infrastructure management
      • quality assurance
      • monitoring
    1. I set it with a few clicks at Travis CI, and by creating a .travis.yml file in the repo

      You can set CI with a few clicks using Travis CI and creating a .travis.yml file in your repo:

      language: node_js
      node_js: node
      
      before_script:
        - npm install -g typescript
        - npm install codecov -g
      
      script:
        - yarn lint
        - yarn build
        - yarn test
        - yarn build-docs
      
      after_success:
        - codecov
      
    3. Continuous integration makes it easy to catch cases where the code: does not work (but someone didn’t test it and pushed haphazardly), works only locally because it is based on local installations, or works only locally because not all files were committed.

      CI - Continuous Integration helps to catch code that:

      • does not work (but someone didn’t test it and pushed haphazardly),
      • works only locally, because it is based on local installations,
      • works only locally, because not all files were committed.
    4. With Codecov it is easy to make jest & Travis CI generate one more thing:

      Codecov lets you generate a coverage score for your tests:

    1. Continuous Deployment is the next step: you deploy the most up-to-date, production-ready version of your code to some environment, ideally production if you trust your CD test suite enough.

      Continuous Deployment

  11. Mar 2020
  12. Feb 2020
    1. DevOps has taught us that the software development process can be generalized and reused for dealing with change not just in application code but also in infrastructure, docs and tests. It can all just be code.
  13. Dec 2019
    1. Environment variables are 'exported by default', making it easy to do silly things like sending database passwords to Airbrake.

      airbrake -- monitoring service

  14. Nov 2019
    1. Top 10 Global DevOps Consulting Companies

      Do you wish to develop websites at a global level? Here you can find the top 10 DevOps consulting companies that aim to deliver first-class solutions, with expert developers providing friendly service.

  15. May 2019
    1. Valdomiro Bilharvas - More efficient squads with DevOps

      One more practical case that shows the importance of preparing a development environment that makes everyone's life easier and guarantees continuous, high-quality deliveries. The subject cuts across several topics of our DevOps Tools certification.

    2. Daniel Wildt, Guilherme Lacerda - Going back to the roots of Agile Development

      If you think DevOps is just a fad, something new, watch the talk by Daniel and Guilherme, an always-motivating duo.

    3. João Brito - CI/CD - Think a bit beyond the tools

      Continuous Integration and Delivery are also important topics of the LPI DevOps Tools certification, but as João Brito will explain in this talk, it is important to understand the reasons behind using these tools.

      701.4 Continuous Integration and Continuous Delivery (weight: 5)

    4. Allan Moraes - Automating Infrastructure Monitoring

      Docker, Grafana and Ansible are part of Allan's talk and are also topics covered in the Linux Professional Institute's DevOps Tools exam.

      705.1 IT Operations and Monitoring (weight: 4)

    5. Amanda Matos - Metrics & DevOps - Why should you measure to conquer?

      Operations and monitoring have a dedicated topic among the LPI requirements for the DevOps certification. Amanda will explain how to implement metrics with open-source tools. Keep an eye on it!

      705.1 IT Operations and Monitoring (weight: 4)

    6. Aurora Li Min de Freitas Wang - Being a dev amid DevOps: Changing the culture from the bottom up

      A DevOps career is quite attractive and challenging, but it requires a change in the prevailing culture. See what Aurora has to say about it.

    7. Mateus Prado - DevOps Engineers: why is it so hard to hire?

      There is a shortage of professionals ready for a market that demands DevOps. Compare what Mateus needs with the topics we cover in our LPI DevOps Tools certification. Follow the topics at https://wiki.lpi.org/wiki/DevOps_Tools_Engineer_Objectives_V1 to create your own study plan and become a good DevOps professional.

    8. Tiago Roberto Lammers - Our DevOps journey toward microservices at Delivery Much, and what we learned

      Microservices is one of the themes covered by the Linux Professional Institute's DevOps Tools certification and is also a decisive subject when choosing the tools in a DevOps professional's utility belt. Take the opportunity to talk with Tiago about his experience using Docker, a subject that is also on the exam.

      Topics (among others):

      701.1 Modern Software Development (weight: 6)
      701.4 Continuous Integration and Continuous Delivery (weight: 5)
      702.1 Container Usage (weight: 7)

    9. Program

      Here, in the DevOpsDays Porto Alegre program, I have commented on the talks that can motivate you and give you more information about the topics covered in our DevOps Tools certification exam. Visit https://wiki.lpi.org/wiki/DevOps_Tools_Engineer_Objectives_V1 for the complete list of topics.

  16. Mar 2019
    1. End the chaos in your pipeline with 4 metrics and control tools

      Remember to click on the title to see more annotations on the highlighted words and phrases!

    2. Closing and Raffles

      Will there be an LPI raffle? Stick around at the track! ;-)

    3. Hashicorp Vault: One-Time Password for SSH

      Now there is a subject I want to learn about! It is not explicitly covered by the DevOps certification topics, but take a look at the subjects covering ssh and security (also search for vault at https://wiki.lpi.org/wiki/DevOps_Tools_Engineer_Objectives_V1).

    4. How did iFood build its own RDS?

      It is always good to learn about real company cases (and to ask many questions) to understand what the work of a DevOps professional really is, especially if you are a newcomer or want to become one.

    5. A free private NPM repository with Verdaccio and AWS

      Excellent for a hands-on understanding of Cloud Deployment (one of our important subtopics!). You will also leave the talk with more tools for your utility belt!

    6. Using Traefik to automate the reverse proxy of your docker containers

      Even though this subject is not directly tested on the exam, these are tools that should be part of a good DevOps professional's utility belt. Search for "container" in our topics, at that link, and you will discover how important it is to know the subject well.

    7. How to build an ELK stack without headaches

      A subject widely covered by subtopic 705.2 - Log Management and Analysis. Also take a look at this post on our blog.

    8. CI/CD pipeline on Kubernetes using Jenkins and Spinnaker

      Wow! Many LPI DevOps exam subjects are explored in this talk. Keep an eye on topic 702 Container Management.

    9. Implementing CI with GitLab

      Even though the LPI DevOps exam topics cover more than just Git for continuous integration (Git is used especially in Source Code Management), it is very important to know well the continuous integration and delivery concepts covered in this talk. They are in this topic:

      701.4 Continuous Integration and Continuous Delivery

    10. DevOps Tools Track

      How about using the TDC DevOps Tools track to prepare for the LPI DevOps exam? My annotations on this page connect the subjects developed in the talks with the exam topics.

      In each talk, expand the subject (by clicking on its title) to see the additional annotations I made.

    11. Jenkins Pipelines

      See this specific topic: 701.4 Continuous Integration and Continuous Delivery (weight: 5), especially the subject:

      • Understand how Jenkins models continuous delivery pipelines and implement a declarative continuous delivery pipeline in Jenkins

      More information is also available on our blog.

    12. Grafana

      Part of the first subject in topic 705 Service Operations: 705.1 IT Operations and Monitoring (weight: 4)

      Understand the architecture of Prometheus, including Exporters, Pushgateway, Alertmanager and Grafana

      More information is also available in this post on our blog.

    13. Prometheus

      Part of the first subject in topic 705 Service Operations: 705.1 IT Operations and Monitoring (weight: 4)

      Understand the architecture of Prometheus, including Exporters, Pushgateway, Alertmanager and Grafana

      More information is also available in this post on our blog.

    14. Lunch Break

      Shall we use this break to talk about careers and certifications?

    15. Can you visualize the health of your application?

      Even though the certification topics do not cover exactly this subject, monitoring the health of a system and its applications is part of a DevOps professional's mission. Pay attention to these topics:

      701 Software Engineering
      701.1 Modern Software Development (weight: 6)

      and

      705.2 Log Management and Analysis (weight: 4)

  17. Oct 2018
    1. We have also begun developing an instrument to assess organizations' readiness to adopt Agile and DevOps. We would welcome opportunities to pilot this assessment instrument with your organization.

      How do we get involved in the pilot?

  18. Jan 2018
    1. There is only one codebase per app, but there will be many deploys of the app

      Typically, Terraform violates the spirit of this principle. Though each deploy may be defined in the same repo (typically as an environment), the codebase is different. We work around this by making heavy use of modules to limit divergence between deploys.

  19. Oct 2017
  20. Feb 2017
  21. Jan 2014
    1. Is it because ops care deeply about systems while devs consider them a tool or implementation detail?

      What is the divide?

    2. When I look at the DevOps “community” today, what I generally see is a near-total lack of overlap between people who started on the dev side and on the ops side.

      I see this same near-total lack of overlap. There is a different language, mindset, and approach.