90 Matching Annotations
  1. Nov 2022
The presence of an Age header field implies that the response was not generated or validated by the origin server for this request. However, lack of an Age header field does not imply the origin was contacted, since the response might have been received from an HTTP/1.0 cache that does not implement Age.
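That asymmetry matters when interpreting responses: an Age header is positive evidence a cache served the response, but its absence proves nothing. A minimal Python sketch of that logic (the helper name and return convention are illustrative, not from any library):

```python
def served_from_cache(headers):
    """Heuristic per the note above: an Age header implies a cache
    (not the origin) handled this response; its absence is inconclusive,
    since HTTP/1.0 caches may not implement Age."""
    # Header lookup is case-insensitive in HTTP.
    normalized = {k.lower(): v for k, v in headers.items()}
    if "age" in normalized:
        return True   # definitely passed through a cache
    return None       # unknown: origin, or an HTTP/1.0 cache

print(served_from_cache({"Age": "120", "Content-Type": "text/html"}))  # True
print(served_from_cache({"Content-Type": "text/html"}))                # None
```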


  2. May 2022
    1. Protection Vulnerability management tools will identify all known vulnerabilities in base images and packages and provide upgrade recommendations. When vulnerabilities can’t be patched or there is no patch available, providing virtual patching and other runtime protection can be useful compensating controls. For Kubernetes components, this is another reason to consider managed Kubernetes offerings, rather than rolling your own. All the major cloud providers’ managed Kubernetes offerings lock down the kubelet component by default and are not susceptible to this exploit. For those self-managing Kubernetes clusters, tools like Prisma Cloud can identify unsecure components to secure using our Kubernetes audits. Integrations with Open Policy Agent (OPA) can also prevent spinning up privileged containers and other violations of secure Kubernetes practices.

      container security

    1. The kubelet doesn't manage containers which were not created by Kubernetes.


kubelet Synopsis: The kubelet is the primary "node agent" that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.


    1. Another early zero-click threat was discovered in 2015 when the Android malware family Shedun took advantage of the Android Accessibility Service's legitimate functions to install adware without the user doing anything. "By gaining the permission to use the accessibility service, Shedun is able to read the text that appears on screen, determine if an application installation prompt is shown, scroll through the permission list, and finally, press the install button without any physical interaction from the user," according to Lookout.

      .c2 Zero click attacks

    2. One of the first defining moments in their history happened in 2010 when security researcher Chris Paget demonstrated at DEFCON18 how to intercept phone calls and text messages using a Global System for Mobile Communications (GSM) vulnerability, explaining that the GSM protocol is broken by design. During his demo, he showed how easy it was for his international mobile subscriber identity (IMSI) catcher to intercept the mobile phone traffic of the audience.

      .c1 Zero click attacks

    3. These attacks are often used against high-value targets because they are expensive. "Zerodium, which purchases vulnerabilities on the open market, pays up to $2.5M for zero-click vulnerabilities against Android," says Ryan Olson, vice president of threat intelligence, Unit 42 at Palo Alto Networks.

      Zero click attacks

Sending domains enable the protocols so that receivers can verify that emails are really from the sender’s domain. Senders enable these protocols so other people can’t claim to be them. Receivers enable them so they can verify whether a particular email is from where it says it’s from. Both sides must be enabled.
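These sender-side protocols (SPF, DKIM, DMARC) are published as DNS TXT records on the sending domain. A hedged sketch of what a receiver parses, using an SPF-style record (the function and example record are illustrative; real validators implement RFC 7208 in full):

```python
def parse_spf(txt_record):
    """Split a (hypothetical) SPF TXT record into its mechanisms.
    Receivers use records like this to check whether a sending host
    is authorized for the domain."""
    parts = txt_record.split()
    if not parts or parts[0] != "v=spf1":
        raise ValueError("not an SPF record")
    return parts[1:]

mechs = parse_spf("v=spf1 ip4:192.0.2.0/24 include:_spf.example.com -all")
print(mechs)  # ['ip4:192.0.2.0/24', 'include:_spf.example.com', '-all']
```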


    1. Have we had any (significant) incidents? Board members will be well-aware of any significant incidents, so this question is usually answered with details as well as estimates regarding costs and potential liability.


    2. Are we secure? This question is the bane of many a cybersecurity pro’s existence because the answer now and always will be “no” from a literal 100% protection standpoint. If we rework the question to “what is our exposure level?” we can start to make headway.

      Cybersecurity metrics for CISOs .c1

  3. Apr 2022
    1. While this VPN alternative or paired option manages identity protocols allowing for more granular activity monitoring, it does not provide additional protections for privileged credentials. To securely manage the credentials for privileged accounts, privileged access management (PAM) is needed, Grunden adds. “If identity management establishes the identity of individual users and authorizes them, PAM tools focus on managing privileged credentials that access critical systems and applications with a higher level of care and scrutiny.”


Identity and access management and privileged access management: Solutions that incorporate a comprehensive verification process to confirm the validity of login attempts provide greater protections compared to traditional VPNs, which normally only require a password. “A security feature of IAM [identity and access management] is that session activity and access privileges are connected to the individual user, so network managers can be sure each user has authorized access and can track each network session,”


“zero-trust methods are able to perform the basic capabilities of a VPN, such as granting access to certain systems and networks, but with an added layer of security in the form of least-privileged access (down to the specific applications), identity authentication, employment verification, and credential storage.” As a result, if an attacker succeeds in infecting a system, the damage is limited to only what this system has access to, Duarte says. “Also, be sure to implement network monitoring solutions to detect suspicious behavior, like an infected machine doing a port scan, so you can automatically generate an alert and shut down the infected system,”
    4. This was observed in the recent Colonial Pipeline ransomware attack, says Duarte. “In that case, the attackers got access to the internal network just by using compromised username and password credentials for an insecure VPN appliance.” He also notes instances of attackers targeting and exploiting known VPN appliance vulnerabilities. “Most recently, we observed the exploitation of CVE-2021-20016 (affecting SonicWall SSLVPN) by the cybercrime group DarkSide, and also CVE-2021-22893 (affecting Pulse Secure VPN) exploited by more than 12 different malware strains.”

      VPN and Ransomware

    1. Organizations should consider using a security platform built on a cybersecurity mesh architecture with security solutions that work together to combat developing threats, as well as keeping staff current on cyber hygiene and best practices. This holistic approach represents the strongest security posture and best defense against attackers.

      Security Best practices

    2. As with personal hygiene, cyber hygiene needs to be performed on a regular basis – not just once in a while or twice a year. With the goal of keeping data safe, security hygiene involves regular back-ups, firewalls, encryption, password management and more. Ongoing employee education is key, as well; make sure staff know about the latest social engineering techniques (especially email) and security best practices.

      General Security Best practices

Tackling the problem: It’s clear that organizations need to secure, monitor and manage Linux just like any other endpoint in the network. Organizations should have advanced and automated endpoint protection, detection and response as well as integrated zero trust network access. It’s important to fight fire with fire – you’ve got to use the same kinds of tools that bad actors are using.

      Linux OS threats are increasing

    1. Attackers will target industrial sensors to cause physical damage that could result in assembly lines shutting down or services being interrupted, Chelly says. The pandemic has increased the prevalence of employees managing these systems via remote access, which provides “a very good entry point for cybercriminals.”

      IoT challenges post-pandemic

    2. VPNs provide a secure tunnel between the remote user and enterprise resources, but VPN technology can’t tell if the connecting device is already infected or if someone is using stolen credentials; it doesn’t provide application layer security, and it can’t provide role-based access control once a user connects to the network. Zero trust addresses all those issues.

      Why not VPNs?

A task can also monitor, but not execute on, other variables that provide additional information to the task's module.


All configured monitored information, regardless of whether it is used for execution, can be passed to the task's module as module input.


  4. Mar 2022
    1. A single transaction may be executed across several microservices. Each microservice may keep its own logs and metrics. But if there is a failure, there must be a way of correlating these observations. This process is called distributed tracing.

      Why tracing/observability is crucial for modern (cloud-native) apps?
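The correlation step behind distributed tracing can be sketched in a few lines: mint one trace id at the entry point and attach it to every log record each service emits. Service names and the log shape here are illustrative:

```python
import uuid

LOGS = []

def log(service, trace_id, message):
    # Every service tags its records with the shared trace id,
    # so one transaction can be reassembled across services.
    LOGS.append({"service": service, "trace_id": trace_id, "msg": message})

def checkout(order):
    trace_id = uuid.uuid4().hex  # minted once, at the edge
    log("checkout", trace_id, f"received {order}")
    charge(order, trace_id)      # propagated to downstream services
    return trace_id

def charge(order, trace_id):
    log("payments", trace_id, f"charging for {order}")

tid = checkout("order-42")
# Correlate the whole transaction by trace id:
trail = [e["service"] for e in LOGS if e["trace_id"] == tid]
print(trail)  # ['checkout', 'payments']
```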

    2. Kubernetes provides basic load-balancing using a simple randomizing algorithm. If this is inadequate, you can use a service mesh. A service mesh provides more sophisticated load-balancing based on observed metrics.

      Why microservices help improve K8S
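The difference between the two strategies can be sketched as two pickers over the same backend set; the load metric is a made-up number standing in for whatever a mesh actually observes:

```python
import random

# Observed load per pod (hypothetical metric a mesh might collect).
backends = {"pod-a": 0.2, "pod-b": 0.9, "pod-c": 0.4}

def pick_random(backends):
    # Roughly what a plain Kubernetes Service does: no metric awareness.
    return random.choice(list(backends))

def pick_least_loaded(backends):
    # Roughly what a service mesh can do: route on observed metrics.
    return min(backends, key=backends.get)

print(pick_least_loaded(backends))  # pod-a
```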

    1. Service meshes in general are great at automating and securing communication between services in an east-west fashion, while API gateways are better at securing and regulating north-south traffic between internal services and external clients, Casemore said. Consul API Gateway can thus be thought of as an extension of Consul service mesh. “While the two are configured independently, they use the same servers to communicate policies, validate and receive certificates, and retrieve service catalog data,” Casemore said.

      service mesh vs API gateway

EKS offers reduced effort for maintaining Kubernetes while ECS offers reduced effort for maintaining Docker containers. However, both require the same effort to maintain compute infrastructure. EKS on Fargate offers minimum effort to maintain both Kubernetes and the underlying infrastructure. Likewise, ECS on Fargate offers the least effort to maintain both Docker containers and the underlying infrastructure.

      ECS - EKS - Fargate

  5. Feb 2022
Difference between UEBA and UBA security: UBA stands for User Behavior Analytics. UEBA includes the word ‘entity’ because it is able to model the behavior of humans as well as machines - networked devices and servers - within the network. The move from traditional UBA to UEBA has been driven by the recognition that other entities besides users are often profiled in order to more accurately pinpoint threats, in part by correlating the behavior of these other entities with user behavior. This is becoming more pertinent due to the rise of connected devices - the Internet of Things - which provide new potential points of entry to the network.

      UEBA vs UBA

    1. Serverless applications are particularly challenging when it comes to observability. In a distributed microservices architecture, each individual service is sizable and complex enough to understand at a service-interaction level. Observability can be achieved by examining machine characteristics alongside coherent stack traces that clearly lay out the path of control flow.

      The problem with Serverless .c1

    2. In serverless applications, however, the event-driven functions are disparate, operate in isolation and are highly ephemeral. It is very difficult to analyze them for potential side effects (such as partially processed batches).

      The problem with Serverless .c2

    3. Observability is an application state that gives you both the insight you need to understand what went wrong, and the tracing and tracking capabilities that help you understand why an error occurred.

      What is observability ?

1. The physical layer: This layer includes the physical equipment involved in the data transfer, such as the cables and switches. This is also the layer where the data gets converted into a bit stream, which is a string of 1s and 0s. The physical layer of both devices must also agree on a signal convention so that the 1s can be distinguished from the 0s on both devices.

      OSI Model

2. The data link layer: The data link layer is very similar to the network layer, except the data link layer facilitates data transfer between two devices on the SAME network. The data link layer takes packets from the network layer and breaks them into smaller pieces called frames. Like the network layer, the data link layer is also responsible for flow control and error control in intra-network communication (The transport layer only does flow control and error control for inter-network communications).

      OSI Model

3. The network layer: The network layer is responsible for facilitating data transfer between two different networks. If the two devices communicating are on the same network, then the network layer is unnecessary. The network layer breaks up segments from the transport layer into smaller units, called packets, on the sender’s device, and reassembles these packets on the receiving device. The network layer also finds the best physical path for the data to reach its destination; this is known as routing.

      OSI Model

4. The transport layer: Layer 4 is responsible for end-to-end communication between the two devices. This includes taking data from the session layer and breaking it up into chunks called segments before sending it to layer 3. The transport layer on the receiving device is responsible for reassembling the segments into data the session layer can consume.

      OSI Model

5. The session layer: This is the layer responsible for opening and closing communication between the two devices. The time between when the communication is opened and closed is known as the session. The session layer ensures that the session stays open long enough to transfer all the data being exchanged, and then promptly closes the session in order to avoid wasting resources.

      OSI Model

6. The presentation layer: This layer is primarily responsible for preparing data so that it can be used by the application layer; in other words, layer 6 makes the data presentable for applications to consume. The presentation layer is responsible for translation, encryption, and compression of data.

      OSI Model

7. The application layer: This is the only layer that directly interacts with data from the user. Software applications like web browsers and email clients rely on the application layer to initiate communications. But it should be made clear that client software applications are not part of the application layer; rather the application layer is responsible for the protocols and data manipulation that the software relies on to present meaningful data to the user. Application layer protocols include HTTP as well as SMTP (Simple Mail Transfer Protocol is one of the protocols that enables email communications).

      OSI Model
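The encapsulation path these layers describe can be sketched with toy wrappers: each layer wraps the payload handed down from the layer above. The dict "headers" are illustrative, not real protocol formats:

```python
def encapsulate(app_data):
    """Walk application data down the stack: transport wraps it in a
    segment, network wraps the segment in a packet, data link wraps
    the packet in a frame."""
    segment = {"layer": 4, "src_port": 51000, "dst_port": 80, "payload": app_data}
    packet  = {"layer": 3, "src_ip": "192.0.2.1", "dst_ip": "198.51.100.7", "payload": segment}
    frame   = {"layer": 2, "src_mac": "aa:bb:cc:00:00:01", "payload": packet}
    return frame

frame = encapsulate("GET / HTTP/1.1")  # layer 7 data
# The original data is still there, three wrappers deep:
assert frame["payload"]["payload"]["payload"] == "GET / HTTP/1.1"
```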

  6. Jan 2022
Most developers are familiar with MySQL and PostgreSQL. They are great RDBMS and can be used to run analytical queries with some limitations. It’s just that most relational databases are not really designed to run queries on tens of millions of rows. However, there are databases specially optimized for this scenario - column-oriented DBMS. One good example of such a database is ClickHouse.

      How to use Relational Databases to process logs

    2. Another format you may encounter is structured logs in JSON format. This format is simple to read by humans and machines. It also can be parsed by most programming languages.
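A minimal sketch of such structured logging with Python's standard logging module; the field names chosen here are arbitrary:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    # Emit each record as one JSON object per line: readable by humans
    # and parseable by virtually any language.
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.warning("disk usage at %d%%", 91)
# stderr: {"level": "WARNING", "logger": "app", "message": "disk usage at 91%"}
```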
  7. Nov 2021
    1. Overload DoS attacks at the application layer: These DoS attacks typically attempt to consume the compute resources of the service by exercising compute-expensive functionality, or by generating many more application sessions than the service has been designed to cope with.

      Types of DDoS Attacks .c2

    2. Overload DoS attacks at the network layer: These DoS attacks typically attempt to consume all available capacity on network links, or to cause network hardware or software to fail due to overload.

      Types of DDoS Attacks .c1

  8. Oct 2021
    1. That’s because Photoshop, GIMP, Image Magick, OpenCV (via the cv2.resize function), etc. all use classic interpolation techniques and algorithms (ex., nearest neighbor interpolation, linear interpolation, bicubic interpolation) to increase the image resolution.


    1. The distributed streaming platform is both literally and figuratively the “missing link” needed to fulfill the promises of the microservices architecture. It combines messaging, storage and stream processing in a single, lightweight solution. It provides the pervasive, persistent and performant connectivity needed for containerized microservices. And it is remarkably easy to use with its “stupid simple” API for writing to and reading from streams.

      The data streaming platform nirvana

    2. A major advantage of publish/subscribe streaming platforms is the “decoupled” nature of all communications. This decoupling eliminates the need for publishers to track — or even be aware of — any subscribers, and makes it possible for any and all subscribers to have access to any published streams. The result is the ability to add new publishers and subscribers without any risk of disruption to any existing microservices

      Advantage of the Publish/Subscribe model
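The decoupling can be shown with a toy in-process broker (illustrative only; a real platform like Kafka adds persistence and fault tolerance): publishers write to a topic without knowing who listens, and new subscribers attach without disturbing existing ones.

```python
from collections import defaultdict

class Broker:
    """Toy publish/subscribe broker: publishers never track subscribers."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The publisher only knows the topic name, never the consumers.
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
seen = []
broker.subscribe("orders", seen.append)
broker.subscribe("orders", lambda m: seen.append(m.upper()))  # added later, no disruption
broker.publish("orders", "order-1")
print(seen)  # ['order-1', 'ORDER-1']
```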

    3. The term “distributed streaming platform” was first used to describe Kafka from the Apache Foundation. The contributors to this open source software, commonly used in Weblogs, characterize it as having three important capabilities: Publish and subscribe to streams of messages in a way that is similar to how a message queue or messaging system operates. Store or persist streams of messages in a fault-tolerant manner. Process streams in real-time as they occur.

      What is data streaming ?

    4. The problem is: Microservices running in containers have a much greater need for inter-service communications than traditional architectures do, and this can introduce problems ranging from poor performance, owing to higher latency, to application-level failures, owing to the loss of data or state. This potential pitfall plagues, to a greater or lesser extent, each of the traditional architectures, which all struggle to scale capacity and/or throughput to handle the relentless growth in the volume and velocity of data

      Common problems of Microservices Architectures .c2

    5. Containers have emerged as the ideal technology for running microservices. By minimizing or eliminating overhead, containers make efficient use of available compute and storage resources, enabling them to deliver peak performance and scalability for all microservices needed in any application. Containers also enable microservices to be developed without the need for dedicated development hardware, requiring only a personal computer in almost all cases.

      Why containers are best for microservices ? .c1

The service’s database is effectively part of the implementation of that service. It cannot be accessed directly by other services. There are a few different ways to keep a service’s persistent data private. You do not need to provision a database server for each service. For example, if you are using a relational database then the options are: Private-tables-per-service – each service owns a set of tables that must only be accessed by that service; Schema-per-service – each service has a database schema that’s private to that service; Database-server-per-service – each service has its own database server. Private-tables-per-service and schema-per-service have the lowest overhead. Using a schema per service is appealing since it makes ownership clearer. Some high throughput services might need their own database server. It is a good idea to create barriers that enforce this modularity. You could, for example, assign a different database user id to each service and use a database access control mechanism such as grants. Without some kind of barrier to enforce encapsulation, developers will always be tempted to bypass a service’s API and access its data directly.
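The database-per-service barrier can be sketched in miniature with SQLite standing in for real per-service database servers (service and table names are illustrative): each service holds its own connection, so another service cannot reach its tables at all.

```python
import sqlite3

def make_service_db(name):
    # One private store per service: the service opens only its own
    # database and exposes data solely through its API.
    conn = sqlite3.connect(":memory:")
    conn.execute(f"CREATE TABLE {name}_data (id INTEGER PRIMARY KEY, value TEXT)")
    return conn

orders_db = make_service_db("orders")
billing_db = make_service_db("billing")

orders_db.execute("INSERT INTO orders_data (value) VALUES ('order-1')")

# billing cannot see orders' table: the barrier is structural, not convention.
try:
    billing_db.execute("SELECT * FROM orders_data")
except sqlite3.OperationalError as e:
    print("blocked:", e)  # no such table: orders_data
```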


    2. Keep each microservice’s persistent data private to that service and accessible only via its API. A service’s transactions only involve its database.

      What's the best database architecture for microservices? .c1

    1. The advent of web services and SOA offers potential for lower integration costs and greater flexibility. An important aspect of SOA is the separation of the service interface (the what) from its implementation (the how). Such services are consumed by clients that are not concerned with how these services will execute their requests. Web services are the next step in the Web's evolution, since they promise the infrastructure and tools for automation of business-to-business relationships over the Internet.

      Why SOA ? .c2

SOA is usually realized through web services. Web services specifications may add to the confusion of how to best utilize SOA to solve business problems. In order for a smooth transition to SOA, managers and developers in organizations should know that: SOA is an architectural style that has been around for years. Web services are the preferred way to realize SOA. SOA is more than just deploying software. Organizations need to analyze their design techniques and development methodology and partner/customer/supplier relationships. Moving to SOA should be done incrementally and this requires a shift in how we compose service-based applications while maximizing existing IT investments.

      Web Services & SOA .c1

    3. The J2EE 1.4 platform enables you to build and deploy web services in your IT infrastructure on the application server platform. It provides the tools you need to quickly build, test, and deploy web services and clients that interoperate with other web services and clients running on Java-based or non-Java-based platforms. In addition, it enables businesses to expose their existing J2EE applications as web services. Servlets and Enterprise JavaBeans components (EJBs) can be exposed as web services that can be accessed by Java-based or non-Java-based web service clients. J2EE applications can act as web service clients themselves, and they can communicate with other web services, regardless of how they are implemented

      J2EE and Web Services

    4. SOA uses the find-bind-execute paradigm as shown in Figure 1. In this paradigm, service providers register their service in a public registry. This registry is used by consumers to find services that match certain criteria. If the registry has such a service, it provides the consumer with a contract and an endpoint address for that service.

      How services are discovered? .c4
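The find-bind-execute loop can be sketched with a toy registry (all names are illustrative): the provider registers an endpoint, the consumer finds it, then invokes it.

```python
class Registry:
    """Toy service registry for the find-bind-execute pattern."""
    def __init__(self):
        self.services = {}

    def register(self, name, endpoint):  # provider side
        self.services[name] = endpoint

    def find(self, name):                # consumer side
        return self.services.get(name)

registry = Registry()
# Provider registers its service in the registry.
registry.register("quote", lambda symbol: {"symbol": symbol, "price": 101.5})

endpoint = registry.find("quote")  # find
result = endpoint("ACME")          # bind + execute
print(result)  # {'symbol': 'ACME', 'price': 101.5}
```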

Services are software components with well-defined interfaces that are implementation-independent. An important aspect of SOA is the separation of the service interface (the what) from its implementation (the how). Such services are consumed by clients that are not concerned with how these services will execute their requests. Services are self-contained (perform predetermined tasks) and loosely coupled (for independence); services can be dynamically discovered; composite services can be built from aggregates of other services.

      Main SOA Characteristics .c3

    6. it enables businesses to leverage existing investments by allowing them to reuse existing applications, and promises interoperability between heterogeneous applications and technologies.


    7. SOA is an architectural style for building software applications that use services available in a network such as the web. It promotes loose coupling between software components so that they can be reused. Applications in SOA are built based on services. A service is an implementation of a well-defined business functionality, and such services can then be consumed by clients in different applications or business processes.

      Short Definition of SOA .c1

    1. So while containers and microservices exist independently and serve different purposes, they’re often used together; consider them the PB&J of DevOps. Containers are an enabling technology for microservices, which is why microservices are often delivered in one or more containers. Since containers are isolated environments, they can be used to deploy microservices quickly, regardless of the code language used to create each microservice.

      Why Containers in microservices

    1. They could have done the migration using a container platform, but they were happy to avoid the extra complexity. Using EC2, each of the microservices ends up in its own auto-scaling group and can be managed independently from a scale perspective.

      Microservices without Containers

    2. Monitoring is essential to microservices management. While a service mesh can help manage the application components, observability is critical to understanding how the services are behaving. Such observability is what creates the ability to develop management patterns that are resilient and manageable. It also provides feedback loops that enable developers to iterate and improve based on data and results.

      Why Observability ?

“This is really powerful for general performance improvements and powerful for cost reduction,” said McLean of Stacktrace Profiler. “Snapchat tried it out, and within a day of collecting data they realized a very small piece of code — I think it was a regular expression — which should not have even been showing up in Profiler, was actually consuming a fairly large amount of CPU. This could happen to anyone. It happens to Google. The Snapchat demonstration was just a really great demonstration of the power of this profiling technology.”

      Distributed Tracing Capabilities

    4. monoliths.


    5. in an organization and infrastructure as large as Google, it’s impossible for an SRE to have a complete view. Still, SREs provide context that a DevOps team working closer to the services may not have.


The SRE role combines the skills of a developer with those of a system administrator, producing an employee capable of debugging applications in production environments when things go completely sideways.


“If you already have containers, you should have an idea of how your system scales, so that can help with figuring microservices costs,” said Starmer. “We’ve done that with a couple of companies.” Next, look at the fixed costs of what each developer needs in terms of access to testing and development servers or virtual machines, then add in other related development costs, he said. Create a model based on the scale of the application, how many developers are going to work on the problem and other factors.

      Microservices Costs

“The ongoing costs for microservices are far less than it would be for a monolith-based system. I think your costs could go down easily by up to 90 percent less. It depends on how bad it was in the first place.” But the savings don’t just come by using container systems such as Kubernetes, said Priest. The savings also come through changes in the culture involving developers. With a monolithic infrastructure, a company can have 100 developers working on the same code. This can be a nightmare because their changes don’t jibe with those of others, adding to complexity and problems, he said. But under a microservices approach, developers can be independently working on different parts of the code, which allows more work to be completed with less overlap.

      Microservices Costs

    9. The monitoring and security challenges associated with microservice architectures arise from how microservices were created to be highly scalable — which means they may replicate themselves across nodes rapidly, run for minutes, and then shut down. Security tools geared for static locations, even virtual ones, will not work. “Additionally, the network becomes how you do dynamic scaling, so network controls must be able to keep up with the changes and have the visibility for intra-host and inter-host communication between microservices,” Rani Osnat, vice president of product marketing for Aqua Security, said

      monitoring & securing microservices

The Google Cybersecurity Action Team is part of a wider array of cybersecurity initiatives Google is pursuing, which also include the $10 billion the company pledged to invest in cybersecurity over the next five years following a meeting at the White House.


    2. Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA), praised Google for the action, particularly following the establishment by CISA of the Joint Cyber Defense Collaborative (JCDC) to help defend the nation against cyberattacks. Google Cloud is a member of the group. 


    3. The Google Cybersecurity Action Team will be made up of company cybersecurity experts, and will provide customers with incident response services, advisory services for security plans, and ways to deploy Google Cloud in a secure way that will make it more difficult for these customers to be successfully targeted by hackers. 


An important point to note about this translation and delegation to Wasm is that Wasm is sandboxed technology. From a security standpoint this is highly desirable but it has implications for the memory model. Any interaction with state between Envoy and the Wasm VM/your Wasm module (manipulating headers and/or body) will be copied from Envoy memory to Wasm memory and back. Understanding this and the tradeoffs made when processing requests is important (more below).

      Wasm & Envoy .c2

Envoy is an open-source proxy written in C++ used in many popular service-mesh and edge gateway implementations. Envoy has many extensibility points including the network pipeline, access logging, stats, et al. To extend the network pipeline, for example, you can write filters that operate on the byte stream between the downstream client and the upstream backend service. The first major component we discuss here is the Envoy Wasm filter.

      Wasm & Envoy .c1

In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The term was coined by analogy to the idea of a nuclear winter.[2] The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later. The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.[2]

      AI Winter

  9. Sep 2021
    1. Usability: Kubernetes has a well-documented API with a simple, straightforward configuration that offers phenomenal developer UX. Together, DevOps practices and Kubernetes also allow businesses to deliver applications and features into users’ hands quickly, which translates into more competitive products and more revenue opportunities.

      .c5 Benefits K8S brings to DevOps

    2. Resiliency: A core goal of DevOps teams is to achieve greater system availability through automation. With that in mind, Kubernetes is designed to recover from failure automatically. For example, if an application dies,

      .c3 Benefits K8S brings to DevOps
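The self-healing behavior described above comes from Kubernetes controllers reconciling actual state against declared desired state. A minimal Deployment manifest (all names and the image are placeholders) asks for three replicas; if a Pod dies, the controller replaces it automatically, and the liveness probe restarts a hung container:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # placeholder name
spec:
  replicas: 3               # desired state: keep three Pods running
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: demo/app:1.0       # placeholder image
        livenessProbe:            # restart the container if this check fails
          httpGet:
            path: /healthz
            port: 8080
```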

    3. Deployment frequency: In the DevOps world, the entire team shares the same business goals and remains accountable for building and running applications that meet expectations. Deploying shorter units of work more frequently minimizes the amount of code you have to sift through to diagnose problems. The speed and simplicity of Kubernetes deployments enables teams to deploy frequent application updates.

      .c2 Benefits K8S brings to DevOps

    4. Independently deployable services: You can develop applications as a suite of independently deployable, modular services. Infrastructure code can be built with Kubernetes for almost any software stack, so organizations can create repeatable processes that are scalable across many different applications.

      .c1 Benefits K8S brings to DevOps

    5. CentOS and RHEL remain viable choices, but are also fairly large distributions. Red Hat’s concession here is Project Atomic, a tiny Linux OS designed to do little more than host containers. Alpine Linux currently holds the prize of being the smallest popular distribution, but it can have some sharp edges for the inexperienced user and is intended for use inside containers as the base OS on which container images are built. VMware has Photon OS and Rancher has RancherOS which are used as host-level operating systems which share their kernel with running containers.

      Best OS for running containers

    6. Microservices are an architectural approach to software development based on building an application as a collection of small services.


    7. Networking: Microservices represent the best of both worlds when seeking to secure networking vulnerabilities, since microservices essentially expose networking deeper inside the application. “The opportunity is that you can secure the application at the microservice level, which would prevent an attack from spreading much sooner.”

      Key Security considerations for microservices

    8. Tooling: This is especially critical when deploying microservices applications with a new set of management tools, such as Kubernetes. “There’s a knowledge gap around these tools that leads to mistakes around authentication, authorization, hardening and other best practices,”

      Key Security considerations for microservices

    9. Dynamic code delivery and updates: Testing must be automated so that code can be vetted to ensure it represents an acceptable risk in terms of vulnerabilities, malware, hard-coded secrets, etc. “The old way of gating a version, stopping to test it for an extended time, etc. will simply not fly.”

      Key Security considerations for microservices

    10. Critics and proponents alike are skeptical of any term meant to describe this architectural style as a whole, since it can’t capture how different it is from web services. At first glance the comparison may seem absurd, but the only essential differences are new abstractions that make compute, networking and storage much easier to use.

      How do Microservices differ from Web Services?

    11. an emerging best practice for cloud native application design and a trend that we are continuing to follow.


    12. Defining microservices according to functionality and deployment patterns is


    13. Finding engineers with microservices expertise who can make the necessary tooling and architecture decisions can also be tough. Such experts include the elusive “full stack developers” who understand the application at every layer: from the network and hosting environment, to data modeling, business logic, APIs and user interface and user experience (UI/UX). These individuals are


    14. The Feature Flag pattern: Also known as feature toggles, these give the ability to change the execution path within an application in real time. “We can implement a flag that allows us to send some traffic to our new microservice, but not all of it,”
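The traffic-splitting flag described above can be sketched as follows. This is a toy illustration; in production the flag value would come from a feature-flag service or config store, not a hard-coded dict, and all names here are invented:

```python
import random

# Toy feature flag: route a configurable fraction of traffic to the
# new microservice, and the rest to the legacy path.
FLAGS = {"use_new_service": 0.10}   # send 10% of requests to the new code

def handle_request(req: str, rng=random.random) -> str:
    # rng is injectable so the routing decision can be tested
    # deterministically; by default it draws a uniform [0, 1) sample.
    if rng() < FLAGS["use_new_service"]:
        return f"new-service:{req}"
    return f"legacy-service:{req}"

# Deterministic demo: force each path by stubbing the RNG.
print(handle_request("order-42", rng=lambda: 0.05))  # new-service:order-42
print(handle_request("order-42", rng=lambda: 0.95))  # legacy-service:order-42
```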

    15. The Branch by Abstraction pattern: A technique for gradually undertaking a large-scale change to a software system, Branch by Abstraction allows you to release the system regularly while the change is still in progress. “This helps introduce alternative logic to perform an operation,”
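The pattern above can be sketched as a small abstraction layer that hides both the old and the new implementation, so the system keeps shipping while the migration is in progress. All class and function names here are illustrative only:

```python
# Toy Branch by Abstraction: callers depend on the abstraction; the
# legacy and new implementations live behind it and can be swapped
# without touching call sites.

class PriceCalculator:              # the abstraction callers use
    def price(self, item: str) -> int:
        raise NotImplementedError

class LegacyCalculator(PriceCalculator):
    def price(self, item: str) -> int:
        return 100                  # old logic, still the default

class NewCalculator(PriceCalculator):
    def price(self, item: str) -> int:
        return 90                   # new logic being introduced gradually

def make_calculator(use_new: bool) -> PriceCalculator:
    # Flip this switch per-environment once the new path is trusted.
    return NewCalculator() if use_new else LegacyCalculator()

print(make_calculator(False).price("book"))  # 100 (legacy)
print(make_calculator(True).price("book"))   # 90  (new)
```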

    16. The Circuit Breaker pattern: Circuit Breakers abort code execution when something unexpected happens; for example, when the network goes down and two microservices cannot communicate with each other. “This pattern forces us to think about what to do in that scenario, and there are several libraries that implement this pattern in different languages,”
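A minimal sketch of the Circuit Breaker idea follows. Real libraries add timeouts and a half-open retry state; this toy version (all names invented) keeps only the core behavior of failing fast once a dependency looks broken:

```python
# Toy circuit breaker: after `threshold` consecutive failures the
# breaker "opens" and subsequent calls fail fast instead of hitting
# the broken dependency again.

class CircuitOpenError(Exception):
    pass

class CircuitBreaker:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn, *args):
        if self.failures >= self.threshold:
            raise CircuitOpenError("circuit open; failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1      # count the failure, then re-raise
            raise
        self.failures = 0           # a success resets the failure count
        return result

breaker = CircuitBreaker(threshold=2)

def flaky():
    raise ConnectionError("network down")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

# Third call fails fast without touching the network:
try:
    breaker.call(flaky)
except CircuitOpenError as e:
    print(e)   # circuit open; failing fast
```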

    1. Benefits This solution has a number of benefits:
       - Enables the continuous delivery and deployment of large, complex applications:
         - Improved maintainability - each service is relatively small and so is easier to understand and change
         - Better testability - services are smaller and faster to test
         - Better deployability - services can be deployed independently
       - It enables you to organize the development effort around multiple, autonomous teams. Each (so called two pizza) team owns and is responsible for one or more services. Each team can develop, test, deploy and scale their services independently of all of the other teams.
       - Each microservice is relatively small:
         - Easier for a developer to understand
         - The IDE is faster making developers more productive
         - The application starts faster, which makes developers more productive, and speeds up deployments
       - Improved fault isolation. For example, if there is a memory leak in one service then only that service will be affected. The other services will continue to handle requests. In comparison, one misbehaving component of a monolithic architecture can bring down the entire system.
       - Eliminates any long-term commitment to a technology stack. When developing a new service you can pick a new technology stack. Similarly, when making major changes to an existing service you can rewrite it using a new technology stack.

      Microservices Benefits

    2. Microservices Benefits

    3. Define an architecture that structures the application as a set of loosely coupled, collaborating services. This approach corresponds to the Y-axis of the Scale Cube. Each service is:
       - Highly maintainable and testable - enables rapid and frequent development and deployment
       - Loosely coupled with other services - enables a team to work independently the majority of time on their service(s) without being impacted by changes to other services and without affecting other services
       - Independently deployable - enables a team to deploy their service without having to coordinate with other teams
       - Capable of being developed by a small team - essential for high productivity by avoiding the high communication overhead of large teams
       Services communicate using either synchronous protocols such as HTTP/REST or asynchronous protocols such as AMQP. Services can be developed and deployed independently of one another. Each service has its own database in order to be decoupled from other services. Data consistency between services is maintained using the Saga pattern.

      Microservices Patterns
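The Saga pattern mentioned above can be sketched as a sequence of local transactions, each paired with a compensating action that undoes it; if a later step fails, the completed steps are compensated in reverse order. This is a toy illustration with invented names, not a production saga orchestrator:

```python
# Toy Saga: run (action, compensation) pairs in order; on failure,
# run the compensations of the already-completed steps in reverse.

def run_saga(steps):
    """steps: list of (action, compensation) callables."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):   # undo in reverse order
            compensate()
        return "rolled back"
    return "committed"

log = []

def fail():
    raise RuntimeError("payment failed")

steps = [
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (fail,                                lambda: log.append("refund")),
]
print(run_saga(steps))   # rolled back
print(log)               # ['reserve stock', 'release stock']
```

Note that only the compensation for the completed first step runs; the failed step never committed, so its compensation is skipped.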