- Nov 2022
-
documentation.mailgun.com documentation.mailgun.com
-
In our system, events are generated by physical hosts and follow different routes to the event storage. Therefore, the order in which they appear in the storage and become retrievable - via the events API - does not always correspond to the order in which they occur. Consequently, this behavior makes a straightforward implementation of event polling miss some events. The page of most recent events returned by the events API may not contain all the events that occurred at that time because some of them could still be on their way to the storage engine. When the events arrive and are eventually indexed, they are inserted into the already retrieved pages, which could result in an event being missed if the pages are accessed too early (i.e. before all events for the page are available). To ensure that all your events are retrieved and accounted for, please implement polling the following way:
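Rough sketch of how I'd implement that (my own Ruby, not Mailgun's sample; the env var names and the 15-minute "settle" window are assumptions):

require "net/http"
require "json"
require "uri"

API_KEY = ENV.fetch("MAILGUN_API_KEY")   # assumption: credentials come from the environment
DOMAIN  = ENV.fetch("MAILGUN_DOMAIN")
SETTLE  = 15 * 60                        # assumed settle window for late-arriving events

def fetch_events(begin_ts, end_ts)
  uri = URI("https://api.mailgun.net/v3/#{DOMAIN}/events")
  uri.query = URI.encode_www_form({ begin: begin_ts, end: end_ts, ascending: "yes" })
  req = Net::HTTP::Get.new(uri)
  req.basic_auth("api", API_KEY)
  res = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(req) }
  JSON.parse(res.body).fetch("items", [])
end

# Only poll up to `now - SETTLE`, so events still in transit to the storage
# engine are picked up on a later pass instead of being silently missed.
last_seen = Time.now.to_i - 3600
loop do
  horizon = Time.now.to_i - SETTLE
  fetch_events(last_seen, horizon).each { |event| puts event["event"] }
  last_seen = horizon
  sleep 60
end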
-
-
docs.ruby-lang.org docs.ruby-lang.org
-
A SimpleDelegator instance can take advantage of the fact that SimpleDelegator is a subclass of Delegator to call super to have methods called on the object being delegated to.

class SuperArray < SimpleDelegator
  def [](*args)
    super + 1
  end
end

SuperArray.new([1])[0] #=> 2
-
-
-
-
Set the endpoint to Mailgun's Postbin. A Postbin is a web service that allows you to post data, which is then displayed through a browser. This allows you to quickly determine what is actually being transmitted to Mailgun's API.
-
-
github.com github.com
-
PostBin, a simple web service for testing and logging the receipt of WebHooks (HTTP POST requests).
-
-
-
en.wikipedia.org en.wikipedia.org
-
The web API is now the most common meaning of the term API
-
-
en.wikipedia.org en.wikipedia.org
-
-
documentation.mailgun.com documentation.mailgun.com
-
You can access Events through a few interfaces: Webhooks (we POST data to your URL). The Events API (you GET data through the API). The Logs tab of the Control Panel (GUI).
-
-
stackoverflow.com stackoverflow.com
-
{ "folders": [ { "path": "apps/api" }, { "path": "apps/crawler" }, { "name": "root", "path": "." } ], "settings": {} }
-
-
opendyslexic.org opendyslexic.org
-
-
-
unix.stackexchange.com unix.stackexchange.com
-
Bash maintains an internal hash of previously found executables in your path. In this case, it has details that at one time there was an executable at /usr/bin/siege, and reuses that path to avoid having to search again. You need to tell bash to manually rehash the path for siege like so:
hash siege
You can also clear all hashed locations:
hash -r
-
-
stackoverflow.com stackoverflow.com
-
Remember there are two kinds of variable. Internal Variables and Environment Variables. PATH should be an environment variable.
In my case, I was trying to debug `which asdf` not finding asdf, in a minimal shell. I had checked:
bash-5.1$ echo $PATH | grep asdf
/home/tyler/.asdf/bin
but the PATH environment variable (`env | /bin/grep PATH`) being empty was the key discovery here. Must have forgotten the `export`.
-
All shells should tell you that your path is the same thing with BOTH of the two commands:
# The PATH variable
echo "$PATH"
# The PATH environment variable
env | /bin/grep PATH
-
-
github.com github.com
-
This was originally disallowed because #5907 was opened asking for different behavior in this situation that we didn't want to allow, and so we decided to make it a compiler error rather than have confusing behavior in that situation.
-
-
-
In v3, svelte-preprocess was able to type-check Svelte components. However, given the specifics of the structure of a Svelte component and how the script and markup contents are related, type-checking was sub-optimal. In v4, your TypeScript code will only be transpiled into JavaScript, with no type-checking whatsoever. We're moving the responsibility of type-checking to tools better fit to handle it, such as svelte-check, for CLI and CI usage, and the VS Code extension, for type-checking while developing.
-
-
-
Not trying to be presumptuous, but thought this proposal would be best served with a PR.
-
-
github.com github.com
-
Undecided yet what bundler to use? We suggest using SvelteKit, or Vite with vite-plugin-svelte.
Undecided?
-
-
github.com github.com
-
The console needs to be readable in development and to provide the best DX I have to design my libraries in ways that prevent these warnings. This results in design decisions that are detrimental to functionality and/or code readability/simplicity.
-
-
stackoverflow.com stackoverflow.com
-
You're likely not using "type": "module" in your package.json, so import statements don't work in svelte.config.js. You have three ways to fix this: Use require() instead (also see https://github.com/sveltejs/language-tools/blob/master/docs/preprocessors/in-general.md#generic-setup) Rename svelte.config.js to svelte.config.mjs Set "type": "module" in your package.json (may break other scripts)
-
-
github.com github.com
-
Use a :global rule to only expose parts of the stylesheet:
-
-
stackoverflow.com stackoverflow.com
-
it is not part of Svelte but it is part of Svelte Preprocess github.com/sveltejs/svelte-preprocess#global-style
-
-
developer.mozilla.org developer.mozilla.org
-
Warning: This ignores the user's keyboard layout, so that if the user presses the key at the "Y" position in a QWERTY keyboard layout (near the middle of the row above the home row), this will always return "KeyY", even if the user has a QWERTZ keyboard (which would mean the user expects a "Z" and all the other properties would indicate a "Z") or a Dvorak keyboard layout (where the user would expect an "F"). If you want to display the correct keystrokes to the user, you can use Keyboard.getLayoutMap().
Wow, that's quite a caveat!
-
-
github.com github.com
-
I'm rather concerned about adding svelte.config.js support to things that already have well established mechanisms for configuration.
-
-
github.com github.com
-
It would probably be worth mentioning this explicitly in the README: "Configuration of the language server happens over the LSP protocol by passing a configuration object; your LSP client should have a way of setting the configuration object for a server. Here is a link to the spec for the configuration that is supported [...]"
-
-
-
So when configuring Capybara, I'm using ignore_default_browser_options, and only re-use this DEFAULT_OPTIONS and exclude the keys I don't want:

Capybara::Cuprite::Driver.new(
  app,
  {
    ignore_default_browser_options: true,
    window_size: [1200, 800],
    browser_options: { 'no-sandbox': nil }.merge(
      Ferrum::Browser::Options::Chrome::DEFAULT_OPTIONS.except(
        "disable-features", "disable-translate", "headless"
      )
    ),
    headless: false,
  }
)
-
-
-
github.com github.com
-
If you are going to crawl sites you better use Ferrum or Vessel because you crawl, not test.
-
As simple as Puppeteer, though even simpler.
-
-
github.com github.com
-
The thing is Chrome doesn't provide details about such resources.
-
-
stackoverflow.com stackoverflow.com
-
You can definitely set the Return-Path header as a sender. But yes, some receivers might rewrite it (But not always ), or depending on who you're sending through, it might be re-written by them. For instance when using MailGun to send bulk email you have to do things just right in order to set a Return-Path that will be preserved. I know this contradicts the RFC you cite, but it's in practice true.
-
Return-Path header is written by the receiving server, not by the sending server. And as per the RFC 5321, it is the same as the address supplied in MAIL FROM command.
-
-
github.com github.com
-
-
This buildpack installs shims that always add --headless, --disable-gpu, --no-sandbox, and --remote-debugging-port=9222 to any google-chrome command as you'll have trouble running Chrome on a Heroku dyno otherwise.
-
-
stackoverflow.com stackoverflow.com
-
Do you know about lacolhost.com? as in, do something like blerg.lacolhost.com:3000/ as your url and it'll resolve to localhost:3000, which is where your tests are running.
-
I have DNS settings in my hosts file that are what resolve the visits to localhost, but also preserve the subdomain in the request (this latter point is important because Rails path helpers care which subdomain is being requested)
-
I've developed additional perspective on this issue - I have DNS settings in my hosts file that are what resolve the visits to localhost, but also preserve the subdomain in the request (this latter point is important because Rails path helpers care which subdomain is being requested) To sum up the scope of the problem as it stands now - I need a way within Heroku/Capybara system tests to both route requests to localhost, but also maintain the subdomain information of the request. I've been able to accomplish one or the other, but haven't found a configuration that provides both yet.
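For my own notes, the Capybara side of that setup would look something like this (assumed config, not from the thread):

# Any *.lacolhost.com name resolves to 127.0.0.1, so the request reaches the
# local test server while Rails still sees the subdomain it cares about.
Capybara.app_host = "http://blerg.lacolhost.com"
Capybara.always_include_port = true   # keep Capybara's random server port in generated URLs
# With Rails 6+ host authorization you may also need:
# Rails.application.config.hosts << ".lacolhost.com"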
-
-
www.rubydoc.info www.rubydoc.info
-
subject { described_class.do_something }
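Tiny illustration of why described_class is nice (Calculator is a made-up class; rename it and only the describe line changes):

RSpec.describe Calculator do
  subject { described_class.new }

  it "adds two numbers" do
    expect(subject.add(1, 2)).to eq(3)
  end
end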
-
-
www.ruby-forum.com www.ruby-forum.com
-
I have been doing different things w/ Ruby for a couple of years now and the only bad thing I can say about it is that it makes programming in other languages feel awfully burdensome. = )
-
-
bugs.ruby-lang.org bugs.ruby-lang.org
-
I would like to understand this design then. In my experience it has only served to limit what I can achieve, and gained me no additional benefit.
-
-
-
engineering.appfolio.com engineering.appfolio.com
-
Benoit Daloze of TruffleRuby points out that this is all much easier to read if you define your Ruby internals in Ruby, like they do. He's not wrong.
-
first we're looking for the "main" object. The word "main" is used in lots of places in Ruby, so that will be hard to track down. How else can we search? Luckily, we know that if you print out that object, it says "main". Which means we should be able to find the string "main", quotes and all, in C.
-
-
-
Refresh tokens are bearer tokens. It's impossible for the authorization server to know who is legitimate or malicious when receiving a new access token request. We could then treat all users as potentially malicious.
-
How could we handle a situation where there is a race condition between a legitimate user and a malicious one?
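One common answer is reuse detection on rotated token families; hedged sketch with a hypothetical RefreshToken model (not code from the article):

require "securerandom"

def rotate_refresh_token!(presented_value)
  token = RefreshToken.find_by(value: presented_value)
  return :invalid unless token

  if token.used?
    # An already-rotated token was replayed -- by the attacker or by the
    # legitimate client losing the race. Either way, revoke the whole family
    # so both parties have to re-authenticate.
    token.family.revoke_all!
    return :reuse_detected
  end

  token.update!(used: true)
  RefreshToken.create!(family: token.family, value: SecureRandom.urlsafe_base64(48))
end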
-
-
www.oauth.com www.oauth.com
-
There are also many reasons refresh tokens may expire prior to any expected lifetime of them as well.
such as...?
-
You might notice that the “expires_in” property refers to the access token, not the refresh token. The expiration time of the refresh token is intentionally never communicated to the client. This is because the client has no actionable steps it can take even if it were able to know when the refresh token would expire.
-
-
www.pingidentity.com www.pingidentity.com
-
But what about a Refresh Token flow? When using a refresh token, confidential clients also have to authenticate. Public clients, such as browser-based applications, do not authenticate during the Refresh Token flow. So in a typical frontend application, refresh tokens issued to frontend web applications are bearer tokens. In practice, this means that if an attacker manages to steal a refresh token from a frontend application, they can use that token in a Refresh Token flow. To counter such attacks, the OAuth 2.0 specifications mandate that browser-based applications apply a security measure known as refresh token rotation.
-
-
www.taniarascia.com www.taniarascia.com
-
-
you can use a Backend for Frontend (BFF)
first sighting: Backend for Frontend
-
Grant Types
-
For example, if I make an application (Client) that allows a user (Resource Owner) to make notes and save them as a repo in their GitHub account (Resource Server), then my application will need to access their GitHub data. It's not secure for the user to directly supply their GitHub username and password to my application and grant full access to the entire account. Instead, using OAuth 2.0, they can go through an authorization flow that will grant limited access to some resources based on a scope, and I will never have access to any other data or their password.
-
-
www.jvt.me www.jvt.me
-
-
Proof Key for Code Exchange (PKCE) is an OAuth2 extension that has recently been adopted as the standard for both OAuth 2.1 and IndieAuth, and provides additional security against attacks on the Authorization Code flow.
-
-
developer.okta.com developer.okta.com
-
This is an effective, dynamic stand-in for a fixed secret.
run-time dynamism vs. hard-coded values; hard-coded values = fixed secret
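In code, that dynamic stand-in is just a per-request random verifier plus its hash (per RFC 7636; my sketch, not Okta's sample):

require "securerandom"
require "digest"
require "base64"

code_verifier  = SecureRandom.urlsafe_base64(64)           # fresh secret for this one authorization
code_challenge = Base64.urlsafe_encode64(
  Digest::SHA256.digest(code_verifier), padding: false     # S256 challenge: unpadded base64url(SHA-256)
)

# 1. The authorization request carries code_challenge + code_challenge_method=S256.
# 2. The token request carries the original code_verifier; the server re-hashes and
#    compares, so an intercepted authorization code is useless on its own.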
-
Here’s what this flow looks like:
-
-
www.honeybadger.io www.honeybadger.io
-
Until now, we had a lot of code. Although we were using a plugin to help with boilerplate code, ready endpoints, and webpages for sign in/sign up management, a lot of adaptations were necessary. This is when Doorkeeper comes to the rescue. It is not only an OAuth 2 provider for Rails but also a full OAuth 2 suite for Ruby and related frameworks (Sinatra, Devise, MongoDB, support for JWT, and more).
-
The process used to create an OAuth wrapper client is very simple.
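For example, a wrapper client against a Doorkeeper provider can be a handful of lines with the oauth2 gem (endpoints and IDs below are placeholders, not from the article):

require "oauth2"

client = OAuth2::Client.new(
  "CLIENT_ID", "CLIENT_SECRET",
  site: "https://provider.example.com"   # Doorkeeper serves /oauth/authorize and /oauth/token here
)

redirect_uri  = "https://app.example.com/callback"
authorize_url = client.auth_code.authorize_url(redirect_uri: redirect_uri)

# After the user approves, the provider redirects back with ?code=...
code  = "AUTH_CODE_FROM_CALLBACK"        # placeholder
token = client.auth_code.get_token(code, redirect_uri: redirect_uri)
token.get("/api/v1/profile").parsed      # call the protected API with the access token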
-
-
-
www.rfc-editor.org www.rfc-editor.org
-
This document defines how a JWT Bearer Token can be used to request an access token when a client wishes to utilize an existing trust relationship, expressed through the semantics of the JWT, without a direct user-approval step at the authorization server.
[transfer of trust/credentials]
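Hedged sketch of that grant using the jwt gem (key path, endpoint, and claims are placeholders):

require "jwt"
require "net/http"
require "openssl"
require "uri"

signing_key = OpenSSL::PKey::RSA.new(File.read("private.pem"))  # key registered with the auth server

assertion = JWT.encode(
  {
    iss: "my-client-id",                      # who is asserting the trust relationship
    sub: "user@example.com",                  # whose access is being requested
    aud: "https://auth.example.com/token",    # the authorization server's token endpoint
    exp: Time.now.to_i + 300
  },
  signing_key,
  "RS256"
)

response = Net::HTTP.post_form(
  URI("https://auth.example.com/token"),
  "grant_type" => "urn:ietf:params:oauth:grant-type:jwt-bearer",
  "assertion"  => assertion
)
puts response.body   # access token issued without a direct user-approval step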
-
-
auth0.com auth0.com
-
Can I try the endpoints before I implement my application?
-
If the Client is a Single-Page App (SPA), an application running in a browser using a scripting language like JavaScript, there are two grant options: the Authorization Code Flow with Proof Key for Code Exchange (PKCE) and the Implicit Flow with Form Post. For most cases, we recommend using the Authorization Code Flow with PKCE because the Access Token is not exposed on the client side, and this flow can return Refresh Tokens.
-
Is the Client absolutely trusted with user credentials?
-
Which OAuth 2.0 Flow Should I Use?
-
If the Client is a regular web app executing on a server, then the Authorization Code Flow is the flow you should use. Using this the Client can retrieve an Access Token and, optionally, a Refresh Token.
-
The first decision point is about whether the party that requires access to resources is a machine. In the case of machine-to-machine authorization, the Client is also the Resource Owner, so no end-user authorization is needed.
-
-
developer.mozilla.org developer.mozilla.org
-
you will need to add a few more braces
If "brace" means {}, then this is incorrect. A few more parentheses, (), is correct.
-
-
learn.microsoft.com learn.microsoft.com
-
The Console now supports redeclaration of const variables across separate REPL scripts (such as when you run a statement in the Console), in addition to the existing let and class redeclarations. This support allows you to experiment with different declarations for const variables without refreshing the page. Previously, DevTools threw a syntax error if you redeclared a const binding.
Edge version of this matching release note from the matching Chrome feature:
https://hyp.is/d9XEKGfOEe2a27vFWUjjSA/developer.chrome.com/blog/new-in-devtools-92/
Interesting, they're copying some content, but not all of it verbatim.
-
-
developer.chrome.com developer.chrome.com
-
The Console now supports redeclaration of const statement, in addition to the existing let and class redeclarations. The inability to redeclare was a common annoyance for web developers who use the Console to experiment with new JavaScript code.
-
-
developer.mozilla.org developer.mozilla.org
-
Note that strings here are encoded as UTF-8, unlike the usual JavaScript UTF-16 strings.
-
-
developer.mozilla.org developer.mozilla.org
-
A File object is a specific kind of Blob, and can be used in any context that a Blob can.
-
-
developer.mozilla.org developer.mozilla.org
-
-
binary string (i.e., a string in which each character in the string is treated as a byte of binary data)
-
convert the string such that each 16-bit unit occupies only one byte
What is a 16-bit "unit"?
How can a 16-bit unit fit in 8 bits (1 byte)?
-
The btoa() function takes a JavaScript string as a parameter. In JavaScript strings are represented using the UTF-16 character encoding: in this encoding, strings are represented as a sequence of 16-bit (2 byte) units. Every ASCII character fits into the first byte of one of these units, but many other characters don't. Base64, by design, expects binary data as its input. In terms of JavaScript strings, this means strings in which each character occupies only one byte. So if you pass a string into btoa() containing characters that occupy more than one byte, you will get an error, because this is not considered binary data:
-
If you need to encode Unicode text as ASCII using btoa(), one option is to convert the string such that each 16-bit unit occupies only one byte.
-
-
stackoverflow.com stackoverflow.com
-
Honestly, at this point, I don't even know what tools I'm using, and which is responsible for what feature. Diving into the code of capybara and cucumber yields hundreds of lines of metaprogramming magic that somehow accretes into a testing framework. It's really making me loathe TDD despite my previous youthful enthusiasm.
opinion: too much metaprogramming magic
I'm not so sure it's "too much" though... Any framework or large software project is going to feel that way to a newcomer looking at the code, due to the number of layers of abstractions, etc. that eventually were added/needed by the maintainers to make it maintainable, decoupled, etc.
-
Wow, the man himself.
-
-
github.com github.com
-
Your tests then should also work correctly with transactional testing (and no need for database cleaner)
no need for database cleaner
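For reference, with rspec-rails that's roughly just this (sketch; exact setup varies by app):

# spec/rails_helper.rb
RSpec.configure do |config|
  # Wrap every example in a DB transaction that is rolled back afterwards.
  # With Rails 5.1+ system tests the app server and the test share one
  # connection, so DatabaseCleaner truncation strategies aren't needed.
  config.use_transactional_fixtures = true
end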
-
-
stackoverflow.com stackoverflow.com
-
To setup it
Wow
-
-
stackoverflow.com stackoverflow.com
-
-
Please refer to the help center for possible explanations why a question might be removed.
Why not just show the page and let people see the content and decide for themselves if it's helpful? (Could also show the moderation outcome there, with the reason.)
-
This question was removed from Stack Overflow for reasons of moderation.
-
-
stackoverflow.com stackoverflow.com
-
In general, I've found Selenium to be unreliable and not deterministic as laid out here although I still don't entirely know why.
-
-
stackoverflow.com stackoverflow.com
-
session = ActionDispatch::Integration::Session.new(Rails.application)
response = session.post("/mypath", my_params: "go_here")
worked for me
-
As has been stated elsewhere, in a Capybara test you typically want to do POSTs by submitting a form just like the user would.
-
I used the above to test what happens to the user if a POST happens in another session (via WebSockets), so a form wouldn't cut it.
-
-
www.suffix.be www.suffix.be
-
So far for the obligatory warning. I get the point, I even agree with the argument, but I still want to send a POST request. Maybe you are testing an API without a user interface or you are writing router tests? Is it really impossible to simulate a POST request with Capybara? Nah, of course not!
-
The Capybara Ruby gem doesn’t support POST requests, the built-in visit method always uses GET. This is by design and with good reason: Capybara is built for acceptance testing and a user would never ask to ‘post’ parameter X and Y to the application. There will always be some kind of interface, a form for example. It makes more sense to simulate what the visitor would really do
-
-
writingexplained.org writingexplained.org
-
“Have you brought your time sheet up to date yet?”
-
-
stackoverflow.com stackoverflow.com
-
module InjectSession
  include Warden::Test::Helpers

  def inject_session(hash)
    Warden.on_next_request do |proxy|
      hash.each do |key, value|
        proxy.raw_session[key] = value
      end
    end
  end
end
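Example of how I'd wire it up in a spec (the :user_id key and `user` are hypothetical; use whatever your app actually stores in the session):

RSpec.configure do |config|
  config.include InjectSession, type: :system
end

it "visits the page with a pre-seeded session" do
  inject_session(user_id: user.id)   # `user` assumed to come from a let/fixture
  visit "/dashboard"
end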
-
-
plantuml.com plantuml.com
-
www.oscarberg.com www.oscarberg.com
-
This site is making use of some basic analytics cookies so I can hopefully learn something from the metrics. By using the site you are totally fine with that.
-
And with diagrams as text close to the code, chances are they will be kept up to date (and created to begin with…).
-
There IS a super nice Visual Studio Code plugin for it but it still comes with the environment pre-reqs AS well as the cumbersome pressing of Alt-D (or was it Ctrl-D??) to get the preview. For other IDE’s I bet there are similar plugins leaving the same nasty taste of dissatisfaction… and the pollution of your environment.
-
-
-
You may want to change the controllers to your custom controllers with:

Rails.application.routes.draw do
  use_doorkeeper do
    # it accepts :authorizations, :tokens, :token_info, :applications and :authorized_applications
    controllers :applications => 'custom_applications'
  end
end
-
If you want to extend the default behaviour, just inherit from Doorkeeper::ApplicationsController. For example:

class CustomApplicationsController < Doorkeeper::ApplicationsController
end
-
-
github.com github.com
-
This is ugly by design, as an inducement to test properties instead of specifics.
-
So transcriptor aims to do less, and impose the bare minimum of cognitive load needed to convert a REPL interaction into a test. The entire API is four functions:
-
Testing frameworks often introduce their own abstractions for e.g. evaluation order, data validation, reporting, scope, code reuse, state, and lifecycle. In my experience, these abstractions are always needlessly different from (and inferior to) related abstractions provided by the language itself.
-
-
-
stackoverflow.com stackoverflow.com
-
Check the "Auto-open DevTools for popups".
Without this feature, when a pop-up opens without DevTools open, if it redirects, it will be too late to open DevTools and see the redirect logged...
There is still a problem though: if the pop-up window closes, so does its DevTools. So you can't see logs or network logs (redirects) that happened right before it closed...
-
-
github.com github.com
-
-
I just assumed that nesting/inheriting settings would be a thing because of course it would
-
git_workspace/ ├── .vscode │ └── settings.json # global settings, my preferred ones ├── my-personal-projects/ │ └── project1/ │ └── .git/ └── company-projects/ ├── .vscode │ └── settings.json # local settings, overrides some of my personal ones ├── project2/ │ └── .git/ └── project3/ └── .git/
-
-
typeorm.io typeorm.io
-
-
stackoverflow.com stackoverflow.com
-
-
As you note, Activity diagrams inherently can include concurrency and timing. If you look at this example cribbed from Wikipedia, shown below, you can observe the section with two heavy horizontal bars, and two parallel activities of "present idea" and "record idea". That is read as "start these activities in parallel, and continue only when both are complete." Flowcharts can't express this within the notation. Practically, using activity diagrams lets you think clearly about concurrent processes. I think you'll find that anyone who can read a flowchart will quickly adapt.
-
The activity diagram spreads confusion by its very name; there must be a reason why nobody understands them and people keep asking similar questions.
-
It might seem as a preference, but if we have a standardized language for describing software systems, Why do we use something else? This can lead to bad habit of overusing flowcharts. Activity diagrams are really simple. But if you decide to describe a more complicated aspect of the system or try to change the part you are describing, you might have to switch anyway. So just use UML and prevent confusion in the future.
-
Assuming a standard is better just because the standard says so is like that old while(1) infinite loop: it is better not to enter it.
-
-
en.wikipedia.org en.wikipedia.org
-
-
-
When public clients (e.g., native and single-page applications) request access tokens, some additional security concerns are posed that are not mitigated by the Authorization Code Flow alone.
-
the OAuth 2.0 grant type, Authorization Code Flow with Proof Key for Code Exchange (PKCE).
-
-
stackoverflow.com stackoverflow.com
-
it seems like a perversion of my beautiful REST/JSON server
-
-
developer.twitter.com developer.twitter.com
-
-
Please note - any callback URL that you use with the POST oauth/request_token endpoint will have to be configured within your developer App's settings in the app details page of developer portal.
-
In the guide below, you may see different terms referring to the same thing.
-
-
developer.twitter.com developer.twitter.com
-
frontegg.com frontegg.com
-
-
en.wikipedia.org en.wikipedia.org
-
-
specific types of diagrams are also called a type of flow diagrams
-
-
-
The uses of flow diagrams are vast and honestly endless.
-
-
raphael-leger.medium.com raphael-leger.medium.com
-
to set up
-
It is handy to manually generate the diagram from time to time using the previously created command: npm run db:diagram:generate. Though, getting the diagram to update itself automatically without developer interaction would ensure that the diagram is never obsolete. There are several ways of doing this. You could use a pre-commit git hook or, even better, simply configure your CI/CD pipeline(s) to run the npm script whenever something gets merged into the main branch 🙂
-
When it comes to showing up somewhere in your documentation a diagram describing your SQL database, you often end up with a recurring problem: after a few days / weeks / months, the diagram you made becomes obsolete.
-
-
github.com github.com
-
It would be nice if we could get some official word on whether this repository is affected by the catastrophic CVE-2021-44228 that is currently affecting a considerable percentage of software around the globe. From my limited understanding and looking at the refreshingly concise list of dependencies in the pom.xml, I would think this project is not affected, but I and probably others who are not familiar with the project's internals would appreciate an official word.
-
I understand that typically, it wouldn't make much sense to comment on every CVE that doesn't affect a product, but considering the severity and pervasiveness of this particular issue, maybe an exception is warranted.
-
-
gitlab.com gitlab.com
-
-
What if I hate snakes and/or indifference?
-
-
developer.intuit.com developer.intuit.com
-
-
You can also go to the Ruby OAuth Client Library to download the source code and run:
gem build intuit-oauth.gemspec
to build your own gem if you want to modify certain functions in the library.
-
-
github.com github.com
-
I agree that these fields should be whitelisted by ActiveAdmin automatically as it generates them via the form helpers. Regardless of if you use :raise or :log you wouldn't usually want these causing unnecessary noise.
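Until that happens, the two options mentioned look roughly like this (sketch; Post and its fields are placeholders):

# config/initializers/strong_parameters.rb -- log (or :raise on) unpermitted params
Rails.application.config.action_controller.action_on_unpermitted_parameters = :log

# app/admin/posts.rb -- or whitelist the auto-generated fields yourself
ActiveAdmin.register Post do
  permit_params :title, :body   # placeholder fields
end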
-
-
stackoverflow.com stackoverflow.com
-
blog.appsignal.com blog.appsignal.com
-
-
Sorbet is written in modern C++ and despite Matz's preferences (quote: "I hate type annotations"), opted for an approach based on type annotations.
-
-
meta.stackoverflow.com meta.stackoverflow.com
-
I don't think a new tag makes sense here, at least not yet.
-
Once "Containerfile" starts becoming less of a whisper and more of the topic, then perhaps we can talk about a synonym. But definitely not now.
-
creating the new tag as a synonym.
-
Yes, it was right, but nowadays Dockerfiles are not specific to Docker. Dockerfiles also work with Buildah & Podman (and there might be other ones in the future) and they have generalized the naming: “Containerfile.”
-
-
Docker suffers from the Xerox problem. Like it or not the industry refers to them as Dockerfiles.
But the industry can change what they call it... just like it's already changed - from "master" to "main" - from "blacklist" to "blocklist" - and so on
-
They are 100% identical; just different names. From podman-build: “Builds an image using instructions from one or more Containerfiles or Dockerfiles and a specified build context directory. A Containerfile uses the same syntax as a Dockerfile internally. For this document, a file referred to as a Containerfile can be a file named either ‘Containerfile’ or ‘Dockerfile’.”
Tags
- Containerfile
- generic trademark
- prefer generic name
- different names for the same/identical thing (synonyms)
- now is not the right time
- alias
- Dockerfile
- name changes
- not yet; maybe later
- newer/better ways of doing things
- choose a best/preferred name instead of providing multiple aliases
Annotators
URL
-
-
gitlab.com gitlab.com
-
how was your code review experience with this merge request? Please tell us how we can continue to iterate and improve: Leave a 👍 or a 👎 on this comment to describe your experience.
-
-
gitlab.com gitlab.com
-
Good commit hygiene is considered a best practice. GitLab should encourage and enable these kinds of best practices. This feature currently creates a problem and requires workarounds that remove information, or significant manual work.
-
-
www.dekudeals.com www.dekudeals.com
-
help.steampowered.com help.steampowered.com
-
github.com github.com
-
Post.in_order_of(:type, %w[Draft Published Archived]).order(:created_at).pluck(:name)

which generates

SELECT posts.name FROM posts
ORDER BY CASE posts.type
  WHEN 'Draft' THEN 1
  WHEN 'Published' THEN 2
  WHEN 'Archived' THEN 3
  ELSE 4
END ASC, posts.created_at ASC
-
-
stackoverflow.com stackoverflow.com
-
Changing the second line to: foo.txt text !diff would restore the default unset-ness for diff, while: foo.txt text diff will force diff to be set (both will presumably result in a diff, since Git has presumably not previously been detecting foo.txt as binary).
comments for tag: undefined vs. null: Technically this is undefined (unset, !diff) vs. true (diff), but it's similar enough that we don't need a separate tag just for that.
annotation meta: may need new tag: undefined/unset vs. null/set
-
-
git-scm.com git-scm.com
-
Unspecified No pattern matches the path, and nothing says if the path has or does not have the attribute, the attribute for the path is said to be Unspecified.
-
Unset The path has the attribute with special value "false"; this is specified by listing the name of the attribute prefixed with a dash - in the attribute list.
-
-
test-nbdime.readthedocs.io test-nbdime.readthedocs.io
-
test-nbdime.readthedocs.io test-nbdime.readthedocs.io
-
euangoddard.github.io euangoddard.github.io
-
github.com github.com
-
www.markusdosch.com www.markusdosch.com
-
github.com github.com
-
The second situation would be zombie reaping. If the process spawns child processes and does not properly reap them it will lead to a full process table
-
For both of these concerns we recommend tini. It is incredibly small, has minimal external dependencies, fills each of these roles, and does only the necessary parts of reaping and signal forwarding.
-
The first being signal handling. If the process launched does not handle SIGTERM by exiting, it will not be killed since it is PID 1 in the container
-
There are two situations where an init-like process would be helpful for the container.
-
highly recommended that the resulting image be just one concern per container; predominantly this means just one process per container, so there is no need for a full init system
container images: whether to use full init process: implied here: don't need to if only using for single process (which doesn't fork, etc.)
-
Try to make the Dockerfile easy to understand/read.
-
# check for the expected command
if [ "$1" = 'mongod' ]; then
    # init db stuff....
    # use gosu (or su-exec) to drop to a non-root user
    exec gosu mongod "$@"
fi
# else default to run whatever the user wanted like "bash" or "sh"
exec "$@"
-
A beginning user should be able to docker run official-image bash (or sh) without needing to learn about --entrypoint.
-
It is also nice for advanced users to take advantage of entrypoint, so that they can docker run official-image --arg1 --arg2 without having to specify the binary to execute.
-
For dependent packages installed by apt there is not usually a need to pin them to a version.
Just install the specific initial version, but don't pin it. This allows users to easily update to later versions later...
-
For example, if using apt to install the main program for the image, be sure to pin it to a specific version (ex: ... apt-get install -y my-package=0.1.0 ...)
-
Rebuilding the same Dockerfile should result in the same version of the image being packaged, even if the second build happens several versions later, or the build should fail outright, such that an inadvertent rebuild of a Dockerfile tagged as 0.1.0 doesn't end up containing 0.2.3.
-
Version bumps and security fixes should be attended to in a timely manner.
-
If you do not represent upstream and upstream becomes interested in maintaining the image, steps should be taken to ensure a smooth transition of image maintainership over to upstream.
-
Because the official images are intended to be learning tools for those new to Docker as well as the base images for advanced users to build their production releases, we review each proposed Dockerfile to ensure that it meets a minimum standard for quality and maintainability. While some of that standard is hard to define (due to subjectivity), as much as possible is defined here, while also adhering to the "Best Practices" where appropriate.
-
It may be tempting, for the sake of brevity, to put complicated initialization details into a standalone script and merely add a RUN command in the Dockerfile. However, this causes the resulting Dockerfile to be overly opaque
Tags
- audience: both casual users and power users
- init process: responsibility: forwarding signals
- init process: responsibility: reap zombie adopted orphan processes
- container images: whether to use full init process
- staying up-to-date/in sync with upstream project
- containers: entrypoint
- dependencies: locking to specific version
- best practices
- good point
- readability
- audience: casual users (not power users)
- signals: handling/tripping signals
- transparency
- single responsibility
- unofficial
- repeatability
- container images: base image: good/recommended practices
- do one thing and do it well
- human-readable
- opaque
- audience: power users
- clear (easy to understand)
- overriding defaults
- maintainer
Annotators
URL
-
-
-
Good question! This is going to be a bit long, so bear with me
-
Doing everything PID 1 needs to do and nothing else. Things like reading environment files, changing users, process supervision are out of scope for Tini (there are other, better tools for those)
-
Tini differentiates with:
-
Finally, do note that there are alternatives to Tini (like Phusion's base image).
-
As to whether you should be using Tini.
-
Is whatever process you exec in your entrypoint registering signal handlers? A good way to figure this out might be to check whether your process responds properly to e.g. docker stop (or if it waits for 10 seconds before exiting)
-
(note: this is not the reason why Jenkins uses Tini, they use it for signal reaping, but it was used in the RabbitMQ image for that reason).
-
Tini does install explicit signal handlers (to forward them, incidentally), so those signals no longer get dropped. Instead, they're sent to Jenkins, which is not running as PID 1 (Tini is), and therefore has default signal handlers
-
Second, if Jenkins runs as PID 1, then it may not receive the signals you send it! That's a subtlety in PID 1. Unlike other processes, PID 1 does not have default signal handlers, which means that if Jenkins hasn't explicitly installed a signal handler for SIGTERM, then that signal is going to be discarded when it's sent (whereas the default behavior would have been to terminate the process).
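To make that concrete, a tiny Ruby process that would survive as PID 1 (my sketch, nothing to do with Jenkins) has to trap SIGTERM itself so docker stop doesn't fall back to SIGKILL:

shutdown = false
Signal.trap("TERM") { shutdown = true }   # without this, PID 1 silently drops SIGTERM
Signal.trap("INT")  { shutdown = true }

until shutdown
  # ... real work would go here ...
  sleep 1
end
puts "caught signal, exiting cleanly"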
-
First, if Jenkins runs as PID 1, then it's difficult to differentiate between process that were re-parented to Jenkins (which should be reaped), and processes that were spawned by Jenkins (which shouldn't, because there's other code that's already expecting to wait them).
Tags
- distinction
- comparing one's project/product with competition/alternatives
- init process: responsibility: forwarding signals
- motivation: why did you create this?
- do only the minimum necessary
- apologizing for long explanation/answer
- computing: history
- special cases
- long explanation/answer
- Linux: signals
- origin story
- init process
- signals: handling/tripping signals
- long, detailed explanation
- init process: tini
- do one thing and do it well
- small/minimal core
- selling point
- subtlety
- should you use it?
- advantages/merits/pros
- detailed explanation
Annotators
URL
-
-
-
-
hub.docker.com hub.docker.com
-
-
Note: This repo does not publish or maintain a latest tag. Please declare a specific tag when pulling or referencing images from this repo.
-
-
github.com github.com
-
Unfortunately most init systems don't do this correctly within Docker since they're built for hardware shutdowns instead. This causes processes to be hard killed with SIGKILL, which doesn't give them a chance to correctly deinitialize things.
-
-
unix.stackexchange.com unix.stackexchange.com
-
SIGTSTP is like SIGSTOP except that it can be caught and handled.
-