16,680 Matching Annotations
  1. Last 7 days
    1. A related technique is git submodules, but they come with annoying caveats (for example people who clone your repository won't clone the submodules unless they call git clone --recursive),
    2. git-subtrac (from the author of the earlier git-subtree) seems to solve some of the problems with git submodules.
    3. # Do this the first time:
       $ git remote add -f -t master --no-tags gitgit https://github.com/git/git.git
       $ git subtree add --squash --prefix=third_party/git gitgit/master

       # In future, you can merge in additional changes as follows:
       $ git subtree pull --squash --prefix=third_party/git gitgit/master

       # And you can push changes back upstream as follows:
       $ git subtree push --prefix=third_party/git gitgit/master

       # Or possibly (not sure what the difference is):
       $ git subtree push --squash --prefix=third_party/git gitgit/master
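
      Re the git clone --recursive caveat in the first annotation above, a minimal sketch of what it means in practice (repository URL is a placeholder):

      ```bash
      # A plain clone leaves submodule directories empty:
      git clone https://example.com/repo.git
      cd repo
      git submodule update --init --recursive   # fetch submodules after the fact

      # Or fetch everything in one step:
      git clone --recursive https://example.com/repo.git
      ```
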
    1. I intend to keep it around and maybe fix up minor things here and there if needed, but I don't really have any plans for new features at this point. I think it's great to give people the option to choose the Go port if the advanced features are what they're after.
    1. nodemon will automatically know how to run the script even though out of the box support for processing scripts

      even though out of the box support for processing scripts ?

    1. depended on about 3 million projects

      depended on by about 3 million projects


    1. then two different listeners/renderers switching magically between each other based on the header being present or not, without the end user being informed or clear about this
    2. Thus my docs recommendation of

       public function beforeFilter(Event $event)
       {
           // do not render out the now inconsistent one for is(json)
           if (!$this->request->is('jsonapi')) {
               throw new NotFoundException('Invalid access, use application/vnd.api+json for Content-Type and Accept.');
           }
       }

       to specifically only whitelist the desired jsonapi for the general use case.
    3. A default baked app has all those included. That's why I am saying this: it is by default an issue we should and need to address :)
    1. If you're using JavaScript for writing to a HTML Attribute, look at the .setAttribute and [attribute] methods which will automatically HTML Attribute Encode. Those are Safe Sinks as long as the attribute name is hardcoded and innocuous, like id or class.
    2. If you're using JavaScript for writing to HTML, look at the .textContent attribute as it is a Safe Sink and will automatically HTML Entity Encode.
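
      A minimal sketch of these two safe sinks next to the unsafe sink they replace (the element id and userInput are hypothetical):

      ```js
      const el = document.getElementById('greeting');

      // Safe sink: assigned text is treated as data, so markup in userInput is inert.
      el.textContent = userInput;

      // Safe sink as long as the attribute name is hardcoded and innocuous:
      el.setAttribute('class', userInput);

      // Unsafe sink, for contrast: attacker-controlled markup would execute.
      // el.innerHTML = userInput;
      ```
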
    1. In a clickjacking attack, the attacker creates a malicious website in which it loads the authorization server URL in a transparent iframe above the attacker’s web page. The attacker’s web page is stacked below the iframe, and has some innocuous-looking buttons or links, placed very carefully to be directly under the authorization server’s confirmation button. When the user clicks the misleading visible button, they are actually clicking the invisible button on the authorization page, thereby granting access to the attacker’s application. This allows the attacker to trick the user into granting access without their knowledge.

      Maybe browsers should prevent transparent iframes?! Most people would never suspect this is even possible.
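
      Servers can at least refuse to be framed; a hedged way to check whether a site opts out (example.com is a placeholder):

      ```bash
      # Anti-clickjacking response headers:
      #   X-Frame-Options: DENY                              (legacy)
      #   Content-Security-Policy: frame-ancestors 'none'    (modern)
      curl -sI https://example.com | grep -iE 'x-frame-options|frame-ancestors'
      ```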

    1. it's also one of the smartest games I've ever played and I can't recommend it enough if you enjoy system-driven narrative, which it handles exquisitely.

      .

    2. The Quiet Sleep has 'cult classic' written all over it. It uses strategy, management, and tower defence mechanics to take you inside someone's head in a way that I don't think has ever been done before. It's really a bold experiment, and you'll be glad you played it.

      .

    1. Well, I would like to express my huge concern regarding the withdrawal of support for the SMB 1.0 network protocol in Windows 11 and future versions of the Microsoft OS. There are many, many users who need this communication protocol, especially home users: hundreds of thousands of products use embedded Linux on devices that still speak SMB 1.0, and many devices, such as media players and NAS units, have been discontinued, so their companies no longer update their firmware.
    1. With Windows 10 version 1511, support for SMBv1 and thus NetBIOS device discovery was disabled by default. Depending on the actual edition, later versions of Windows starting from version 1709 ("Fall Creators Update") do not allow the installation of the SMBv1 client anymore. This causes hosts running Samba not to be listed in the Explorer's "Network (Neighborhood)" views.

      .

    2. Since NetBIOS discovery is no longer supported by Windows, wsdd makes hosts appear in Windows again using the Web Service Discovery method.

      .
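
      A sketch of a typical wsdd setup on a Debian/Ubuntu Samba host (package name and firewall commands are assumptions; check your distribution):

      ```bash
      sudo apt install wsdd                # packaged in recent Debian/Ubuntu
      sudo systemctl enable --now wsdd     # advertise this host via WS-Discovery

      # WS-Discovery traffic needs to be allowed through:
      sudo ufw allow 3702/udp              # discovery multicast
      sudo ufw allow 5357/tcp              # WSDAPI HTTP
      ```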

    1. Windows 10, if configured the way Microsoft wants you to configure it by default, will never be able to "discover" your Ubuntu Samba shares.

      .

    1. to see the changes the commands make. Among the commands, I'd like to use useradd, userdel, usermod, groupadd, groupmod, & groupdel. And, as I'm guessing you are understanding, these are just the ones I've read about today. If I can get away without modifying any files directly, I'd rather be able to do that because it means I'll have a strong grasp of the commands, and I'd be able to learn the editing of smb.conf (& the other files) by seeing how it/they change as I use the commands.

      .

    2. I'm trying to learn enough about Samba that I'm able to do complete administration from the command line. That's a big task, I know, like learning DOS when all I know is French (I know far more DOS than French, but that's the idea).

      .

    3. I have definitely looked at some of the Samba.org instructions. The problem is mine - I'm either too busy dealing with the kids in the morning, or too tired in the evenings, to be able to - within my realm of patience - find what I need, implement it, test it, and confirm that it works or try something else. Finding it, and recognizing that I've found it, is usually the hard part. That's why a book does me worlds of good - I can read it during the work day when I'm taking a few minutes break, and it's uninterrupted concentration time.

      .
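
      For the command-line-only goal described above, these are the Samba-side counterparts to useradd and friends (a hedged sketch; alice is a placeholder):

      ```bash
      sudo smbpasswd -a alice    # enable an existing Unix user for Samba
      sudo smbpasswd -x alice    # remove the Samba account again
      sudo pdbedit -L -v         # list Samba users, to see what each command changed
      testparm                   # validate smb.conf after any edit
      ```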

  2. Aug 2022
    1. The custom title bar has been a success on Windows, but the customer response on Linux suggests otherwise. Based on feedback, we have decided to make this setting opt-in on Linux and leave the native title bar as the default. The custom title bar provides many benefits including great theming support and better accessibility through keyboard navigation and screen readers. Unfortunately, these benefits do not translate as well to the Linux platform. Linux has a variety of desktop environments and window managers that can make the VS Code theming look foreign to users.
    1. If you insist on having the user id in the version table, you can do this:

       ActiveRecord::Base.transaction do
         @user.save!
         @user.versions.last.update_attributes!(:whodunnit => @user.id)
       end

      Not ideal... but we can't set it any earlier because we don't know the id until after the save

    1. Wouldn't it be easier to do a squash merge instead? git merge --squash [branch]

       Reply (Brack Carmony, Jan 3, 2022): It would, assuming every commit in the chain is what you want. This approach lets you keep the power of rebase available if you want to cherry-pick commits or use any of the other crazy features it seems to let you use.
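
      For reference, the squash-merge flow being proposed (branch names are placeholders):

      ```bash
      git checkout main
      git merge --squash feature-branch   # stage the branch's net diff, no merge commit
      git commit                          # one commit on main
      ```
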
    1. Would be more of a neutral rating for me but seeing that I have only two options (or no review at all), I'll go with the upvote for encouragement as they do appear to be putting some effort into the game.

      .

    1. This is actually the most correct answer, because it explains why people (like me) are suddenly seeing this warning after nearly a decade of using git. However, it would be useful if some guidance were given on the options offered; for example, pointing out that setting pull.ff to "only" doesn't prevent you doing a "pull --rebase" to override it.
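
      The choices behind that warning, as commands (a hedged sketch):

      ```bash
      git config --global pull.rebase false   # merge (the historical default)
      git config --global pull.rebase true    # rebase
      git config --global pull.ff only        # fast-forward only

      # As noted above, pull.ff=only does not prevent a one-off override:
      git pull --rebase
      ```
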
    2. I appreciate the time and effort you put into your answer, but frankly this is still completely incomprehensible to me.
    3. one should not upgrade a production environment without extensive testing. I prefer to not upgrade prod at all. Instead, I create a new instance with latest everything, host my apps there, test everything out, and then make it production.
    1. Beyond memory leaks, it's also really useful to be able to re-run a test many times to help with tracking down intermittent failures and race conditions.
    2. I don't understand the hesitation here to accept a really useful addition to rspec.

       Maintenance burden. Foreseen internal changes required to do it. Unforeseen internal changes required to do it. Formatter changes to handle a new output status for a spec that passed and failed. It's simply not a previously designed use case of RSpec. It will be hacky to implement.
    3. We already have a very wide configuration API. The further we expand it the more unwieldy it becomes for users. At this point we generally require new features to be implemented first as extension gems, and then to see support, before considering including them in core.
    4. I created a gem called rspec_n that installs an executable that will do this. It will re-run the test suite N times by default. You can make it stop as soon as it hits a failing iteration via the -s cli option. It will display other stats about the iterations as well.
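
      Usage, as described in the annotation (a sketch; I'm assuming the iteration count is passed as the first argument):

      ```bash
      gem install rspec_n
      rspec_n 10 -s   # run the suite 10 times, stopping at the first failing iteration
      ```
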
    1. You can pass any options to puma via the server setting Capybara.server = :puma, { queue_requests: true }
    2. This very much appears to be a bug or design flaw in puma - The fact that a persistent connection ties up a thread on the chance a request might come over that connection seems like not great behavior. This would really only be an issue when puma is run with no workers (which wouldn't be done in production) but it still seems a little nuts.
    1. "It's difficult because we can't tell people exactly what's allowed and not allowed," said Chris Castelli, a manager for the Department of State Lands. "It's even tougher for law enforcement that gets called out to very heated disputes and doesn't have strict laws they can apply." 
    1. the declaration is statutory

      what does this mean here? what is being clarified or contrasted here? statutory as opposed to what?

    2. The extent of public use varies, with Montana affording the greatest access. Rafters can float and fishermen can wade in rivers that flow through private land so long as they enter from public property. They can even leave the river and walk up to the high-water mark.
    1. I understand that you are bound to the specification. And I also understand that it could take months to decide whether the specification should be changed.

      .

    1. I thought something like git rev-parse --abbrev-ref origin/HEAD would work, but that just seems to show what the default branch was of the repo it was cloned from, at the time of cloning, provided that the remote we cloned from was named origin.

      good enough for my purposes (local git scripts/aliases)!

      ⟫ cat .git/refs/remotes/origin/HEAD
      ref: refs/remotes/origin/main

    2. This is a terrific answer! Without something like locks or transactions, we indeed will only ever be able to get an updated-as-of-when-the-repository-just-told-us point of accuracy that gets stale if changed in the time since then
    3. It's a great way to test various limits. When you think about this even more, it's a little mind-bending, as we're trying to impose a global clock ("who is the most up to date") on a system that inherently doesn't have a global clock. When we scale time down to nanoseconds, this affects us in the real world of today: a light-nanosecond is not very far.
    4. Which of these to use depends on the result you want. Note that by the time you get the answer, it may be incorrect (out of date). There is no way to fix this locally. Using some ESP, imagine the remote you're contacting is in orbit around Saturn. It takes light about 8 minutes to travel from the sun to Earth, and about 80 to travel from the sun to Saturn, so depending on where we are orbitally, they're 72 to 88 minutes away. Any answer you get back from them will necessarily be over an hour out of date.
    5. When we have our git rev-parse examine our Git repository to view our origin/HEAD, what we see is whatever we have stored in this origin/HEAD. That need not match what is in their HEAD at this time. It might match! It might not.
    6. There are many questions we can ask and answer about branch names. Each one is specific to one particular repository because all branch names are local to that particular repository. Any changes anyone makes in that repository affect only that one repository, at least at the time they make them.

      which assumption? well, people make the assumption that our local repo should know some fact about the remote repo, like its default branch, without actually asking the remote about itself

    7. The main problem here is that the problem itself is a little bit poorly defined.
    8. Exaggeration of System Parameters
    9. Using git remote set-head has the advantage of updating a cached answer, which you can then use for some set period. A direct query with git ls-remote has the advantage of getting a fresh answer and being fairly robust. The git remote show method seems like a compromise between these two that has most of their disadvantages with few of their advantages, so that's the one I would avoid.)
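
      The three approaches, side by side (a sketch):

      ```bash
      # Fresh but networked: ask the remote directly.
      git ls-remote --symref origin HEAD

      # Cached: refresh refs/remotes/origin/HEAD, then read it locally.
      git remote set-head origin --auto
      git rev-parse --abbrev-ref origin/HEAD

      # The compromise the answer suggests avoiding:
      git remote show origin | grep 'HEAD branch'
      ```
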
    1. You can use the lsblk command. If the disk is already unlocked, it will display two lines: the device and the mapped device, where the mapped device should be of type crypt.

       # lsblk -l -n /dev/sdaX
       sdaX              253:11 0 2G 0 part
       sdaX_crypt (dm-6) 253:11 0 2G 0 crypt

       If the disk is not yet unlocked, it will only show the device.

       # lsblk -l -n /dev/sdaX
       sdaX              253:11 0 2G 0 part
    1. Bear in mind that lsof doesn't seem to present an easy solution because, once the device is disconnected, the associated names provided by lsof no longer include the name of the disconnected device.
    1. Yes, this happens when luks encrypted device was not cleanly deactivated with cryptsetup close. You can try to remove the mapping using dmsetup remove /dev/mapper/luks-... if you want to avoid rebooting.
    1. We can use the readlink command to resolve relative paths, including symlinks. It uses the -f flag to print the full path:
    1. $0 would be OK in most cases, some exceptions are, for instance, when the script you're executing is aliased (through alias in .bash_profile). You should really use $BASH_SOURCE variable, instead of $0.
    2. Using $0 does not work when the script is run using source script or . script; the name of the script is not available.
    3. MY_PATH=$(cd "$MY_PATH" && pwd) # absolutized and normalized

      scripting: finding absolute path
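
      Putting the pieces above together (a minimal sketch; unlike $0, $BASH_SOURCE also works when the script is sourced):

      ```bash
      #!/usr/bin/env bash
      MY_PATH="$(dirname -- "${BASH_SOURCE[0]}")"
      MY_PATH="$(cd -- "$MY_PATH" && pwd)"   # absolutized and normalized
      echo "$MY_PATH"
      ```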

    1. you can also replicate the bind:this syntax if you please:

       Wrapper.svelte:
       <script>
         let root
         export { root as this }
       </script>

       <div bind:this={root} />

      This lets the caller use it like this: <Wrapper bind:this={root} />

      in the same way we can already do this with elements: <div bind:this=

  3. Jul 2022
    1. Patrician IV is an overhauling upgrade to Patrician III; so if you have not played the previous games in the Patrician series, starting with IV is really all you need. Also, the game of Patrician is very straightforward and addicting, so playing previous versions won't offer you anything unseen in Patrician IV.
    2. Onto the game itself.

      onto

    1. Don't worry if your project isn't quite ready for Plug'n'Play just yet! This guide will let you migrate without losing your node_modules folder. Only in a later optional section will we cover how to enable PnP support, and that part will only be recommended, not mandatory. Baby steps!
    1. Process Substitution is something everyone should be using regularly! It is super useful. I do something like vimdiff <(grep WARN log.1 | sort | uniq) <(grep WARN log.2 | sort | uniq) every day.

      underused
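
      Two more everyday uses, for anyone new to the syntax (file and directory names are placeholders):

      ```bash
      diff <(ls dir_a | sort) <(ls dir_b | sort)   # compare two directory listings
      comm -12 <(sort file_a) <(sort file_b)       # lines common to both files
      ```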

    1. Always use a while read construct:

       find . -name "*.txt" -print0 | while read -d $'\0' file
       do
           …code using "$file"
       done

       The loop will execute while the find command is executing. Plus, this command will work even if a file name is returned with whitespace in it. And, you won't overflow your command line buffer.
    2. Whatever you do, don't use a for loop:

       # Don't do this
       for file in $(find . -name "*.txt")
       do
           …code using "$file"
       done

       Three reasons:
       1. For the for loop to even start, the find must run to completion.
       2. If a file name has any whitespace (including space, tab or newline) in it, it will be treated as two separate names.
       3. Although now unlikely, you can overrun your command line buffer. Imagine if your command line buffer holds 32KB, and your for loop returns 40KB of text. That last 8KB will be dropped right off your for loop and you'll never know it.
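
      A hedged modernization of the recommended loop: IFS= and -r protect leading whitespace and backslashes, and -d '' is the usual spelling of the NUL delimiter:

      ```bash
      find . -name '*.txt' -print0 |
      while IFS= read -r -d '' file; do
          printf '%s\n' "$file"   # …code using "$file"
      done
      ```
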
    1. $0 can be set to an arbitrary value by the caller. On the flip side, $BASH_SOURCE can be empty, if no named file is involved; e.g.: echo 'echo "[$BASH_SOURCE]"' | bash
    2. While this warning is helpful, it could be more precise, because you won't necessarily get the first element: it is specifically the element at index 0 that is returned, so if the first element has a higher index (which is possible in Bash) you'll get the empty string; try a[1]='hi'; echo "$a".
    1. Pre and post commands with matching names will be run for those as well (e.g. premyscript, myscript, postmyscript)

      Could potentially be confusing behavior if running a script does something extra and you don't know why. They might look at the definition of myscript and not see the additional commands and wonder how/why they are running. The premyscript might be lost in a long, unsorted script list.
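
      A tiny package.json illustrating the behavior (script names and commands are hypothetical):

      ```json
      {
        "scripts": {
          "premyscript": "echo runs first",
          "myscript": "echo runs second",
          "postmyscript": "echo runs third"
        }
      }
      ```

      Running npm run myscript executes all three, in that order.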

    2. Since npm@1.1.71, the npm CLI has run the prepublish script for both npm publish and npm install, because it's a convenient way to prepare a package for use (some common use cases are described in the section below). It has also turned out to be, in practice, very confusing. As of npm@4.0.0, a new event has been introduced, prepare, that preserves this existing behavior. A new event, prepublishOnly has been added as a transitional strategy to allow users to avoid the confusing behavior of existing npm versions and only run on npm publish (for instance, running the tests one last time to ensure they're in good shape).
    1. This option wasn’t offered by the library, but that doesn’t have to stop us. Isn’t that fun?
    2. Here’s a quick blog post about a specific thing (making FactoryBot.lint more verbose) but actually, secretly, about a more general thing (taking advantage of Ruby’s flexibility to bend the universe to your will). Let’s start with the specific thing and then come back around to the general thing.
    1. Steer, of course, can also be a noun that refers to male cattle. This meaning is unrelated to the expression steer clear.
    1. The amount of time wasted on this is ridiculous. Thanks. This is about the only thing that worked. Why in the world this wouldn't "just work" by defining the default url options in Rails config/environments/test.rb is beyond me.
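
      The workaround that finally worked for people in that thread is typically along these lines (a hedged sketch; host and placement vary by Rails version):

      ```ruby
      # e.g. in rails_helper.rb or a spec support file
      Rails.application.routes.default_url_options[:host] = "localhost"
      ```
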
    1. The goal of this project is to have a single gem that contains all the helper methods needed to resize and process images. Currently, existing attachment gems (like Paperclip, CarrierWave, Refile, Dragonfly, ActiveStorage, and others) implement their own custom image helper methods. But why? That's not very DRY, is it? Let's be honest. Image processing is a dark, mysterious art. So we want to combine every great idea from all of these separate gems into a single awesome library that is constantly updated with best-practice thinking about how to resize and process images.
    1. It is sublimely annoying to have to configure the exact same parameters in config/environments, spec/spec_helper.rb and again here... all in marginally different ways (with 'http://' or without, with port number or port specified separately). Even Capybara.configure syntax can't seem to stay consistent to itself between versions...
    1. It really only takes one head scratching issue to suck up all the time it saves you over a year, and in my experience these head scratchers happen much more often than once a year. So in that sense it's not worth it, and the first time I run into an issue with it, I disable it completely.
    2. It feels like « removing spring » is one of those unchallenged truths like « always remove Turbolinks » or « never use fixtures ». It also feels like a confirmation bias when it goes wrong.

      "unchallenged truths" is not really accurate. More like unchallenged assumption.

    3. I may have had to turn it off and on again a few times as a debugging technique when I had no other ideas about what to do.
    1. Thanks for making your first contribution to Cucumber, and welcome to the Cucumber committers team! You can now push directly to this repo and all other repos under the cucumber organization! In return for this generous offer we hope you will:

       - Continue to use branches and pull requests. When someone on the core team approves a pull request (yours or someone else's), you're welcome to merge it yourself.
       - Commit to setting a good example by following and upholding our code of conduct in your interactions with other collaborators and users.
       - Join the community Slack channel to meet the rest of the team and make yourself at home. Don't feel obliged to help, just do what you can if you have the time and the energy.
       - Ask if you need anything. We're looking for feedback about how to make the project more welcoming, so please tell us!
    1. A more conservative workaround is to find the gems that are causing issues and list them at the top of your Gemfile.

      good solution ... except that it didn't help/work

    2. A good way to debug what is causing these is put this at the top of your Gemfile:
    1. These directives are inherited from the previous configuration level if and only if there are no proxy_set_header directives defined on the current level.

      This conditional rule for inheritance is different than most other apps/contexts. Usually it just always inherits, and any local config at the current level gets merged with or overrides what is inherited.
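
      A sketch of the trap this creates (backend is a placeholder upstream):

      ```nginx
      http {
          proxy_set_header Host $host;   # inherited by lower levels by default...

          server {
              location /api/ {
                  # ...but defining ANY proxy_set_header here discards ALL
                  # inherited ones, so Host must be repeated:
                  proxy_set_header X-Real-IP $remote_addr;
                  proxy_set_header Host $host;
                  proxy_pass http://backend;
              }
          }
      }
      ```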

    1. By default, this function reads template files in /etc/nginx/templates/*.template and outputs the result of executing envsubst to /etc/nginx/conf.d.

      '
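
      A hedged example of using this with the official image (paths and variable are illustrative): a file at templates/default.conf.template containing ${NGINX_PORT} is rendered to /etc/nginx/conf.d/default.conf at container start:

      ```bash
      docker run --rm -p 8080:80 \
        -v "$PWD/templates:/etc/nginx/templates:ro" \
        -e NGINX_PORT=80 \
        nginx
      ```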

    1. It is "guaranteed" as long as you are on the default network. 172.17.0.1 is no magic trick, but simply the gateway of the network bridge, which happens to be the host. All containers will be connected to bridge unless specified otherwise.
    2. For example I use docker on windows, using docker-toolbox (OG) so that it has less conflicts with the rest of my setup and I don't need HyperV.
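
      Rather than hard-coding 172.17.0.1, the bridge gateway can be queried (a sketch):

      ```bash
      docker network inspect bridge \
        --format '{{ (index .IPAM.Config 0).Gateway }}'
      ```
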
    1. Even with OverloadedRecordDot, Haskell’s records are still bad, they’re just not awful.
    1. Don’t make claims unless you can cite documentation, formalized guidelines, and coding examples to back those claims up. People need to know why they are being asked to make a change, and another developer’s personal preference isn’t a good enough argument.
    1. You are context switching between new features and old commits that still need polishing.
    2. If the code review process is not planned right, it could have more cost than value.
    1. Defects found in peer review are not an acceptable rubric by which to evaluate team members. Reports pulled from peer code reviews should never be used in performance reports. If personal metrics become a basis for compensation or promotion, developers will become hostile toward the process and naturally focus on improving personal metrics rather than writing better overall code.
    1. raise StandardError.new "No authentication is configured for ActiveStorage"

      forces the issue by requiring end-dev to edit/override this method to avoid getting this error

    2. # ActiveStorage defaults to security via obscurity approach to serving links
       # If this is acceptable for your use case then this authenticable test can be
       # removed. If not then code should be added to only serve files appropriately.
       # https://edgeguides.rubyonrails.org/active_storage_overview.html#proxy-mode
       def authenticated?
         raise StandardError.new "No authentication is configured for ActiveStorage"
       end
    1. Stop autoclosing of PRs

       While the idea of cleaning up the PRs list by nudging reviewers with the stale message and closing PRs that didn't get a review in time could work for the maintainers, in practice it discourages contributors from submitting contributions. Keeping PRs open and not providing feedback also doesn't help with contributor motivation, so while I'm disabling this feature of the bot, we still need to come up with a process that will help us keep the number of PRs in check, but celebrate the work contributors already did instead of ignoring it or dismissing it in the form of "stale" alerts and automatically closing PRs.

      Yes!! Thank you!!

    1. I don't understand why it should be so hard to keep issues open / reopen them. That's just going to cause people to open a duplicate issue/PR — or (if they notice in time) cause people to add extra "not stale" noise when the bot warns it's about to be closed. Wouldn't it be preferable to keep the discussion together in one place instead of spreading across duplicate issues? (Similarly, moving the meta conversation about an issue out to a completely separate system (Discord) seems like the wrong direction, because it wouldn't be visible to/discoverable by those arriving at the closed issue.) I get how it's useful to have stale issues not cluttering the list. But if interest/activity later picks up again, then "stale" is no longer accurate and its status should be automatically updated to reflect its newfound freshness... like it did back here:
    2. ActiveSupport.on_load :active_storage_blob do
         def accessible_to?(accessor)
           attachments.includes(:record).any? { |attachment| attachment.accessible_to?(accessor) } || attachments.none?
         end
       end

       ActiveSupport.on_load :active_storage_attachment do
         def accessible_to?(accessor)
           record.try(:accessible_to?, accessor)
         end
       end

       ActiveSupport.on_load :action_text_rich_text do
         def accessible_to?(accessor)
           record.try(:accessible_to?, accessor)
         end
       end

       module ActiveStorage::Authorize
         extend ActiveSupport::Concern

         included do
           before_action :require_authorization
         end

         private

         def require_authorization
           head :forbidden unless authorized?
         end

         def authorized?
           @blob.accessible_to?(Current.identity)
         end
       end

       Rails.application.config.to_prepare do
         ActiveStorage::Blobs::RedirectController.include ActiveStorage::Authorize
         ActiveStorage::Blobs::ProxyController.include ActiveStorage::Authorize
         ActiveStorage::Representations::RedirectController.include ActiveStorage::Authorize
         ActiveStorage::Representations::ProxyController.include ActiveStorage::Authorize
       end

      Interesting, rather clean approach, I think

    3. I'm partial to the solution originally proposed. It follows a pattern already established in Rails. For example, using an application-specific ApplicationStorageController which inherits from ActiveStorage::BaseController is very similar to the ApplicationRecord which inherits from ActiveRecord::Base or ApplicationJob which inherits from ActiveJob::Base.
    4. I think this is important, and I'd love to help making ActiveStorage a more secure place.
    5. it should be normal for production apps to add authentication and authorization to their ActiveStorage controllers. Unfortunately, there are 2 possible ways to achieve it currently: Not drawing ActiveStorage routes and do everything by yourself Override/monkey patch ActiveStorage controllers None of them is ideal because in the end you can't benefit from Rails upgrades (bug fixes, etc) so the intention of this PR is to let people define a parent controller (inspired by Devise, maybe @carlosantoniodasilva can tell us his experience on this feature) so that people can add authentication and authorization in a single place and still benefit from the default controllers.
    1. Create a new controller to override the original: app/controllers/active_storage/blobs_controller.rb

      Original comment:

      I've never seen monkey patching done quite like this.

      Usually you can't just "override" a class. You can only reopen it. You can't change its superclass. (If you needed to, you'd have to remove the old constant first.)

      Rails has already defined ActiveStorage::BlobsController!

      I believe the only reason this works:

      class ActiveStorage::BlobsController < ActiveStorage::BaseController

      is because it's reopening the existing class. We don't even need to specify the < Base class. (We can't change it, in any case.)

      They do the same thing here: - https://github.com/ackama/rails-template/pull/284/files#diff-2688f6f31a499b82cb87617d6643a0a5277dc14f35f15535fd27ef80a68da520

      Correction: I guess this doesn't actually monkey patch it. I guess it really does override the original from activestorage gem and prevent it from getting loaded. How does it do that? I'm guessing it's because activestorage relies on autoloading constants, and when the constant ActiveStorage::BlobsController is first encountered/referenced, autoloading looks in paths in a certain order, and finds the version in the app's app/controllers/active_storage/blobs_controller.rb before it ever gets a chance to look in the gem's paths for that same path/file.

      If instead of using autoloading, it had used require_relative (or even require?? but that might have still found the app-defined version earlier in the load path), then it would have loaded the model from activestorage first, and then (possibly) loaded the model from our app, which (probably) would have reopened it, as I originally commented.

    1. This was a surprise to me, since we generally authenticate the record quite well, but then go on to do something like record.file.url in our view, generating a URL that is permanent and unauthenticated.
    1. meat: https://github.com/musaffa/file_validators/blob/master/lib/file_validators/validators/file_content_type_validator.rb

      Compared to https://github.com/aki77/activestorage-validator, I slightly prefer this because:

      - it has more users and has been battle tested more
      - it is more flexible: you can specify exclude as well as allow
      - it has more expansive Readme documentation
      - it is mentioned by https://github.com/thoughtbot/paperclip/blob/master/MIGRATING.md#migrating-from-paperclip-to-activestorage
      - it mentions security: whether or not it's needed, at least this makes an extra attempt to be secure by using an external tool to check content_type; https://github.com/aki77/activestorage-validator/blob/master/lib/activestorage/validator/blob.rb just uses blob.content_type, which I guess just trusts whatever ActiveStorage gives us (which seems fair too: perhaps this should be kicked up to them to be their concern)

      In fact, it looks like ActiveStorage does do some kind of mime type checking...

      activestorage-6.1.6/app/models/active_storage/blob/identifiable.rb

      ```ruby
      def identify_without_saving
        unless identified?
          self.content_type = identify_content_type
          self.identified = true
        end
      end

      def identify_content_type
        Marcel::MimeType.for download_identifiable_chunk, name: filename.to_s, declared_type: content_type
      end
      ```

    1. Overall, there appears to be no MIME type image/jpg. Yet, in practice, nearly all software handles image files named "*.jpg" just fine.

      Extension != MIME type