20,173 Matching Annotations
  1. Sep 2021
    1. As a bonus, you have the option of choosing a particular version of JavaScript to target when compiling, so you can use updated JavaScript features but still maintain legacy browser support (if you're into that sort of thing)!
    2. React, on the other hand, often requires a fair amount of boilerplate code, even for simple interactions.
    1. The current supported languages out-of-the-box are Sass, Stylus, Less, CoffeeScript, TypeScript, Pug, PostCSS, Babel.
    2. svelte-preprocess doesn't do any kind of type-checking, it just transpiles your ts into js (see it here). If you want to fail your build when a type error is found, you can use svelte-check.
    3. Note: If you want to transpile your app to be supported in older browsers, you must run babel from the context of your bundler.
    4. // No need for babel to resolve modules
       modules: false,
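
      A minimal sketch of where that option lives when babel runs under a bundler such as webpack (the file name and browser targets below are assumptions, not from the quoted page):

        // babel.config.js
        module.exports = {
          presets: [
            ['@babel/preset-env', {
              // Example browser targets; adjust to your own support matrix
              targets: '> 0.5%, not dead',
              // Leave import/export syntax alone so the bundler can resolve
              // modules and tree-shake; no need for babel to resolve modules
              modules: false,
            }],
          ],
        };
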
    1. It is advised to inline any css @import in component's style tag before it hits css-loader. This ensures equal css behavior when using HMR with emitCss: false and production.
    2. hotOptions

      Should be hotReloadOptions to parallel hotReload

    3. while we figure out how to best include HMR support in the compiler itself (which is tricky to do without unfairly favoring any particular dev tooling)
    4. Webpack's resolve.mainFields option determines which fields in package.json are used to resolve identifiers. If you're using Svelte components installed from npm, you should specify this option so that your app can use the original component source code, rather than consuming the already-compiled version (which is less efficient).
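
      A sketch of the webpack.config.js fragment this describes, loosely following svelte-loader's documented setup (treat the exact field order and extensions as assumptions):

        module.exports = {
          resolve: {
            // Prefer the "svelte" field so npm-installed components are consumed
            // as original source rather than the already-compiled output
            mainFields: ['svelte', 'browser', 'module', 'main'],
            extensions: ['.mjs', '.js', '.svelte'],
          },
        };
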
    1. Code that is needed to create the output and the output itself is hard to read because of all the workarounds we have to use, especially around shadowed variables and control flow
    1. Look at local job ads and see what they want.
    2. Do you have practical skills? Can you build useful things?
    3. Don't get me wrong, you need to know your stuff technically. But at the end of the day, the interviewer is asking themselves, “would I like to work with this person?”
    4. I would suggest focusing on interview skills. It doesn't matter how good you are if you can't communicate that to the interviewer.
    5. I was able to secure my current full time job because I was the best communicator and had one of the best interviews. I know my coding skills were nowhere near other candidates but I was told that they couldn’t communicate well.
    1. They are deliberately dumbing the browser down further and further and it'll probably end up eventually becoming completely unusable because of this.
    2. (They blame Chrome's "feature" addition treadmill, where "they keep adding stupid kitchen sinks for the sole and only purpose to make others unable to keep up.")
    1. I have never seen the @Stale bot or any directly equivalent to it achieve a net positive outcome. Never. It results in disgruntled people, extra expenditure of effort (for reporters and maintainers), real stuff getting lost when people get fed up with poking the bot (I have no intention of poking it further), and more extensive filing of duplicates. You say a simple comment dismisses it, but it doesn’t—it only does this time. Next time, it continues to annoy. This is an issue tracker. Use labels, projects, milestones and the likes for prioritising stuff. Not sweeping stuff under the carpet.
    2. It is an issue tracker but we don't have a backlog, or planning sessions, or a project board. Or the resources to even triage and tag effectively. If it is important someone will respond / reopen, popular issues are exempt from the bot, we can't fix everything and this is pretty much our only view on stuff that need to be addressed. We need to make some attempt to make sure that everything is still relevant and reduce the noise to a degree where we can actually manage it. I understand the trade-offs with stale bots but we don't have many options. I appreciate your experiences but that doesn't make them a fact. We have discussed this internally and this is what we are doing. If you have any other actionable alternatives outside of saying the bot is bad then we are all ears.
    3. Closing issues doesn’t solve anything. Closing issues in GitHub just sweeps them under the carpet and helps everyone to forget about them, which is just not what you want—the fact that GitHub search excludes closed items by default is a large part of the problem with it. As applied to software projects with general-purpose issue trackers, the @Stale bot is fundamentally a phenomenally bad idea, a road paved with good intentions. I presented an actionable alternative: labels. Possibly automatically applied, but it’s certainly better to spend a little bit of time on manual triage. It honestly doesn’t take long to skim through a few hundred issues and bin them into labels. 609 open issues? That’s honestly not a problem. Not at all. There’s nothing wrong with having a large number of issues open, if they do correspond to real things—even things that you may not expect to get to for years, if ever, because that might change or someone might decide they want to deal with one. Closing issues that aren’t dealt with is bad. Please don’t do it.
    4. This is the wrong place for this conversation though.
    5. Most issues have been manually labelled as stale rather than automated and closure will be manual too, so we have time to think.

      manual action time to think

    1. Use this to load modules whose location is specified in the paths section of tsconfig.json when using webpack. This package provides the functionality of the tsconfig-paths package but as a webpack plug-in. Using this plugin means that you should no longer need to add alias entries in your webpack.config.js which correspond to the paths entries in your tsconfig.json. This plugin creates those alias entries for you, so you don't have to!
    2. Notice that the plugin is placed in the resolve.plugins section of the configuration. tsconfig-paths-webpack-plugin is a resolve plugin and should only be placed in this part of the configuration. Don't confuse this with the plugins array at the root of the webpack configuration object.
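
      A minimal sketch of that placement, assuming a tsconfig.json that already defines "baseUrl" and "paths" (the require form follows the plugin's README of that era):

        const TsconfigPathsPlugin = require('tsconfig-paths-webpack-plugin');

        module.exports = {
          resolve: {
            // Goes in resolve.plugins, not the top-level plugins array
            plugins: [new TsconfigPathsPlugin()],
          },
        };
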
    1. The declarations you make in the tsconfig.json are re-stated in the webpack.config.js. Who wants to maintain two sets of code where one would do? Not me.
    2. When I look at the tsconfig.json and the webpack.config.js something occurs to me: I don't like to repeat myself. As well as that, I don't like to repeat myself. It's so... Repetitive.
    3. Let's not get over-excited. Actually, we're only part-way there; you can compile this code with the TypeScript compiler.... But is that enough? I bundle my TypeScript with ts-loader and webpack. If I try and use my new exciting import statement above with my build system then disappointment is in my future. webpack will be all like "import whuuuuuuuut?" You see, webpack doesn't know what we told the TypeScript compiler in the tsconfig.json.
    4. import * as utils from '../../../../../../../shared/utils';
    5. Which do you prefer? If the answer was "the first" then read no further. You have all you need, go forth and be happy.

      good example of: not just assuming people are dissatisfied / will want to change

    1. webpack-bot commented on Nov 13, 2020 For maintainers only: webpack-4 webpack-5 bug critical-bug enhancement documentation performance dependencies question
    2. This is not dumb at all. It is exceedingly common to use aliases. It's not about being lazy, it's about writing portable code.
    3. Saying that web devs used to be fine with relative imports is like saying that human beings used to be fine living without refrigerators. Sure we did. But was it better than it is now? No. No, it wasn't.
    4. config: path.resolve(__dirname, '../config'),
       vue: 'vue/dist/vue.js',
       src: path.resolve(__dirname, '../src'),
       store: path.resolve(__dirname, '../src/store'),
       assets: path.resolve(__dirname, '../src/assets'),
       components: path.resolve(__dirname, '../src/components'),
       '@': path.resolve(__dirname, '../src'),
    6. alias: {
         _self: path.join(__dirname, 'src/web'),
         _shared: path.join(__dirname, 'src/shared'),
         _components: path.join(__dirname, 'src/web/components'),
         _helpers: path.join(__dirname, 'src/web/helpers'),
         _layers: path.join(__dirname, 'src/web/layers'),
         _mutations: path.join(__dirname, 'src/web/mutations'),
         _routes: path.join(__dirname, 'src/web/routes')
       }
    6. alias: {
         '@components': path.join(srcDir, 'components'),
         '@modules': path.join(srcDir, 'modules'),
         '@store': path.join(srcDir, 'store')
       }
    7. Aliases are absolute nonsense for resolving imports. If you don't want to type ../ consider using something like path.resolve(__dirname, '../src') so you can do import Stuff from 'client/components/stuff'; // relative to root of project instead of: import Stuff from 'COMPONENTS/stuff'; // this is dumb
    8. alias: {
         '@shared': path.dirname(require.resolve('app')),
         '~': path.join(fs.realpathSync(process.cwd()), 'app'),
       },
    9. In this example, @shared is the package, ~ is the project. I wouldn't do it this way in the future, but I know this configuration works.
    10. This particular project has a differentiation between a package's app/ folder and the current project's app/ folder.
    1. alias: {
         Library: path.resolve(__dirname, "root/library/"),
         Single: path.resolve(__dirname, "root/test.js"),
       },
    2. alias: { Single$: path.resolve(__dirname, "root/test.js"), },
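
      How the trailing $ changes matching, following webpack's resolve.alias docs (the import statements below are illustrative):

        // With the plain "Single" alias, anything starting with "Single" is rewritten:
        import A from 'Single';          // -> root/test.js
        import B from 'Single/extra.js'; // -> tries root/test.js/extra.js, which errors
                                         //    because the alias points at a file

        // With "Single$", only the exact specifier matches:
        import C from 'Single';          // -> root/test.js
        import D from 'Single/extra.js'; // alias ignored; normal module resolution is used
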
    1. Gems use a period and packages use a dot

      Probably a false distinction, because "packages" is used in a way that it implies a distinction from "gems", when in actuality

      1. gems are packages, too (Ruby packages)
      2. it's referring specifically to JavaScript/node/npm packages,

      ... so there is only truly a distinction if you are specific enough to say JavaScript packages.

    2. (Gems use a period and packages use a dot between the main version number and the beta version.)
    3. Update API usage of the view helpers by changing javascript_packs_with_chunks_tag and stylesheet_packs_with_chunks_tag to javascript_pack_tag and stylesheet_pack_tag. Ensure that your layouts and views will only have at most one call to javascript_pack_tag or stylesheet_pack_tag. You can now pass multiple bundles to these view helper methods.

      Good move. Rather than having 2 different methods, and requiring people to "go out of their way" to "opt in" to using chunks by using the longer-named javascript_packs_with_chunks_tag, they changed it to just use chunks by default, out of the box.

      Now they don't need 2 similar but separate methods that do nearly the same, which makes things simpler and easier to understand (no longer have to stop and ask oneself, which one should I use? what's the difference?).

      You can't get it "wrong" now because there's only one option.

      And by switching that method to use the shorter name, it makes it clearer that that is the usual/common/recommended way to go.

    4. If you fail to change this, you may experience performance issues and other bugs related to multiple copies of React, like issue 2932.
    5. Webpacker used to configure Webpack indirectly, which led to a complicated secondary configuration process. This was done in order to provide default configurations for the most popular frameworks, but ended up creating more complexity than it cured. So now Webpacker delegates all configuration directly to Webpack's default configuration setup.

      more trouble than it's worth

      • creating more complexity than it cured
    6. Webpacker has become a slimmer wrapper around Webpack
    1. What's the reasoning behind this change?
    2. TylerRick
    3. Yeah I don’t think we will find something that works for everyone in all cases. But Webpacker is quite flexible with the setup it has now. Easy to change!
    4. I feel like app/packs (or something like it) is a good name because it communicates to developers that it's not just JavaScript that can be bundled, it's also CSS, images, SVGs — you name it. I realize what can be bundled is wholly dependent on the bundler you use, but even esbuild supports bundling CSS. So couldn't this possibly be confusing?
    1. Analytics modules that run in the background, monitor user interaction, and send the data to a server.
    2. Many jQuery plugins attach themselves to the global jQuery object.
    3. A polyfill for example, might not do anything, because it finds that the feature that it enables is already supported by the browser.
    4. A module with side-effects is one that changes the scope in ways other than returning something; its effects are not always predictable and can be affected by outside forces (it is not a pure function).
  2. developer.mozilla.org developer.mozilla.org
    1. Import a module for its side effects only Import an entire module for side effects only, without importing anything. This runs the module's global code, but doesn't actually import any values.
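
      The syntax being described (the module path is a made-up placeholder):

        // Runs ./polyfills.js for its top-level side effects (e.g. patching globals)
        // without binding any of its exports in this file.
        import './polyfills.js';
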
    1. In 3.1.0, we used the oneOf option, which solved this problem, but then loaders which were using multiple transformations started failing (erb loader), since it was using the first matching loader from the list.
    1. Thanks @gj ! That's the best config help response I've ever gotten--changed the regex as you outlined, removed the md.js file I had added, and updated environment.js you post and it worked perfectly.
    2. WARNING in ./app/javascript/components/ComponentLibrary/Docs/Intro.md Module parse failed: Unexpected character '#' (1:0) You may need an appropriate loader to handle this file type.
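
      A sketch of the kind of environment.js change the thread is about, assuming Webpacker's loaders.append API and raw-loader (both assumptions; the actual fix posted by @gj may have differed):

        // config/webpack/environment.js
        const { environment } = require('@rails/webpacker');

        environment.loaders.append('markdown', {
          test: /\.md$/,
          use: ['raw-loader'],
        });

        module.exports = environment;
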
    1. Why can you remove it? The loader will first try to resolve @import as a relative path. If it cannot be resolved, then the loader will try to resolve @import inside node_modules.
    2. Using ~ is deprecated and can be removed from your code (we recommend it)
    3. ℹ️ We highly recommend using Dart Sass.
    4. ⚠ Node Sass does not work with Yarn PnP feature and doesn't support @use rule.
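
      A webpack rule that opts into Dart Sass explicitly, assuming sass-loader's "implementation" option and the sass package (a sketch, not the loader's required setup):

        module.exports = {
          module: {
            rules: [
              {
                test: /\.s[ac]ss$/i,
                use: [
                  'style-loader',
                  'css-loader',
                  {
                    loader: 'sass-loader',
                    // Dart Sass instead of node-sass
                    options: { implementation: require('sass') },
                  },
                ],
              },
            ],
          },
        };
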
    1. Webpack 5 no longer polyfills Node.js core modules automatically which means if you use them in your code running in browsers or alike, you will have to install compatible modules from npm and include them yourself. Here is a list of polyfills webpack has used before webpack 5:
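
      A sketch of restoring a few of these by hand in webpack 5 via resolve.fallback (the module choices are examples, not a complete list):

        module.exports = {
          resolve: {
            fallback: {
              crypto: require.resolve('crypto-browserify'),
              stream: require.resolve('stream-browserify'),
              buffer: require.resolve('buffer/'),
              // or `false` to ship no polyfill at all for a given module
              fs: false,
            },
          },
        };
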
    1. You can help make Node.js and browsers more unified. For example, Node.js has util.promisify, which is commonly used. I don’t understand why such an essential method is not also available in browsers. In turn, browsers have APIs that Node.js should have. For example, fetch, Web Streams (The Node.js stream module is awful), Web Crypto (I’ve heard rumors this one is coming), Websockets, etc.
    2. This sucks! I agree. Go complain on the Webpack issue tracker. They caused this.
    3. The main reason I love Node.js is that I don’t have to deal with the awfulness that is JS front-end tooling.
    4. I am not going to do Webpack support. I’ve been pretty lenient in the past and answered most Webpack support questions, but it takes a lot of my time that I could have spent on more important things. I will instead refer users to the Webpack support channels.
    5. Users think every Webpack tool/config problem is a problem with a specific package and opens an issue asking for support on the package instead of Webpack. In the past year alone, I’ve had to deal with hundreds of Webpack issues on my repos.
    6. The problem is that Webpack created convenience by automatically polyfilling and then now suddenly took it away.
    1. The Rails server will also compile your assets if the dev server is not running, but this is much slower vs running separate processes and not recommended.
    2. Run the Rails server (bin/rails s) and the Webpack Dev Server (bin/webpack-dev-server) via your preferred method. Two terminal tabs will work or create a Procfile and run via overmind or foreman.
    3. Please consider sharing 🙏

      first sighting: "Share" metadata

    4. This page has changed since first posted, refer to the changelog at the bottom.
    1. Some would argue that the phrase ''survival of the fittest'' is tautological, in that the fittest are defined as those that survive to reproduce.
    1. Click below to download free plans for building my Dado Engine jig. It lets you safely rout dado grooves for cabinets and shelves with the perfect width of groove to match your sheet material.
    2. Cut a dado groove with a 3/4” diameter router bit and you’ll almost certainly have a too-loose joint when you try to plug some 3/4” plywood in place. Under the guise of metrification, sheet material thicknesses have all shrunk enough to cause problems with joinery if you rely on the old, Imperial thickness designations. And besides, material thickness varies enough from sheet to sheet that it can make a difference when it comes to prominent joinery. This is even true in the USA, which still uses Imperial more or less exclusively. Sheet goods remain thinner than their name specifies.
    1. Unfortunately, it's too late to make the question more specific as this would invalidate some of the (good) answers.
    2. This question is broad and not very clear -- with the result that the following answers cover quite different scenarios and use cases.
    1. It is a descendant of the "Object Mother" pattern for creating objects for testing, and is related to the concept of an "object exemplar" or stereotype.

      Object Daddy < Object Mother

    1. I first learned to love the functionality of Dave Thomas' annotate_models plugin (you can find a repo for it here). Later, when it became un-maintained and broke, I switched over to ctran/annotate. Then, when work on it waned and it broke as well, I decided to write my own as an exercise.
    1. One good use for /dev/tty is if you're trying to call an editor in a pipeline (e.g., with xargs). Since the standard input of xargs is some list of files rather than your terminal, just doing, e.g., | xargs emacs will screw up your terminal. Instead you can use | xargs sh -c 'emacs "$@" </dev/tty' emacs to connect the editor to your terminal even though the input of xargs is coming from elsewhere.
    1. An extensible plugin architecture allows for customizing your workflow or even making Yarn a package manager for non-JavaScript projects.
    2. (Yeah, npm 7 has these too, but Yarn 2’s implementation is more expressive.)
    3. Workspaces make monorepo-style projects more manageable.
    4. This is no different from other popular libraries or frameworks making huge architectural changes (think React 16.8 with hooks or Python 3). The longer you wait to make the switch, the more painful it will be for your project when you finally do. And in the meantime, you’ll be missing out on valuable improvements to a fundamental part of the workflow of every single project you work on.
    5. it’s time to reconsider that decision. Here are three reasons you might have waited to make the switch — and why those reasons are out of date in 2021.
    6. If you don't learn from history, you're doomed to rebase it.
    1. Quora+ is a subscription to the best of Quora. Access great writing, straight-from-the-source knowledge, and stories you can’t find anywhere else while supporting creators who matter to you.

      Another example of a service that tries to entice users with a free service (and writers with a financial incentive) and then once they achieve enough popularity, they make some of "their" content "premium".

      (YouTube Premium, ...)

      This is why we should distrust and avoid using "free" services.

    1. My understanding is that the caret is the answer for traditional SemVer, i.e., there will be breaking changes prior to 0.1.0, there may be breaking changes between minor versions prior to 1.0.0, and there may only be breaking changes between major versions above 1.0.0.
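
      The same rules, checked against the npm semver package (the version numbers are arbitrary examples):

        const semver = require('semver');

        semver.satisfies('1.9.0', '^1.2.3'); // true:  same major once >= 1.0.0
        semver.satisfies('2.0.0', '^1.2.3'); // false: a major bump may break
        semver.satisfies('0.1.5', '^0.1.2'); // true:  before 1.0.0, only the same minor is compatible
        semver.satisfies('0.2.0', '^0.1.2'); // false
        semver.satisfies('0.0.4', '^0.0.3'); // false: before 0.1.0, every release may break
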
    1. I find it much simpler to use a partition label with LABEL=.... It is shorter, easier to remember, and also has the advantage that should the partition go bad and need to be replaced you can create a new partition, give it the same label provided the old partition is either removed or at least changed to be unlabelled and fstab will never know the difference.
    1. sudo apt-get autoclean
       sudo apt-get autoremove
       sudo apt-get clean
       sudo apt update
       sudo apt-get dist-upgrade --fix-missing
       sudo apt-get dist-upgrade --fix-broken
       sudo apt full-upgrade
       sudo apt -f install
       dpkg --configure -a
    1. But it is always important to remember that those are not language concepts. Those are community concepts that only exist in our heads and in the names of some library methods.

      I'm not sure about this. I get what he's saying and agree that singleton methods are nothing but a naming convention for the more fundamental/atomic construct called instance methods (which indeed are the only kind of method that exist in Ruby, depending how you look at it), but I think I would actually say that singleton methods are language concepts because those methods like Object#define_singleton_method, ... are always available in Ruby (without needing to require a standard library first, for example). In other words, I would argue that something belonging in the Ruby core "library" (?) by definition makes it part of the language -- even if it in turn builds on even lower-level Ruby language features/constructs.

    2. have a philosophy that if someone can provide any more meaningful information to a problem even if it indirectly solves the problem, I think that should also be rewarded.
    3. Note: when I wrote above that "there is no such thing as X", what I meant was that "there is no such thing as X in the Ruby language". That does not mean that those concepts don't exist in the Ruby community.
    4. The important thing to understand is that there is no such thing as a class method in Ruby. A class method is really just a singleton method. There is nothing special about class methods. Every object can have singleton methods. We just call them "class methods" when the object is a Class because "singleton method of an instance of Class" is too long and unwieldy.
    5. Class methods are actually instance methods defined on the singleton class of a class.
    6. The question is similar but it's in a Rails context. The solutions would answer my question, but I'm almost certain that he could probably leverage Arel to solve his problem. The question I posted was designed purely as a Ruby question so that it was easier to search for. You might want to suggest an edit of the title of his question because it didn't show up when I searched for a solution to my problem.
    7. Yes, unfortunately the other question has a misleading and completely irrelevant Rails context and might be harder to find for some people. IMHO, it's still a perfect content duplicate, although not a topic one. Answers are also equal. Anyways, still a good question of yours.
    8. Side note: When I flagged yours as a dupe during review, the review system slapped me in the face and seriously accused me of not paying attention, a ridiculous claim by itself since locating a (potential) dupe requires quite a lot of attention.
    1. Thanks to Rack Middleware and Rails 3 you can output pretty JSON for every request without changing any controller of your app. I have written such a middleware snippet and I get nicely printed JSON in browser and curl output.
    1. This is a really frustrating problem.
    2. One solution is to run this command to reset your keyring password: rm ~/.local/share/keyrings/login.keyring
    3. I am being told my Login Keyring Password "no longer matches" my login. I am confused - I provided a password as I was setting this up, and so I don't know what this is about and how I can fix it. Thanks for the help.
    4. Usually you get this error if you change your password by some other means which fails to update the password for the keyring.
    1. I prefer legacy to UEFI because it's easier to move the OS from the installation SSD to the mdadm RAID0
    2. I keep detailed records of my installation and configuration process so that I can quickly find out where something went wrong.
    3. Indeed yes, but sometimes it is necessary to change one's password, even if one is not 'mucking about with' or 'tweaking' or 'customising' any other system settings.
    4. It seems to me (N.b. what do I know about this? Nothing!) that the best solution would be to tweak the 'Change Password' process so that it also updates the 'Passwords and Keys'>Passwords>Login folder's properties.

      "I'm not an expert, but it seems to me..."

    5. this isn't a question, it's a warning for the unwary new Mint user (i.e. people like me)
    1. I don't recommend the game, but since it is very inexpensive, you can try it for yourself and not be out a lot of money, so this review might not be necessary, but I'm writing it anyway because I have lots of thoughts that I don't see reflected in the first reviews I see on this first Store Page of Squidlit, although I haven't read every review, which would require a lot of time.

      :-)

    1. Remote Access is something that we are really excited about because it will allow our support team to give you a seamless and high level of support that is truly unmatched. When you need extra help, you can enable the Remote Access toggle with a single click. This will send a secure token to the Elegant Themes support staff that they can use to log in to your WordPress Dashboard. No passwords are shared and there is no need to send the token to our team yourself. It all works seamlessly in the background.

       While remote access is enabled, our team will be able to log in to your website and help explore whatever problems you are experiencing. You can even enable it preemptively before chatting with our support team so that we can jump right in if necessary. By default, our support staff will have limited access to your website using a custom WordPress support role. You can also enable full admin access if requested. Remote access is automatically disabled after 4 days, or when you disable Divi. You can also turn it off manually after an issue has been resolved, and of course, Remote Access can only be enabled by you, the website owner, and not by Elegant Themes or anyone else.

       The Remote Access system is wonderful because it saves tons of time during support chat, and it saves you the hassle of having to debug certain complicated issues yourself. It allows us to take a hands on approach to solving problems quickly, instead of wasting hours or days chatting back and forth.
    1. One workspace / platform / source of truth. Endless solutions. Orchestrate powerful business solutions with a single source of truth. The only limit is your imagination.
    2. From day one, your team will love the familiarity of a spreadsheet, and the power of a database.
    3. Airtable evolves with you and your team, so you can build a solution with increasing sophistication and capability.
    1. As criticisms go, “it was too addictive and I finished it in a few hours” isn’t exactly the worst thing you can say about a game. In fact, as someone who prefers quality over quantity,

      .

    1. Three days before Labor Day, on Friday, September 2, 1921, the U.S. Army intervened on the side of coal companies against striking coal miners, marking the end of the Battle of Blair Mountain in southern West Virginia. The battle was the climax of two decades of low-intensity warfare across the coalfields of Appalachia, as the West Virginia miners sought to unionize and mining companies used violent tactics to undermine their efforts. The struggle turned deadly.
    1. first sighting: A Forward link at bottom of an e-mail, which takes you here, which has a link to a preview (which is basically a web version of the e-mail that was sent).

      In some ways, this seems preferable over forwarding the original e-mail that you received using your e-mail client's forward feature. In particular:

      • It doesn't inadvertently include your personalized unsubscribe link, allowing the forwarded-to person to maliciously unsubscribe you without your consent.
    1. a class of attacks that were enabled by Privacy Badger’s learning. Essentially, since Privacy Badger adapts its behavior based on the way that sites you visit behave, a dedicated attacker could manipulate the way Privacy Badger acts: what it blocks and what it allows. In theory, this can be used to identify users (a form of fingerprinting) or to extract some kinds of information from the pages they visit
    1. If you are riding a bicycle, you can get through the roundabout two ways: Get off your bike and go through a crosswalk like a pedestrian, or ride around it like a vehicle.

      .

    1. target="_blank" which opens the anchor in a new window (which is usually redirected to a tab by browser settings)

      new window => new tab

    2. Instead, if this anchor were nested in frames, it would open in a sandbox mode of sorts, meaning only in that frame.
    1. In thermodynamics, a diathermal wall between two thermodynamic systems allows heat transfer but does not allow the transfer of matter across it
    1. There is a huge explanation about why the dot is important quoting issues about DNS and character encoding

      It doesn't seem like the dot, in this context, would have anything to do with/help with either DNS or character encoding

    2. But I realized after a lot of research that the problem was that I did not copy the right URL address from the iTunes API documentation. It should have been https://itunes.apple.com/search?term=jack+johnson. not https://itunes.apple.com/search?term=jack+johnson Notice the dot at the end There is a huge explanation about why the dot is important quoting issues about DNS and character encoding but the truth is you probably do not care. Try adding the dot it might work for you too. When I added the "." everything worked like a charm.
    1. Mod note: This question is about why XMLHttpRequest/fetch/etc. on the browser are subject to the Same Access Policy restrictions (you get errors mentioning CORB or CORS) while Postman is not. This question is not about how to fix a "No 'Access-Control-Allow-Origin'..." error. It's about why they happen.
    2. Applying a CORS restriction is a security feature defined by a server and implemented by a browser.
    3. When you are using postman they are not restricted by this policy. Quoted from Cross-Origin XMLHttpRequest: Regular web pages can use the XMLHttpRequest object to send and receive data from remote servers, but they're limited by the same origin policy. Extensions aren't so limited. An extension can talk to remote servers outside of its origin, as long as it first requests cross-origin permissions.
    4. Please stop posting:
       • CORS configurations for every language/framework under the sun. Instead find your relevant language/framework's question.
       • 3rd party services that allow a request to circumvent CORS
       • Command line options for turning off CORS for various browsers
    1. The 2 stars this show received on average is really an inaccurate and lazy way to characterize the hard work of the cast and crew. The filmography is amazing and scenery beautiful, the characters are rich and complete, the people are beautiful. Few shows engage me, this one did.

      unfairly bad rating

    1. To ensure that our rating mechanism remains effective, we do not disclose the exact method used to generate the rating. 

      secret / not transparent

    1. My PC and where I sit when I watch Netflix are not the same place, not at all. And that's beside the point: why in God's name does Netflix feel the need to pause? It has an auto-play function and an auto-pause function that both can't be turned on or off, what the?
    1. According to Netflix, the Netflix app asks this question to prevent users from wasting bandwidth by keeping a show playing that they’re not watching. This is especially true if you’re watching Netflix on your phone through mobile data. Every megabyte is valuable, considering that network providers impose strict data limits and may charge exorbitant rates for data used on top of your phone plan. Of course, this saves Netflix bandwidth, too—if you fall asleep or just leave the room while watching Netflix, it will automatically stop playing rather than streaming until you stop it. Netflix also says this helps ensure you don’t lose your position in a series when you resume it. If you fall asleep in the middle of your binging session, you might wake up to find that several hours of episodes have played since you stopped watching. It will be difficult for you to remember when you left off.
    2. If you watch Netflix via the desktop website, one way to disable the prompt is by using a browser extension called “Never Ending Netflix” for Google Chrome.

      unofficial/fan-made extension

    1. It's kind of ridiculous that it does it 2 minutes in, it should just prompt before it starts playing the episode

      bad UX

    2. The reason is they don't want to stream to an empty house, and want to make sure you're there to watch the content (pandora does the same thing).
    3. it's always the 3rd episode in a row, 2 minutes into the 3rd.
    4. my problem is that it doesn't ask you at the end of the previous episode, or even at the beginning of the current one. It waits to interrupt what you're watching

      bad UX

    5. I called Netflix and they told me that there was no option to change this. So I asked why? He told me that nobody was requesting this as a feature. So I said I'm requesting this feature: the ability to select continuous play without the pause, or a play-all without any pauses. He says he would put in my request but so far nothing has been done. Then I asked if Netflix had a Feedback or Feature Request page; he said their Facebook, Twitter, or to call Netflix and request it.

      how to submit feature request

    6. they could just say they're putting in your request by phone without doing anything at all

      skeptical

  3. Aug 2021