20,173 Matching Annotations
- Jun 2021
-
github.com
-
AnyCable allows you to use any WebSocket server (written in any language) as a replacement for your Ruby server (such as Faye, Action Cable, etc).
-
-
github.com
-
Whether you agree or not, to me there's nothing in this world that is entirely apolitical - when there are people there is politics. You don't have to agree with my views, nor do I expect you to. Diversity and disagreement is what drives mankind forward.
-
In the end this plugin is a piece of software that I wrote and I'm just doing what I think is reasonable to make our community more inclusive.
- doing what one believes is best for community
-
As aforementioned, the usage of master as a branch most likely originated from the first meaning
The meaning:
An original recording, film, or document from which copies can be made.
makes more sense to me. Why would they have meant the other meaning?
-
I completely understand that master has two meanings: A man who has people working for him, especially servants or slaves; and An original recording, film, or document from which copies can be made.
-
-
so by adopting git installations with the latest source code you're effectively agreeing to go bleeding-edge. I would assume that means you're ready for any breaking changes and broken installations, which is what happened here.
-
There are many projects that do not use the master branch as the default. For example, Next.js uses the canary branch, while the npm CLI and many other projects use names like prod, production, dev, develop, release, beta, and head.
-
I'm not sure if there's any cost in terms of contributing either, especially when by design git can have any branch as default, and will not hinder your experience when you use something other than master.
git is neutral/unbiased/agnostic about default branch name by design
And that is a good thing
-
It just happens that most projects chose to be "lazy" (stick to the default) and opted to use master
-
to be honest I think it is more about sentiment than actual engineering practices now.
-
Forcing people out of the habit to assume this branch would be called master, is a valuable lesson.
-
The primary branch in git can have any name by design.
-
Well, there are a lot of reasons, with the main reason being that I am empathetic to what is happening out there and I agree with many other people that we should re-examine our choice of words to make the industry more inclusive.
-
Personally I think it is a very bad idea to leverage political views, even if I may share them, through software.
-
I think it's just a bad English/mis-translation problem. I'm guessing @pmmmwh assumed 'master' meant like 主 in 奴隸主 (slave owner/master). Actually a better translation would be 師 like 功夫大師 (Kung Fu master): the specimen copies are made from.
-
The specimen copies are made from.
-
On existing projects, consider the global effort to change from origin/master to origin/main. The cost of being different than git convention and every book, tutorial, and blog post. Is the cost of change and being different worth it?
-
In the context of git, the word "master" is not used in the same way as "master/slave". I've never known about branches referred to as "slaves" or anything similar.
-
I'm glad I never got a master's degree in college!
-
My 3 projects were using your lib and got broken thanks to the renaming.
-
Tags
- using cutting-edge/pre-release tech
- git
- high-cost changes
- inoffensive/inclusive/politically correct wording
- unintentionally breaking something
- intentional/well-considered decisions
- unintended consequence
- most people choose the lazy/default option
- git: changing from master branch to main
- neutral/unbiased/agnostic
- lost in translation
- explaining why
- I disagree
- I like this
- nothing is apolitical where people are involved
- good point
- confusing wording
- is it worth it?
- the cost of changing something
- good question
- annotation meta: may need new tag
- no arbitrary limitation
- by design
- this is a good thing
- word senses
- despite:
- forcing people out of a habit
- alternative to mainstream way
- git: default branch
- sentiment vs. good/rational reasons
- re-examining/challenging long-established traditions
- valuable lesson
- funny
- sharing/spreading political views through software
- ambiguous
- I agree
- poor/confusing wording
- diversity
- separation of personal/political views from professional activity
- questioning/challenging long-held traditions/beliefs/habits
- being inclusive
- doing what one believes is best
- words with multiple different meanings (ambiguity)
- wording
- do pros outweigh/cover cons?
- doing something other than the most common/popular option
- is using bleeding-edge tech risky?
- words with multiple different meanings: master
- you don't have to agree with my views
Annotators
URL
-
-
www.theserverside.com
-
-
However, the term master is out of favor in the computing world and beyond.
-
"While it takes time to make these changes now, it's a one-time engineering cost that will have lasting impacts, both internally and externally," Sorenson said in an email. "We're in this for the long game, and we know inclusive language is just as much about how we code and what we build as it is about person-to-person interactions."
-
"I really appreciate the name change [because] it raises awareness," said Javier Cánovas, assistant professor in the SOM Research Lab, at the Internet Interdisciplinary Institute at the Open University of Catalonia in Barcelona. "There are things that we accept as implicit, and we then realize that we can change them because they don't match our society."
-
the benefits of GitHub renaming the master branch to main far outweigh any temporary stumbling blocks. He said the change is part of a broader internal initiative to add inclusive language to the company's systems. His team is also replacing whitelist and blacklist with allowlist and blocklist.
-
"Both Conservancy and the Git project are aware that the initial branch name, 'master,' is offensive to some people and we empathize with those hurt by the use of that term," said the Software Freedom Conservancy.
-
Let's examine why GitHub renamed the master branch to main branch and what effect it will have on developers.
Tags
- long term / long game
- good explanation
- falling out of favor
- inoffensive/inclusive/politically correct wording
- one-time cost
- raising awareness
- potentially offensive/non-inclusive wording/terms
- things we accept as implicit
- git: changing from master branch to main
- wording designed to be more palatable/pleasing/inoffensive
- inclusive language
- explaining why
Annotators
URL
-
-
github.com
-
The emphasis was placed on the raw CDP protocol because Chrome allows you to do many things that are barely supported by WebDriver, since WebDriver has to keep its design consistent across browsers.
compatibility: need for compatibility is limiting:
- innovation
- use of newer features
-
-
Runs headless by default, but you can configure it to run in a headful mode.
first sighting of term: headful
-
There's no official Chrome or Chromium package for Linux, so don't install it this way: such packages are either outdated or unofficial, and both are bad. Download it from the official source.
-
Ferrum connects to the browser by CDP protocol and there's no Selenium/WebDriver/ChromeDriver dependency.
Tags
- Selenium/WebDriver
- using cutting-edge/pre-release tech
- outdated
- software distribution
- browser: headful vs. headless
- testing: CDP-based
- compatibility: need for compatibility is limiting
- testing: non-Selenium
- unofficial
- CDP (Chrome DevTools Protocol)
- browser: headful
- not:
- good advice
- reasonable defaults
- first sighting
- compatibility: need for compatibility is limiting: prevents use of newer features
- distributing apps
- browser: headless
- Ferrum (Ruby)
Annotators
URL
-
-
github.com
-
driven_by :selenium_chrome_headless
first sighting:
driven_by
-
Rails.application.routes.default_url_options[:host] = "localhost:#{Capybara.current_session.server.port}"
-
-
thoughtbot.com
-
-
UDP
-
Broadcast messages are ephemeral from the WS server's point of view.
-
The main (IMO) feature of MQTT – quality of service – doesn't make sense in our case: if a WebSocket server is down and doesn't receive broadcast messages (through HTTP/Redis/queue), it's likely not to handle client connections too.
-
According to the official Action Cable guide, Action Cable creates multiple Redis pub/sub channels.
-
Yes, AnyCable uses only a single Redis pub/sub channel. Unlike Action Cable, anycable-go manages the actual subscriptions by itself (see hub.go), we only need a single channel to get broadcasts from web apps to a WS server, which performs the actual retransmission. Check out https://docs.anycable.io/#/v1/misc/how_to_anycable_server
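The hub idea described here can be sketched in a few lines (a toy illustration only; the real hub lives in anycable-go's hub.go and is written in Go, and all names below are made up):

```ruby
# The WS server tracks which socket subscribes to which stream itself,
# so a single inbound broadcast channel is enough to fan messages out.
class Hub
  def initialize
    # stream name => list of subscribed sockets
    @streams = Hash.new { |h, k| h[k] = [] }
  end

  def subscribe(stream, socket)
    @streams[stream] << socket
  end

  # one inbound broadcast, retransmitted to every subscriber of the stream
  def broadcast(stream, message)
    @streams[stream].each { |socket| socket << message }
  end
end

hub   = Hub.new
alice = [] # arrays standing in for WebSocket connections
bob   = []
hub.subscribe("chat_1", alice)
hub.subscribe("chat_1", bob)
hub.broadcast("chat_1", "hi")
puts alice.first # => hi
```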
-
Right now, we are building a proof-of-concept prototype using AnyCable.
-
We should think about the number of simultaneous connections (peak and average) and the message rate/payload size. I think the threshold to start thinking about AnyCable (instead of just Action Cable) is somewhere between 500 and 1000 connections on average, or 5k-10k during peak hours.
number of simultaneous connections (peak and average)
the message rate/payload size.
-
We use a single stream/queue/channel to deliver messages from RPC to WS. RPC server acts as publisher: it pushes a JSON-encoded command. Pubsub connection is initialized lazily in this case (during the first #broadcast call). WS server (anycable-go) acts as subscriber: subscription is initialized on server start, messages are received, deserialized and passed to the app.
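A rough sketch of that flow, with a Ruby Queue standing in for the single Redis channel (class and variable names are hypothetical, not AnyCable's actual code):

```ruby
require "json"

CHANNEL = Queue.new # stand-in for the single Redis pub/sub channel

class Publisher
  # RPC side: push a JSON-encoded command
  def broadcast(stream, data)
    connection # pub/sub connection is initialized lazily, on first use
    CHANNEL << JSON.generate("stream" => stream, "data" => data)
  end

  private

  def connection
    @connection ||= :connected # placeholder for a real Redis client
  end
end

Publisher.new.broadcast("chat_1", "hello")

# WS side (anycable-go in reality): receive, deserialize, pass to the app
command = JSON.parse(CHANNEL.pop)
puts command["stream"] # => chat_1
```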
-
they handled this with 4 1x dynos on Heroku (before switching to AnyCable they had 20 2x dynos for ActionCable).
-
HTTP REST seems like an "out of external dependency" way to go.
-
Personally, I like having Redis as a dependency as most of my current applications use two Redis instances; persistent store and volatile.
-
The idea is to avoid additional dependency if it's possible.
Tags
- differences
- efficiency (computing)
- proof of concept
- system architecture description/overview
- dependencies: already using it
- threshold to start considering/thinking about this option
- dependencies: avoid additional dependency if possible
- AnyCable
- threshold to start considering/thinking about this factor
- RPC
- minimal dependencies
- devops/server architecture: factors
- ActionCable
- REST API
- features
- Redis
- ephemeral
- defining feature
- I like this
- UDP
- good idea
- devops/server architecture
- feature: reliable/guaranteed delivery
- wasteful/inefficient use of resources
- primary feature
- determining if something is an appropriate application / best tool for the job
- annotation meta: may need new tag
- comparison
- non-guaranteed delivery
- pub/sub
Annotators
URL
-
-
twitter.com
-
So ActionCable needs Redis! Is this the first time Rails is aligning with a vendor product? Why not abstract it like AR/AJ?
-
-
evilmartians.com
-
prepend(Module.new do
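This fragment is the start of the prepend-an-anonymous-module pattern; a minimal self-contained sketch of how it works (Greeter is a made-up example class):

```ruby
class Greeter
  def greet
    "hello"
  end
end

# Patch the method without reopening the class destructively: the
# anonymous module sits before Greeter in the ancestor chain, so the
# override runs first and super still reaches the original method.
Greeter.prepend(Module.new do
  def greet
    super.upcase
  end
end)

puts Greeter.new.greet # => HELLO
```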
-
A lot of projects leveraging CDP have appeared since then, including the most well-known one—Puppeteer, a browser automation library for Node.js. What about the Ruby world? Ferrum, a CDP library for Ruby, although a pretty young one, provides an experience comparable to Puppeteer. And, what’s more important for us, it ships with a companion project called Cuprite—a pure Ruby Capybara driver using CDP.
-
That’s not the only way of writing end-to-end tests in Rails. For example, you can use Cypress JS framework and IDE. The only reason stopping me from trying this approach is the lack of multiple sessions support, which is required for testing real-time applications (i.e., those with AnyCable 😉).
-
Thus, by adding system tests, we increase the maintenance costs for development and CI environments and introduce potential points of failure or instability: due to the complex setup, flakiness is the most common problem with end-to-end testing. And most of this flakiness comes from communication with a browser.
-
For example, Database Cleaner for a long time was a must-have add-on: we couldn’t use transactions to automatically rollback the database state, because each thread used its own connection; we had to use TRUNCATE ... or DELETE FROM ... for each table instead, which is much slower. We solved this problem by using a shared connection in all threads (via the TestProf extension). Rails 5.1 was released with a similar functionality out-of-the-box.
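The shared-connection idea can be illustrated schematically (FakeConnection and Model below are made-up stand-ins, not the real ActiveRecord or TestProf API):

```ruby
# When every thread is handed the same connection object, work done by the
# test thread and the app-server thread lands in one transaction, which
# can then simply be rolled back instead of truncating tables.
class FakeConnection
  attr_reader :rows

  def initialize
    @rows = []
    @mutex = Mutex.new
  end

  def insert(row)
    @mutex.synchronize { @rows << row }
  end
end

class Model
  class << self
    attr_accessor :shared_connection

    def connection
      # normally each thread would get its own connection here
      shared_connection || FakeConnection.new
    end
  end
end

Model.shared_connection = FakeConnection.new
threads = 4.times.map { |i| Thread.new { Model.connection.insert(i) } }
threads.each(&:join)
puts Model.connection.rows.size # => 4 (all inserts on the one connection)
```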
-
In practice, we usually also need another tool to provide an API to control the browser (e.g., ChromeDriver).
-
There were attempts to simplify this setup by building specific browsers (such as capybara-webkit and PhantomJS) providing such APIs out-of-box, but none of them survived the compatibility race with real browsers.
-
“System tests” is the common name for automated end-to-end tests in the Rails world. Before Rails adopted this name, we used such variations as feature tests, browser tests
-
even acceptance tests (though the latter are ideologically different)
-
Disclaimer: This article is being regularly updated with the best recommendations up to date, take a look at a Changelog section.
Tags
- disadvantages/drawbacks/cons
- testing: end-to-end
- Ruby: prepend Module.new
- Ferrum (Ruby)
- failed attempt
- misnomer
- testing: CDP-based
- testing: database: wrapping tests in transaction
- compatibility
- browser-based automated testing
- limitations
- Ruby: prepend
- naming convention
- testing: stack: choosing
- unfortunate limitations
- race (general)
- changelog
- updating a published document: disclosing that it has been updated
- naming
- Cypress
- intermittent test failures (flaky tests)
- no longer needed
- competition/race
- testing: system-level
- distinction
- testing: acceptance tests
- Cuprite (Ruby)
- advantages/merits/pros
- Chromedriver
- Capybara
Annotators
URL
-
-
github.com
-
-
Cuprite is a pure Ruby driver (read as no Selenium/WebDriver/ChromeDriver dependency) for Capybara.
-
The design of the driver is as close to Poltergeist as possible though it's not a goal.
-
-
duckduckgo.com
Tags
Annotators
URL
-
-
chromedevtools.github.io
-
duckduckgo.com
-
-
github.com
-
We instead recommend using the Selenium or Apparition drivers.
-
Development has been suspended on this project because QtWebKit was deprecated in favor of QtWebEngine, which is not a suitable replacement for our purposes.
-
-
stackoverflow.com
-
stackoverflow.com
-
FYI, my use case is having clickable links in the mail generated by the integration tests.
-
Setting Capybara.server_port worked when the Selenium integration test ran independent of other integration tests, but failed to change the port when run with other tests, at least in my env. Asking for the port number Capybara wanted to use seemed to work better when running multiple tests. Maybe it would have worked if I changed the port for all tests, instead of letting some choose on their own.
-
-
stackoverflow.com
-
config.default_max_wait_time = ENV.has_key?("CI") ? 60 : 10
-
-
stackoverflow.com
-
Capybara.default_host only affects tests using the rack_test driver (and only if Capybara.app_host isn't set). It shouldn't have the trailing '/' on it, and it already defaults to 'http://www.example.com', so your setting of it should be unnecessary. If what you're trying to do is make all your tests (JS and non-JS) go to 'http://www.example.com' by default, then you should be able to do either
Capybara.server_host = 'www.example.com'
or
Capybara.app_host = 'http://www.example.com'
Capybara.always_include_port = true
-
-
www.mutuallyhuman.com
-
This is why for a recent Angular+Rails project we chose to use a testing stack from the backend technology’s ecosystem for e2e testing.
-
Rather than write new tooling we decided to take advantage of tooling we had in place for our unit tests. Our unit tests already used FactoryBot, a test data generation library, for building up test datasets for a variety of test scenarios. Plus, we had already built up a nice suite of helpers that we could re-use. By using tools and libraries already a part of the backend technology’s ecosystem we were able to spend less time building additional tooling. We had less code to maintain because of this and more time to work on solving our customer’s pain points.
-
We were not strictly blackbox testing our application. We wanted to simulate a user walking through specific scenarios in the app, which required that we have corresponding data in the database. This helps ensure integration between the frontend and backend was wired up successfully and would give us a foundation for testing critical user flows.
-
The problem domain and the data involved in this project were complicated enough. We decided that not having to worry about unknowns with the frontend end-to-end testing stack helped mitigate risk. This isn’t to say you should always go with the tool you know, but in this instance we felt it was the right choice.
-
This particular project team came in with a lot of experience using testing tools like RSpec and Capybara. This included integrating with additional tools like Selenium WebDriver, Chrome and Chromedriver, data generation libraries like FactoryBot, and task runners like Rake. We had less experience doing end-to-end testing with Protractor even though it too uses Selenium WebDriver (a tool we’re very comfortable with).
-
There are times to stretch individually and as a team, but there are also times to take advantage of what you already know.
-
When it came to testing the whole product, end-to-end, owning both sides gave us not only more options to consider, but also more tools to choose from.
-
This meant that we owned both sides of the product implementation. For unit testing on the frontend, we stayed with Angular’s suggestion of Jasmine. For unit testing on the backend, we went with rspec-rails. These worked well since unit tests don’t need to cross technology boundaries.
-
We used testing tools that were in the same ecosystem as our backend technology stack for primarily three reasons:
- We owned both ends of the stack
- Team experience
- Interacting with the database
-
We chose to define the frontend in one technology stack (Angular+TypeScript/JavaScript) and the backend in another (Ruby+Ruby on Rails), but both came together to fulfill a singular product vision.
Tags
- answer the "why?"
- testing: end-to-end
- testing: unit tests
- testing: clear-box testing
- software stack: choosing: factors: familiarity/experience
- wise choice
- end-to-end testing
- distributed (client/server) system
- testing: black-box testing
- how to choose a dependency/library/framework
- avoid extra/needless work
- reuse/leverage existing _ when possible
- testing: stack
- explaining why
- testing: stack: choosing
- software stack: choosing: factors: code reuse
- frontend vs. backend: owning both ends
- good advice
- key point
- don't reinvent the wheel
- rationale
- official preferred convention / way to do something
- using disparate technologies in a single project
- determining if something is an appropriate application / best tool for the job
- how to choose software stack
- people stick to what they know
- software stack: choosing
- don't repeat yourself
- me too
- officially recommended
Annotators
URL
-
-
www.audienceplay.com
-
CMP or consent management platform is a platform or a tool to take consent from the visitor to use his/her digital identity for marketing efforts.
-
“The data does not exist independently in the world, nor is it generated spontaneously. Data is constructed by people, from people,” (source 1).
-
-
-
store.steampowered.com
-
Cool concept but badly executed.
-
-
github.com
-
Once a variable is specified with the use method, access it with EnvSetting.my_var. Or you can still use the Hash syntax if you prefer it: EnvSetting["MY_VAR"]
-
Configuration style is exactly the same for env_bang and env_setting, only that there's no "ENV!" method... just the normal class: EnvSetting that is called and configured.
-
Inspired by ENV! and David Copeland's article on UNIX Environment, env_setting is a slight rewrite of env_bang to provide OOP style access to your ENV.
-
Fail loudly and helpfully if any environment variables are missing.
-
-
github.com
-
add_class Set do |value, options|
  Set.new self.Array(value, options || {})
end

use :NUMBER_SET, class: Set, of: Integer
-
-
use :ENABLE_SOUNDTRACK, class: :boolean
-
ENV! can convert your environment variables for you, keeping that tedium out of your application code. To specify a type, use the :class option:
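A hand-rolled miniature of what such typed-ENV gems do under the hood (an illustration of the idea only, not ENV!'s actual implementation; fetch_env and COERCIONS are invented names):

```ruby
# Look the variable up once, fail loudly if it's missing, and coerce the
# raw string into the requested type before it reaches application code.
COERCIONS = {
  Integer  => ->(s) { Integer(s) },
  :boolean => ->(s) { %w[true t 1 on].include?(s.downcase) }
}

def fetch_env(name, klass, env: ENV)
  raw = env.fetch(name) { raise "Missing required env var: #{name}" }
  COERCIONS.fetch(klass).call(raw)
end

env = { "PORT" => "3000", "ENABLE_SOUNDTRACK" => "true" }
puts fetch_env("PORT", Integer, env: env)               # => 3000
puts fetch_env("ENABLE_SOUNDTRACK", :boolean, env: env) # => true
```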
-
-
gitlab.com
-
-
The following types are supported:
-
access to typed ENV-variables (integers, booleans etc. instead of just strings)
-
-
naildrivin5.com
-
It also makes it hard to centralize type coercions and default values.
-
-
It’s easy to create bugs because the environment is a somewhat degenerate settings database.
-
It also makes your code harder to follow because you are using SCREAMING_SNAKE_CASE instead of nice, readable methods.
-
Most programming languages vend environment variables as strings. This leads to errors like so:
Tags
- answer the "why?"
- Ruby: ENV interfaces
- messy
- coerce string values to boolean
- poor solution
- programming: centralized location in code
- Ruby: ENV
- less than ideal / not optimal
- environment variables
- letter case: all capitals: hard/unpleasant to read
- database
- Ruby: ENV: don't use ENV directly
- illustrating problem
Annotators
URL
-
-
github.com
-
github.com
Tags
Annotators
URL
-
-
www.gertgoet.com
-
Note: as for setting boolean variables: not only are true/false and 0/1 acceptable values, but also T/F and on/off. Thanks, coercible!
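The permissive boolean coercion described here might look roughly like this (the accepted spellings follow the quote above; the method itself is a hypothetical sketch, not the gem's code):

```ruby
# Accepted spellings, per the note above: true/false, 0/1, T/F, on/off.
TRUTHY = %w[true t 1 on]
FALSEY = %w[false f 0 off]

def to_bool(value)
  v = value.to_s.strip.downcase
  return true  if TRUTHY.include?(v)
  return false if FALSEY.include?(v)

  raise ArgumentError, "cannot coerce #{value.inspect} to boolean"
end

puts to_bool("T")   # => true
puts to_bool("off") # => false
```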
-
-
-
github.com
-
-
This repository has been archived by the owner.
No explanation/announcement in the Readme
-
You could also opt to extend your Rails configuration object:
Envy.init use: MyApp::Application.config
MyApp::Application.config.my_variable # => ...
-
-
www.dekudeals.com
-
all the mechanics are missing
-
-
github.com
-
Most of the matchers provided by this gem are useful in a Rails context, and as such, can be used for different parts of a Rails app:
- database models backed by ActiveRecord
- non-database models, form objects, etc. backed by ActiveModel
- controllers
- routes (RSpec only)
- Rails-specific features like delegate
-
-
-
Typing cmd in the Run prompt and pressing Ctrl + Shift + Enter to open an elevated Command Prompt
-
-
trac.nginx.org
-
I've updated ticket description to mangle domain names.
Tags
Annotators
URL
-
-
help.ting.com
-
Here's why Ting is switching to Verizon: The small MVNO — as of Q1 2019 it boasted 284,000 subscribers — is moving to Verizon — the largest wireless provider in the US — because it can offer Ting both better network coverage and better rates, the two most important factors for an MVNO.
-
Verizon is drawing Ting's business because the telecom has consistently boasted the strongest network quality and consumer experience. For an MVNO, that will mean that it can offer users consistent service — the same that they'd be able to get by signing on with Verizon — while taking advantage of the more nuanced pricing models that these budget carriers use.
-
-
pragmaticstudio.com
-
-
If you reload a typical Rails-generated page, you’ll notice that the embedded CSRF token changes. Indeed, Rails appears to generate a new CSRF token on every request. But in fact what’s happening is the “real” CSRF token is simply being masked with a one-time pad to protect against SSL BREACH attacks.
-
So even though the token appears to vary, any token generated from a user’s session (by calling form_authenticity_token) will be accepted by Rails as a valid CSRF token for that session.
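The masking scheme can be sketched as follows (a simplified illustration of the one-time-pad idea, not Rails' exact implementation):

```ruby
require "securerandom"

# XOR two equal-length binary strings byte by byte
def xor_bytes(a, b)
  a.bytes.zip(b.bytes).map { |x, y| x ^ y }.pack("C*")
end

real_token = SecureRandom.random_bytes(32) # stays the same for the session
pad        = SecureRandom.random_bytes(32) # freshly generated per request

# what gets embedded in the page: the pad followed by pad XOR token,
# so the visible value changes on every request even though the token doesn't
masked = pad + xor_bytes(pad, real_token)

# what the server does on the next request: split off the pad and XOR again
received_pad, encrypted = masked[0, 32], masked[32, 32]
recovered = xor_bytes(received_pad, encrypted)

puts recovered == real_token # => true
```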
-
(In case you’re wondering, there’s nothing special about the name CSRF-TOKEN.)
-
Note: Instead of storing a user’s ID in the session cookie you could store a JWT, but I’m not sure what that buys you. However, you may be using specific JWT claims that make this worthwhile.
-
cookie-based authentication goes something like this:
-
That means if an attacker can inject some JavaScript code that runs on the web app’s domain, they can steal all the data in localStorage. The same is true for any third-party JavaScript libraries used by the web app. Indeed, any sensitive data stored in localStorage can be compromised by JavaScript. In particular, if an attacker is able to snag an API token, then they can access the API masquerading as an authenticated user.
-
But there’s a drawback that I didn’t like about this option: localStorage is vulnerable to Cross-site Scripting (XSS) attacks.
-
So here’s the question: Where do you store the token in the browser so that the token survives browser reloads? The off-the-cuff answer is localStorage because it’s simple and effective:
-
Token-Based Authentication
Tags
- code injection
- cryptography
- JWT
- see content below
- distributed (client/server) system
- only do it if it makes sense/is worth it (may be sometimes but not always worthwhile)
- sequence diagram
- authentication: token-based
- software architecture
- security: cross-site scripting (XSS) vulnerability
- localStorage
- authentication: cookie-based
- naming
- excellent technical writing
- annotation meta: may need new tag
- authentication
Annotators
URL
-
-
disqus.com
-
While rails does have nice CSRF protection, in my instance it limited me.
-
However, the cookie containing the CSRF-TOKEN is only used by the client to set the X-XSRF-TOKEN header. So passing a compromised CSRF-TOKEN cookie to the Rails app won't have any negative effect.
-
network requests are a big deal, and having to deal with this kind of thing is one of the prices of switching away from server-side rendering to a distributed system
-
In short: storing the token in HttpOnly cookies mitigates XSS being used to get the token, but opens you up to CSRF, while the reverse is true for storing the token in localStorage.
-
Therefore, since each method had both an attack vector they opened up to and shut down, I perceived either choice as being equal.
-
I started off really wanting to use HttpOnly cookies
-
On the security side I think code injection is still a danger. If someone does smuggle js into your js app they'll be able to read your CSRF cookie and make ajax requests using your logged-in http session, just like your own code does
-
This stuff is all rather boring or frustrating when you just want to get your app finished
-
Handling 401s well is important for the user's experience. They won't happen often (though more often than I expected), but really do break everything if you're not careful. Getting a good authentication abstraction library for Vue or Ember or whatever you are using should help with a lot of the boring parts. You'll probably need to define some extra strategies/rules for this cookie session approach, but if it's anything like in ember-simple-auth they're so simple it feels like cheating, because the Rails app is doing all of the hard work and you just need the js part to spot a 401 and handle logging in and retrying whatever it was doing before.
-
I went for session cookies in a very lazy time-pressured "aha" moment some years ago. It's been working in production for 3-4 years on a well used site without issue. It wouldn't be appropriate for a back-end API like a payment gateway where there's no user with a browser to send to a log-in screen, but for normal web pages, and especially carving js apps out of / on top of an existing site, it's extending what we have instead of starting again.
Tags
- code injection
- disadvantages/drawbacks/cons
- migration from:
- distributed (client/server) system
- server-side rendering: traditional web server
- trade-offs
- handling
- mitigation
- challenges
- limitations
- security: cross-site scripting (XSS) vulnerability
- localStorage
- HTTP 401
- cookies: HttpOnly
- unfortunate limitations
- the boring stuff
- authentication: cookie-based
- Rails
- defending an idea
- security
- features: built-in
- CSRF
Annotators
URL
-
-
cheatsheetseries.owasp.org
-
Remember that any Cross-Site Scripting (XSS) can be used to defeat all CSRF mitigation techniques!
-
-
-
developer.mozilla.org
-
This status is sent with a WWW-Authenticate header that contains information on how to authorize correctly.
-
The HTTP 401 Unauthorized client error status response code indicates that the request has not been applied because it lacks valid authentication credentials for the target resource.
-
-
en.wikipedia.org
-
Similar to 403 Forbidden, but specifically for use when authentication is required and has failed or has not yet been provided. The response must include a WWW-Authenticate header field containing a challenge applicable to the requested resource.
-
-
stackoverflow.com
-
What if you only want to set the width though? I need "full site, at 1200px browser width", for example.
-
-
hacks.mozilla.org
-
if you just need a screenshot of a webpage, that’s built in:
-
This poses a few problems for automation. In some environments, there may be no graphical display available, or it may be desirable to not have the browser appear at all when being controlled.
-
Browsers are at their core a user interface to the web, and a graphical user interface in particular.
-
-
browsersync.io
-
github.com
-
Minimal dependencies (no explicit rspec, minitest, redis, pg dependencies)
Tags
Annotators
URL
-
-
github.com
-
ractors
-
Mocking is a form of global state like others (including ENV sharing), which will cause difficulties here (more with threads, a bit less with forks).
-
Process based parallelisation is simpler than thread based due to well, the GIL on MRI rubies and lack of 100% thread safety within the other gems. (I'm fairly certain for example that there are threaded bugs lurking within the mocks code).
-
No, I'm writing it from first principles, using the bisect runner as a guide and some other external gems.
-
-
-
Parallel testing in this implementation utilizes forking processes over threads. The reason we (tenderlove and eileencodes) chose forking processes over threads is forking will be faster with single databases, which most applications will use locally. Using threads is beneficial when tests are IO bound but the majority of tests are not IO bound.
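The fork-per-worker shape they describe can be sketched minimally (the worker count and report strings here are arbitrary):

```ruby
# The parent forks N worker processes; each "runs its tests" in isolation
# and reports back over a pipe, which is the basic shape of process-based
# parallel testing (no GIL contention, no shared mutable state).
readers = 4.times.map do |i|
  reader, writer = IO.pipe
  fork do
    reader.close
    writer.write("worker #{i}: 10 examples, 0 failures")
    writer.close
    exit!(0)
  end
  writer.close
  reader
end

results = readers.map(&:read)
Process.waitall
puts results.length # => 4
```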
-
-
about.gitlab.com
-
github.com
-
To better understand what is actually possible, have a look at the full example
-
-
stackoverflow.com
-
netstat (net-tools) is deprecated, perhaps you want to use other tools (ss, lsof, fuser etc.)
-
-
www.mutuallyhuman.com
-
-
I’m going to represent tests as sequence diagrams (handily created via plantuml) rather than actually coding them out. For me the diagrams make it easier to talk about what the tests do without getting bogged down by how they do it.
-
I’m going to add the API Server as an actor to my first test sequence to give some granularity as to what I’m actually testing.
-
For features like websocket interactions, a single full-stack smoke test is almost essential to confirm that things are going as planned, even if the individual parts of the interaction are also covered by unit tests.
Tags
- focus on what it should do, not on how it should do it (implementation details; software design)
- testing: smoke tests
- too detailed
- testing: end-to-end
- dev tool
- see content below
- illustration (visual)
- sequence diagram
- communication: effective communication
- communication: use the right level of detail
- illustrating problem
- describe the what without getting bogged down by how (implementation details; too detailed)
- testing: levels of tests: how to test at the correct level?
- communication: focus on what is important
Annotators
URL
-
-
www.reddit.com www.reddit.com
-
app_host is used whenever you call visit to generate the URL; server_host sets the IP the server should accept connections on (0.0.0.0 means all network interfaces); and finally server_port sets the server port (auto-generated by default). You are correct in that both app and server host should be set. Could you try server_host = “0.0.0.0” and app_host = “http://rails:#{Capybara.server_port}”?
app_host ~ server_host
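Put together as a config fragment (the `rails` hostname is an assumption from the thread, i.e. the Docker Compose service name of the app container):

```ruby
# Capybara setup for a containerized app: bind the test server on all
# interfaces with a fixed port, and have visit() build URLs against the
# hostname other containers use to reach it.
Capybara.server_host = "0.0.0.0"   # accept connections on any interface
Capybara.server_port = 4000        # fixed so app_host can reference it
Capybara.app_host    = "http://rails:#{Capybara.server_port}"
```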
-
-
www.browserstack.com www.browserstack.com
-
Local Testing establishes a secure connection between your machine and the BrowserStack cloud. Once you set up Local Testing, all URLs work out of the box, including HTTPS URLs and those behind a proxy or firewall.
-
-
github.com github.com
-
Why does test suite performance matter? First of all, testing is a part of a developer's feedback loop (see @searls talk) and, secondly, it is a part of a deployment cycle.
-
-
docs.gitlab.com docs.gitlab.com
-
How to test at the correct level?
-
As with many things in life, deciding what to test at each level of testing is a trade-off:
-
Unit tests are usually cheap, and you should consider them like the basement of your house
-
A system test is often better than an integration test that is stubbing a lot of internals.
-
Only test the happy path, but make sure to add a test case for any regression that couldn’t have been caught at lower levels with better tests (for example, if a regression is found, regression tests should be added at the lowest level possible).
-
These tests should only be used when:
- the functionality/component being tested is small
- the internal state of the objects/database needs to be tested
- it cannot be tested at a lower level
-
White-box tests at the system level (formerly known as System / Feature tests)
-
GitLab is transitioning from controller specs to request specs.
-
These kinds of tests ensure that individual parts of the application work well together, without the overhead of the actual app environment (i.e. the browser). These tests should assert at the request/response level: status code, headers, body. They’re useful to test permissions, redirections, what view is rendered, etc.
-
-
These tests should be isolated as much as possible. For example, model methods that don’t do anything with the database shouldn’t need a DB record. Classes that don’t need database records should use stubs/doubles as much as possible.
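The stub/double guideline in plain Ruby (class and names here are hypothetical, not from GitLab's code): because the class under test takes its collaborator as an argument, the spec can pass a hand-rolled stub instead of creating database records.

```ruby
# Class under test: depends only on an object that responds to #issues.
class IssueCounter
  def initialize(repository)
    @repository = repository
  end

  def open_count
    @repository.issues.count { |issue| issue[:state] == "opened" }
  end
end

# A stub stands in for a real, DB-backed repository.
FakeRepo = Struct.new(:issues)
counter = IssueCounter.new(
  FakeRepo.new([{ state: "opened" }, { state: "closed" }])
)
```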
-
Black-box tests at the system level (aka end-to-end or QA tests)
-
White-box tests at the system level (aka system or feature tests)
Tags
- testing: end-to-end
- GitLab
- testing: unit tests
- testing: levels of tests: prefer lower-level tests when possible
- testing: clear-box testing
- appropriate use case
- falling out of favor
- testing: types of tests
- end-to-end testing
- testing: integration tests
- testing: levels of tests
- newer/better ways of doing things
- guidelines
- when to use
- testing: what is worth testing?
- testing: levels of tests: how to test at the correct level?
- testing: what to test
- good advice
- testing: Rails: controller tests
- testing: system-level
- testing: levels of tests: higher level better than stubbing a lot of internals
- happy path
- testing: white-box testing
- name changes
- testing: speed of tests: avoid doing unnecessary work
- regression testing
Annotators
URL
-
-
en.wikipedia.org en.wikipedia.org
-
Levels
-
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a method of software testing that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing)
-
-
docs.gitlab.com docs.gitlab.com
-
A common cause of a large number of created factories is factory cascades, which result when factories create and recreate associations.
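A cascade can be sketched with hypothetical FactoryBot definitions (these factories are illustrative, not GitLab's): building one record silently creates a chain of associated records, each hitting the database.

```ruby
# Building one :issue cascades: issue -> project -> namespace -> user,
# so a single create(:issue) persists four records.
FactoryBot.define do
  factory :user
  factory :namespace do
    association :owner, factory: :user
  end
  factory :project do
    association :namespace
  end
  factory :issue do
    association :project
  end
end
```

Where persistence is not needed, `build_stubbed(:issue)` avoids the database writes entirely.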
-
We’ve enabled deprecation warnings by default when running specs. Making these warnings more visible to developers helps when upgrading to newer Ruby versions.
-
Test speed: GitLab has a massive test suite that, without parallelization, can take hours to run. It’s important that we make an effort to write tests that are accurate and effective as well as fast.
-
:js is particularly important to avoid. This must only be used if the feature test requires JavaScript reactivity in the browser. Using a headless browser is much slower than parsing the HTML response from the app.
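The contrast as hypothetical RSpec feature-spec fragments: without `:js`, Capybara's default rack_test driver just parses the HTML response; tagging an example with `:js` launches a full (headless) browser for it.

```ruby
RSpec.describe "Issue list", type: :feature do
  it "renders issues" do        # fast: rack_test, no browser
    visit issues_path
    expect(page).to have_content("My issue")
  end

  it "filters reactively", :js do  # slow: real headless browser
    visit issues_path
    fill_in "Search", with: "My"
    expect(page).to have_content("My issue")
  end
end
```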
-
Use Factory Doctor to find cases where database persistence is not needed in a given test.
-
:clean_gitlab_redis_cache which provides a clean Redis cache to the examples.
-
Time returned from a database can differ in precision from time objects in Ruby, so we need flexible tolerances when comparing in specs. We can use be_like_time to compare that times are within one second of each other.
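The idea behind be_like_time, sketched in plain Ruby (the helper name is mine, not GitLab's): compare timestamps with a one-second tolerance rather than exact equality, absorbing precision differences between database- and Ruby-generated times.

```ruby
require "time"

# True when the two times fall within `tolerance` seconds of each other.
def like_time?(a, b, tolerance: 1.0)
  (a.to_f - b.to_f).abs < tolerance
end

# A DB might truncate or round fractional seconds relative to Ruby's Time.
db_time   = Time.parse("2021-06-01 12:00:00.123456")
ruby_time = Time.parse("2021-06-01 12:00:00.999999")
```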
-
We use the RSpec::Parameterized gem
first sighting: rspec-parameterized
-
Parameterized tests
-
This style of testing is used to exercise one piece of code with a comprehensive range of inputs. By specifying the test case once, alongside a table of inputs and the expected output for each, your tests can be made easier to read and more compact.
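The table idea from rspec-parameterized, sketched in plain Ruby (the leap-year function is an arbitrary example, not from the docs): the assertion body is written once and exercised against a table of inputs and expected outputs.

```ruby
# One piece of code, many inputs: each row pairs an input with its
# expected output, and a single loop checks them all.
def leap_year?(year)
  (year % 4).zero? && (!(year % 100).zero? || (year % 400).zero?)
end

CASES = [
  # year, expected
  [2000, true],
  [1900, false],
  [2024, true],
  [2023, false],
]

results = CASES.map { |year, expected| leap_year?(year) == expected }
```

In rspec-parameterized itself the table is declared with `where` and row syntax, but the shape of the test is the same.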
Tags
- rspec-parameterized
- precision
- deprecation warnings
- first sighting
- testing: tolerance for small differences (precision)
- errors/warnings are helpful for development
- testing: clearing cache
- testing: speed of tests
- testing: parameterized tests
- test factory: problems: factory cascades
- testing: comprehensiveness (testing all possible/representative cases)
- testing: speed of tests: avoid doing unnecessary work
Annotators
URL
-
-
docs.gitlab.com docs.gitlab.com
-
Controller specs should not be used to write N+1 tests as the controller is only initialized once per example. This could lead to false successes where subsequent “requests” could have queries reduced (e.g. because of memoization).
-
As an example you might create 5 issues in between counts, which would cause the query count to increase by 5 if an N+1 problem exists.
-
QueryRecorder is a tool for detecting the N+1 queries problem from tests.
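A toy version of the pattern (this counter and the fake loader are illustrative, not GitLab's QueryRecorder implementation): count "queries" issued while a block runs, then assert the count does not grow when more records exist.

```ruby
# Counts queries recorded while a block executes.
class QueryCounter
  attr_reader :count

  def initialize
    @count = 0
  end

  def record!
    @count += 1
  end

  def measure
    @count = 0
    yield
    count
  end
end

counter = QueryCounter.new

# A well-behaved loader issues one batched query regardless of size;
# an N+1 loader would call record! once per record instead.
def load_issues(n, counter)
  counter.record!  # one batched SELECT
  Array.new(n) { |i| { id: i } }
end

baseline  = counter.measure { load_issues(1, counter) }
with_five = counter.measure { load_issues(5, counter) }
```

An N+1 regression shows up as `with_five > baseline`, exactly the comparison the annotation describes (create N more records between counts and watch whether the query count grows by N).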
-
-
github.com github.com
-
This change should be a workaround for issue #8.
-
-