19,785 Matching Annotations
  1. Jun 2021
    1. To avoid the problems with different versions of echo, you may want to use printf instead. In contrast to echo, printf always interprets \ sequences but doesn't automatically add a linefeed at the end, so you have to append \n yourself if you want one.
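
      A minimal shell sketch of the difference (my example, not from the annotated answer):

        printf 'first\nsecond\n'    # \n is always interpreted; nothing is appended beyond what you write
        echo 'first\nsecond'        # whether \n is interpreted depends on the shell and its echo flags
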
    1. First off: The fact that the developer read the review, saw that a puzzle from elsewhere had made it into the game, fact-checked this, responded, and made an update within 48 hours is exactly the kind of thing I want to support.

    1. You need to run gem pristine --only-executables because whenever a ruby is updated or perhaps moved/renamed, you will need to regenerate the gem bin stubs with the new path to the ruby executable, since RubyGems generates an explicit #!/path/to/ruby shebang for every gem executable.
    2. Unfortunately, even though this bug/request was opened in 2016, this feature is still not implemented in ruby-install.
    3. Based on the responses in a feature request, the best way to remove older ruby versions is to go back to the src directory and run make uninstall or rake uninstall. By default, ruby-install uses $HOME/src/ruby-$version for unpacked sources of ruby versions during installation.
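
      The commands described above, as a hedged sketch (the source directory follows ruby-install's default layout):

        gem pristine --only-executables                  # regenerate bin stubs with the new ruby shebang path
        cd "$HOME/src/ruby-$version" && make uninstall   # remove an older ruby built by ruby-install
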
    1. Whether you agree or not, to me there's nothing in this world that is entirely apolitical - when there are people there is politics. You don't have to agree with my views, nor do I expect you to. Diversity and disagreement is what drives mankind forward.
    2. In the end this plugin is a piece of software that I wrote and I'm just doing what I think is reasonable to make our community more inclusive.
      • doing what one believes is best for community
    3. As aforementioned, the usage of master as a branch most likely originated from the first meaning

      The meaning:

      An original recording, film, or document from which copies can be made.

      makes more sense to me. Why would they have meant the other meaning?

    4. I completely understand that master has two meanings: A man who has people working for him, especially servants or slaves; and An original recording, film, or document from which copies can be made.
    5. so by adopting git installations with the latest source code you're effectively agreeing to go bleeding-edge. I would assume that means you're ready for any breaking changes and broken installations, which is what happened here.
    6. There are many projects that do not use the master branch as default. For example, Next.js uses the canary branch, and the npm CLI and many other projects use names like prod, production, dev, develop, release, beta, head.
    7. I'm not sure if there's any cost in terms of contributing either, especially when by design git can have any branch as default, and will not hinder your experience when you use something other than master.

      git is neutral/unbiased/agnostic about default branch name by design

      And that is a good thing

    8. It just happens that most projects chose to be "lazy" (stick to the default) and opted to use master
    9. to be honest I think it is more about sentiment than actual engineering practices now.
    10. Forcing people out of the habit of assuming this branch will be called master is a valuable lesson.
    11. The primary branch in git can have any name by design.
    12. Well, there are a lot of reasons, with the main reason being that I am empathetic to what is happening out there and I agree with many other people that we should re-examine our choice of words to make the industry more inclusive.
    13. Personally I think it is a very bad idea to leverage political views, even if I may share them, through software.
    14. I think it's just a bad English/mis-translation problem. I'm guessing @pmmmwh assumed 'master' meant 主 as in 奴隸主 (slave owner/master). Actually a better translation would be 師, as in 功夫大師 (Kung Fu master): the specimen that copies are made from.
    15. The specimen copies are made from.
    16. On existing projects, consider the global effort to change from origin/master to origin/main, and the cost of being different from git convention and from every book, tutorial, and blog post. Is the cost of change and being different worth it?
    17. In the context of git, the word "master" is not used in the same way as "master/slave". I've never known about branches referred to as "slaves" or anything similar.
    18. I'm glad I never got a master's degree in college!
    19. My 3 projects were using your lib and got broken thanks to the renaming.
    1. However, the term master is out of favor in the computing world and beyond.
    2. "While it takes time to make these changes now, it's a one-time engineering cost that will have lasting impacts, both internally and externally," Sorenson said in an email. "We're in this for the long game, and we know inclusive language is just as much about how we code and what we build as it is about person-to-person interactions."
    3. "I really appreciate the name change [because] it raises awareness," said Javier Cánovas, assistant professor in the SOM Research Lab, at the Internet Interdisciplinary Institute at the Open University of Catalonia in Barcelona. "There are things that we accept as implicit, and we then realize that we can change them because they don't match our society."
    4. the benefits of GitHub renaming the master branch to main far outweigh any temporary stumbling blocks. He said the change is part of a broader internal initiative to add inclusive language to the company's systems. His team is also replacing whitelist and blacklist with allowlist and blocklist.
    5. "Both Conservancy and the Git project are aware that the initial branch name, 'master,' is offensive to some people and we empathize with those hurt by the use of that term," said the Software Freedom Conservancy.
    6. Let's examine why GitHub renamed the master branch to main branch and what effect it will have on developers.
    1. The emphasis was made on the raw CDP protocol because Chrome allows you to do so many things that are barely supported by WebDriver, since WebDriver has to keep a consistent design across browsers.

      compatibility: need for compatibility is limiting:

      • innovation
      • use of newer features
    2. Runs headless by default, but you can configure it to run in a headful mode.

      first sighting of term: headful

    3. There's no official Chrome or Chromium package for Linux, so don't install it that way, because the result is either outdated or unofficial, and both are bad. Download it from the official source.
    4. Ferrum connects to the browser by CDP protocol and there's no Selenium/WebDriver/ChromeDriver dependency.
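
      A minimal Ferrum sketch based on its README (method names may vary slightly between versions):

        require "ferrum"

        browser = Ferrum::Browser.new              # speaks CDP directly; no Selenium/ChromeDriver involved
        browser.go_to("https://example.com")
        browser.screenshot(path: "example.png")
        browser.quit
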
    1. UDP
    2. Broadcast messages are ephemeral from the WS server's point of view.
    3. The main (IMO) feature of MQTT – quality of service – doesn't make sense in our case: if a WebSocket server is down and doesn't receive broadcast messages (through HTTP/Redis/queue), it's likely not to handle client connections too.
    4. According to the official Action Cable guide, Action Cable creates multiple Redis pub/sub channels.
    5. Yes, AnyCable uses only a single Redis pub/sub channel. Unlike Action Cable, anycable-go manages the actual subscriptions by itself (see hub.go), we only need a single channel to get broadcasts from web apps to a WS server, which performs the actual retransmission. Check out https://docs.anycable.io/#/v1/misc/how_to_anycable_server
    6. Right now, we are building a proof-of-concept prototype using AnyCable.
    7. We should think about the number of simultaneous connections (peak and average) and the message rate/payload size. I think, the threshold to start thinking about AnyCable (instead of just Action Cable) is somewhere between 500 and 1000 connections on average or 5k-10k during peak hours.
      • number of simultaneous connections (peak and average)

      • the message rate/payload size.

    8. We use a single stream/queue/channel to deliver messages from RPC to WS. RPC server acts as publisher: it pushes a JSON-encoded command. Pubsub connection is initialized lazily in this case (during the first #broadcast call). WS server (anycable-go) acts as subscriber: subscription is initialized on server start, messages are received, deserialized and passed to the app.
    9. they handled this with 4 1x dynos on Heroku (before switching to AnyCable they had 20 2x dynos for ActionCable).
    10. HTTP REST seems like an "out of external dependency" way to go.
    11. Personally, I like having Redis as a dependency as most of my current applications use two Redis instances; persistent store and volatile.
    12. The idea is to avoid additional dependency if it's possible.
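
      A rough Ruby sketch (not AnyCable's actual code) of the single-channel shape described in item 8; the channel name and payload format here are illustrative:

        require "redis"
        require "json"

        # Publisher side (web app / RPC): push a JSON-encoded broadcast command.
        Redis.new.publish("__anycable__", { stream: "chat_1", data: "hello" }.to_json)

        # Subscriber side (WS server): one subscription, initialized on start, fan-out to clients.
        Redis.new.subscribe("__anycable__") do |on|
          on.message do |_channel, payload|
            message = JSON.parse(payload)
            # retransmit message["data"] to every client subscribed to message["stream"]
          end
        end
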
    1. prepend(Module.new do
    2. A lot of projects leveraging CDP appeared since then, including the most well-known one—Puppeteer, a browser automation library for Node.js. What about the Ruby world? Ferrum, a CDP library for Ruby, although being a pretty young one, provides an experience comparable to Puppeteer. And, what’s more important for us, it ships with a companion project called Cuprite—a pure Ruby Capybara driver using CDP.
    3. That’s not the only way of writing end-to-end tests in Rails. For example, you can use Cypress JS framework and IDE. The only reason stopping me from trying this approach is the lack of multiple sessions support, which is required for testing real-time applications (i.e., those with AnyCable 😉).
    4. Thus, by adding system tests, we increase the maintenance costs for development and CI environments and introduce potential points of failure or instability: due to the complex setup, flakiness is the most common problem with end-to-end testing. And most of this flakiness comes from communication with a browser.
    5. For example, Database Cleaner for a long time was a must-have add-on: we couldn’t use transactions to automatically rollback the database state, because each thread used its own connection; we had to use TRUNCATE ... or DELETE FROM ... for each table instead, which is much slower. We solved this problem by using a shared connection in all threads (via the TestProf extension). Rails 5.1 was released with a similar functionality out-of-the-box.
    6. In practice, we usually also need another tool to provide an API to control the browser (e.g., ChromeDriver).
    7. There were attempts to simplify this setup by building specific browsers (such as capybara-webkit and PhantomJS) providing such APIs out-of-box, but none of them survived the compatibility race with real browsers.
    8. “System tests” is a common naming for automated end-to-end tests in the Rails world. Before Rails adopted this name, we used such variations as feature tests, browser tests
    9. even acceptance tests (though the latter are ideologically different)
    10. Disclaimer: This article is regularly updated with the latest recommendations; take a look at the Changelog section.
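
      For context, the Cuprite setup this article revolves around boils down to roughly the following (window size and driver name are the usual README defaults):

        require "capybara/cuprite"

        Capybara.register_driver(:cuprite) do |app|
          Capybara::Cuprite::Driver.new(app, window_size: [1200, 800])
        end

        Capybara.javascript_driver = :cuprite
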
    1. We instead recommend using the Selenium or Apparition drivers.
    2. Development has been suspended on this project because QtWebKit was deprecated in favor of QtWebEngine, which is not a suitable replacement for our purposes.
    1. FYI, my use case is having clickable links in the mail generated by the integration tests.
    2. Setting Capybara.server_port worked when the selenium integration test ran independent of other integration tests, but failed to change the port when run with other tests, at least in my env. Asking for the port number capybara wanted to use, seemed to work better with running multiple tests. Maybe it would have worked if I changed the port for all tests, instead of letting some choose on their own.
    1. Capybara.default_host only affects tests using the rack_test driver (and only if Capybara.app_host isn't set). It shouldn't have the trailing '/' on it, and it already defaults to 'http://www.example.com', so your setting of it should be unnecessary. If what you're trying to do is make all your tests (JS and non-JS) go to 'http://www.example.com' by default, then you should be able to do either Capybara.server_host = 'www.example.com' or Capybara.app_host = 'http://www.example.com' along with Capybara.always_include_port = true
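
      The same advice as a config snippet (values are the ones from the comment above):

        Capybara.server_host = 'www.example.com'
        # or:
        Capybara.app_host = 'http://www.example.com'
        Capybara.always_include_port = true
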
    1. This is why for a recent Angular+Rails project we chose to use a testing stack from the backend technology’s ecosystem for e2e testing.
    2. Rather than write new tooling we decided to take advantage of tooling we had in place for our unit tests. Our unit tests already used FactoryBot, a test data generation library, for building up test datasets for a variety of test scenarios. Plus, we had already built up a nice suite of helpers that we could re-use. By using tools and libraries already a part of the backend technology’s ecosystem we were able to spend less time building additional tooling. We had less code to maintain because of this and more time to work on solving our customer’s pain points.
    3. We were not strictly blackbox testing our application. We wanted to simulate a user walking thru specific scenarios in the app which required that we have corresponding data in the database. This helps ensure integration between the frontend and backend was wired up successfully and would give us a foundation for testing critical user flows.
    4. The problem domain and the data involved in this project were complicated enough. We decided that not having to worry about unknowns with the frontend end-to-end testing stack helped mitigate risk. This isn’t to say you should always go with the tool you know, but in this instance we felt it was the right choice.
    5. This particular project team came in with a lot of experience using testing tools like RSpec and Capybara. This included integrating with additional tools like Selenium WebDriver, Chrome and Chromedriver, data generation libraries like FactoryBot, and task runners like Rake. We had less experience doing end-to-end testing with Protractor even though it too uses Selenium WebDriver (a tool we’re very comfortable with).
    6. There are times to stretch individually and as a team, but there are also times to take advantage of what you already know.
    7. When it came to testing the whole product, end-to-end, owning both sides gave us not only more options to consider, but also more tools to choose from.
    8. This meant that we owned both sides of the product implementation. For unit testing on the frontend, we stayed with Angular’s suggestion of Jasmine. For unit testing on the backend, we went with rspec-rails. These worked well since unit tests don’t need to cross technology boundaries.
    9. We used testing tools that were in the same ecosystem as our backend technology stack for primarily three reasons: we owned both ends of the stack, team experience, and interacting with the database.
    10. We chose to define the frontend in one technology stack (Angular+TypeScript/JavaScript) and the backend in another (Ruby+Ruby on Rails), but both came together to fulfill a singular product vision.
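
      As a reference point, the FactoryBot-style data setup mentioned in item 2 looks something like this (the factory names are invented for illustration):

        FactoryBot.define do
          factory :customer do
            name { "Ada Lovelace" }

            trait :with_orders do
              after(:create) { |customer| create_list(:order, 3, customer: customer) }
            end
          end
        end

        # In an end-to-end scenario:
        customer = FactoryBot.create(:customer, :with_orders)
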
    1. A CMP, or consent management platform, is a platform or tool for obtaining a visitor's consent to use his/her digital identity for marketing efforts.
    2. “The data does not exist independently in the world, nor is it generated spontaneously. Data is constructed by people, from people,” (source 1).
    1. Once a variable is specified with the use method, access it with EnvSetting.my_var. Or you can still use the Hash syntax if you prefer it: EnvSetting["MY_VAR"]
    2. Configuration style is exactly the same for env_bang and env_setting, only that there's no "ENV!" method... just the normal class: EnvSetting that is called and configured.
    3. Inspired by ENV! and David Copeland's article on UNIX Environment, env_setting is a slight rewrite of env_bang to provide OOP style access to your ENV.
    4. Fail loudly and helpfully if any environment variables are missing.
    1. add_class Set do |value, options|
         Set.new self.Array(value, options || {})
       end
       use :NUMBER_SET, class: Set, of: Integer
    2. use :ENABLE_SOUNDTRACK, class: :boolean
    3. ENV! can convert your environment variables for you, keeping that tedium out of your application code. To specify a type, use the :class option:
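
      A hedged usage sketch assembled from the snippets above (the variable names are examples; see the env_bang README for the authoritative API):

        require "env_bang"

        ENV!.config do
          use :HOST                                # required plain string
          use :ENABLE_SOUNDTRACK, class: :boolean  # converted for you
        end

        ENV!['ENABLE_SOUNDTRACK']   # => true or false, instead of a raw string
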
    1. Most of the matchers provided by this gem are useful in a Rails context, and as such, can be used for different parts of a Rails app: database models backed by ActiveRecord non-database models, form objects, etc. backed by ActiveModel controllers routes (RSpec only) Rails-specific features like delegate
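
      For example, a couple of typical shoulda-matchers one-liners in a model spec (model and attribute names are made up):

        RSpec.describe Person, type: :model do
          it { is_expected.to validate_presence_of(:name) }
          it { is_expected.to belong_to(:team) }
        end
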
    1. Here's why Ting is switching to Verizon: The small MVNO — as of Q1 2019 it boasted 284,000 subscribers — is moving to Verizon — the largest wireless provider in the US — because it can offer Ting both better network coverage and better rates, the two most important factors for an MVNO.
    2. Verizon is drawing Ting's business because the telecom has consistently boasted the strongest network quality and consumer experience. For an MVNO, that will mean that it can offer users consistent service — the same that they'd be able to get by signing on with Verizon — while taking advantage of the more nuanced pricing models that these budget carriers use.
    1. If you reload a typical Rails-generated page, you’ll notice that the embedded CSRF token changes. Indeed, Rails appears to generate a new CSRF token on every request. But in fact what’s happening is the “real” CSRF token is simply being masked with a one-time pad to protect against SSL BREACH attacks.
    2. So even though the token appears to vary, any token generated from a user’s session (by calling form_authenticity_token) will be accepted by Rails as a valid CSRF token for that session.
    3. (In case you’re wondering, there’s nothing special about the name CSRF-TOKEN.)
    4. Note: Instead of storing a user’s ID in the session cookie you could store a JWT, but I’m not sure what that buys you. However, you may be using specific JWT claims that make this worthwhile.
    5. cookie-based authentication goes something like this:
    6. That means if an attacker can inject some JavaScript code that runs on the web app’s domain, they can steal all the data in localStorage. The same is true for any third-party JavaScript libraries used by the web app. Indeed, any sensitive data stored in localStorage can be compromised by JavaScript. In particular, if an attacker is able to snag an API token, then they can access the API masquerading as an authenticated user.
    7. But there’s a drawback that I didn’t like about this option: localStorage is vulnerable to Cross-site Scripting (XSS) attacks.
    8. So here’s the question: Where do you store the token in the browser so that the token survives browser reloads? The off-the-cuff answer is localStorage because it’s simple and effective:
    9. Token-Based Authentication
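
      A sketch of the cookie-plus-masked-token pattern the article describes (the cookie name follows the quotes above; the article's full code may differ):

        class ApplicationController < ActionController::Base
          protect_from_forgery with: :exception

          # Expose the masked CSRF token so the JS client can echo it back in a request header.
          after_action :set_csrf_cookie

          private

          def set_csrf_cookie
            cookies["CSRF-TOKEN"] = form_authenticity_token
          end
        end
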
    1. While rails does have nice CSRF protection, in my instance it limited me.
    2. However, the cookie containing the CSRF-TOKEN is only used by the client to set the X-XSRF-TOKEN header. So passing a compromised CSRF-TOKEN cookie to the Rails app won't have any negative effect.
    3. network requests are a big deal, and having to deal with this kind of thing is one of the prices of switching away from server-side rendering to a distributed system
    4. In short: storing the token in HttpOnly cookies mitigates XSS being used to get the token, but opens you up to CSRF, while the reverse is true for storing the token in localStorage.
    5. Therefore, since each method had both an attack vector they opened up to and shut down, I perceived either choice as being equal.
    6. I started off really wanting to use HttpOnly cookies
    7. On the security side I think code injection is still a danger. If someone does smuggle js into your js app they'll be able to read your CSRF cookie and make ajax requests using your logged-in http session, just like your own code does
    8. This stuff is all rather boring or frustrating when you just want to get your app finished
    9. Handling 401s well is important for the user's experience. They won't happen often (though more often than I expected), but really do break everything if you're not careful. Getting a good authentication abstraction library for Vue or Ember or whatever you are using should help with a lot of the boring parts. You'll probably need to define some extra strategies/rules for this cookie session approach, but if it's anything like in ember-simple-auth they're so simple it feels like cheating, because the Rails app is doing all of the hard work and you just need the js part to spot a 401 and handle logging in and retrying whatever it was doing before.
    10. I went for session cookies in a very lazy time-pressured "aha" moment some years ago. It's been working in production for 3-4 years on a well used site without issue. It wouldn't be appropriate for a back-end API like a payment gateway where there's no user with a browser to send to a log-in screen, but for normal web pages, and especially carving js apps out of / on top of an existing site, it's extending what we have instead of starting again.
    1. This status is sent with a WWW-Authenticate header that contains information on how to authorize correctly.
    2. The HTTP 401 Unauthorized client error status response code indicates that the request has not been applied because it lacks valid authentication credentials for the target resource.
    1. Similar to 403 Forbidden, but specifically for use when authentication is required and has failed or has not yet been provided. The response must include a WWW-Authenticate header field containing a challenge applicable to the requested resource.
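
      In Rails terms, returning this status might look like the following (the method name, scheme, and realm are arbitrary examples):

        # Inside a controller, e.g. as a before_action:
        def require_login
          response.set_header("WWW-Authenticate", 'Bearer realm="application"')
          head :unauthorized   # empty 401 response
        end
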
    1. if you just need a screenshot of a webpage, that’s built in:
    2. This poses a few problems for automation. In some environments, there may be no graphical display available, or it may be desirable to not have the browser appear at all when being controlled.
    3. Browsers are at their core a user interface to the web, and a graphical user interface in particular.
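
      The built-in screenshot mentioned in item 1 (the binary name varies by platform, e.g. google-chrome or chromium):

        chrome --headless --disable-gpu --screenshot https://example.com/
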
    1. ractors
    2. Mocking is a form of global state like others (including ENV sharing), which will cause difficulties here (more with threads, a bit less with forks).
    3. Process based parallelisation is simpler than thread based due to well, the GIL on MRI rubies and lack of 100% thread safety within the other gems. (I'm fairly certain for example that there are threaded bugs lurking within the mocks code).
    4. No I'm writing it from first principles using the bisect runner as a guide and some other external gems.
    1. Parallel testing in this implementation utilizes forking processes over threads. The reason we (tenderlove and eileencodes) chose forking processes over threads is forking will be faster with single databases, which most applications will use locally. Using threads is beneficial when tests are IO bound but the majority of tests are not IO bound.
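
      In Rails this process-forking parallelization is enabled per test case, for example:

        # test/test_helper.rb
        class ActiveSupport::TestCase
          parallelize(workers: :number_of_processors)   # forks worker processes by default
        end
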
    1. For me the diagrams make it easier to talk about what the tests do without getting bogged down by how they do it.
    2. I’m going to represent tests as sequence diagrams (handily created via plantuml) rather than actually coding them out. For me the diagrams make it easier to talk about what the tests do without getting bogged down by how they do it.
    3. I’m going to add the API Server as an actor to my first test sequence to give some granularity as to what I’m actually testing.
    4. For features like websocket interactions, a single full-stack smoke test is almost essential to confirm that things are going as planned, even if the individual parts of the interaction are also covered by unit tests.
    1. app_host is used whenever you call visit to generate the URL, server_host sets the IP the server should accept connections on (0.0.0.0 means all network interfaces), and finally server_port sets the server port (auto-generated by default). You are correct in that both app and server host should be set. Could you try server_host = “0.0.0.0” and app_host = “http://rails:#{Capybara.server_port}”.

      app_host ~ server_host
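
      Put together as configuration (the host name "rails" comes from that docker-compose setup and is not a general default):

        Capybara.server_host = "0.0.0.0"        # accept connections on all interfaces
        Capybara.server_port = 3001             # optional; auto-generated if not set
        Capybara.app_host    = "http://rails:#{Capybara.server_port}"   # URL visit() will use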

    1. Local Testing establishes a secure connection between your machine and the BrowserStack cloud. Once you set up Local Testing, all URLs work out of the box, including HTTPS URLs and those behind a proxy or firewall.

    1. How to test at the correct level?
    2. As with many things in life, deciding what to test at each level of testing is a trade-off:
    3. Unit tests are usually cheap, and you should consider them like the basement of your house
    4. A system test is often better than an integration test that is stubbing a lot of internals.
    5. Only test the happy path, but make sure to add a test case for any regression that couldn’t have been caught at lower levels with better tests (for example, if a regression is found, regression tests should be added at the lowest level possible).
    6. These tests should only be used when:

      • the functionality/component being tested is small
      • the internal state of the objects/database needs to be tested
      • it cannot be tested at a lower level

    7. White-box tests at the system level (formerly known as System / Feature tests)
    8. GitLab is transitioning from controller specs to request specs.
    9. These kinds of tests ensure that individual parts of the application work well together, without the overhead of the actual app environment (i.e. the browser). These tests should assert at the request/response level: status code, headers, body. They’re useful to test permissions, redirections, what view is rendered, etc.
    10. These tests should be isolated as much as possible. For example, model methods that don’t do anything with the database shouldn’t need a DB record. Classes that don’t need database records should use stubs/doubles as much as possible.
    11. Black-box tests at the system level (aka end-to-end or QA tests)
    12. White-box tests at the system level (aka system or feature tests)
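
      A bare-bones request spec of the kind described in items 8 and 9 (the path and expectations are illustrative):

        RSpec.describe "Projects", type: :request do
          it "redirects unauthenticated users" do
            get "/projects"

            expect(response).to have_http_status(:redirect)
            expect(response.headers["Location"]).to include("/users/sign_in")
          end
        end
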
    1. Levels
    2. White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a method of software testing that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing)
    1. A common cause of a large number of created factories is factory cascades, which result when factories create and recreate associations.
    2. We’ve enabled deprecation warnings by default when running specs. Making these warnings more visible to developers helps when upgrading to newer Ruby versions.
    3. Test speed: GitLab has a massive test suite that, without parallelization, can take hours to run. It’s important that we make an effort to write tests that are accurate and effective as well as fast.
    4. :js is particularly important to avoid. This must only be used if the feature test requires JavaScript reactivity in the browser. Using a headless browser is much slower than parsing the HTML response from the app.
    5. Use Factory Doctor to find cases where database persistence is not needed in a given test.
    6. :clean_gitlab_redis_cache which provides a clean Redis cache to the examples.
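
      Factory Doctor, mentioned in item 5, is part of TestProf and is typically enabled with an environment variable for a spec run, e.g.:

        FDOC=1 bundle exec rspec spec/models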