389 Matching Annotations
  1. Mar 2021
    1. What is the point of avoiding the semicolon in concat_javascript_sources

      Given how detailed and insightful his analysis was -- and it never elaborated on, or even touched on, any confusion about the reason for adding the semicolon -- it sure appeared he knew what it was for. Otherwise, the whole issue would/should have been about him not understanding that, not about how to keep adding the semicolon but do so in a faster way!

      Then again, this comment from 3 months afterwards, indicates he may not think they are even necessary: https://github.com/rails/sprockets/issues/388#issuecomment-252417741

      Anyway, just in case he really didn't know, the comment shortly below partly answers the question:

      Since the common problem with concatenating JavaScript files is the lack of semicolons, automatically adding one (that, like Sam said, will then be removed by the minifier if it's unnecessary) seems on the surface to be a perfectly fine speed optimization.

      This also alludes to the problem: https://github.com/rails/sprockets/issues/388#issuecomment-257312994

      But the explicit answer/explanation to this question still remains unspoken: if you don't add them between concatenated files -- as I discovered just today -- you will run into this error:

         (intermediate value)(...) is not a function
             at something.source.js:1
      

      , apparently because when it concatenated those 2 files together, it tried to evaluate it as:

         ({
           // other.js
         })()
         (function() {
           // something.js
         })();
      

      It makes sense that a ; is needed.
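
      Just to make the failure mode concrete, here's a rough Ruby sketch (my own illustration, not Sprockets' actual code) of semicolon-aware concatenation: insert the ; only when the previous file doesn't already end with one, which is exactly the kind of check the minifier-will-remove-it-anyway argument is about.

        # Hypothetical sketch of semicolon-aware concatenation, for illustration only.
        def concat_javascript_sources(buffer, source)
          return buffer << source if buffer.empty?
          buffer << ";" unless buffer.rstrip.end_with?(";")
          buffer << "\n" << source
        end

        files  = ["(function() {\n  // other.js\n})()", "(function() {\n  // something.js\n})();"]
        bundle = files.reduce(+"") { |buf, src| concat_javascript_sources(buf, src) }
        puts bundle   # the inserted ";" keeps the two IIFEs from being parsed as one call chain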

    1. I'm kinda stuck at the moment, going around in circles. Everything is really heavily coupled. I would like to get to the point where no load is called from within processors, but i'm not sure if that's possible. Currently the API and the caching strategies are fighting me at every step of the way. I have a branch where i'm hacking through some refactoring, no light at the end of the tunnel yet though :(
    1. Hyperstack gives you full access to the entire universe of JavaScript libraries and components directly within your Ruby code. Everything you can do in JavaScript is simple to do in Ruby; this includes passing parameters between Ruby and JavaScript and even passing Ruby methods as JavaScript callbacks. There is no need to learn JavaScript, all you need to understand is how to bridge between JS and Ruby.
  2. Feb 2021
    1. To understand this helper, you should understand that every step invocation calls Output() for you behind the scenes. The following DSL use is identical to the one [above].

       class Execute < Trailblazer::Activity::Railway
         step :find_provider,
           Output(Trailblazer::Activity::Left, :failure)  => Track(:failure),
           Output(Trailblazer::Activity::Right, :success) => Track(:success)
       end
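
      In other words, the shorthand the quote says is identical would just be the bare step (a sketch based only on the quoted example):

        # The Railway DSL wires the Left => :failure and Right => :success
        # outputs for you behind the scenes.
        class Execute < Trailblazer::Activity::Railway
          step :find_provider
        end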
    2. For branching out a separate path in an activity, use the Path() macro. It’s a convenient, simple way to declare alternative routes

      Seems like this would be a very common need: once you switch to a custom failure track, you want it to stay on that track until the end!!!

      The problem is that in a Railway, everything automatically has 2 outputs. But we really only need one (which is exactly what Path gives us). And you end up fighting the defaults when there are the automatic 2 outputs, because you have to remember to explicitly/verbosely redirect all of those outputs or they may end up going somewhere you don't want them to go.

      The default behavior of everything going to the next defined step is not helpful for doing that, and in fact is quite frustrating because you don't want unrelated steps to accidentally end up on one of the tasks in your custom failure track.

      And you can't use fail for custom-track steps because that breaks magnetic_to for some reason.

      I was finding myself very in need of something like this, and was about to write my own DSL, but then I discovered this. I still think it needs a better DSL than this, but at least they provided a way to do this. Much needed.

      For this example, I might write something like this:

      step :decide_type, Output(Activity::Left, :credit_card) => Track(:with_credit_card)
      
      # Create the track, which would automatically create an implicit End with the same id.
      Track(:with_credit_card) do
          step :authorize
          step :charge
      end
      

      I guess that's not much different than theirs. The main improvement is that it avoids the ugly need to specify end_id/end_task.

      But that wouldn't actually be enough either in this example, because you would actually want a failure track there and a path doesn't have one ... so it sounds like Subprocess and a new self-contained ProcessCreditCard Railway would be the best solution for this particular example (see the sketch below). Subprocess is the ultimate in flexibility and gives us all the flexibility we need.
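
      Something like this, roughly (ProcessCreditCard and the outer Checkout name are hypothetical; the steps are taken from the docs' example):

        class ProcessCreditCard < Trailblazer::Activity::Railway
          step :authorize
          step :charge
          # Each step gets the usual success/failure outputs, so this nested
          # activity brings its own failure track and End(:failure) with it.
        end

        class Checkout < Trailblazer::Activity::Railway
          step :decide_type
          # Nest the whole Railway; its ends get wired into this activity's tracks.
          step Subprocess(ProcessCreditCard)
        end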


      But what if you had a path that you needed to direct to from 2 different tasks' outputs?

      Example: I came up with this, but it takes a lot of effort to keep my custom path/track hidden/"isolated" and prevent other tasks from automatically/implicitly going into those steps:

      class Example::ValidationErrorTrack < Trailblazer::Activity::Railway
        step :validate_model, Output(:failure) => Track(:validation_error)
        step :save,           Output(:failure) => Track(:validation_error)
      
        # Can't use fail here or the magnetic_to won't work and Track(:validation_error) won't work
        step :log_validation_error, magnetic_to: :validation_error,
          Output(:success) => End(:validation_error),
          Output(:failure) => End(:validation_error)
      end
      
      o = Example::ValidationErrorTrack; puts Trailblazer::Developer.render o
      Reloading...
      
      #<Start/:default>
       {Trailblazer::Activity::Right} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=validate_model>
      #<Trailblazer::Activity::TaskBuilder::Task user_proc=validate_model>
       {Trailblazer::Activity::Left} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=log_validation_error>
       {Trailblazer::Activity::Right} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=save>
      #<Trailblazer::Activity::TaskBuilder::Task user_proc=save>
       {Trailblazer::Activity::Left} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=log_validation_error>
       {Trailblazer::Activity::Right} => #<End/:success>
      #<Trailblazer::Activity::TaskBuilder::Task user_proc=log_validation_error>
       {Trailblazer::Activity::Left} => #<End/:validation_error>
       {Trailblazer::Activity::Right} => #<End/:validation_error>
      #<End/:success>
      
      #<End/:validation_error>
      
      #<End/:failure>
      

      Now attempt to do it with Path... Does the Path() have an ID we can reference? Or maybe we just keep a reference to the object and use it directly in 2 different places?

      class Example::ValidationErrorTrack::VPathHelper1 < Trailblazer::Activity::Railway
        validation_error_path = Path(end_id: "End.validation_error", end_task: End(:validation_error)) do
          step :log_validation_error
        end
        step :validate_model, Output(:failure) => validation_error_path
        step :save,           Output(:failure) => validation_error_path
      end
      
      o=Example::ValidationErrorTrack::VPathHelper1; puts Trailblazer::Developer.render o
      Reloading...
      
      #<Start/:default>
       {Trailblazer::Activity::Right} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=validate_model>
      #<Trailblazer::Activity::TaskBuilder::Task user_proc=validate_model>
       {Trailblazer::Activity::Left} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=log_validation_error>
       {Trailblazer::Activity::Right} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=save>
      #<Trailblazer::Activity::TaskBuilder::Task user_proc=log_validation_error>
       {Trailblazer::Activity::Right} => #<End/:validation_error>
      #<Trailblazer::Activity::TaskBuilder::Task user_proc=save>
       {Trailblazer::Activity::Left} => #<Trailblazer::Activity::TaskBuilder::Task user_proc=log_validation_error>
       {Trailblazer::Activity::Right} => #<End/:success>
      #<End/:success>
      
      #<End/:validation_error>
      
      #<End/:failure>
      

      It's just too bad that:

      • there's not a Railway helper in case you want multiple outputs, though we could probably create one pretty easily using Path as our template
      • we can't "inline" a separate Railway activity (Subprocess "nests" it rather than "inlines")
    1. The sole purpose to add Dev::Trace::Inspector module is to make custom inspection possible and efficient while tracing. For example, ActiveRecord::Relation#inspect makes additional queries to fetch top 10 records and generate the output every time. To avoid this, Inspector will not call inspect method when it finds such objects (deeply nested anywhere). Instead, it’ll call AR::Relation#to_sql to get plain SQL query which doesn’t make additional queries and is better to understand in tracing output.
    1. While you could program this little piece of logic and flow yourself using a bunch of Ruby methods along with a considerable amount of ifs and elses, and maybe elsif, if you’re feeling fancy, a Trailblazer activity provides you a simple API for creating such flow without having to write and maintain any control code. It is an abstraction.
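
      To make that concrete, here's a minimal sketch of such an activity (hypothetical step names, assuming the Trailblazer 2.1 Activity API). The branching lives entirely in the step/fail declarations rather than in ifs and elses:

        class CreateMemo < Trailblazer::Activity::Railway
          step :validate    # a falsey return value moves the flow to the failure track
          step :save
          fail :log_error   # only reached after a step above has failed

          def validate(ctx, params:, **)
            !params[:text].to_s.empty?
          end

          def save(ctx, params:, **)
            ctx[:memo] = { text: params[:text] }
          end

          def log_error(ctx, **)
            ctx[:error] = "memo is invalid"
          end
        end

        # Invalid params route to :log_error without a single if/else in our code.
        signal, (ctx, _) = CreateMemo.([{ params: { text: "" } }, {}])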
    1. While Trailblazer offers you abstraction layers for all aspects of Ruby On Rails, it does not missionize you. Wherever you want, you may fall back to the "Rails Way" with fat models, monolithic controllers, global helpers, etc. This is not a bad thing, but allows you to step-wise introduce Trailblazer's encapsulation in your app without having to rewrite it.
    1. @adisos if reform-rails will not match, I suggest to use: https://github.com/orgsync/active_interaction I've switched to it after reform-rails as it was not fully detached from the activerecord, code is a bit hacky and complex to modify, and in overall reform not so flexible as active_interaction. It has multiple params as well: https://github.com/orgsync/active_interaction/blob/master/spec/active_interaction/modules/input_processor_spec.rb#L41

      I'm not sure what he meant by:

      fully detached from the activerecord

      I didn't think it was tied to ActiveRecord.

      But I definitely agree with:

      code is a bit hacky and complex to modify

    1. Patreon is a good example of why generic community platforms are not the future. You could certainly say that Patreon is already at the center of content, social, and commerce. But of course, Patreon has for years been getting unbundled into A, B, and C type businesses. The reason for Patreon’s erosion is that its product design is totally unsuitable for all three use cases: it’s a bad brand platform, a bad content platform, and a bad social platform. Patreon is a classic case of what Venkatesh Rao calls “Too Big to Nail,” and businesses that try to follow in its stead will face similar challenges.

      An interesting analysis of why Patreon is doomed to failure.

  3. Jan 2021
    1. Group Rules from the Admins

       1. NO POSTING LINKS INSIDE OF POST - FOR ANY REASON
          We've seen way too many groups become a glorified classified ad & members don't like that. We don't want the quality of our group negatively impacted because of endless links everywhere. NO LINKS
       2. NO POST FROM FAN PAGES / ARTICLES / VIDEO LINKS
          Our mission is to cultivate the highest quality content inside the group. If we allowed videos, fan page shares, & outside websites, our group would turn into spam fest. Original written content only
       3. NO SELF PROMOTION, RECRUITING, OR DM SPAMMING
          Members love our group because it's SAFE. We are very strict on banning members who blatantly self promote their product or services in the group OR secretly private message members to recruit them.
       4. NO POSTING OR UPLOADING VIDEOS OF ANY KIND
          To protect the quality of our group & prevent members from being solicited products & services - we don't allow any videos because we can't monitor what's being said word for word. Written post only.

      Wow, that's strict.

    1. It appears that Canonical is continuing its vice grip of unilateral, maybe dictatorial, control over the development of Snap to the benefit of Ubuntu, but to the detriment of groups like Linux Mint and all other non-Ubuntu based Linux distributions - like CentOS/Redhat, Suse/openSuSe, Solus, Arch/Manjaro, PCLinuxOS, etc. - that are pushing Flatpak as a truly cross-distro application solution that works equally well and without problems for all.
    2. If we're not careful, it could become the new 'systemd' problem It probably already is. I don't want to sound too Stallman, but this is the inevitable "company" influence you'll always have. Companies do have their objectives which they will pursue determinedly, since they are not philanthropic (no judgment, just observation). Systemd and Red Hat. Nvidia and their drivers. Google and Android. Apple and iOS. Manufacturers with MS only support. And Canonical also has a history there: the Amazon links, Unity, Mir, and now snap.
  4. Dec 2020
  5. Nov 2020
    1. If I understand the problem correctly, just changing the imports to point to svelte/internal isn't enough because they could still point to different packages depending on how your components are bundled. It solved your specific issue, but if you had two completely unrelated Svelte components compiled to vanilla javascript bundled with Svelte, you'd still hit issues with mismatching current_component when using slots or callbacks.
    2. It sounds like another case of multiple svelte/internal modules? I think we need to look into reworking how svelte/internal keeps track of the current component since it breaks when mixing components not bundled with the app. It sounds like we need to find a way to pass Svelte's internal runtime state when instantiating components, since slots and callbacks end up mixing different svelte/internal together.
    1. I'm still calling this v1.00 as this is what will be included in the first print run.

      There seems to be an artificial pressure, and a false assumption, that the version that gets printed and included in the box must be the "magic number" 1.00.

      But I think there is absolutely nothing bad or shameful about having the version number printed in the rule book be 1.47 or even 2.0. (Or, of course, you could just not print it at all.) It's just being transparent/honest about how many versions/revisions you've made.

    1. A Chrome Extension designed with one intention: Increase the speed and privacy of your web browsing by skipping tracking redirects and removing the tracking parameters from URLs to keep them short and cleaner for sharing, bookmarking, etc.
    1. It is impossible to rebuild the base from the Dockerfile as the 3rd party dependencies have changed significantly since 8 months ago when the base was last built. The tags for my base image have been overwritten and I can only restore them from a descendant image. With Docker 1.8 I simply pulled the descendant image, tagged the base layer and I was done. With Docker 1.10+ I'd need to save, then manually construct the base image descriptor and reload it. Doable but sad that it's far more complex.
    2. Sorry, I don't totally know how the internals work, but does there currently exist a workaround? By that I mean, can I pull an image, then run it at a layer other than the top layer? (I basically use this for testing purposes, its certainly possible to build the image myself then do it, but its slightly less convenient)
  6. Oct 2020
    1. Form validation can get complex (synchronous validations, asynchronous validations, record validations, field validations, internationalization, schemas definitions...). To cope with these challenges we will leverage this into Fonk and Fonk Final Form adaptor for a React Final Form seamless integration.
    1. One of the primary tasks of engineers is to minimize complexity. JSX changes such a fundamental part (syntax and semantics of the language) that the complexity bubbles up to everything it touches. Pretty much every pipeline tool I've had to work with has become far more complex than necessary because of JSX. It affects AST parsers, it affects linters, it affects code coverage, it affects build systems. That's tons and tons of additional code that I now need to wade through and mentally parse and ignore whenever I need to debug or want to contribute to a library that adds JSX support.
    2. However, as a developer that uses JSX, I find it too useful/concise to give up in the name of syntax purity, especially when I know that what it translates to is still very isolated and computationally pure.

      What does "isolated" mean in this case? Is it a different sense than how isolated is usually used in programming context?

      What does "computationally pure" mean? Sounds like a bit of a vague weasel word, but this is an honest question of curiosity and wanting to understand/learn.

    1. This library is built in TypeScript, and for TypeScript users it offers an additional benefit: one no longer needs to declare action types. The example above, if we were to write it in TypeScript with useReducer, would require the declaration of an Action type: type Action = | { type: 'reset' } | { type: 'increment' } | { type: 'decrement' }; With useMethods the "actions" are implicitly derived from your methods, so you don't need to maintain this extra type artifact.
    1. A new option --proximate=N groups together lines of output that are within N lines of each other in the file. This is useful when looking for matches that are related to each other.

      I'd been wishing for a feature like this with grep/etc. tools.

      I've had to use some really ugly workarounds (chain grep -C5 | grep -B5) which end up showing extra irrelevant context lines.

      So I'm glad there's a clean way to do this now!

  7. Sep 2020
    1. It would be tiresome - and bloated - to include a class pass-through for every component or assigning custom properties (from the RFC linked) for all potential properties on every component, just in case it's gonna be used in layouts that requires it. Wrapping them in a wrapper div is certainly an option, but potentially creates 100s or 1000s (long lists, several lists etc.) of new elements in the DOM slowing down low-end devices.
    1. It's fashionable to dislike CSS. There are lots of reasons why that's the case, but it boils down to this: CSS is unpredictable. If you've never had the experience of tweaking a style rule and accidentally breaking some layout that you thought was completely unrelated — usually when you're trying to ship — then you're either new at this or you're a much better programmer than the rest of us.
    2. It gets worse when you're working on a team. No-one dares touch styles authored by someone else, because it's often unclear what they're doing, what markup they apply to, and what disasters will unfold if you remove them. The consequence of all this is the append-only stylesheet. There's no way of knowing which code can safely be removed, so it's common to undo some existing style with another, more specific style — even on relatively small projects.
  8. Aug 2020
    1. Safari sends the following order:

       • application/xml (q is 1)
       • application/xhtml+xml (q is 1)
       • image/png (q is 1)
       • text/html (q is 0.9)
       • text/plain (q is 0.8)
       • \*/\* (q is 0.5)

       So you visit www.myappp.com in Safari and if the app supports .xml then Rails should render the .xml file. This is not what the user wants to see. The user wants to see the .html page, not the .xml page.
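
      If you hit this, one workaround (my own sketch, not from the quoted page) is to default to HTML whenever the URL doesn't explicitly request a format, so Safari's Accept-header ordering doesn't decide the response format:

        class ApplicationController < ActionController::Base
          before_action :prefer_html

          private

          # Ignore the browser's Accept-header ordering and render HTML unless
          # the URL names a format explicitly (e.g. an .xml extension).
          def prefer_html
            request.format = :html unless params[:format]
          end
        end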
  9. Jul 2020
    1. In the Set class we already called this - and difference, which it is ok but not really accurate because of the previous explanation, but probably not worthwhile to change it.

      Is this saying that the name difference is inaccurate?

      Why is it inaccurate? You even called it the "theoretic difference" above.

      Is that because "relative complement" would be better? Or because the full phrase "theoretic difference" [https://en.wiktionary.org/wiki/set-theoretic_difference] is required in order for it to be accurate rather than just "difference"?
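
      For reference, what Ruby's Set#difference (aliased as -) computes is the relative complement:

        require "set"

        a = Set[1, 2, 3]
        b = Set[2, 4]
        a.difference(b)  # => #<Set: {1, 3}>  -- elements of a that are not in b
        a - b            # => #<Set: {1, 3}>  -- same operation via the alias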

  10. Jun 2020
    1. Some large tech behemoths could hypothetically shoulder the enormous financial burden of handling hundreds of new lawsuits if they suddenly became responsible for the random things their users say, but it would not be possible for a small nonprofit like Signal to continue to operate within the United States. Tech companies and organizations may be forced to relocate, and new startups may choose to begin in other countries instead.
  11. May 2020
  12. Apr 2020
    1. For instance, one recent blog entry from the Irish Data Protection Commission discussing events at schools borders on the absurd: “Take the scenario whereby a school wants to take and publish photos at a sports day – schools could inform parents in advance that photographs are going to be taken at this event and could provide different-coloured stickers for the children to wear to signify whether or not they can be photographed,” the Commission suggested. The post goes on to discuss the possibility of schools banning photographs at a high school musical, but suggests that might be unwieldy.
  13. Mar 2020
  14. Feb 2020
  15. Jan 2020
  16. Dec 2019
  17. Oct 2019
  18. Jul 2019
    1. but Salt Lake City’s cost of living is 16 percent lower than in Denver, 37 percent lower than Seattle’s and 48 percent under San Francisco’s, according to PayScale. The state — often led personally by Governor Gary Herbert — pitches its advantages well to firms considering relocation, says Joe Vranich, whose consulting firm helps small businesses looking to move. “They will roll out the carpet for you and treat you like a king.” The approach is working. Utah’s “Silicon Slopes”

      Utah's low cost of living attracts tech companies to operate in Utah. This will draw more outsiders to relocate to Utah for jobs, which can further aggravate the housing shortage and push prices up.

  19. Mar 2019
  20. Jan 2019
  21. Dec 2017
  22. Apr 2016