10,000 Matching Annotations
  1. May 2021
    1. And asking them whether they think they know what they are doing will not help, because many people will overestimate their knowledge. That makes the support even more complicated: the tech guy may at first believe them, and only find out later that they said wrong things because they do not actually know what they are pretending to know.
    2. Of course, if you're too successful with migrating all your clients and friends to your friendly small provider it grows into a big provider and needs to hire cheap first level support to deal with all the customers ;-)
    3. Heisenberg for customer support quality ;-)
    4. The best advice I can give you is: seek a smaller provider; they are often less formal and more approachable. When you find one where you get good support, ask your friends and family to move to it. You are doing something for them, so it can happen on your terms.
      • supporting those you like by sending business to them
      • less formal and more approachable
    5. If the person answering the call misses something, nothing prevents them from asking you to repeat something. I think the key point that should be added to this answer is to not sound or act annoyed if the support tech asks for something you've already rattled off. To accept that you gave them a whole bunch of information at once, and that they might legitimately have missed or forgotten one bit of it. Or, especially if you know the order in which they ask these questions, to take it slower; don't say it all in five seconds, take half a minute. Give them time to click!
    6. Tech support works with scripts. Just get to know these scripts by heart and answer all the questions from the script that you can in one long sentence, before they ask. Like in: "Hi, I have a problem with this and that... I have restarted the router, I have checked the cables, the red light is on, the green light is off, no other lights are blinking... etc. etc. etc." That way the person at the other end of the line can just go click-click-click and you'll be 10 steps further into their script in 5 seconds.
    7. So what can you do to demonstrate your technical knowledge? Well, you are doing the right thing by using the correct technical terms. That will give an indication to the person handling the ticket. Explicitly explaining your role as the administrator or developer should also help.
    8. From experience I can say that professionals will be more forgiving if you go through things at a basic level than amateurs who have no idea what you're talking about, so people will probably err on the side of caution and not assume the customer has a high level of expertise.
    9. "Put as much information about the problem itself into the email". This is where you show your ability to know what is important and relevant and establish your technical level. Don't be brief, don't imply, and break it down Barney style so the person receiving it knows to escalate your ticket.
    10. Look for certain questions that have been asked every time, and put those answers into the initial email you send about the new problem. Try to add things that make the potential problem sound local. The more information you give them that you know they will be asking for in their script, the faster you will get someone who can help you. And they will thank you for it.
    11. If you email helpdesk (us specifically), if you use appropriate technical detail you will probably get someone who knows what they're doing, and will greatly appreciate it. If you call, you will get me only. I will ask you lots of questions, with awkward pauses in between while I write my notes, and at the end of it I probably won't be able to help you. Technical detail is still welcome, but there are some questions I will ask you anyway even if they sound useless to you
    12. Put as much information about the problem itself into the email, within reason. No need to write a paragraph, that takes time away from you and from us. Bullet points are perfect (preferred).
    13. And if your answers tell me it's something too advanced for me, only then would I escalate it.
    14. e-mailing
    15. Calling has a number of advantages over e-mailing: you're able to empathize with the person, and they're able to hear over the phone how comfortable you are with the topic.
    16. because you display knowledge of the field naturally and you also show them you know how system administration in general works
    17. So, +1 for play ball. Level 1 is supposed to filter out all simple issues (and once upon a time, you'll have forgotten something, happens to all of us), and they are not supposed to be creative. They get a script that has been refined over and over. Learn the scripts, prepare the answers, and you'll get to Level 2 more quickly than with any other method.
    18. In one of my internships, I got to befriend a level 2 tech support person, so I learned a couple of things about how it worked (in that company). Level 1 was out-sourced, and they had a script to go from, regularly updated. From statistics, this took care of 90% of issues. Level 2 was a double handful of tech people; they had basic troubleshooting tools and knowledge and would solve 90% of the remaining issues. Level 3 was the engineering department (where I was), and as a result of level 1 and 2 efficiency, less than 1% of issues ever got escalated. The process worked!
    19. OP is referring to letting people know they can speak like proper adults when talking about technical terms, without going through the usual nanny-like discourse that tech support has to provide to non-techies. For instance, it happened to me with Amazon support. The speaker told me exactly where to touch in order to clear the cache of the Android Amazon App Store. Given that I work as an app developer the guy could have just said "please clear the cache". No need to go through "tap here, then here, now you should see this, tap that"...
    20. the problem is that I write a lot of these emails and they are a waste of my and everyone else's time
    21. Please don't write answers in comments; we have a policy against this. If you have an answer to the question, write it up as an answer. Thanks.
    22. If possible I'd like to avoid writing my academic and professional titles in my email signature as this might be seen as "showing off".
    23. I have tried different tactics of showing the tech support that I am proficient in their field of work (I work as a web developer). Specifically: using accurate terms and technologies to show my knowledge of them and telling the support that I am the "administrator" of the website in question.
    24. How to let tech support subtly know that I am proficient without showing off?
    25. Unfortunately the tech support people you are speaking to are probably as frustrated as you are at having to go through the basic stuff with you.
    26. Large companies especially deal with the massive volume of tech support calls they receive by employing some staff on lower pay as a "buffer," dealing with simple or "known" issues so that they don't need to employ as many higher paid "second line" support staff.
    27. Very often the first people you get through to on tech support lines are reading from a script.
    28. They have to ask you the dumb questions, either because their employer demands they do, or sometimes because their computer system doesn't let them get to the next part of the script unless they play ball.
    29. Which is not to say that people employed on first line support are not knowledgeable; in my experience lots of over-qualified people have to take less advanced jobs in IT just to get into the industry.

      .

    30. Another will employ smart people who apologise to you profusely for having to go through all the pointless steps, but that's just what they have to do!
    31. However, I appreciate that price and functionality often dictate who we deal with.
    32. So my best advice if you need to stick with them is just to expect the treatment you have become used to and 'play along'. Actually, I find some things often run smoother when you act dumber than you are.
    1. Historically, the uncertainty principle has been confused[5][6] with a related effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the system, that is, without changing something in a system.
    2. the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology.
    3. Such variable pairs are known as complementary variables or canonically conjugate variables
    1. This looks cool, but right now, let's say I have an external API which depends on the user's cookies; the cookies only get sent through internal SK endpoints during SSR, even if it's the same domain. Couldn't we pass the 'server' request to the serverFetch hook? I would currently have to patch the svelte-kit package to pass request headers to the external API, or create an SK endpoint which proxies the request.
    1. ah, you are talking about an external API endpoint server? Then you could use the svelte-kit endpoints as a proxy handler
    2. I want to avoid nginx overhead (especially if there are tons of aliases and rewrites) for in-server communication. Basically, you can have a SvelteKit server, a backend server, and an nginx server; in that case, communicating inside your internal network becomes very expensive, like: browser -> nginx server (10.0.0.1) -> sveltekit server (10.0.0.3) -> nginx server (10.0.0.1) -> backend server (10.0.0.2), instead of just: browser -> nginx server (10.0.0.1) -> sveltekit server (10.0.0.3) -> backend server (10.0.0.2)
    3. (name subject to bikeshedding)
    4. We certainly wouldn't want to add non-standard appendages to the fetch API, partly because it's confusing but mostly because it would be repetitious; you would need to include that logic in every load function that used the API in question.
    1. also it can be helpful for geo deploying, when your browser should get access to the closest API via GeoDNS but the server part can talk to a neighboring server or the same server.

      "geo deploy"

    1. In Skirmish mode, where the original game did great, this version lacks a bit of content. For instance, in the original game you could give your CPU (AI) players a name, so you could relive the Avernii vs the XII Legion, or anything for that matter. In the remastered version you can't name any CPU players, which in my opinion is a loss. A lot of the skirmish fun was in the immersion of the factions.

      .

    1. Due to the cost and complexity of VAT/GST, Frozen Soul Games won't be able to register within each country. VAT/GST will be due upon pick-up after we ship to you. Thank you for your understanding.

      .

    1. Unfortunately one can only buy the standard or the soundtrack version, with no chance to upgrade later by buying the DLC separately. In this case I can only say: if you get the game on a good sale (75 percent off or more) and you collect soundtracks, or if you want to support the developer, you might want the soundtrack edition.
    1. Because constants in Ruby aren't meant to be changed, Ruby discourages you from assigning to them in parts of code which might get executed more than once, such as inside methods.
    2. It doesn't say that the constant is dynamic. It says that the assignment is dynamic.
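
      What Ruby rejects, concretely (a minimal sketch; the parser raises before the method ever runs):

          def update_timeout
            TIMEOUT = 30   # SyntaxError: dynamic constant assignment
          end

          TIMEOUT = 30     # fine at the top level; this assignment runs once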
    1. git diff --relative will print paths from the dir you are in.

      first sighting: git diff --relative
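
      For example (paths invented):

          $ cd repo/subdir
          $ git diff                # paths printed relative to the repository root
          $ git diff --relative     # paths relative to repo/subdir, output limited to it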

    2. I've been using (and recently, contributing slightly to) Git for well over a decade. I don't have any single thing I'd specifically recommend at this point, but if you're looking for a decent book on Git, the Pro Git book has a bunch of pluses: it's online and kept up to date, it's free, and it's correct (unlike far too many online tutorials). There is also Think Like (a) Git, which covers most of what's missing from Pro Git.
    3. $ ed - var.c << end
       > 0a
       > xxx
       > .
       > wq
       > end
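
      Decoded: the - flag suppresses ed's byte-count output; 0a appends after line 0, i.e. inserts "xxx" at the top of var.c; the lone . ends input mode; wq writes the file and quits.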
    1. If you would like to make a code change, go ahead. Fork the repository, open a pull request. Do this early, and talk about the change you want to make. Maybe we can work together on it.
    2. Local development and testing have huge advantages, but sometimes one needs to test web applications against their real-world domain name. Editing /etc/hosts, however, is a pain and error-prone. Node Foreman can start up an HTTP forward proxy which your browser can route requests through. The forward proxy will intercept requests based on domain name and route them to the local application.
    3. For users with Google Chrome, this can be paired with FelisCatus SwitchyOmega for great results.
    1. this is incomplete. Yes, you get a load of commits, but they no longer refer to the right paths: git log dir-B/somefile won't show anything except the one merge. Greg Hewgill's answer references this important issue.
    2. kdiff3 can be used solely with the keyboard, so resolving 5 conflicted files takes just a few minutes of reading the code.
    3. If you're wondering: to insert a <tab> in OS X, you need to press Ctrl-V <tab>

      .

    4. I think so...I actually can't remember. I've used this script quite a bit.

      where did it come from? don't remember

      after a while, something that came from another starts to feel like your own

      you make it your own

    5. Thanks. Worked for me. I needed to move the merged directory into a sub-folder so after following the above steps I simply used git mv source-dir/ dest/new-source-dir
    6. In case you want to put project-a into a subdirectory, you can use git-filter-repo (filter-branch is discouraged)
    7. Note: This rewrites history;
    8. Shorter: git fetch /path/to/project-a master; git merge --allow-unrelated-histories FETCH_HEAD.
    1. Before we dive into the details of the actual migration, let’s discuss the theory behind it.
    2. Seamless transitions; changes made to the old repositories after they were migrated must be imported to the new monorepository.
    3. Preserving history; we often find ourselves using the git blame tool to discover why a certain change was made.
    4. Our requirements:
    5. A transition period rather than Stop-The-World migration; we want to merge in a few repositories per day, with minimal disruption to work-flow.
    6. The implicit dependencies between different versions of different services were not expressed anywhere, which led to various problems in building, continuous integration, and, notably, repeatable builds.
    7. Preserving commit hashes; we use commit hashes in binary names and our issue tracker; ideally, these references remain intact.
    1. You may want to try putting the one-liner (everything in the single quotes) in an actual script, with a bash shebang line. I think filter-branch is probably trying to run this in sh, not bash.
    2. If you want the project's history to look as though all files have always been in the directory foo/bar, then you need to do a little surgery. Use git filter-branch with the "tree filter" to rewrite the commits so that anywhere foo/bar doesn't exist, it is created and all files are moved to it:
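
      The command the answer goes on to give is presumably along these lines (my sketch; foo/bar stands for the example target directory):

          git filter-branch --prune-empty --tree-filter '
            if [ ! -e foo/bar ]; then
              mkdir -p foo/bar
              git ls-tree --name-only $GIT_COMMIT | xargs -I files mv files foo/bar
            fi'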
    1. git push -b

      What is this -b option?

      It's not documented, at least in my version:

             git push [--all | --mirror | --tags] [--follow-tags] [--atomic] [-n | --dry-run] [--receive-pack=<git-receive-pack>]
                        [--repo=<repository>] [-f | --force] [-d | --delete] [--prune] [-v | --verbose]
                        [-u | --set-upstream] [-o <string> | --push-option=<string>]
                        [--[no-]signed|--signed=(true|false|if-asked)]
                        [--force-with-lease[=<refname>[:<expect>]] [--force-if-includes]]
                        [--no-verify] [<repository> [<refspec>...]]
      
    2. New changes to the old repositories can be imported into the monorepo and merged in. For example, in the above example, say repository one had a branch my_branch which continued to be developed after the migration. To pull those changes in:
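
      A sketch of one way to do that, assuming repository one ended up in subdirectory one/ of the monorepo (paths invented; -X subtree= is the standard recursive-merge option):

          git fetch /path/to/one my_branch
          git merge -X subtree=one FETCH_HEAD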
    3. Don't Stop The World: keep working in your other repositories during the migration and pull the changes into the monorepo as you go.
    1. For filter-branch, using pipelines like git ls-files | grep -v ... | xargs -r git rm might be a reasonable workaround but can get unwieldy and isn't as straightforward for users; plus those commands are often operating-system specific (can you spot the GNUism in the snippet I provided?)
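
      (The GNUism: xargs -r, a.k.a. --no-run-if-empty, is a GNU extension; POSIX xargs doesn't specify it.)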
    2. None of the existing repository filtering tools did what I wanted; they all came up short for my needs. No tool provided any of the first eight traits below that I wanted, and all failed to provide at least one of the last four traits as well:
    3. [Old commit references] Provide a way for users to use old commit IDs with the new repository (in particular via mapping from old to new hashes with refs/replace/ references).
    4. a cheat sheet is available showing how to convert example commands from the manual of filter-branch into filter-repo commands.
    5. die-hard fans of filter-branch may be interested in filter-lamely (a.k.a. filter-branch-ish), a reimplementation of filter-branch based on filter-repo which is more performant
    6. Let's say that we want to extract a piece of a repository, with the intent of merging just that piece into some other bigger repo.
    7. --tag-rename '':'my-module-' (the single quotes are unnecessary, but make it clearer to a human that we are replacing the empty string as a prefix with my-module-)
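
      Putting items 6 and 7 together, the extraction might look like this (my sketch; the module path is invented):

          git filter-repo --path my-module/ --tag-rename '':'my-module-'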
    1. Also, it is definitely NOT okay to recommend --force on forums, Q&A sites, or in emails to other users without first carefully explaining that --force means putting your repositories’ data at risk. I am especially bothered by people who suggest the flag when it clearly is NOT needed; they are needlessly putting other peoples' data at risk.
    2. These checks can have both false positives and false negatives.
    3. The .git/filter-repo/ref-map file contains a mapping of which local references were changed.
    4. Every time filter-repo is run, files are created in the .git/filter-repo/ directory. These files are overwritten unconditionally on every run.
    5. The .git/filter-repo/commit-map file contains a mapping of how all commits were (or were not) changed.
    1. However, the place where pip places that package might not be in your $PATH (thus requiring you to manually update your $PATH afterwards), and on Windows the pip install might not take care of Python-specific issues for you (see "Notes for Windows Users", above). As such, installation via package managers is recommended instead.
    2. If your python3 executable is named "python" instead of "python3" (this particularly appears to affect a number of Windows users), then you'll also need to modify the first line of git-filter-repo to replace "python3" with "python".
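
      For example, with GNU sed (a sketch; assumes git-filter-repo is on your $PATH; on Windows it may be simpler to edit the file by hand):

          sed -i '1s/python3/python/' "$(command -v git-filter-repo)"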
    3. one of the following package repositories:
    4. Installation via Package Manager
    1. By default, groups created in:

       GitLab 12.2 or later allow both Owners and Maintainers to create subgroups.
       GitLab 12.1 or earlier only allow Owners to create subgroups.
    2. Make it easier to manage people and control visibility. Give people different permissions depending on their group membership.
    3. Organize large projects. For large projects, subgroups can make it easier to separate permissions on parts of the source code.
    1. Cross-site scripting (XSS) vulnerabilities (even in other applications running on the same domain) allow attackers to bypass essentially all CSRF preventions.
    2. The NoScript extension for Firefox mitigates CSRF threats by distinguishing trusted from untrusted sites, and removing authentication & payloads from POST requests sent by untrusted sites to trusted ones. The Application Boundary Enforcer module in NoScript also blocks requests sent from internet pages to local sites (e.g. localhost), preventing CSRF attacks on local services (such as uTorrent) or routers.
    3. The Self Destructing Cookies extension for Firefox does not directly protect from CSRF, but can reduce the attack window, by deleting cookies as soon as they are no longer associated with an open tab.
    4. The advantage of this technique over the Synchronizer pattern is that the token does not need to be stored on the server.
    5. The same-origin policy prevents an attacker from reading or setting cookies on the target domain, so they cannot put a valid token in their crafted form.

      .
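
      A minimal sketch of the check behind this technique (double-submit style, Express-like pseudocode; all names invented):

          // The server sets a random token in a cookie; the page echoes the same
          // token into a hidden form field. On submission, compare the two copies:
          app.post('/transfer', (req, res) => {
            if (req.cookies.csrfToken !== req.body.csrfToken) {
              return res.status(403).send('CSRF token mismatch');
            }
            // ...perform the state-changing action
          });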

    6. Security of this technique is based on the assumption that only JavaScript running on the client side of an HTTPS connection to the server that initially set the cookie will be able to read the cookie's value.
    7. As the token is unique and unpredictable, it also enforces a proper sequence of events (e.g. screen 1, then 2, then 3), which raises usability problems (e.g. when a user opens multiple tabs). This can be relaxed by using a per-session CSRF token instead of a per-request CSRF token.
    8. In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.

      .

    9. Attacks were launched by placing malicious, automatic-action HTML image elements on forums and in email spam, so that browsers visiting these pages would open them automatically, without much user action. People running a vulnerable uTorrent version at the same time as opening these pages were susceptible to the attack.

      .

    10. Details were not released, citing "obvious security reasons".

      .

    11. Cross-site request forgery is an example of a confused deputy attack against a web browser because the web browser is tricked into submitting a forged request by a less privileged attacker.
    12. This link may be placed in such a way that it is not even necessary for the victim to click the link. For example, it may be embedded within an html image tag on an email sent to the victim which will automatically be loaded when the victim opens their email.
    13. A user who is authenticated by a cookie saved in the user's web browser could unknowingly send an HTTP request to a site that trusts the user and thereby causes an unwanted action.

      Can a user really unknowingly send an HTTP request? Or would it be more accurate to say the browser (user agent) sends the HTTP request, unknown to its (supposed) operator (user)?

    1. Data tainting: Netscape Navigator briefly contained a taint checking feature. The feature was experimentally introduced in 1997 as part of Netscape 3.[10] The feature was turned off by default, but if enabled by a user it would allow websites to attempt to read JavaScript properties of windows and frames belonging to a different domain. The browser would then ask the user whether to permit the access in question.

      seems to have nothing to do with tainted data, more about trusting frames from other domains?!

    2. This mechanism bears a particular significance for modern web applications that extensively depend on HTTP cookies[1] to maintain authenticated user sessions, as servers act based on the HTTP cookie information to reveal sensitive information or take state-changing actions. A strict separation between content provided by unrelated sites must be maintained on the client-side to prevent the loss of data confidentiality or integrity.

      .

    1. Cross-site scripting attacks are a case of code injection.

      is-a hyponym

    2. A reflected attack is typically delivered via email or a neutral web site. The bait is an innocent-looking URL, pointing to a trusted site but containing the XSS vector. If the trusted site is vulnerable to the vector, clicking the link can cause the victim's browser to execute the injected script.

      explains how

    3. By finding ways of injecting malicious scripts into web pages, an attacker can gain elevated access-privileges to sensitive page content, to session cookies, and to a variety of other information maintained by the browser on behalf of the user.

      .

    4. Exploiting one of these, attackers fold malicious content into the content being delivered from the compromised site. When the resulting combined content arrives at the client-side web browser, it has all been delivered from the trusted source, and thus operates under the permissions granted to that system.

      .

    1. Any code designed to do more than spread the worm is typically referred to as the "payload".
    1. How do I set up a path alias?

       First, you need to add it to the Vite configuration. In svelte.config.js add vite.resolve.alias:

           // svelte.config.js
           import path from 'path';

           export default {
             kit: {
               vite: {
                 resolve: {
                   alias: {
                     $utils: path.resolve('./src/utils')
                   }
                 }
               }
             }
           };

       Then, to make TypeScript aware of the alias, add it to tsconfig.json (for TypeScript users) or jsconfig.json:

           {
             "compilerOptions": {
               "paths": {
                 "$utils/*": ["src/utils/*"]
               }
             }
           }
    2. How do I hash asset file names for caching?

       You can have Vite process your assets by importing them as shown below:

           <script>
             import imageSrc from '$lib/assets/image.png';
           </script>

           <img src="{imageSrc}" />
    1. noReload
       Type: bool. Default: false.

      double negative

    2. noPreserveState
       Deprecated: removed (and the default changed) as of version 0.12. Use preserveLocalState instead.

      double negative

    1. most of my 1K+ packages

      !

    2. There are two ways to move your packages to ESM:

       Pure ESM
       This has the benefit that it’s easier to set up. You just add "type": "module" to your package.json, require Node.js 12, update docs & code examples, and do a major release.

      .
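
      The Pure ESM change to package.json is roughly (the Node version range is illustrative):

          {
            "type": "module",
            "engines": {
              "node": ">=12"
            }
          }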

    1. Rip off the bandaid and completely move to JavaScript Modules.

      .

    2. There are two ways to handle the migration:

       Pure: Rip off the bandaid and completely move to JavaScript Modules.
       Dual: Introduce a build step that transpiles a CommonJS fallback.

      .

    1. After discussion with the team, we're going to avoid Request and Response in favour of POJOs, which are much less cumbersome.

      prefer simpler option

    2. In a serverless-first world this gets a bit trickier. It needs to be possible to do both things, in a way that maps to the various serverless platforms out there, which most likely precludes using the (req, res) => {...} signature (and by extension, the ecosystem of Express middleware).

      wrapper / translating/mapping

      serverless

    1. Something people seem to trip over a bit is the fact that session, despite being a writable store, doesn't get persisted. I wonder if we can address that:

      caveat principle of least surprise

    1. We don't interact with the req/res objects you might be familiar with from Node's http module or frameworks like Express, because they're only available on certain platforms. Instead, SvelteKit translates the returned object into whatever's required by the platform you're deploying your app to.

      wrapper / proxy

    2. Building an app with all the modern best practices — code-splitting, offline support, server-rendered views with client-side hydration — is fiendishly complicated. SvelteKit does all the boring stuff for you so that you can get on with the creative part.
    3. makes your app inaccessible to users if JavaScript fails or is disabled (which happens more often than you probably think).
    1. When you add a redirect with the cPanel interface, the system places the redirect rules at the bottom of the .htaccess file.

      .

    2. Select a redirect type from the Type menu:

       Permanent (301): This setting notifies the visitor’s browser to update its records.
       Temporary (302): This setting does not update the visitor’s bookmarks.
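
      For context, a permanent redirect rule in .htaccess can be as simple as this mod_alias line (a sketch, not necessarily cPanel's exact output; paths invented):

          Redirect 301 /old-page https://example.com/new-page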
    1. Forwarding a URL is equivalent to redirecting a URL. It is the same concept, and you can use the words interchangeably. However, while redirecting normally refers to the practice of sending an HTTP 30x status code (generally 301 for permanent and 302 for temporary redirects), the word forwarding assumes a broader meaning. In fact, several companies (including GoDaddy) provide different types of forwarding:

       forward (redirect)
       forward with masking

       Forwarding a URL using the masking technique means that instead of redirecting to the target transparently, the target URL is opened in a frame, so that the visitor will always see the source URL in the address bar.

      good explanation distinction

    1. Post/Redirect/Get (PRG) is a web development design pattern that lets the page shown after a form submission be reloaded, shared, or bookmarked without ill effects, such as submitting the form another time.

      .
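
      The pattern in one exchange (URLs invented):

          POST /orders          ->  303 See Other, Location: /orders/123
          GET  /orders/123      ->  200 OK   (safe to reload, share, or bookmark)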

    1. To solve this, many people resort to a nounVerb naming schema, but this has its problems. For one thing, it feels unnatural to many people; postAdd just doesn’t read as well as addPost.

      .

    2. Current tooling doesn’t allow for a simple way to group your mutations, so a large list of them can make it difficult to see what sort of operations you can perform on a given resource (eg. add, delete, promote, hide, like, etc).

      .

    1. The query name doesn't have any meaning on the server whatsoever. It's only used for clients to identify the responses (since you can send multiple queries/mutations in a single request).

      .

    2. In fact, you can send just an anonymous query object if that's the only thing in the GraphQL request (and doesn't have any parameters):

      .
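
      e.g. (field names invented):

          {
            posts {
              title
            }
          }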

    1. Name your mutations verb first. Then the object, or “noun,” if applicable; createAnimal is preferable to animalCreate.

      .

    2. Case styles

       Field names should use camelCase. Many GraphQL clients are written in JavaScript, Java, Kotlin, or Swift, all of which recommend camelCase for variable names.
       Type names should use PascalCase. This matches how classes are defined in the languages mentioned above.
       Enum names should use PascalCase.
       Enum values should use ALL_CAPS, because they are similar to constants.

      .
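
      All four conventions in one sketch (schema invented):

          enum PostStatus {        # enum name: PascalCase; values: ALL_CAPS
            DRAFT
            PUBLISHED
          }

          type BlogPost {          # type name: PascalCase
            createdAt: String      # field names: camelCase
            status: PostStatus
          }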

    1. It is common good practice to use camelCase for your fields and PascalCase for the names of types.
    2. When working with mutations it is considered good design to return mutated records as a result of the mutation. This allows us to update the state on the frontend accordingly and keep things consistent

      .

    1. Can you re-open this until we fix it?

      leaving issue open until actually resolved

    2. I hope I won’t forget, but I’ll come back to you once we’ve got an idea on how to improve this Svelte API

      idiomatic Svelte

    3. Also don’t forget to call toPromise() on the return value or it won’t execute :)

      .

    4. the only way to make it work is to do something like $: result = mutation(...), but it doesn't make sense; I don't want to run the mutation after each keystroke.

      not: idioms/conventions

      best practice

    5. Currently the mutate helper in Svelte runs immediately as we’re still figuring out patterns. However, if you call a mutation programmatically you can use getClient() and call client.mutation, like so: https://formidable.com/open-source/urql/docs/concepts/core-package/#one-off-queries-and-mutations We’re still working on idiomatic Svelte APIs so this one’s also on our list to figure out what the best way forward is

      idiomatic Svelte patterns
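
      The programmatic route from that comment, sketched out (the UpdatePost document and the input variable are invented):

          import { getClient } from '@urql/svelte';

          // inside a component's <script>, so getClient() can read the context
          const client = getClient();

          function save(input) {
            return client
              .mutation(UpdatePost, { input })
              .toPromise()   // without this, the mutation never executes
              .then((result) => result.data);
          }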

    6. For context, the previous API had a lazy promise. Currently I’m thinking we could just return a closure like in the React API

      API comparison to React

    1. So it can issue a cross-sign whose validity extends beyond the expiration of its own self-signed certificate without any issues.

      !

    2. The self-signed certificate which represents the DST Root CA X3 keypair is expiring. But browser and OS root stores don’t contain certificates per se, they contain “trust anchors”, and the standards for verifying certificates allow implementations to choose whether or not to use fields on trust anchors. Android has intentionally chosen not to use the notAfter field of trust anchors. Just as our ISRG Root X1 hasn’t been added to older Android trust stores, DST Root CA X3 hasn’t been removed. So it can issue a cross-sign whose validity extends beyond the expiration of its own self-signed certificate without any issues.

      innovative solution

    1. Please ensure that if the lookup fails, the exception indicates which part of the name caused the failure. It's waaay past time that the industry moves past "ENOENT: No such file or directory" in its exception reporting :)

      good error messages

    1. GraphQL Field Resolution | Method Dispatch
       ------------------------ | ----------------
       type                     | class
       field                    | method
       obj                      | receiver
       args                     | method arguments
       ctx                      | runtime state

      equivalents between GraphQL terminology and Ruby terminology

    1. In real life I ride a Ninja, the last in a line of many bikes over more than forty-five years. However, within this game I've apparently never ridden a sport bike. Or any motorcycle. Or a bicycle. Or watched people ride. Or walked upright. I'm playing with a Thrustmaster joystick, but frankly I might as well be controlling the bike with a Ouija board. If I manage not to hit a wall, it's a personal victory. Personal victories do not occur often. Instead of the feeling that I'm controlling an exquisitely balanced, steep fork angle sport bike, or even a full dress Harley with an enormously fat passenger and two flat tires, I feel like I'm controlling a rocket-powered lawnmower with several missing tires. Perhaps towing a couple of trailers connected with springs. Dying fish don't flop around like me. In forty minutes I've not come close to anything resembling control, much less fun, and I've hit my limit on the time I'm willing to throw at it. Wasted money for me; time to acknowledge my mistake, uninstall, and get on with my life.
    1. 1. The main folder names have numbers in front of them, such as 0-base to ensure that the folders stay in that particular order. You can certainly omit this or choose different folder names.
    1. Note that not all of the colors in SMUI read from CSS variables, so some parts will still use the colors defined in the original Sass compile.