2,548 Matching Annotations
  1. Apr 2020
    1. Language tooling is richer today. A programming language used to be a compiler and perhaps a debugger. Today, languages usually come with a linter, source code formatter, template creators, self-update ability and a list of arguments that you can use in a debate against the competing language.

      How coding has become much better supported than it was 20 years ago

    1. All multi-line docstrings have the following parts: a one-line summary line, a blank line following the summary, any further elaboration for the docstring, and another blank line

      Multi-line docstring example:

      """This is the summary line
      
      This is the further elaboration of the docstring. Within this section,
      you can elaborate further on details as appropriate for the situation.
      Notice that the summary and the elaboration is separated by a blank new
      line.
      
      # Notice the blank line above. Code should continue on this line.
      
    2. say_hello.__doc__ = "A simple function that says hello... Richie style"

      Example of using __doc__:

      Code (version 1):

      def say_hello(name):
          print(f"Hello {name}, is it me you're looking for?")
      
      say_hello.__doc__ = "A simple function that says hello... Richie style"
      

      Code (alternative version):

      def say_hello(name):
          """A simple function that says hello... Richie style"""
          print(f"Hello {name}, is it me you're looking for?")
      

      Input:

      >>> help(say_hello)
      

      Returns:

      Help on function say_hello in module __main__:
      
      say_hello(name)
          A simple function that says hello... Richie style
      
    3. Commenting your code serves multiple purposes

      Multiple purposes of commenting:

      • planning and reviewing code - setting up a code template
      • code description
      • algorithmic description - for example, explaining the work of an algorithm or the reason for choosing it
      • tagging - BUG, FIXME, TODO
    4. In general, commenting is describing your code to/for developers. The intended main audience is the maintainers and developers of the Python code. In conjunction with well-written code, comments help to guide the reader to better understand your code and its purpose and design

      Commenting code:

      • describing code to/for developers
      • help to guide the reader to better understand your code, its purpose/design
    5. If you use argparse, then you can omit parameter-specific documentation, assuming it's correctly documented within the help parameter of argparse's add_argument function. It is recommended to use __doc__ for the description parameter within argparse.ArgumentParser's constructor.

      argparse
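
      A minimal sketch of that recommendation (the module docstring and the --verbose flag here are illustrative assumptions, not from the article):

      """Demo script that greets the user."""   # module docstring, reused below
      import argparse

      parser = argparse.ArgumentParser(description=__doc__)   # __doc__ becomes the --help description
      parser.add_argument("--verbose", action="store_true",
                          help="print extra diagnostic output")   # parameter docs go in help=
      args = parser.parse_args()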

    6. Daniele Procida gave a wonderful PyCon 2017 talk and subsequent blog post about documenting Python projects. He mentions that all projects should have the following four major sections to help you focus your work:

      Public and Open Source Python projects should have a docs folder, containing:

      • Tutorials
      • How-To Guides
      • References
      • Explanations

      (check the table below for a summary)

    1. Creating meticulous tests before exploring the data is a big mistake, and will result in a well-crafted garbage-in, garbage-out pipeline. We need an environment flexible enough to encourage experiments, especially in the initial phase.

      The overzealous nature of TDD may discourage exploratory data science

    1. the developer doesn’t need to worry about allocating memory, or the character set encoding of the string, or a host of other things.

      Comparison of C (1972) and TypeScript (2012) code.

      (check the code above)

    2. With someone else’s platform, you often end up needing to construct elaborate work-arounds for missing functionality, or indeed cannot implement a required feature at all.

      You can quickly implement 80% of the solution in Salesforce using a mix of visual programming (basic rule setting and configuration), but later it's not so straightforward to add the missing 20%

    1. This kind of “exploring” is easiest when you develop on the prompt (or REPL), or using a notebook-oriented development system like Jupyter Notebooks

      It's easier to explore the code:

      • when you develop on the prompt (or REPL)
      • in a notebook-oriented system like Jupyter

      but it's not efficient to develop in them

    2. notebook contains an actual running Python interpreter instance that you’re fully in control of. So Jupyter can provide auto-completions, parameter lists, and context-sensitive documentation based on the actual state of your code

      Notebook makes it easier to handle dynamic Python features

    3. They switch to get features like good doc lookup, good syntax highlighting, integration with unit tests, and (critically!) the ability to produce final, distributable source code files, as opposed to notebooks or REPL histories

      Things missed in Jupyter Notebooks:

      • good doc lookup
      • good syntax highlighting
      • integration with unit tests
      • ability to produce final, distributable source code files
    4. Exploratory programming is based on the observation that most of us spend most of our time as coders exploring and experimenting

      In exploratory programming, we:

      • experiment with a new API to understand how it works
      • explore the behavior of an algorithm that we're developing
      • debug our code through combination of inputs
    1. The best way to explain the difference between launch and attach is to think of a launch configuration as a recipe for how to start your app in debug mode before VS Code attaches to it, while an attach configuration is a recipe for how to connect VS Code's debugger to an app or process that's already running.

      Simple difference between two core debugging modes: Launch and Attach available in VS Code.

      Depending on the request (attach or launch), different attributes are required, and VS Code's launch.json validation and suggestions should help with that.

    2. Logpoint is a variant of a breakpoint that does not "break" into the debugger but instead logs a message to the console. Logpoints are especially useful for injecting logging while debugging production servers that cannot be paused or stopped. A Logpoint is represented by a "diamond" shaped icon. Log messages are plain text but can include expressions to be evaluated within curly braces ('{}').

      Logpoints - log messages to the console when a breakpoint is hit, without pausing the program.

      Can include expressions to be evaluated with {}, e.g.:

      fib({num}): {result}

      (animation)

    3. Here are some optional attributes available to all launch configurations

      Optional arguments for launch.json:

      • presentation ("order", "group" or "hidden")
      • preLaunchTask
      • postDebugTask
      • internalConsoleOptions
      • debugServer
      • serverReadyAction
    4. The following attributes are mandatory for every launch configuration

      In the launch.json file you have to define at least these 3 attributes:

      • type (e.g. "node", "php", "go")
      • request ("launch" or "attach")
      • name (name to appear in the Debug launch configuration drop-down)
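
      A minimal sketch of such a configuration (the name and program values are arbitrary examples; ${workspaceFolder} is a standard VS Code variable):

      {
          "version": "0.2.0",
          "configurations": [
              {
                  "type": "node",
                  "request": "launch",
                  "name": "Launch Program",
                  "program": "${workspaceFolder}/app.js"
              }
          ]
      }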
    5. Many debuggers support some of the following attributes

      Some of the possibly supported attributes in launch.json:

      • program
      • args
      • env
      • cwd
      • port
      • stopOnEntry
      • console (e.g. "internalConsole", "integratedTerminal", "externalTerminal")
    1. The priorities in building a production machine learning pipeline—the series of steps that take you from raw data to product—are not fundamentally different from those of general software engineering.
      1. Your pipeline should be reproducible
      2. Collaborating on your pipeline should be easy
      3. All code in your pipeline should be testable
    2. Reproducibility is an issue with notebooks. Because of the hidden state and the potential for arbitrary execution order, generating a result in a notebook isn’t always as simple as clicking “Run All.”

      Problem of reproducibility in notebooks

    3. A notebook, at a very basic level, is just a bunch of JSON that references blocks of code and the order in which they should be executed. But notebooks prioritize presentation and interactivity at the expense of reproducibility. YAML is the other side of that coin, ignoring presentation in favor of simplicity and reproducibility—making it much better for production.

      Summary of the article:

      Notebook = presentation + interactivity

      YAML = simplicity + reproducibility

    4. Notebook files, however, are essentially giant JSON documents that contain the base-64 encoding of images and binary data. For a complex notebook, it would be extremely hard for anyone to read through a plaintext diff and draw meaningful conclusions—a lot of it would just be rearranged JSON and unintelligible blocks of base-64.

      Git tracks plaintext differences, and with notebooks that's a problem

    1. CRDTs are designed for decentralized systems where there is no single central authority to decide what the final state should be. There is some unavoidable performance and memory overhead with doing this. Since Figma is centralized (our server is the central authority), we can simplify our system by removing this extra overhead and benefit from a faster and leaner implementation

      CRDTs are designed for decentralized systems

    1. The biggest problem with JSON.stringify is that it doesn't serialize certain inputs, like functions and Symbols (and anything you wouldn't find in JSON).

      Problem with JSON.stringify.

      This is why the previous code shouldn't be used in production

    2. Memoization is an optimization technique used in many programming languages to reduce the number of redundant, expensive function calls. This is done by caching the return value of a function based on its inputs.

      Memoization (simple definition)
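
      A minimal Python sketch of the idea (the article's examples are in JavaScript; fib here is just an illustrative expensive function):

      from functools import lru_cache

      @lru_cache(maxsize=None)         # cache return values, keyed by the arguments
      def fib(n):
          return n if n < 2 else fib(n - 1) + fib(n - 2)

      print(fib(35))                   # fast: each subproblem is computed only once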

    1. Good old Uncle Bob says that, outside of your day job, you should devote 20 hours a week to programming. Split across 7 days a week, that's almost 3 hours a day. Not much for some, a lot for others.

      Uncle Bob's advice: ~3h/day for programming

    2. There's another lesson to be taken from games. If your goal is to get to the next location, do you have to complete every side quest? No, you don't. That's why earlier, when I listed the required skills for someone who runs React trainings or works for startups, I wrote "good knowledge of JS", because "excellent" won't help you achieve that goal.

      Don't overlearn

    3. Arnie had big plans, but he didn't pursue them all at once. He focused on one thing - bodybuilding - because he knew it would open his way to America and to acting.

      Think BIG, act small

      (small actions lead to big changes)

    4. If you pick several things at once, you risk thrashing in every direction again. I don't recommend that option, because that kind of thinking is exactly how we end up pissed off, with grey hair and bags under our eyes. I experienced all three of those symptoms, and only when I focused on one thing did I regain balance and stop stressing out.

      It's more effective to focus fully on one goal than to juggle dozens of them

    5. In Arnie's youth, Reg Park was a huge bodybuilding star, a famous and rich actor, surrounded by beautiful girls. Arnie wanted to get out of the backwoods of Austria, move to America and have exactly what Reg had: freedom, fame, money and girls. All of his actions were subordinated to reaching that goal. Arnie narrowed down precisely what he wanted.

      Following Arnold Schwarzenegger's example, we should set clear and precise goals

    6. 👉 Many people have goals like "I want to learn React", and that is far too vague a goal. 👉 The goal must be something like "I want to learn React so I can work at company X and build projects for Silicon Valley startups". 👉 Or "I want to learn React so I can work at Facebook with Dan Abramov on the next versions of React". 👉 Or "I want to learn React so I can run on-site trainings for backend developers who can't do frontend".

      Instead of planning to "learn machine learning" give it a bit more details, e.g.:

      • "I want to learn machine learning to work at Amazon with the latest technologies"

      As a result, the way you work toward the goal changes (imho for the better)

    7. Ask everyone, everywhere, and grow your knowledge of the topic. Widen your bag of options. Search Reddit, Stack Overflow, Facebook groups. Find some smart person on social media and send them a DM.

      Ask everyone/everywhere for advice if you really don't know what you want to do

    1. Suppose you have only two rolls of dice. Then your best strategy would be to take the first roll if its outcome is more than its expected value (i.e. 3.5) and to roll again if it is less.

      Expected payoff of a dice game:

      Description: You have the option to throw a die up to three times. You will earn the face value of the die. You have the option to stop after each throw and walk away with the money earned. The earnings are not additive. What is the expected payoff of this game?

      Rolling twice: $$\frac{1}{6}(6+5+4) + \frac{1}{2}\cdot 3.5 = 4.25$$

      Rolling three times: $$\frac{1}{6}(6+5) + \frac{2}{3}\cdot 4.25 = 4+\frac{2}{3} \approx 4.67$$
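
      A quick Monte Carlo check of these values (a sketch; the thresholds follow from the analysis above - keep a roll only if it beats the value of continuing):

      import random

      def play():
          first = random.randint(1, 6)
          if first >= 5:                 # beats 4.25, the value of the two-roll game
              return first
          second = random.randint(1, 6)
          if second >= 4:                # beats 3.5, the value of a single roll
              return second
          return random.randint(1, 6)    # forced to keep the last roll

      n = 1_000_000
      print(sum(play() for _ in range(n)) / n)   # ~4.667, i.e. 4 + 2/3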

    1. Therefore, $$E_n = 2^{n+1}-2 = 2(2^n-1)$$

      Simplified formula for the expected number of tosses $e_n$ to get n consecutive heads (n≥1):

      $$e_n=2(2^n-1)$$

      For example, the expected number of tosses to get 5 consecutive heads is 62:

      $$e_5=2(2^5-1)=62$$


      We can also start with the longer analysis of the six possible scenarios:

      1. If we get a tail immediately (probability 1/2), then the expected number is e+1.
      2. If we get a head then a tail (probability 1/4), then the expected number is e+2.
      3. If we get two heads then a tail (probability 1/8), then the expected number is e+3.
      4. If we get three heads then a tail (probability 1/16), then the expected number is e+4.
      5. If we get four heads then a tail (probability 1/32), then the expected number is e+5.
      6. Finally, if our first 5 tosses are heads, then the expected number is 5.

      Thus:

      $$e=\frac{1}{2}(e+1)+\frac{1}{4}(e+2)+\frac{1}{8}(e+3)+\frac{1}{16}(e+4)+\frac{1}{32}(e+5)+\frac{1}{32}\cdot 5$$

      which solves to e = 62.

      We can also generalise the formula to:

      $$e_n=\frac{1}{2}(e_n+1)+\frac{1}{4}(e_n+2)+\frac{1}{8}(e_n+3)+\frac{1}{16}(e_n+4)+\cdots+\frac{1}{2^n}(e_n+n)+\frac{1}{2^n}\cdot n$$
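
      A quick simulation to sanity-check the formula (a sketch):

      import random

      def tosses_until_heads(n):
          streak = count = 0
          while streak < n:
              count += 1
              streak = streak + 1 if random.random() < 0.5 else 0   # heads extends the streak, tails resets it
          return count

      n, trials = 5, 200_000
      print(sum(tosses_until_heads(n) for _ in range(trials)) / trials)   # ~62 = 2*(2**5 - 1)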

    1. It's responsible for allocating and scheduling containers, providing them with abstracted functionality like internal networking and file storage, and then monitoring the health of all of these elements and stepping in to repair or adjust them as necessary. In short, it's all about abstracting how, when and where containers are run.

      Kubernetes (simple explanation)

    1. You’ll see pressure to push towards “Cloud neutral” solutions using Kubernetes in various places

      Maybe Kubernetes has the advantage of being cloud neutral, but you pay the cost of a cloud migration:

      • maintaining abstractions
      • isolating yourself from useful vendor-specific features
    2. Kubernetes (often irritatingly abbreviated to k8s, along with its wonderful ecosystem of esoterically named additions like helm, and flux) requires a full time ops team to operate, and even in “managed vendor mode” on EKS/AKS/GKS the learning curve is far steeper than the alternatives.

      Kubernetes:

      • requires a full-time ops team to operate
      • has a far steeper learning curve than the alternatives
    3. Azure App Services, Google App Engine and AWS Lambda will be several orders of magnitude more productive for you as a programmer. They’ll be easier to operate in production, and more explicable and supported.

      Use the closest thing to a pure-managed platform as you possibly can. It will be easier to operate in production, and more explicable and supported:

      • Azure App Service
      • Google App Engine
      • AWS Lambda
    4. With the popularisation of docker and containers, there’s a lot of hype gone into things that provide “almost platform like” abstractions over Infrastructure-as-a-Service. These are all very expensive and hard work.

      Kubernetes isn't required unless you work on huge problems

    5. By using events that are buffered in queues, your system can support outage, scaling up and scaling down, and rolling upgrades without any special consideration. Its normal mode of operation is “read from a queue”, and this doesn’t change in exceptional circumstances.

      Event driven architectures with replay / message logs

    6. Circuit breaking is a useful distributed system pattern where you model out-going connections as if they’re an electrical circuit. By measuring the success of calls over any given circuit, if calls start failing, you “blow the fuse”, queuing outbound requests rather than sending requests you know will fail.

      Circuit breaking - a useful distributed-system pattern. It's a phenomenal way to make sure you don't fail when you know you might.
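
      A minimal sketch of the pattern in Python (illustrative only; the article suggests queuing outbound requests while the fuse is blown, which this toy version simplifies to refusing them):

      import time

      class CircuitBreaker:
          def __init__(self, max_failures=3, reset_after=30.0):
              self.max_failures = max_failures
              self.reset_after = reset_after     # seconds before we probe the service again
              self.failures = 0
              self.opened_at = None

          def call(self, fn, *args, **kwargs):
              if self.opened_at is not None:
                  if time.time() - self.opened_at < self.reset_after:
                      raise RuntimeError("circuit open: not sending a call we expect to fail")
                  self.opened_at, self.failures = None, 0    # half-open: allow one probe call
              try:
                  result = fn(*args, **kwargs)
              except Exception:
                  self.failures += 1
                  if self.failures >= self.max_failures:
                      self.opened_at = time.time()           # blow the fuse
                  raise
              self.failures = 0
              return result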

    7. Scaling is hard if you try to do it yourself, so absolutely don’t try to do it yourself. Use vendor provided, cloud abstractions like Google App Engine, Azure Web Apps or AWS Lambda with autoscaling support enabled if you can possibly avoid it.

      Scaling should be delegated to cloud abstractions

    8. Hexagonal architectures, also known as “the ports and adapters” pattern

      Hexagonal architectures - one of the better pieces of "real application architecture" advice.

      • have all your logic, business rules, domain specific stuff - exist in a form that isn't tied to your frameworks, your dependencies, your data storage, your message busses, your repositories, or your UI
      • all your logic is in files, modules or classes that are free from framework code, glue, or external data access
      • it means you can test everything in isolation, without your web framework or some broken API getting in the way
    9. Good microservice design follows a few simple rules

      Microservice design rules:

      • Be role/operation based, not data centric
      • Always own your data store
      • Communicate on external interfaces or messages
      • What changes together, and is co-dependent, is actually the same thing
      • All services are fault tolerant and survive the outages of their dependencies
    10. What Microservices are supposed to be: Small, independently useful, independently versionable, independently shippable services that execute a specific domain function or operation. What Microservices often are: Brittle, co-dependent, myopic services that act as data access objects over HTTP that often fail in a domino like fashion.

      What Microservices are supposed to be: independent

      VS

      what they often are: dependent

    11. In the mid-90s, “COM+” (Component Services) and SOAP were popular because they reduced the risk of deploying things, by splitting them into small components

      History of Microservices:

      1. Came from COM+ and SOAP in the mid-90s.
      2. Which later led to the popularisation of N-Tier ("split up the data-tier, the business-logic-tier and the presentation-tier"). This worked for some people, but the horizontal slices through a system often required changing every “tier” to finish a full change.
      3. Product vendors got involved and SOAP became complicated and unfashionable, which pushed people towards the "second wave" - Guerrilla SOA. This led to the proliferation of smaller, more nimble services.
      4. The "third wave" of SOA, branded as Microservice architecture is very popular, but often not well understood
    12. CDNs are web servers run by other people, all over the world. You upload your data to them, and they will replicate your data across all of their “edges” (a silly term that just means “to all the servers all over the world that they run”) so that when someone requests your content, the DNS response will return a server that’s close to them, and the time it takes them to fetch that content will be much quicker.

      CDNs (Content Delivery Networks)

      Offloading to a CDN is one of the easiest ways you can get extra performance for a very minimal cost

    13. Understanding how a distributed cache works is remarkably simple – when an item is added, the key (the thing you use to retrieve that item) that is generated includes the address or name of the computer that’s storing that data in the cache. Generating keys on any of the computers that are part of the distributed cache cluster will result in the same key. This means that when the client libraries that interact with the cache are used, they understand which computer they must call to retrieve the data.

      Distributed caching (simple explanation)
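
      A toy sketch of that key-to-computer mapping (simple modulo hashing over hypothetical node names; real client libraries typically use consistent hashing so adding a node doesn't remap every key):

      import hashlib

      NODES = ["cache-a.internal", "cache-b.internal", "cache-c.internal"]

      def node_for(key: str) -> str:
          digest = int(hashlib.md5(key.encode()).hexdigest(), 16)   # same key -> same number on every client
          return NODES[digest % len(NODES)]

      print(node_for("user:42"))   # every client computes the same server for this key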

    14. Memory caches are used to store the result of something that is “heavy” to calculate, takes time, or just needs to be consistent across all the different computers running your server software. In exchange for a little bit of network latency, it makes the total amount of memory available to your application the sum of all the memory available across all your servers.

      Distributed caching:

      • used to store the results of heavy calculations
      • improves consistency
      • prevents "cache stampedes"
      • all the major hosting providers tend to support memcached or redis compatible managed caches
    15. All a load balancer does, is accept HTTP requests for your application (or from it), pick a server that isn’t very busy, and forward the request.

      Load balancers:

      • used when you have a lot of traffic and you're not using PaaS
      • mostly operated by sysops, or are just running copies of NGINX
      • you might see load balancers load balancing a "hot path" in your software onto a dedicated pool of hardware to keep it safe or isolate from failure
      • you might see load balancers used to take care of SSL certificates for you (SSL Termination)
    16. BFF is an API that serves one, and specifically only one application

      BFF (backend for frontend):

      • translates an internal domain into the terminal language of the app it serves
      • takes care of authentication, rate limiting, and other stuff you don't want to do more than once
      • reduces needless roundtrips to the server
      • translates data to be more suitable for its target app
      • is pretty front-end-dev friendly. Queries and schema strongly resemble JSON & it keeps your stack "JS all the way down" without being beholden to some distant backend team
    17. What sets GraphQL apart a little from previous approaches (notably Microsoft's OData) is the idea that Types and Queries are implemented with Resolver code on the server side, rather than just mapping directly to some SQL storage.

      What makes GraphQL favorable over other implementations.

      As a result:

      • GraphQL can be a single API over a bunch of disparate APIs in your domain
      • it solves the "over fetching" problem that's quite common in REST APIs (by allowing the client to specify a subset of data to return)
      • acts as an anti-corruption layer of sorts, preventing unbounded access to underlying storage
      • GraphQL is designed to be the single point of connection that your web or mobile app talks to (it highly optimises performance)
    18. JSON-RPC is “level 0” of the Richardson Maturity Model – a model that describes the qualities of a REST design.

      JSON-RPC.

      You can move up from it (towards REST) by:

      • using HTTP VERBs correctly
      • organising your APIs into logical "resources", e.g. ("customer", "product", "catalogue")
      • using correct HTTP response codes for interactions with your API
    19. there was a push for standardisation of “SOAP” (simple object access protocol) APIs

      Standardisation of SOAP brought a lot of good stuff, but people found XML cumbersome to read.

      A lot of things being solved in SOAP had to subsequently be re-solved on top of JSON using emerging open-ish standards like Swagger (now OpenAPI) and JSON:API

    20. The basics of HTTP are easy to grasp – there’s a mandatory “request line”

      Mandatory HTTP request line:

      • verb (GET, POST, PUT and HEAD most frequently)
      • URL (web address)
      • protocol version (HTTP/1.1)

      Then, there's a bunch of optional request header fields.

      Example HTTP request:

      GET http://www.davidwhitney.co.uk/ HTTP/1.1
      Host: www.davidwhitney.co.uk
      Connection: keep-alive
      User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64…
      Accept: text/html,application/xhtml+xml,application/xml;q=0.9
      Accept-Encoding: gzip, deflate
      Accept-Language: en-GB,en-US;q=0.9,en;q=0.8
      

      Example response:

      HTTP/1.1 200 OK
      Cache-Control: public,max-age=1
      Content-Type: text/html; charset=utf-8
      Vary: Accept-Encoding
      Server: Kestrel
      X-Powered-By: ASP.NET
      Date: Wed, 11 Dec 2019 21:52:23 GMT
      Content-Length: 8479
      
      
      <!DOCTYPE html>
      <html lang="en">...
      
    21. web is an implementation of the design pattern REST – which stands for “Representational State Transfer”. You’ll hear people talk about REST a lot – it was originally defined by Roy Fielding in his PhD dissertation, but more importantly was a description of the way HTTP/1.0 worked at the time

      Origins of the REST design pattern

    22. Honestly for the most part it’s a matter of taste, and they’re all perfectly appropriate ways to build web applications.

      Which model to choose:

      • Server-rendered MVC - good for low-interactivity websites. More interactive web apps, on the other hand, come at a complexity cost
      • SPAs (React, Angular, Vue) offer high fidelity UX. The programming models work well for responsive UX
      • Static sites - great for blogs, marketing microsites, CMS (anything where content is the most important). They scale well, basically cannot crash, and are cheap to run
    23. static site generators became increasingly popular – normally allowing you to use your normal front-end web dev stack, but then generating all the files using a build tool to bundle and distribute to dumb web servers or CDNs

      Examples of static site generators: Gatsby, Hugo, Jekyll, Wyam

    24. SPAs are incredibly common, popularised by client-side web frameworks like Angular, React and Vue.js.

      SPAs:

      • popularised by client-side web frameworks like Angular, React and Vue.js
      • the real difference from server-rendered MVC apps is shifting most of the work to the client side
      • there's client side MVC, MVVM (model-view-view-model) and FRP (functional reactive programming)

      Angular - client side MVC framework following its pattern, except it's running inside the users web browser.

      React - implementation of FRP. A little more flexible, but more concerned with state change events in data (often using some event store like Redux)

    25. http://www.mycoolsite.com/Home/Index

      With such a website, MVC model would try to:

      • find a "HomeController" file/module (depending on programming language) inside the controllers directory.
      • "Index" function would probably exist and would return a model (some data) that would be rendered by a view (HTML template from the views folder)

      (All the different frameworks do this slightly differently, but the core idea stays the same – features grouped together by controllers, with functions for returning pages of data and handling input from the web)
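
      A toy sketch of that dispatch (hypothetical names, Python standing in for any MVC framework):

      controllers = {"Home": {"Index": lambda: {"title": "Welcome"}}}

      def render(controller, action, model):
          return f"<h1>{model['title']}</h1>"            # stand-in for an HTML template from /views

      def route(path):                                   # "/Home/Index" -> controller "Home", action "Index"
          _, controller, action = path.split("/")
          model = controllers[controller][action]()      # the action returns the model (some data)
          return render(controller, action, model)

      print(route("/Home/Index"))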

    26. when most people say “MVC” they’re really describing “Rails-style” MVC apps where your code is organised into a few different directories

      Different directories of the MVC pattern:

      • /controllers
      • /models
      • /views
    1. He is intrigued by the unique qualities of each person, organizes for maximum productivity, has a great desire to learn, can sense other people’s feelings, and is introspective and appreciates intellectual discussions—strengths that have been confirmed both through the Clifton StrengthsFinder assessment and are reflected in his personal and professional life.

      Way in which you can summarise the results of Gallup's test on your LinkedIn.

      Example profile

    1. Repeated measures involves measuring the same cases multiple times. So, if you measured the chips, then did something to them, then measured them again, etc it would be repeated measures. Replication involves running the same study on different subjects but identical conditions. So, if you did the study on n chips, then did it again on another n chips that would be replication.

      Difference between repeated measures and replication

    1. I’m sharing a few insights I specifically found useful for developers who are not specialized in this domain.

      Insights on databases from a Google engineer:

      1. You are lucky if 99.999% of the time network is not a problem.
      2. ACID has many meanings.
      3. Each database has different consistency and isolation capabilities.
      4. Optimistic locking is an option when you can’t hold a lock.
      5. There are anomalies other than dirty reads and data loss.
      6. My database and I don’t always agree on ordering.
      7. Application-level sharding can live outside the application.
      8. AUTOINCREMENT’ing can be harmful.
      9. Stale data can be useful and lock-free.
      10. Clock skews happen between any clock sources.
      11. Latency has many meanings.
      12. Evaluate performance requirements per transaction.
      13. Nested transactions can be harmful.
      14. Transactions shouldn’t maintain application state.
      15. Query planners can tell a lot about databases.
      16. Online migrations are complex but possible.
      17. Significant database growth introduces unpredictability.
    1. Think of mental laziness as a lack of mental exercise. Mental exercise, such as making tough decisions, actually burns more calories and impacts your overall physiology.

      Mental laziness

    1. At the company I work at, one of our products is an embeddable commenting system. Unlike single-page applications, when we encounter bugs they’re usually on the client’s website. This raised the question, how can we embed a piece of code that will run on all our client’s websites, that will help us debug and improve our overall build experience.

      Case when userscripts apply (not extensions)

    1. “Hey, I have a good idea for a game,” I said. “It’s called the function machine game. I will think of a function machine. You tell me things to put into the function machine, and I will tell you what comes out. Then you have to guess what the function machine does.” He immediately liked this game and it has been a huge hit; he wants to play it all the time. We played it while driving to a party yesterday, and we played it this morning while I was in the shower.

      Great idea for a game with your kids to develop logical thinking in them

    1. Tau Day is an annual celebration of the circle constant τ=6.283185…\tau = 6.283185\ldots, which takes place every June 28 (6/28 in the American calendar system).

      Tau (τ) = 2π radians = 360°

      It's simply a more intuitive way of representing a full circular rotation. Euler could possibly have changed the convention in his day, but too many textbooks had already adopted 2π radians instead


    1. To customize settings for debugging tests, you can specify "request":"test" in the launch.json file in the .vscode folder from your workspace.

      Customising settings for debugging tests run via:

      Python: Debug All Tests

      or

      Python: Debug Test Method

    2. For example, the test_decrement functions given earlier are failing because the assertion itself is faulty.

      Debugging tests themselves

      1. Set a breakpoint on the first line of the failing function (e.g. test_decrement)
      2. Click the "Debug Test" option above the function
      3. Open Debug Console and type: inc_dec.decrement(3) to see what is the actual output when we use x=3
      4. Stop the debugger and correct the tests
      5. Save the test file and run the tests again to look for a positive result
    3. Support for running tests in parallel with pytest is available through the pytest-xdist package.

      pytest-xdist provides support for parallel testing.

      1. To enable it (on Windows):

      py -3 -m pip install pytest-xdist

      2. Create a file pytest.ini in your project directory and specify in it the number of CPUs to be used (e.g. 4):
        [pytest]
        addopts=-n4
        
      3. Run your tests
    4. Testing in Python is disabled by default. To enable testing, use the Python: Configure Tests command on the Command Palette.

      Start testing in VS Code by using Python: Configure Tests (it automatically chooses one testing framework and disables the rest).

      Otherwise, you can configure tests manually by setting only one of the following to True:

      • python.testing.unittestEnabled
      • python.testing.pytestEnabled
      • python.testing.nosetestsEnabled
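
      For example, to enable pytest manually (a sketch of .vscode/settings.json):

      {
          "python.testing.pytestEnabled": true,
          "python.testing.unittestEnabled": false,
          "python.testing.nosetestsEnabled": false
      }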
    5. Create a file named test_unittest.py that contains a test class with two test methods

      Sample test file using unittest framework. inc_dec is the file that's being tested:

      import inc_dec    # The code to test
      import unittest   # The test framework
      
      class Test_TestIncrementDecrement(unittest.TestCase):
          def test_increment(self):
              self.assertEqual(inc_dec.increment(3), 4) # checks that the result is 4 when x = 3
      
          def test_decrement(self):
              self.assertEqual(inc_dec.decrement(3), 4) # intentionally fails: decrement(3) returns 2
      
      if __name__ == '__main__':
          unittest.main()
      
    6. Each test framework has its own conventions for naming test files and structuring the tests within, as described in the following sections. Each case includes two test methods, one of which is intentionally set to fail for the purposes of demonstration.
      • each testing framework has own naming conventions
      • each case includes two test methods (one of which fails)
    7. Developers typically run unit tests even before committing code to a repository; gated check-in systems can also run unit tests before merging a commit.

      When to run unit tests:

      • before committing
      • ideally before merging
      • many CI systems run it after every build
    8. each test is very simple: invoke the function with an argument and assert the expected return value.

      e.g. a test of an exact number entry:

          def test_validator_valid_string():
              # The exact assertion call depends on the framework as well
              assert validate_account_number_format("1234567890")
      
    9. unit is a specific piece of code to be tested, such as a function or a class. Unit tests are then other pieces of code that specifically exercise the code unit with a full range of different inputs, including boundary and edge cases.

      Essence of unit testing

    1. The only site that displays all aircraft available for tracking is ADSBexchange.com, which we will feed with our data. It is the world's largest community of ADS-B / Mode S / MLAT receiver owners and, at the same time, the world's largest fully public and free source of unfiltered flight data. The worldwide flight-data access this tool provides is used by hobbyists, researchers and, interestingly, journalists.

      ADSBexchange.com in comparison to commercial sites like FlightAware or Flightradar24 doesn't hide any flights (as here no one can pay to hide them from the public access).

    1. AinD launches Android apps in Docker, by nesting Anbox containers inside Docker.

      AinD - useful tool when we need to run an Android app 24/7 in the cloud.

      Unlike the alternatives, AinD is not VM-based but IaaS-based

    1. gh repo create hello-world -d "A react app for the web" --public

      GitHub released a new CLI: gh with which you can do much more operations.

      For example, you can create repo without going into your browser:

      gh repo create hello-world -d "A react app for the web" --public
      

      Generally, it will be great for CI/CD pipelines

    1. An emerging way to bypass the need for passwords is to use magic links. A magic link is a temporary URL that expires after use, or after a specific interval of time. Magic links can be sent to your email address, an app, or a security device. Clicking the link authorizes you to sign in.

      Magic Links to replace passwords?
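
      A toy sketch of the mechanism (stdlib only; the URL and TTL are illustrative assumptions):

      import secrets, time

      TOKENS = {}    # token -> (email, expiry); a real system stores this server-side

      def issue_magic_link(email, ttl=900):
          token = secrets.token_urlsafe(32)                    # unguessable one-time token
          TOKENS[token] = (email, time.time() + ttl)
          return f"https://example.com/login?token={token}"    # sent to the user's inbox

      def redeem(token):
          email, expires = TOKENS.pop(token, (None, 0.0))      # pop: the link only works once
          return email if time.time() < expires else None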

    2. Hashing passwords won’t save you or your users. Once a database of passwords has been stolen, hackers aim immense distributed computing power at those password databases. They use parallel GPUs or giant botnets with hundreds of thousands of nodes to try hundreds of billions of password combinations per second in hopes of recovering plaintext username/password pairs.

      What happens when a password database falls into hackers' hands

    1. The solution will be to go for serverless functions. This means that instead of occupying a server completely, it will only use the server capacity when the function needs to run.

      Serverless as a cheap back end option

    2. My favourites are Zeit and Netlify. They are quite similar in the features they provide: continuous deployment, around 100GB of bandwidth per month, and a built-in CDN. Another benefit is that they both provide the option of serverless functions, as we will see in the next section. This simplifies the number of services we need to integrate for our entire stack.

      Good website hosting:

      Zeit or Netlify

      (CD + 100 GB of bandwidth / month + built-in CDN)

    1. Guédelon Castle (Château de Guédelon) is a castle currently under construction near Treigny, France. The castle is the focus of an experimental archaeology project aimed at recreating a 13th-century castle and its environment using period technique, dress, and material.

      Guédelon Castle (Château de Guédelon)

      More info on HN

    1. process of modular feature engineering and observation engineering while emphasising the order of augmentation to achieve the best predicted outcome from a given information set

      Tabular Data Augmentation

    1. 1) Redash and Falcon focus on people that want to do visualizations on top of SQL
      2) Superset, Tableau and PowerBI focus on people that want to do visualizations with a UI
      3) Metabase and SeekTable focus on people that want to do quick analysis (they are the closest to an Excel replacement)

      Comparison of data analysis tools:

      1) Redash & Falcon - SQL focus

      2) Superset, Tableau & PowerBI - UI workflow

      3) Metabase & SeekTable - Excel like experience

    1. Visual Studio Code supports working with Jupyter Notebooks natively, as well as through Python code files.

      To run cells inside a Python script in VS Code, all you need to do is define Jupyter-like code cells within Python code using a # %% comment:

      # %%
      msg = "Hello World"
      print(msg)
      
      # %%
      msg = "Hello again"
      print(msg)
      

    1. I could probably bootstrap my way up from this with the C compiler to write a terrible editor, then write a terrible TCP client, find my way out to ftp.gnu.org, get wget, and keep going from there. Assume that documentation is plentiful. You want a copy of the Stevens book so you can figure out how to do a DNS query by banging UDP over the network? Done.

      What would the author do in a situation of being alone in a room with:

      HD #1 is blank. HD #2 has a few scraps of a (Linux) OS on it: bootloader, kernel, C library and compiler, that sort of thing. There's a network connection of some sort, and that's about it. There are no editors and nothing more advanced than 'cat' to read files. You don't have jed, joe, emacs, pico, vi, or ed (eat flaming death). Don't even think about X. telnet, nc, ftp, ncftp, lftp, wget, curl, lynx, links? Luxury! Gone. Perl, Python and Ruby? Nope.

    1. We find three important trends in the evolution of musical discourse: the restriction of pitch sequences (with metrics showing less variety in pitch progressions), the homogenization of the timbral palette (with frequent timbres becoming more frequent), and growing average loudness levels. The picture at right shows the timbral variety: Smaller values of β indicate less timbral variety: frequent codewords become more frequent, and infrequent ones become even less frequent. This evidences a growing homogenization of the global timbral palette. It also points towards a progressive tendency to follow more fashionable, mainstream sonorities.

      Statistical evidence that modern pop music is boring or at least more homogeneous than in the past.

      Dataset used: Million Song Dataset (which doesn't contain entries from the most recent years)

    1. Unfortunately no - it cannot be done without Trusted security devices. There are several reasons for this. All of the below is working on the assumption you have no TPM or other trusted security device in place and are working in a "password only" environment.

      Devices without a TPM module will always be asked for a password (e.g. by BitLocker) on every boot

    1. Most people are surprised to learn that of Plotly’s 50 engineers, the vast majority are React developers. This is because Dash is primarily a frontend library — there are far more lines of JavaScript (Typescript) than Python, R, or Julia code. Plotly only has 3 full-time Python developers, 2 full-time R developers, and 0 full-time Julia developers.

      Who works behind Plotly/Dash: 50 engineers:

      • 45 JavaScript (?)
      • 3 Python
      • 2 R
      • 0 Julia
    2. With Dash, any open-source React UI component can be pulled from npm or GitHub, stirred with water, transmogrified into a Dash component, then imported into your Dash app as a Python, R, or Julia library. C’est magnifique! 👨‍🍳 Dash makes the richness and innovation of the React frontend ecosystem available to Python, R, and Julia engineers for the first time.

      Dash components are based on React

    1. To help you get started quickly, we created a special Installer of Visual Studio Code for Java developers. Download Visual Studio Code Java Pack Installer Note: The installer is currently only available for Windows. For other OS, please install those components (JDK, VS Code and Java extensions) individually. We're working on the macOS version, please stay tuned. The package can be used as a clean install or an update for an existing development environment to add Java or Visual Studio Code. Once downloaded and opened, it automatically detects if you have the fundamental components in your local development environment, including the JDK, Visual Studio Code, and essential Java extensions.

      If you wish to use Java inside VSCode, try downloading the Installer of Visual Studio Code for Java developers

    1. Note: When you create a new virtual environment, you should be prompted by VS Code to set it as the default for your workspace folder. If selected, the environment will automatically be activated when you open a new terminal.

      After creating a new project-related environment, it should be set as the default for that specific project

    2. Tip: Use Logpoints instead of print statements: Developers often litter source code with print statements to quickly inspect variables without necessarily stepping through each line of code in a debugger. In VS Code, you can instead use Logpoints. A Logpoint is like a breakpoint except that it logs a message to the console and doesn't stop the program. For more information, see Logpoints in the main VS Code debugging article.

      Try to use logpoints instead of print statements.

      More info: https://code.visualstudio.com/docs/editor/debugging#_logpoints

    1. Open an Anaconda command prompt and run conda create -n myenv python=3.7 pandas jupyter seaborn scikit-learn keras tensorflow

      Command to quickly create a new Anaconda environment:

      conda create -n myenv python=3.7 pandas jupyter seaborn scikit-learn keras tensorflow
      

    1. On the job, if you notice poor infrastructure, speak up to your manager early on. Clearly document the problem, and try to incorporate a data engineering, infrastructure, or devops team to help resolve the issue! I'd also encourage you to learn these skills too!

      Recommendation to Data Scientists dealing with poor infrastructure

    2. When I worked at Target HQ in 2012, employees would arrive to work early - often around 7am - in order to query the database at a time when few others were doing so. They hoped they’d get database results quicker. Yet, they’d still often wait several hours just to get results.

      What poor data infrastructure leads to

    3. In regards to quality of data on the job, I’d often compare it to a garbage bag that ripped, had its content spewed all over the ground and your partner has asked you to find a beautiful earring that was accidentally inside.

      From my experience, I can only agree

    4. On the job, I’d recommend you document your work well and calculate the monetary value of your analyses based on factors like employee salary, capital investments, opportunity cost, etc. These analyses will come in handy for a promotion/review packet later too.

      Factors to keep a track of as a Data Scientist

    5. you as a Data Scientist should try to find a situation to be incredibly valuable on the job! It’s tough to find that from the outskirts of applying to jobs, but internally, you can make inroads supporting stakeholders with evidence for their decisions!

      Try showcasing the evidence of why the employer needs you

    6. As a Data Scientist in the org, are you essential to the business? Probably not. The business could go on for a while and survive without you. Sales will still be made, features will still get built, customer support will handle customer concerns, etc.

      As a Data Scientist, you are more of a "support" to the overall team

    7. As the resident Data Scientist, you may become easily inundated with requests from multiple teams at once. Be prepared to ask these teams to qualify and defend their requests, and be prepared to say “no” if their needs fall outside the scope of your actual priority queue. I’d recommend utilizing the RICE prioritization technique for projects.

      Being the only data person, be sure to prioritise the requests

    8. Common questions are: How many users click this button; what % of users that visit a screen click this button; how many users have signed up by region or account type? However, the data needed to answer those questions may not exist! If the data does exists, it’s likely “dirty” - undocumented, tough to find or could be factually inaccurate. It’ll be tough to work with! You could spend hours or days attempting to answer a single question only to discover that you can’t sufficiently answer it for a stakeholder. In machine learning, you may be asked to optimize some process or experience for consumers. However, there’s uncertainty with how much, if at all, the experience can be improved!

      Common types of problems you might be working with in Data Science / Machine Learning industry

    9. In one experience, a fellow researcher spent over a month researching a particular value among our customers through qualitative and quantitative data. She presented a well-written and evidence-backed report. Yet, a few days later, a key head of product outlined a vision for the team and supported it with a claim that was antithetical to the researcher’s findings! Even if a data science project you advocate for is greenlighted, you may be on your own as the rare knowledgeable person to plan and execute it. It’s unlikely leadership will be hands-on to help you research and plan out the project.

      Data science leadership is sorely lacking

    10. Because people don’t know what data science does, you may have to support yourself with work in devops, software engineering, data engineering, etc.

      You might have multiple roles due to lack of clear "data science" terminology

    11. In 50+ interviews for data related jobs, I’ve been asked about AB testing, SQL analytics questions, optimizing SQL queries, how to code a game in Python, Logistic Regression, Gradient Boosted Trees, data structures and/or algorithms programming problems!

      Data science interviews lack a clarity of scope

    12. seven most common (and at times flagrant) ways that data science has failed to meet expectations in industry
      1. People don’t know what “data science” does.
      2. Data science leadership is sorely lacking.
      3. Data science can’t always be built to specs.
      4. You’re likely the only “data person.”
      5. Your impact is tough to measure — data doesn’t always translate to value.
      6. Data & infrastructure have serious quality problems.
      7. Data work can be profoundly unethical. Moral courage required.
    1. In data science community the performance of the model on the test dataset is one of the most important things people look at. Just look at the competitions on kaggle.com. They are extremely focused on test dataset and the performance of these models is really good.

      In data science, the performance of the model on the test dataset is the most important metric for the majority.

      It's not always the best measurement, since the best-performing model can completely misperform when given a different type of dataset

    2. It's basically a look up table, interpolating between known data points. Except, unlike other interpolants like 'linear', 'nearest neighbour' or 'cubic', the underlying functional form is determined to best represent the kind of data you have.

      You can describe AI/ML methods as a look-up table whose underlying functional form is fitted to your data, unlike fixed interpolants (linear, nearest neighbour or cubic)
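
      A small illustration of the analogy (hypothetical data, numpy only):

      import numpy as np

      x = np.array([0.0, 1.0, 2.0, 3.0])
      y = np.array([0.0, 0.8, 0.9, 0.1])

      print(np.interp(1.5, x, y))          # linear: a fixed functional form between known points

      coeffs = np.polyfit(x, y, deg=3)     # a fitted model learns its form from the data
      print(np.polyval(coeffs, 1.5))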

    1. git reflog is a very useful command in order to show a log of all the actions that have been taken! This includes merges, resets, reverts: basically any alteration to your branch.

      Reflog - shows the history of actions in the repo.

      With this information, you can easily undo changes that have been made to a repository with git reset

      git reflog
      

      reflog animation

      Say that we actually didn't want to merge the origin branch. When we execute the git reflog command, we see that the state of the repo before the merge is at HEAD@{1}. Let's perform a git reset to point HEAD back to where it was on HEAD@{1}!

      reflog + reset animation

    2. git pull is actually two commands in one: a git fetch, and a git merge. When we're pulling changes from the origin, we're first fetching all the data like we did with a git fetch, after which the latest changes are automatically merged into the local branch.

      Pulling - downloads content from a remote branch/repository like git fetch would do, and automatically merges the new changes

      git pull origin master
      

      pulling animation

    3. When a certain branch contains a commit that introduced changes we need on our active branch, we can cherry-pick that command! By cherry-picking a commit, we create a new commit on our active branch that contains the changes that were introduced by the cherry-picked commit.

      Cherry-picking - creates a new commit with the changes that the cherry-picked commit introduced.

      By default, Git will only apply the changes if the current branch does not have these changes in order to prevent an empty commit

      git cherry-pick 76d12
      

      cherry-picking animation

    4. Another way of undoing changes is by performing a git revert. By reverting a certain commit, we create a new commit that contains the reverted changes!

      Reverting - reverts the changes that commits introduce. Creates a new commit with the reverted changes

      git revert ec5be
      

      reverting animation