- Feb 2022
-
szkoladockera.pl szkoladockera.pl
-
If you have a dilemma about whether to use CMD or ENTRYPOINT as the starting point of your container, answer the following question: does my command always HAVE to run? If the answer is yes, use ENTRYPOINT. Moreover, if you need to pass additional parameters that can be overridden when the container is started, also use the CMD instruction.
How to simply decide if to use CMD or ENTRYPOINT in a Dockerfile
-
-
founder-fodder.ghost.io founder-fodder.ghost.io
-
More backlinks to you and your work: Being the teammate that contributes to the system of knowledge shared shows how much you care about the success of the organization. And, it does it in a way that helps you have more documented and attributable credibility for the value you create within your organization.
Why it is important to write clear documentation in an organization
-
Writing supplants meetings: When there is good documentation around a meeting (briefs, meeting notes, etc.), meetings can be leaner and more productive because people don’t have to be in the room to know what’s happening. So, only those who are actively contributing to the discussion need to attend.
-
Without meeting notes and documentation, companies become reliant on unreliable verbal accounts, 1:1 updates, and needing to be in the room to get things done.
-
Scales everyone’s knowledge: think about how many people you interact with on a given work day. I’m talking about the real human to human kind where you relay your ideas to others. It’s probably in the 5-20 range. If you put those same ideas into a doc or an email, your distribution goes to infinity. Anyone can read it.
Writing can greatly scale up
-
-
-
== and != for string comparison
-eq, -ne, -gt, -lt, -le, -ge for numerical comparison
Comparison syntax in Bash
-
> will overwrite the current contents of the file, if the file already exists. If you want to append lines instead, use >>
> - overwrites text
>> - appends text
-
The syntax for “redirecting” some output to stderr is >&2. > means “pipe stdout into” whatever is on the right, which could be a file, etc., and &2 is a reference to “file descriptor #2” which is stderr.
Using stderr. On the other hand, >&1 is for stdout
-
single quotes, which don’t expand variables
In Bash, double quotes ("") expand variables, whereas single quotes ('') don't
-
This only works if you happen to have Bash installed at /bin/bash. Depending on the operating system and distribution of the person running your script, that might not necessarily be true! It’s better to use env, a program that finds an executable on the user’s PATH and runs it.
Shebang tip: instead of
#!/bin/bash
use
#!/usr/bin/env bash
Alternatively, you can replace `bash` with `python`, `ruby`, etc., and later chmod the script and run it:
$ chmod +x my-script.sh
$ ./my-script.sh
-
- Jan 2022
-
news.ycombinator.com news.ycombinator.com
-
It isn't about note taking, it's about supporting a brain that simply isn't capable of retaining the level of information that we have to deal with. The ultimate note taking device is actually an augmented human brain that has perfect recollection and organisation.
What note taking is about
-
-
blog.emojipedia.org blog.emojipedia.org
-
In Japan, the country where the first emoji sets originated, red is traditionally used to represent increases in the value of a stock. Meanwhile, green is used to represent decreases in stock value.
Reason why chart increasing emoji 📈 is red
-
-
www.psychologytoday.com www.psychologytoday.com
-
the curse of knowledge. It’s a simple but devastating effect: Once we know something, it’s very difficult to imagine not knowing it, or to take the perspective of someone who doesn't.
The curse of knowledge
-
-
developers.redhat.com developers.redhat.com
-
Adopting Kubernetes-native environments ensures true portability for the hybrid cloud. However, we also need a Kubernetes-native framework to provide the "glue" for applications to seamlessly integrate with Kubernetes and its services. Without application portability, the hybrid cloud is relegated to an environment-only benefit. That framework is Quarkus.
Quarkus framework
-
Kubernetes-native is a specialization of cloud-native, and not divorced from what cloud native defines. Whereas a cloud-native application is intended for the cloud, a Kubernetes-native application is designed and built for Kubernetes.
Kubernetes-native application
-
According to Wilder, a cloud-native application is any application that was architected to take full advantage of cloud platforms. These applications: Use cloud platform services. Scale horizontally. Scale automatically, using proactive and reactive actions. Handle node and transient failures without degrading. Feature non-blocking asynchronous communication in a loosely coupled architecture.
Cloud-native applications
-
-
psyche.co psyche.co
-
two main problems with framing decisions and policies in terms of usefulness: (1) being useful is not always to our own benefit – sometimes, we are being used as a means to someone else’s end, and we end up miserable as a result; and (2) the lenses themselves of usefulness and uselessness can obscure our view of the good life.
2 main problems of usefulness
-
-
-
The script is in batch with some portions of powershell. The base code is fairly simple and most of it came from Googling ".bat transfer files" followed by ".bat how to only transfer certain file types" etc. The trick was making it work with my office, knowing where to scan for new files, knowing where not to scan due to lag (seriously, if you have a folder with 200,000 .txt files that crap will severely slow down your scans. Better to move it manually and then change the script to omit that folder from future searches)
-
It essentially scans the on-site drive for any new files, generates hash values for them, transfers them to the Cloud, then generates hash values again for fidelity (in court you have to prove digital evidence hasn't been tampered with).
Script to automate an 8 hour job: The firm gets thousands of digital documents, photos, etc on a daily basis. All of this goes on a local drive. My job is to transfer all of these files to the Cloud and then verify their fidelity.
-
-
towardsdatascience.com towardsdatascience.com
-
Salesforce has a unique use case where they need to serve 100K-500K models because the Salesforce Einstein product builds models for every customer. Their system serves multiple models in each ML serving framework container. To avoid the noisy neighbor problem and prevent some containers from taking significantly more load than others, they use shuffle sharding [8] to assign models to containers. I won’t go into the details and I recommend watching their excellent presentation in [3].
Case of Salesforce serving 100K-500K ML models with the use of shuffle sharding
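To make the idea concrete, here is a toy Python sketch of shuffle sharding (not Salesforce's actual implementation; all names are made up): each model id is hashed to pick its own small, deterministic subset of containers, so two "noisy" models rarely share the exact same shard.
```
import hashlib
import random

def shard_for(model_id, containers, shard_size=2):
    """Deterministically pick a pseudo-random subset of containers for a model."""
    seed = int(hashlib.sha256(model_id.encode()).hexdigest(), 16)
    rng = random.Random(seed)            # seeded per model -> stable assignment
    return rng.sample(containers, shard_size)

# Hypothetical usage:
# shard_for("tenant-42-model", [f"container-{i}" for i in range(8)])
# -> e.g. ['container-5', 'container-1']
```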
-
Batching predictions can be especially beneficial when running neural networks on GPUs since batching takes better advantage of the hardware.
Batching predictions (see the sketch below)
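A minimal, framework-agnostic Python sketch of the idea — `model.predict` is a hypothetical vectorised call; the point is that one call over a whole batch is usually much cheaper on a GPU than one call per request:
```
import numpy as np

def predict_batched(model, feature_rows, batch_size=32):
    """Run the (hypothetical) model once per batch instead of once per row."""
    predictions = []
    for start in range(0, len(feature_rows), batch_size):
        batch = np.stack(feature_rows[start:start + batch_size])
        predictions.extend(model.predict(batch))
    return predictions
```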
-
Inference Service — provides the serving API. Clients can send requests to different routes to get predictions from different models. The Inference Service unifies serving logic across models and provides easier interaction with other internal services. As a result, data scientists don’t need to take on those concerns. Also, the Inference Service calls out to ML serving containers to obtain model predictions. That way, the Inference Service can focus on I/O-bound operations while the model serving frameworks focus on compute-bound operations. Each set of services can be scaled independently based on their unique performance characteristics.
Responsibilities of Inference Service
-
Provide a model config file with the model’s input features, the model location, what it needs to run (like a reference to a Docker image), CPU & memory requests, and other relevant information.
Contents of a model config file
-
what changes when you need to deploy hundreds to thousands of online models? The TLDR: much more automation and standardization.
MLOps focuses deeply on automation and standardization
-
-
christophergs.com christophergs.com
-
“Shadow Mode” or “Dark Launch” as Google calls it is a technique where production traffic and data is run through a newly deployed version of a service or machine learning model, without that service or model actually returning the response or prediction to customers/other systems. Instead, the old version of the service or model continues to serve responses or predictions, and the new version’s results are merely captured and stored for analysis.
Shadow mode
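A hedged sketch of how shadow mode can look in serving code — `live_model` and `shadow_model` are placeholder objects with a `predict()` method; only the live result is ever returned to the caller:
```
import logging

logger = logging.getLogger("shadow")

def handle_request(features, live_model, shadow_model):
    """Serve the live prediction; capture the shadow prediction for analysis only."""
    live_prediction = live_model.predict(features)
    try:
        shadow_prediction = shadow_model.predict(features)
        logger.info("shadow comparison live=%s shadow=%s",
                    live_prediction, shadow_prediction)
    except Exception:
        # The shadow model must never affect the response path.
        logger.exception("shadow model failed")
    return live_prediction
```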
-
-
levelup.gitconnected.com levelup.gitconnected.com
-
you can also mount different FastAPI applications within the FastAPI application. This would mean that every sub-FastAPI application would have its docs, would run independent of other applications, and will handle its path-specific requests. To mount this, simply create a master application and sub-application file. Now, import the app object from the sub-application file to the master application file and pass this object directly to the mount function of the master application object.
It's possible to mount FastAPI applications within a FastAPI application
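A minimal sketch of the mounting described above (application and route names are made up):
```
from fastapi import FastAPI

master = FastAPI(title="master")
subapp = FastAPI(title="sub-application")   # gets its own /docs

@subapp.get("/status")
def status():
    return {"app": "sub-application"}

# Requests under /subapi/* are handled by the sub-application.
master.mount("/subapi", subapp)
```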
-
-
www.percona.com www.percona.com
-
There are officially 5 types of UUID values, version 1 to 5, but the most common are: time-based (version 1 or version 2) and purely random (version 4). The time-based UUIDs encode the number of 10ns since January 1st, 1970 in 7.5 bytes (60 bits), which is split in a “time-low”-“time-mid”-“time-hi” fashion. The missing 4 bits is the version number used as a prefix to the time-hi field. This yields the 64 bits of the first 3 groups. The last 2 groups are the clock sequence, a value incremented every time the clock is modified and a host unique identifier.
There are 5 types of UUIDs (source):
Type 1: stuffs MAC address+datetime into 128 bits
Type 3: stuffs an MD5 hash into 128 bits
Type 4: stuffs random data into 128 bits
Type 5: stuffs an SHA1 hash into 128 bits
Type 6: unofficial idea for sequential UUIDs
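For reference, the official versions listed above (1, 3, 4, 5) can be generated with Python's standard-library uuid module:
```
import uuid

uuid.uuid1()                                    # version 1: timestamp + MAC address
uuid.uuid3(uuid.NAMESPACE_DNS, "example.com")   # version 3: MD5 of namespace + name
uuid.uuid4()                                    # version 4: random
uuid.uuid5(uuid.NAMESPACE_DNS, "example.com")   # version 5: SHA-1 of namespace + name
```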
-
Even though most posts are warning people against the use of UUIDs, they are still very popular. This popularity comes from the fact that these values can easily be generated by remote devices, with a very low probability of collision.
-
-
contains.dev contains.dev
-
This basic example compiles a simple Go program. The naive way on the left results in a 961 MB image. When using a multi-stage build, we copy just the compiled binary which results in a 7 MB image.
# Image size: 7 MB
FROM golang:1.17.5 as builder
WORKDIR /workspace
COPY . .
ENV CGO_ENABLED=0
RUN go get && go build -o main .

FROM scratch
WORKDIR /workspace
COPY --from=builder \
    /workspace/main \
    /workspace/main
CMD ["/workspace/main"]
-
Docker introduced multi-stage builds starting from Docker Engine v17.05. This allows us to perform all preparation steps as before, but then copy only the essential files or output from these steps.
Multi-stage builds are great for Dockerfile steps that aren't used at runtime
-
Making a small change to a file or moving it will create an entire copy of the file. Deleting a file will only hide it from the final image, but it will still exist in its original layer, taking up space. This is all a result of how images are structured as a series of read-only layers. This provides reusability of layers and efficiencies with regards to how images are stored and executed. But this also means we need to be aware of the underlying structure and take it into account when we create our Dockerfile.
Summary of file duplication topic in Docker images
-
In this example, we created 3 copies of our file throughout different layers of the image. Despite removing the file in the last layer, the image still contains the file in other layers which contributes to the overall size of the image.
FROM debian:bullseye
COPY somefile.txt .                      #1
# Small change but entire file is copied
RUN echo "more data" >> somefile.txt     #2
# File moved but layer now contains an entire copy of the file
RUN mv somefile.txt somefile2.txt        #3
# File won't exist in this layer,
# but it still takes up space in the previous ones.
RUN rm somefile2.txt
-
We’re just chmod'ing an existing file, but Docker can’t change the file in its original layer, so that results in a new layer where the file is copied in its entirety with the new permissions. In newer versions of Docker, this can now be written as the following to avoid this issue using Docker’s BuildKit:
Instead of this:
FROM debian:bullseye
COPY somefile.txt .
RUN chmod 777 somefile.txt
Try to use this:
FROM debian:bullseye
COPY --chmod=777 somefile.txt .
-
when you make changes to files that come from previous layers, they’re copied into the new layer you’re creating.
-
Many processes will create temporary files, caches, and other files that have no benefit to your specific use case. For example, running apt-get update will update internal files that you don’t need to persist because you’ve already installed all the packages you need. So we can add rm -rf /var/lib/apt/lists/* as part of the same layer to remove those (removing them with a separate RUN will keep them in the original layer, see “Avoid duplicating files”). Docker recognizes this is an issue and went as far as adding apt-get clean automatically for their official Debian and Ubuntu images.
Removing cache
-
An important way to ensure you’re not bringing in unintended files is to define a .dockerignore file.
.dockerignore sample:
# Ignore git and caches
.git
.cache
# Ignore logs
logs
# Ignore secrets
.env
# Ignore installed dependencies
node_modules
...
-
You can save any local image as a tar archive and then inspect its contents.
Example of inspecting docker image:
bash-3.2$ docker save <image-digest> -o image.tar
bash-3.2$ tar -xf image.tar -C image
bash-3.2$ cd image
bash-3.2$ tar -xf <layer-digest>/layer.tar
bash-3.2$ ls
One can also use Dive or Contains.dev
-
-
cushychicken.github.io cushychicken.github.io
-
- Technology → Cool Stuff to Work On
- Intellection → Smart People to Work With
- Certainty → Repeatability in Work Environment
What engineers mostly want from job offers
-
-
blog.robsayers.com blog.robsayers.com
-
Smalltalk image is unlike a collection of Java class files in that it can store your programs (both source and compiled bytecode), their data, and execution state. You can quit and save your work while some code is executing, move your image to an entirely different machine, load your image… and your program picks up where it left off.
Advantage of the *Smalltalk* VM
-
Smalltalk has a virtual machine which allows you to execute your code on any platform where the VM can run. The image however is not only all your code, but the entire Smalltalk system, including said virtual machine (because of course it’s written in itself).
Smalltalk has a VM
-
-
xdg.me xdg.me
-
The most important thing you can do is provide actionable feedback. This means being specific about your observations of their work. It also means providing direction about what they could have done differently or what they need to learn or practice in order to improve. If your mentee doesn’t know what to do next, your feedback wasn’t actionable.
In mentoring, make sure to provide actionable feedback
-
-
-
So when the morale on the team dips (usually related to lack of clarity or setbacks making things take a long time) people leave the team, which further damages morale and can easily result in a mass exodus.
-
People come and go (which has the net impact of making you feel like you are just a resource)
-
-
glyph.twistedmatrix.com glyph.twistedmatrix.com
-
Instead of “I have a type, it’s called MyType, it has a constructor, in the constructor I assign the property ‘A’ to the parameter ‘A’ (and so on)”, you say “I have a type, it’s called MyType, it has an attribute called a”
How class declaration in Plain Old Python compares to attr
-
attrs lets you declare the fields on your class, along with lots of potentially interesting metadata about them, and then get that metadata back out.
Essence on what attr does
-
>>> Point3D(1, 2, 3) == Point3D(1, 2, 3)
attr library includes value comparison and does not require an explicit implementation:
def __eq__(self, other):
    if not isinstance(other, self.__class__):
        return NotImplemented
    return (self.x, self.y, self.z) == (other.x, other.y, other.z)

def __lt__(self, other):
    if not isinstance(other, self.__class__):
        return NotImplemented
    return (self.x, self.y, self.z) < (other.x, other.y, other.z)
-
>>> Point3D(1, 2, 3)
attr library includes string representation and does not require an explicit implementation:
def __repr__(self):
    return (self.__class__.__name__ +
            ("(x={}, y={}, z={})".format(self.x, self.y, self.z)))
-
Look, no inheritance! By using a class decorator, Point3D remains a Plain Old Python Class (albeit with some helpful double-underscore methods tacked on, as we’ll see momentarily).
attr library removes a lot of boilerplate code when defining Python classes, and includes such features as string representation or value comparison.
Example of a Plain Old Python Class:
class Point3D(object):
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z
Example of a Python class defined with attr:
import attr

@attr.s
class Point3D(object):
    x = attr.ib()
    y = attr.ib()
    z = attr.ib()
-
-
-
The best way to improve your ability to think is to actually spend time thinking.
You need to take your time
-
Thinking means concentrating on one thing long enough to develop an idea about it.
Thinking
-
-
shkspr.mobi shkspr.mobi
-
This runs a loop 555 times. Takes a screenshot, names it for the loop number with padded zeros, taps the bottom right of the screen, then waits for a second to ensure the page has refreshed. Slow and dull, but works reliably.
Simple bash script to use via ADB to automatically scan pages:
#!/bin/bash
for i in {00001..00555}; do
    adb exec-out screencap -p > $i.png
    adb shell input tap 1000 2000
    sleep 1s
done
echo All done
-
-
www.reddit.com www.reddit.com
-
the need for multiple outputs (FL ASIO) vs better performance (ASIO4ALL)
Core difference between FL ASIO & ASIO4ALL
-
-
-
Gary Klein himself has made a name developing techniques for extracting pieces of tacit knowledge and making it explicit. (The technique is called ‘The Critical Decision Method’, but it is difficult to pull off because it demands expertise in CDM itself).
AI can help with turning tacit knowledge into explicit
-
If you are a knowledge worker, tacit knowledge is a lot more important to the development of your field of expertise than you might think.
Tacit knowledge is especially important for knowledge workers
-
Notice how little verbal instruction is involved. What is more important is emulation, and action — that is, a focus on the embodied feelings necessary to ride a bicycle successfully. And this exercise was quite magical for me, for within the span of an hour I could watch a kid go from conscious incompetence to conscious competence and finally to unconscious competence. In other words, tacit knowledge instruction happens through things like imitation, emulation, and apprenticeship. You learn by copying what the master does, blindly, until you internalise the principles behind the actions.
In learning, imitation, emulation and action are very important.
-
When I was a kid, I taught myself how to ride a bike … by accident. And then I taught my sisters and then my cousin and then another kid in the neighbourhood who was interested but a little scared. They were zooming around in about an hour each. The steps were as follows:
Interesting example on how to teach oneself to ride a bike (see steps below)
-
Tacit knowledge is knowledge that cannot be captured through words alone.
Tacit knowledge
-
- Dec 2021
-
arnoldgalovics.com arnoldgalovics.com
-
Artifactory/Nexus/Docker repo was unavailable for a tiny fraction of a second when downloading/uploading packages. The Jenkins builder randomly got stuck.
Typical random issues when deploying microservices
-
Microservices can really bring value to the table, but the question is; at what cost? Even though the promises sound really good, you have more moving pieces within your architecture which naturally leads to more failure. What if your messaging system breaks? What if there’s an issue with your K8S cluster? What if Jaeger is down and you can’t trace errors? What if metrics are not coming into Prometheus?
Microservices have quite many moving parts
-
If you’re going with a microservice:
9 things needed for deploying a microservice (listed below)
-
Let’s take a simple online store app as an example.
5 things needed for deploying a monolith (listed below)
-
some of the pros for going microservices
Pros of microservices (not always all are applicable):
- Fault isolation
- Eliminating the technology lock
- Easier understanding
- Faster deployment
- Scalability
-
-
news.ycombinator.com news.ycombinator.com
-
I think one of the ways that remote work changes this is that I can do other things while I think through a tricky problem; I can do dishes or walk my dog or something instead of trying to look busy in a room with 6-12 other people who are furiously typing because that's how the manager and project manager understand that work gets done.
Way work often looks like during remote dev work
-
-
www.docker.com www.docker.com
-
docker scan elastic/logstash:7.13.3 | grep 'Arbitrary Code Execution'
Example of scanning docker image for a log4j vulnerability
-
-
www.reddit.com www.reddit.com
-
AAX is Pro Tools' exclusive format for virtual synths. So you really only need it if you use Pro Tools (which doesn't support the VST format).
Only install VST plugin format (not AAX) for FL Studio
-
-
youcantdownloadthisimage.online youcantdownloadthisimage.online
-
When you usually try to download an image, your browser opens a connection to the server and sends a GET request asking for the image. The server responds with the image and closes the connection. Here however, the server sends the image, but doesn't close the connection! It keeps the connection open and sends bits of data periodically to make sure it stays open. So your browser thinks that the image is still being sent over, which is why the download seems to be going on infinitely.
How to prevent a user from downloading an image
-
-
byrnehobart.medium.com byrnehobart.medium.com
-
be microfamous. Microfame is the best kind of fame, because it combines an easier task (be famous to fewer people) with a better outcome (be famous to the right people).
Idea of being microfamous over famous
-
-
www.cyberciti.biz www.cyberciti.biz
-
:w !sudo tee %
Save a file in Vim / Vi without root permission with sudo
-
-
www.dobreprogramy.pl www.dobreprogramy.pl
-
Windows 10 Enterprise LTSC 2021 (the full, correct name of the 21H2 LTSC release) will receive updates until January 2032.
Windows 10 LTSC
-
Experienced users knew, however, that it was worth waiting for the Server edition. In the case of Windows 11, it seems one should wait for the LTSC version.
You may want to wait for a Windows 11 LTSC release before upgrading from Windows 10, which got its LTSC edition first
-
-
www.iamjonas.me www.iamjonas.me
-
if it's important enough it will surface somewhere somehow.
Way to apply FOMO
-
The Feynman technique goes as follows: write down what you know about a subject. Explain it in words simple enough for a 6th grader to understand. Identify any gaps in your knowledge and read up on that again until all of the explanation is dead simple and short.
The Feynman technique for learning
-
Similar to frequency lists in a natural language there are concepts in any software product that the rest of the software builds upon. I'll call them core concepts. Let's use git as an example. Three core concepts are commits, branches and conflicts.If you know these three core concepts you can proceed with further learning.
To speed up learning, start with the core concepts like frequency lists for learning languages
-
- Nov 2021
-
www.simplethread.com www.simplethread.com
-
We can also see that converting the original non-dithered image to WebP gets a much smaller file size, and the original aesthetic of the image is preserved.
Favor converting images to WebP over dithering them
-
-
amaca.substack.com amaca.substack.com
-
A lot of us are going to die of unpredictable diseases, some of us young. Really, don't spend your life getting fitter, healthier, more productive. We are all going to die, and Earth will explode in the Sun in a few billion years: please, enjoy some now.
:)
-
-
pythonspeed.com pythonspeed.com
-
I’d probably choose the official Docker Python image (python:3.9-slim-bullseye) just to ensure the latest bugfixes are always available.
python:3.9-slim-bullseye may be the sweet spot for a Python Docker image
-
So which should you use? If you’re a RedHat shop, you’ll want to use their image. If you want the absolute latest bugfix version of Python, or a wide variety of versions, the official Docker Python image is your best bet. If you care about performance, Debian 11 or Ubuntu 20.04 will give you one of the fastest builds of Python; Ubuntu does better on point releases, but will have slightly larger images (see above). The difference is at most 10% though, and many applications are not bottlenecked on Python performance.
Choosing the best Python base Docker image depends on different factors.
-
There are three major operating systems that roughly meet the above criteria: Debian “Bullseye” 11, Ubuntu 20.04 LTS, and RedHat Enterprise Linux 8.
3 candidates for the best Python base Docker image
-
-
www.binghamton.edu www.binghamton.edu
-
While people who are both trustworthy and competent are the most sought after when it comes to team assembly, friendliness and trustworthiness are often more important factors than competency.
-
The researchers found that people who exhibited both competence, through the use of challenging voice, and trustworthiness, through the use of supportive voice, were the most in-demand people when it came to assembling teams.
- Challenging voice: Communicating in a way that challenges the status quo and is focused on new ideas and efficiency.
- Supportive voice: Communicating in a way that strengthens social ties and trust, and builds friendly cohesion of a team.
-
-
www.taniarascia.com www.taniarascia.com
-
Feature | GraphQL | REST
GraphQL vs REST (table)
-
There are advantages and disadvantages to both systems, and both have their use in modern API development. However, GraphQL was developed to combat some perceived weaknesses with the REST system, and to create a more efficient, client-driven API.
List of differences between REST and GraphQL (below this annotation)
-
-
linuxjourney.com linuxjourney.com
-
special permission bit at the end here t, this means everyone can add files, write files, modify files in the /tmp directory, but only root can delete the /tmp directory
t permission bit
-
-
aws.amazon.com aws.amazon.com
-
We implemented a bash script to be installed in the master node of the EMR cluster, and the script is scheduled to run every 5 minutes. The script monitors the clusters and sends a CUSTOM metric EMR-INUSE (0=inactive; 1=active) to CloudWatch every 5 minutes. If CloudWatch receives 0 (inactive) for some predefined set of data points, it triggers an alarm, which in turn executes an AWS Lambda function that terminates the cluster.
Solution to terminate EMR cluster; however, right now EMR supports auto-termination policy out of the box
-
-
-
git ls-files is more than 5 times faster than both fd --no-ignore and find
git ls-files is the fastest command to find entries in the filesystem
-
-
www.die-welt.net www.die-welt.net
-
If we call this using Bash, it never gets further than the exec line, and when called using Python it will print lol as that's the only effective Python statement in that file.
#!/bin/bash "exec" "python" "myscript.py" "$@" print("lol")
-
For Python the variable assignment is just a var with a weird string, for Bash it gets executed and we store the result.
__PYTHON="$(command -v python3 || command -v python)"
-
-
buttondown.email buttondown.email
-
There are a handful of tools that I used to use and now it’s narrowed down to just one or two: pandas-profiling and Dataiku for columnar or numeric data - here’s some getting started tips. I used to also load the data into bamboolib but the purpose of such a tool is different. For text data I have written my own profiler called nlp-profiler.
Tools to help with data exploration:
- pandas-profiling
- Dataiku
- bamboolib
- nlp-profiler
-
-
thenewstack.io thenewstack.io
-
If for some reason you don’t see a running pod from this command, then using kubectl describe po a is your next-best option. Look at the events to find errors for what might have gone wrong.
kubectl run a --image alpine --command -- /bin/sleep 1d
-
As with listing nodes, you should first look at the status column and look for errors. The ready column will show how many pods are desired and how many are running.
kubectl get pods -A -o wide
-
-o wide option will tell us additional details like operating system (OS), IP address and container runtime. The first thing you should look for is the status. If the node doesn’t say “Ready” you might have a problem, but not always.
kubectl get nodes -o wide
-
This command will be the easiest way to discover if your scheduler, controller-manager and etcd node(s) are healthy.
kubectl get componentstatus
-
If something broke recently, you can look at the cluster events to see what was happening before and after things broke.
kubectl get events -A
-
this command will tell you what CRDs (custom resource definitions) have been installed in your cluster and what API version each resource is at. This could give you some insights into looking at logs on controllers or workload definitions.
kubectl api-resources -o wide --sort-by name
-
kubectl get --raw '/healthz?verbose'
Alternative to
kubectl get componentstatus
. It does not show scheduler or controller-manager output, but it adds a lot of additional checks that might be valuable if things are broken -
Here are the eight commands to run
8 commands to debug Kubernetes cluster:
kubectl version --short
kubectl cluster-info
kubectl get componentstatus
kubectl api-resources -o wide --sort-by name
kubectl get events -A
kubectl get nodes -o wide
kubectl get pods -A -o wide
kubectl run a --image alpine --command -- /bin/sleep 1d
-
-
www.swyx.io www.swyx.io
-
80% of developers are "dark", they dont write or speak or participate in public tech discourse.
After working in tech, I would estimate the same
-
They'll teach you for free. Most people don't see what's right in front of them. But not you. "With so many junior devs out there, why will they help me?", you ask. Because you learn in public. By teaching you, they teach many. You amplify them.
Senior engineers can teach you for free if you just open up online
-
Try your best to be right, but don't worry when you're wrong. Repeatedly. If you feel uncomfortable, or like an impostor, good. You're pushing yourself. Don't assume you know everything, but try your best anyway, and let the internet correct you when you are inevitably wrong. Wear your noobyness on your sleeve.
Truly inspiring! I need to save this as one of my favorite quotes (and share on my blog, of course)!
-
start building a persistent knowledge base that grows over time. Open Source your Knowledge! At every step of the way: Document what you did and the problems you solved.
That is why I am trying to be present even more on my social media, or on the personal blog. Maybe one day I will try to open-source my OneNote notes as a Wiki-like page
-
Whatever your thing is, make the thing you wish you had found when you were learning. Don't judge your results by "claps" or retweets or stars or upvotes - just talk to yourself from 3 months ago.
This is the exact same mindset I am following since some time, and it is awesome!
-
-
news.ycombinator.com news.ycombinator.com
-
For example instead of "the team were incredibly frustrating to work with and ignored all my suggestions", instead "the culture made it very hard to do quality engineering, which over time was demoralising"
Way to nicely explain why you have left your previous job
-
-
cuddly-octo-palm-tree.com cuddly-octo-palm-tree.com
-
Given all that, I simply do not understand why people keep recommending the {} syntax at all. It's a rare case where you'd want all the associated issues. Essentially, the only "advantage" of not running your functions in a subshell is that you can write to global variables. I'm willing to believe there are cases where that is useful, but it should definitely not be the default.
According to the author, strangely, {} syntax is more popular than ().
However, the subshell has its various disadvantages, as listed by the HackerNews user
-
All we've done is replace the {} with (). It may look like a benign change, but now, whenever that function is invoked, it will be run within a subshell.
Running bash functions within a subshell: () brings some advantages
-
-
romandesign.co romandesign.co
-
Coaching is external guidance and feedback on your performance.
Coaching - external guidance and feedback on performance
Mentoring - subset of coaching primarily focused on the creation of knowledge
-
-
www.ciemnastrona.com.pl www.ciemnastrona.com.pl
-
Make deals with companies from a similar industry and display photos of their products on your site, without any code. Occasionally write a (properly labelled) sponsored article. Add a link for direct payments to your account. There are plenty of ideas. Unfortunately, many chose the easiest option and hooked into global ad networks. They don't necessarily benefit from this "partnership". Ad elements collect information about users even if we don't click them (and thus we don't bring any revenue to site owners anyway).
Why it's worth using ad blockers & how site owners could replace this business model
-
-
-
I’ll recap the steps in case you got lost. I start with the assumption that I’ve already downloaded the invite.ics file.
5 simple steps to spoof invite.ics files
-
-
www.science.org www.science.org
-
border collies—could recall and retrieve at least 10 toys they had been taught the names of. One overachiever named Whisky correctly retrieved 54 out of 59 toys he had learned to identify.
Border collies can recall like 10-55 toy names, whereas most dogs can recall around 1-2
-
those capable of quickly memorizing multiple toy names—shows they often tilt their heads before correctly retrieving a specific toy. That suggests the behavior might be a sign of concentration and recall in our canine pals, the team suggests.
Dogs may tilt their head to recall a toy name
-
-
-
x() is the same as doing x.__call__()
-
How do you even begin to check if you can try and “call” a function, class, and whatnot? The answer is actually quite simple: You just see if the object implements the __call__ special method.
Use of
__call__
-
Python is referred to as a “duck-typed” language. What it means is that instead of caring about the exact class an object comes from, Python code generally tends to check instead if the object can satisfy certain behaviours that we are looking for.
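A small example of that duck-typed check, using the __call__ behaviour mentioned above:
```
def run_if_callable(obj):
    # We only care whether obj behaves like a callable (implements __call__),
    # not which class it comes from.
    if hasattr(obj, "__call__"):     # same idea as the built-in callable(obj)
        return obj()
    return None

run_if_callable(print)   # calls print() and returns its result (None)
run_if_callable(42)      # ints are not callable -> returns None
```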
-
everything is stored inside dictionaries. And the vars method exposes the variables stored inside objects and classes.
Python stores objects, their variables, methods and such inside dictionaries, which can be checked using vars()
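A quick illustration of vars() exposing those dictionaries (class and attribute names are made up):
```
class Config:
    retries = 3                 # stored in Config's class dictionary
    def __init__(self):
        self.timeout = 30       # stored in the instance's __dict__

cfg = Config()
vars(cfg)      # {'timeout': 30}
vars(Config)   # mappingproxy containing 'retries', '__init__', ...
```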
-
-
www.atlassian.com www.atlassian.com
-
The overall flow of Gitflow is:
- A develop branch is created from main
- A release branch is created from develop
- Feature branches are created from develop
- When a feature is complete it is merged into the develop branch
- When the release branch is done it is merged into develop and main
- If an issue in main is detected a hotfix branch is created from main
- Once the hotfix is complete it is merged to both develop and main
The overall flow of Gitflow
-
- Oct 2021
-
www.atlassian.com www.atlassian.com
-
Hotfix branches are a lot like release branches and feature branches except they're based on main instead of develop. This is the only branch that should fork directly off of main. As soon as the fix is complete, it should be merged into both main and develop (or the current release branch), and main should be tagged with an updated version number.
Hotfix branches
-
Once develop has acquired enough features for a release (or a predetermined release date is approaching), you fork a release branch off of develop. Creating this branch starts the next release cycle, so no new features can be added after this point—only bug fixes, documentation generation, and other release-oriented tasks should go in this branch. Once it's ready to ship, the release branch gets merged into main and tagged with a version number. In addition, it should be merged back into develop, which may have progressed since the release was initiated.
Release branch
-
feature branches use develop as their parent branch. When a feature is complete, it gets merged back into develop. Features should never interact directly with main.
Feature branches should only interact with a develop branch
-
When using the git-flow extension library, executing git flow init on an existing repo will create the develop branch
git flow init will create the develop branch and configure prefixes for:
- feature branch
- release branch
- hotfix branch
- support branch
-
-
medium.com medium.com
-
The main condition that needs to be satisfied in order to use OneFlow is that every new production release is based on the previous release. The main difference between OneFlow and Git Flow is that OneFlow does not have a develop branch.
Main difference between OneFlow and Git Flow
-
The main difference between GitLab Flow and GitHub Flow is the environment branches in GitLab Flow (e.g. staging and production), because not every project is able to deploy to production every time a feature branch is merged
Main difference between GitLab Flow and GitHub Flow
-
4 branching workflows for Git
- Git Flow
- GitHub Flow
- GitLab Flow
- One Flow
-
release-* — release branches support preparation of a new production release. They allow many minor bugs to be fixed and preparation of meta-data for a release. May branch off from develop and must merge into master and develop.
release branches
-
hotfix-* — hotfix branches are necessary to act immediately upon an undesired status of master. May branch off from master and must merge into master and develop.
hotfix branches
-
feature-* — feature branches are used to develop new features for the upcoming releases. May branch off from develop and must merge into develop.
feature branches
-
-
orkhanscience.medium.com orkhanscience.medium.com
-
The main idea behind the space-based pattern is the distributed shared memory to mitigate issues that frequently occur at the database level. The assumption is that by processing most operations using in-memory data we can avoid extra operations in the database, thus any future problems that may evolve from there (for example, if your user activity data entity has changed, you don’t need to change a bunch of code persisting to & retrieving that data from the DB). The basic approach is to separate the application into processing units (that can automatically scale up and down based on demand), where the data will be replicated and processed between those units without any persistence to the central database (though there will be local storages for the occasion of system failures).
Space-based architecture
-
Microservices architecture consists of separately deployed services, where each service would have ideally single responsibility. Those services are independent of each other and if one service fails others will not stop running.
Microservices architecture
-
First of all, if you know the basics of architecture patterns, then it is easier for you to follow the requirements of your architect. Secondly, knowing those patterns will help you to make decisions in your code
2 main advantages of using design patterns:
- easier to follow requirements of an architect
- easier to make decisions in your code
-
Microkernel Architecture, also known as Plugin architecture, is the design pattern with two main components: a core system and plug-in modules (or extensions). A great example would be a Web browser (core system) where you can install endless extensions (or plugins).
Microkernel (plugin) architecture
-
The idea behind this pattern is to decouple the application logic into single-purpose event processing components that asynchronously receive and process events. This pattern is one of the popular distributed asynchronous architecture patterns known for high scalability and adaptability.
Event-driven architecture: high scalability and adaptability
-
It is the most common architecture for monolithic applications. The basic idea behind the pattern is to divide the app logic into several layers each encapsulating specific role. For example, the Persistence layer would be responsible for the communication of your app with the database engine.
Layered architecture
-
-
-
My favourite tactic is to ask a yes/no question. What I love about this is that there’s a much lower chance that the person answering will go off on an irrelevant tangent – they’ll almost always say something useful to me.
Asking yes/no questions can be more powerul than I thought
-
instead of finding someone who can easily give a clear explanation, I just need to find someone who has the information I want and then ask them specific questions until I’ve learned what I want to know. And I’ve found that most people really do want to be helpful, so they’re very happy to answer questions. And if you get good at asking questions, you can often find a set of questions that will get you the answers you want pretty quickly, so it’s a good use of everyone’s time!
Explaining things is extremely hard, especially for some people. Therefore, you need to be ready to ask more specific questions
-
-
www.vice.com www.vice.com
-
A screenshot from the document providing an overview of different data retention periods. Image: Motherboard.
Is it possible that FBI stores this data on us?
-
-
stackoverflow.com stackoverflow.com
-
$@ is all of the parameters passed to the script. For instance, if you call ./someScript.sh foo bar then $@ will be equal to foo bar.
Meaning of $@ in Bash
-
-
techcommunity.microsoft.com techcommunity.microsoft.com
-
Go ahead and Clear Data so you can start Onenote. Before you close Onenote click on the Sticky Notes button then close Onenote. Onenote will now open normally. If you forget to click on the Sticky Notes button Onenote will break and it wont start again. When this happens you'll need to Clear Data to get Onenote started. I use Onenote everyday and something changed last night.
I am having the same problem with OneNote and this is the solution
-
-
-
indent=True here is treated as indent=1, so it works, but I’m pretty sure nobody would intend that to mean an indent of 1 space
-
bool is actually not a primitive data type — it’s actually a subclass of int!
Python has only 5 primitives
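This is easy to verify in a REPL:
```
>>> issubclass(bool, int)
True
>>> isinstance(True, int)
True
>>> True + True    # bools really are ints under the hood
2
```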
-
complex is a supertype of float, which, in turn, is a supertype of int.
On some of Python's primitives
-
Now since the “compiling to bytecode” step above takes a noticeable amount of time when you import a module, Python stores (marshalls) the bytecode into a .pyc file, and stores it in a folder called __pycache__. The __cached__ parameter of the imported module then points to this .pyc file. When the same module is imported again at a later time, Python checks if a .pyc version of the module exists, and then directly imports the already-compiled version instead, saving a bunch of time and computation.
Python takes benefit of caching imports
-
Bytecode is a set of micro-instructions for Python’s virtual machine. This “virtual machine” is where Python’s interpreter logic resides. It essentially emulates a very simple stack-based computer on your machine, in order to execute the Python code written by you.
What bytecode does
-
Python is compiled. In fact, all Python code is compiled, but not to machine code — to bytecode
Python is compiled to bytecode
-
Python always runs in debug mode by default. The other mode that Python can run in, is “optimized mode”. To run python in “optimized mode”, you can invoke it by passing the -O flag. And all it does, is prevents assert statements from doing anything (at least so far), which in all honesty, isn’t really useful at all.
Python debug vs optimized mode
-
np = __import__('numpy') # Same as doing 'import numpy as np'
-
This refers to the module spec. It contains metadata such as the module name, what kind of module it is, as well as how it was created and loaded.
__spec__
-
let’s say you only want to support integer addition with this class, and not floats. This is where you’d use NotImplemented
Example use case of NotImplemented:
class MyNumber:
    def __add__(self, other):
        if isinstance(other, float):
            return NotImplemented
        return other + 42
-
__radd__ operator, which adds support for right-addition
class MyNumber:
    def __add__(self, other):
        return other + 42
    def __radd__(self, other):
        return other + 42
-
Now I should mention that all objects in Python can add support for all Python operators, such as +, -, +=, etc., by defining special methods inside their class, such as __add__ for +, __iadd__ for +=, and so on.
For example:
class MyNumber:
    def __add__(self, other):
        return other + 42
and then:
>>> num = MyNumber()
>>> num + 3
45
-
NotImplemented is used inside a class’ operator definitions, when you want to tell Python that a certain operator isn’t defined for this class.
NotImplemented constant in Python
-
Doing that would even catch KeyboardInterrupt, which would make you unable to close your program by pressing Ctrl+C.
except BaseException: ...
-
every exception is a subclass of BaseException, and nearly all of them are subclasses of Exception, other than a few that aren’t supposed to be normally caught.
on Python's exceptions
-
print(dir(__builtins__))
command to get all the builtins
-
builtin scope in Python: It’s the scope where essentially all of Python’s top level functions are defined, such as len, range and print. When a variable is not found in the local, enclosing or global scope, Python looks for it in the builtins.
builtin scope (part of LEGB rule)
-
Global scope (or module scope) simply refers to the scope where all the module’s top-level variables, functions and classes are defined.
Global scope (part of LEGB rule)
-
you can use the nonlocal keyword in Python to tell the interpreter that you don’t mean to define a new variable in the local scope, but you want to modify the one in the enclosing scope.
nonlocal
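A short example of nonlocal reaching into the enclosing scope:
```
def make_counter():
    count = 0                  # lives in the enclosing (nonlocal) scope
    def increment():
        nonlocal count         # modify the enclosing variable instead of creating a local one
        count += 1
        return count
    return increment

counter = make_counter()
counter()   # 1
counter()   # 2
```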
-
The enclosing scope (or nonlocal scope) refers to the scope of the classes or functions inside which the current function/class lives.
Enclosing scope (part of LEGB rule)
-
The local scope refers to the scope that comes with the current function or class you are in.
Local scope (part of LEGB rule)
-
A builtin in Python is everything that lives in the builtins module.
Python's builtin
-
-
sadh.life sadh.life
-
in Python 3.0 (alongside 2.6), A new method was added to the str data type: str.format. Not only was it more obvious in what it was doing, it added a bunch of new features, like dynamic data types, center alignment, index-based formatting, and specifying padding characters.
History of str.format in Python
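A few of those str.format features in action (index-based formatting, centre alignment, padding characters):
```
"{0} and {1} and {0}".format("spam", "eggs")   # 'spam and eggs and spam'
"{:*^11}".format("hello")                      # '***hello***' (centred, padded with *)
"{:.3f}".format(3.14159)                       # '3.142'
```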
-
-
cloudcasts.io cloudcasts.io
-
So, while DELETE operations are free, LIST operations (to get a list of objects) are not free (~$.005 per 1000 requests, varying a bit by region).
Deleting S3 buckets is not free. Whether you use the Web Console or the AWS CLI, a LIST call is issued per 1000 objects (see the sketch below)
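A hedged boto3 sketch of why that is: every page of up to 1000 keys returned while emptying a bucket is one billed LIST request, while the deletes themselves are free (bucket name is made up):
```
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"   # hypothetical bucket

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):        # each page = one billed LIST call
    keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
    if keys:
        s3.delete_objects(Bucket=bucket, Delete={"Objects": keys})  # DELETEs are free
```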
-
-
sadh.life sadh.life
-
TypedDict is a dictionary whose keys are always string, and values are of the specified type. At runtime, it behaves exactly like a normal dictionary.
TypedDict
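A minimal TypedDict example (class and key names are illustrative):
```
from typing import TypedDict

class Movie(TypedDict):
    title: str
    year: int

movie: Movie = {"title": "Blade Runner", "year": 1982}  # checked by mypy
movie["year"] = "1982"   # mypy error: expected int; at runtime it's still a plain dict
```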
-
you should only use reveal_type to debug your code, and remove it when you’re done debugging.
Because it's only used by mypy
-
What this says is “function double takes an argument n which is an int, and the function returns an int.
def double(n: int) -> int:
-
This tells mypy that nums should be a list of integers (List[int]), and that average returns a float.
from typing import List

def average(nums: List[int]) -> float:
-
for starters, use mypy --strict filename.py
If you're starting your journey with mypy, use the --strict flag
-
-
www.oreilly.com www.oreilly.com
-
few battle-hardened options, for instance: Airflow, a popular open-source workflow orchestrator; Argo, a newer orchestrator that runs natively on Kubernetes, and managed solutions such as Google Cloud Composer and AWS Step Functions.
Current top orchestrators:
- Airflow
- Argo
- Google Cloud Composer
- AWS Step Functions
-
To make ML applications production-ready from the beginning, developers must adhere to the same set of standards as all other production-grade software. This introduces further requirements:
Requirements specific to MLOps systems:
- Large scale of operations
- Orchestration
- Robust versioning (data, models, code)
- Apps integrated to surrounding busness systems
-
In contrast, a defining feature of ML-powered applications is that they are directly exposed to a large amount of messy, real-world data which is too complex to be understood and modeled by hand.
One of the best ways to picture a difference between DevOps and MLOps
-
-
www.linkedin.com www.linkedin.com
-
As a character, I believe that I am quite a responsible and hardworking person. The combination of the two becomes dangerous when you lose the measure of how much you should work to get a job done right.
That sounds like me, and I completely agree with the author that it can be truly dangerous at times
-
-
-
UPDATE--SHA-1, the 25-year-old hash function designed by the NSA and considered unsafe for most uses for the last 15 years, has now been “fully and practically broken” by a team that has developed a chosen-prefix collision for it.
SHA-1 has been broken; therefore, make sure not to use it in a production based environment
-
-
blog.jooq.org blog.jooq.org
-
So, what’s a better way to illustrate JOIN operations? JOIN diagrams!
Apparently, SQL should be taught using JOIN diagrams not Venn diagrams?
-
-
www.kaggle.com www.kaggle.com
-
State of Data Science and Machine Learning 2021
-
-
naehrdine.blogspot.com naehrdine.blogspot.com
-
iOS 15.0 introduces a new feature: an iPhone can be located with Find My even while the iPhone is turned "off"
-
-
-
Argo Workflow is part of the Argo project, which offers a range of, as they like to call it, Kubernetes-native get-stuff-done tools (Workflow, CD, Events, Rollouts).
High level definition of Argo Workflow
-
Argo is designed to run on top of k8s. Not a VM, not AWS ECS, not Container Instances on Azure, not Google Cloud Run or App Engine. This means you get all the good of k8s, but also the bad.
Pros of Argo Workflow:
- Resilience
- Autoscaling
- Configurability
- Support for RBAC
Cons of Argo Workflow:
- A lot of YAML files required
- k8s knowledge required
-
If you are already heavily invested in Kubernetes, then yes look into Argo Workflow (and its brothers and sisters from the parent project).The broader and harder question you should ask yourself is: to go full k8s-native or not? Look at your team’s cloud and k8s experience, size, growth targets. Most probably you will land somewhere in the middle first, as there is no free lunch.
Should you go into Argo, or not?
-
In order to reduce the number of lines of text in Workflow YAML files, use WorkflowTemplate. This allows for re-use of common components.
kind: WorkflowTemplate
-
-
blog.royalsloth.eu blog.royalsloth.eu
-
Wet behind the ears: the ones that have just started working as a professional programmers and need lots of guidance. Sort of knows what they are doing: professional programmers, who have sort of figured what is going on and can mostly finish their tasks on their own. Experienced: the ones who have been around the block for a while and are most likely the brains behind all the major design decisions of a large software project. Coworkers usually turn to them when they need advice for a hard technical problem. Some programmers will never reach this level, despite their official title stating otherwise (but more on that later).
3 levels of programming experience:
- wet behind the ears
- sort of knows what they are doing
- experienced
-
-
lucasfcosta.com lucasfcosta.com
-
The problem with the first approach is that it considers software development to be a deterministic process when, in fact, it’s stochastic. In other words, you can’t accurately determine how long it will take to write a particular piece of code unless you have already written it.
Estimation around software development is a stochastic process
-
-
durmonski.com durmonski.com
-
Reading books, being aware of the curiosity gap, and asking a lot of questions:
Ways to boost creativity
-
The difference between smart and curious people and only smart people is that curiosity helps you move forward in life. If you shut the door to curiosity. You shut the door to learning. And when you don’t learn. You don’t move forward. You must be curious to learn. Otherwise, you won’t even consider learning.
Curiosity is the core drive of learning
-
Smart people become even smarter because they are smart enough to understand that they don’t have all the answers.
Smartness is driven by curiosity
-
-
github.com github.com
-
You probably shouldn't use Alpine for Python projects; instead, use the slim Docker image versions.
(have a look below this highlight for a full reasoning)
-
-
medium.com medium.com
-
Cost — it’s by far the most affordable headset in its class, and while I have a tendency to be lavish with my gadgetry I’m still a cheapskate: I love a good deal and a favorably skewed cost/benefit ratio even more.
Oculus seems to be offering a good quality/cost ratio
-
Adapting to the new environment is immediate, like moving between rooms, and since the focal length in the headset matches regular human vision there’s no acclimating or adjustment.
Adaptation to VR work should be seamless
-
I do highly contextual work, with multiple work orders and their histories open, supporting reference documentation, API specifications, several areas of code (and calls in the stack), tests, logs, databases, and GUIs — plus Slack, Spotify, clock, calendar, and camera feeds. I tend to only look at 25% of that at once, but everything is within a comfortable glance without tabbing between windows. Protecting that context and augmenting my working memory maintains my flow.
Application types to look at during work:
- work orders and their histories
- supporting reference documentation
- API specs
- areas of code (and calls in the stack)
- tests
- logs
- databases
- GUIs
- Slack
- Spotify
- clock
- calendar
- camera feeds
With all that, we may look at around 25% of the stuff at once
-
Realism will increase (perhaps to hyperrealism) and our ability to perceive and interact with simulated objects and settings will be indistinguishable to our senses. Acting in simulated contexts will have physical consequences as systems interpret and project actions into the world — telepresence will take a quantum leap, removing limitations of time and distance. Transcending today’s drone piloting, remote surgery, etc., we will see through remote eyes and work through remote hands anywhere.
On the increase of realism in the future
-
It has a ton of promise, but… I don’t really care for the promise it’s making. 100% of what you can do in Workrooms is feasible in a physical setting, although it would be really expensive (lots of smart hardware all over the place). But that’s the thing: it’s imitating life within a tool that doesn’t share the same limitations, so as a VR veteran I find it bland and claustrophobic. That’s going to be really good for newcomers or casual users because the skeuomorphism is familiar, making it easy to immediately orient oneself and begin working together — and that illustrates a challenge in design vocabulary. While the familiar can provide a safe and comfortable starting point, the real power of VR requires training users for potentially unfamiliar use cases. Also, if you can be anywhere, why would you want to be in a meeting room, virtual-Lake Tahoe notwithstanding?
Author's feedback on why Workrooms do not fully use their potential
-
For meeting with those not in VR, or if I have a video call that needs input rather than passive attendance, I’ll frequently use a virtual webcam to attend by avatar. It’s sufficiently demonstrative for most team meetings, and the crew has gotten used to me showing up as a digital facsimile. I’ll surface from VR and use a physical webcam for anything sensitive or personal, however.
On VR meetings with other people while they are using normal webcams
-
Meetings are best in person, in VR, in MURAL, and in Zoom — in that order. As a remote worker of several years, “in person” is a rarity for me — so I use VR to preserve the feeling of shared presence, of inhabiting a place with other people, especially when good spatial audio is used. Hand tracking enables meaningful gestures and animated expression, despite the avatars cartoonish appearance — somehow it all “just works”, your brain accepts that these people you know are embodied through these virtual puppets, and you get on with communicating instead of quibbling about missing realism (which will be a welcome improvement as it becomes available but doesn’t stop this from working right now).
Author's shared feeling over working remotely
-
What’s it like to actually use? In a word: comfortable. Given a few more words, I’d choose productive and effective. I can resize, reposition, add, or remove as much screen space as I need. I never have to squint or lean forward, crane my neck, hunt for an application window I just had open, or struggle to find a place for something. Many trade-offs and compromises from the past no longer apply — I put my apps in convenient locations I can see at a glance, and without getting in my way. I move myself and my gaze enough throughout the day that I’m not stiff at the end of it and experience less eye strain than I ever did with a bunch of desk-bound LCDs.
Author's reflections on working in VR. It seems like he highly values the comfortability and space for multiple windows
-
Since all I need is a keyboard, mouse, and a place to park myself, I’ve completely ditched the traditional desk. I can use a floor setup for part of the day and mix it up with a standing arrangement for the rest.
Working in VR, you don't need the screens in front of your eyes
-
-
www.duckware.com www.duckware.com
-
Supporting 4×4 MIMO takes a lot more power, and for battery powered devices, runtime is FAR more important.
Reason why client devices still use 2x2 MIMO, not 4x4 MIMO
-
How did your router even get a 'rating' of 5300 Mbps in the first place? Router manufacturers combine/add the maximum physical network speeds for ALL wifi bands (usually 2 or 3 bands) in the router to produce a single aggregate (grossly inflated) Mbps number. But your client device only connects to ONE band (not all bands) on the router at once. So, '5300 Mbps' is all marketing hype.
Why routers get such a high rating
-
The only thing that really matters to you is the maximum speed of a single 5 GHz band (using all MIMO antennas).
What to focus on when choosing a router
-
You have 1 Gbps Internet, and just bought a very expensive AX11000 class router with advertised speeds of up to 11 Gbps, but when you run a speed test from your iPhone XS Max (at a distance of around 32 feet), you only get around 450 Mbps (±45 Mbps). Same for iPad Pro. Same for Samsung Galaxy S8. Same for a laptop computer. Same for most wireless clients. Why? Because that is the speed expected from these (2×2 MIMO) devices!
Reason why you may be getting slow internet speed on your client device (2x2 MIMO one)
-
-
www.python-engineer.com www.python-engineer.com
-
Before we dive into the details, here's a brief summary of the most important changes:
List of the most important upcoming Python 3.10 features (see below)
-
- Sep 2021
-
-
It’s been a hot, hot year in the world of data, machine learning and AI.
Summary of data tools in October 2021: http://46eybw2v1nh52oe80d3bi91u-wpengine.netdna-ssl.com/wp-content/uploads/2021/09/ML-AI-Data-Landscape-2021.pdf
-
-
blog.kubeflow.org blog.kubeflow.org
-
we will be releasing KServe 0.7 outside of the Kubeflow Project and will provide more details on how to migrate from KFServing to KServe with minimal disruptions
KFServing is now KServe
-
-
betterdatascience.com betterdatascience.com
-
You can attach Visual Studio Code to this container by right-clicking on it and choosing the Attach Visual Studio Code option. It will open a new window and ask you which folder to open.
It seems like VS Code offers a better way to manage Docker containers
-
You don’t have to download them manually, as a docker-compose.yml will do that for you. Here’s the code, so you can copy it to your machine:
Sample docker-compose.yml file to download both Kafka and Zookeeper containers
-