- Sep 2024
-
www.codeotaku.com
-
github.com
-
If you'd like another method to do the waiting for you, e.g. Kernel.select, you can use Timers::Group#wait_interval to obtain the amount of time to wait. When a timeout is encountered, you can fire all pending timers with Timers::Group#fire.
This is another way of achieving concurrency (progress made while waiting for other things) besides wrapping the timer's sleep in a separate thread like https://github.com/rubyworks/facets/blob/main/lib/standard/facets/timer.rb does.
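The same select-with-timeout pattern can be sketched in Python. This is only an illustration of the idea, not the timers gem's API: the Timer class and run() loop below are invented for the example. The point is that the next timer's due time supplies the timeout for the select call, and anything overdue is fired after the wait.
```python
import selectors
import time

class Timer:
    """Illustrative one-shot timer: fire `callback` once `delay` has elapsed."""
    def __init__(self, delay, callback):
        self.due = time.monotonic() + delay
        self.callback = callback

def run(sock, timers):
    sel = selectors.DefaultSelector()
    sel.register(sock, selectors.EVENT_READ)
    while timers:
        # Equivalent of wait_interval: time until the earliest timer is due.
        wait = max(0.0, min(t.due for t in timers) - time.monotonic())
        for key, _ in sel.select(timeout=wait):   # wait for IO or the timeout
            key.fileobj.recv(4096)                # handle readable data
        now = time.monotonic()
        for t in [t for t in timers if t.due <= now]:  # fire pending timers
            t.callback()
            timers.remove(t)
```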
-
-
en.wikipedia.org
-
The complication comes from the fact that the execution model does not have any means for the execution of "give up ownership of the lock" to have any influence over which execution of "gain ownership of the lock" in some other timeline (thread) follows. Very often, only certain handoffs give valid results. Thus, the programmer must think of all possible combinations of one thread giving up a lock and another thread getting it next, and make sure their code only allows valid combinations.
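A small Python illustration of that point (a sketch, not tied to any source quoted here): when the lock is released, the program has no say in which waiting thread acquires it next, so every possible handoff order has to produce a valid result.
```python
import threading

lock = threading.Lock()
order = []

def worker(name):
    with lock:              # "gain ownership of the lock"
        order.append(name)  # the release happens when the block exits,
                            # and we cannot choose which waiter goes next

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(order)  # the ordering is not guaranteed and can differ between runs
```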
-
- Dec 2023
-
-
What's the difference between concurrency and parallelism?
A concurrent process performs multiple tasks in the same span of time, whether or not each one receives its total attention at any given moment; a parallel process is physically performing multiple tasks at exactly the same time.
-
concurrency is great for I/O-intensive processes -- tasks that involve waiting on web requests or file read/write operations.
-
What is concurrency?
An effective definition for concurrency is "being able to perform multiple tasks at once". This is a bit misleading though, as the tasks may or may not actually be performed at exactly the same time. Instead, a process might start, then once it's waiting on a specific instruction to finish, switch to a new task, only to come back once it's no longer waiting. Once one task is finished, it switches again to an unfinished task until they have all been performed. Tasks start asynchronously, get performed asynchronously, and then finish asynchronously.
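A minimal asyncio sketch of that switching, with asyncio.sleep standing in for an instruction the task has to wait on:
```python
import asyncio

async def task(name, wait):
    print(f"{name}: started")
    await asyncio.sleep(wait)   # while this task waits, the loop runs others
    print(f"{name}: finished")

async def main():
    await asyncio.gather(task("A", 0.2), task("B", 0.1))

asyncio.run(main())
# Both tasks start before either finishes, and B (the shorter wait)
# finishes first: started/finished interleave rather than run back to back.
```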
-
There are many reasons your applications can be slow. Sometimes this is due to poor algorithmic design or the wrong choice of data structure. Sometimes, however, it's due to forces outside of our control, such as hardware constraints or the quirks of networking.
That's where concurrency and parallelism fit in. They allow your programs to do multiple things at once, either literally at the same time or by spending as little time as possible waiting on busy tasks.
-
-
horaceguy.pages.dev
-
Gunicorn and multiprocessing
Gunicorn forks a base process into n worker processes, and each worker is managed by Uvicorn (with the asynchronous uvloop). Which means:
- Each worker is concurrent
- The worker pool implements parallelism
This way, we can have the best of both worlds: concurrency (multithreading) and parallelism (multiprocessing).
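For reference, a typical way to get that arrangement is to start Gunicorn with Uvicorn's worker class; the module and app names here (main:app) are placeholders for your own ASGI application:
```bash
gunicorn main:app --workers 4 --worker-class uvicorn.workers.UvicornWorker
```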
-
-
guicommits.com
-
If you want 2 or more functions to run concurrently, you need asyncio.create_task.
Creating a task triggers the async operation, and it needs to be awaited at some point.
For example:
```python
task = asyncio.create_task(my_async_function('arg1'))
result = await task
```
As we're creating many tasks, we need asyncio.gather, which awaits all tasks to be done.
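Putting the two together, a minimal self-contained sketch (my_async_function is a stand-in, as in the quote above):
```python
import asyncio

async def my_async_function(arg):   # placeholder for the quoted function
    await asyncio.sleep(0.1)        # pretend to wait on some IO
    return arg

async def main():
    # Creating the tasks triggers the async operations concurrently...
    tasks = [asyncio.create_task(my_async_function(a)) for a in ("arg1", "arg2", "arg3")]
    # ...and gather awaits until all of them are done.
    print(await asyncio.gather(*tasks))

asyncio.run(main())
```
-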
IO-bound operations are related to reading/writing operations.
A good example would be:
- Requesting some data from HTTP
- Reading/Writing some json/txt file
- Reading data from a database
All these operations consist of waiting for the data to be available.
While the data is UNAVAILABLE the EVENT LOOP does something else.
This is Concurrency.
NOT Parallelism.
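A small demonstration of that claim, with asyncio.sleep standing in for an HTTP request or file read: the two waits overlap, so the elapsed time is close to the longest single wait rather than the sum.
```python
import asyncio
import time

async def fake_io(seconds):
    await asyncio.sleep(seconds)  # the event loop runs other tasks meanwhile

async def main():
    start = time.perf_counter()
    await asyncio.gather(fake_io(1.0), fake_io(1.0))
    print(f"elapsed: {time.perf_counter() - start:.1f}s")  # ~1.0s, not ~2.0s

asyncio.run(main())
```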
-
-
www.bitecode.dev
-
Which means, if you think about it, that concurrency has a lot to do with sharing one resource.
In a computer, you may have to share different things:
- Battery charge.
- CPU calculation power.
- RAM space.
- Disk space and throughput.
- Network throughput.
- File system handles.
- User input.
- Screen real estate.
-
The typical analogy is this:
- concurrency is having two lines of customers ordering from one cashier;
- parallelism is having two lines of customers ordering from two cashiers.
-
concurrency
"is about dealing with a lot of things as once" (As Rob Pike said)
-
-
www.bitecode.dev
-
You can distribute work to a bunch of process workers or thread workers with a few lines of code:
```python
from concurrent.futures import ThreadPoolExecutor, as_completed

with ThreadPoolExecutor(max_workers=5) as executor:
    executor.submit(do_something_blocking)
```
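The quote also mentions process workers; the process-based equivalent is almost identical. A sketch, with do_something_blocking as a placeholder task (the __main__ guard matters on platforms that spawn worker processes):
```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def do_something_blocking(n):       # placeholder for real blocking/CPU-bound work
    return n * n

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=5) as executor:
        futures = [executor.submit(do_something_blocking, n) for n in range(10)]
        for future in as_completed(futures):
            print(future.result())
```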
-
- Nov 2023
-
github.com
-
Otherwise, M does not respond to works? (NoMethodError) because there was a context switch before the require triggered by autoload returned.
-
If it passed before, I would think it was just lucky timing.
-
BTW, to improve the reliability of that test I believe you would need a sleep (smaller, e.g. 0.1) between the Thread.new and assert M.works?; otherwise it's likely that M.works? runs first, and then the other thread will see the constant is autoloading and wait (and anyway that thread does not check what is defined on M). For the test to fail, the thread created by Thread.new needs to run first and define the constant but not yet the method, before the main thread keeps running and calls the method.
-
- Mar 2023
-
codewithoutrules.com
- Sep 2022
-
-
The variable a is incremented thanks to the atomic memory primitive function addInt, which is concurrent-safe. However, we assign the result to the same variable, which is not a concurrent-safe write operation. This is a careless mistake detected by the atomic analyzer.
first sighting: concurrent-safe
-
- Aug 2022
-
-
This seemed like a good disambiguation of the terms at first glance, but actually isn't my favorite.
I found https://medium.com/@itIsMadhavan/concurrency-vs-parallelism-a-brief-review-b337c8dac350 more useful.
-
-
medium.com
-
I recommend using the term “parallel” when the simultaneous execution is assured or expected, and to use the term “concurrent” when it is uncertain or irrelevant if simultaneous execution will be employed.
-
Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once.
-
A system is said to be concurrent if it can support two or more actions in progress at the same time. A system is said to be parallel if it can support two or more actions executing simultaneously.
-
Concurrency means executing multiple tasks at the same time but not necessarily simultaneously.
-
Concurrency means that an application is making progress on more than one task at the same time (concurrently)
-
-
github.com
-
This very much appears to be a bug or design flaw in puma - The fact that a persistent connection ties up a thread on the chance a request might come over that connection seems like not great behavior. This would really only be an issue when puma is run with no workers (which wouldn't be done in production) but it still seems a little nuts.
-
- Jun 2022
-
insomnius.github.io
- May 2021
-
stackoverflow.com
-
github.com
-
www.npmjs.com
-
- Jan 2021
-
doc.rust-lang.org
-
As an example, recall the Sync and Send marker traits we discussed in the “Extensible Concurrency with the Sync and Send Traits” section in Chapter 16: the compiler implements these traits automatically if our types are composed entirely of Send and Sync types. If we implement a type that contains a type that is not Send or Sync, such as raw pointers, and we want to mark that type as Send or Sync, we must use unsafe. Rust can’t verify that our type upholds the guarantees that it can be safely sent across threads or accessed from multiple threads; therefore, we need to do those checks manually and indicate as such with unsafe.
-
-
doc.rust-lang.org
-
The Sync marker trait indicates that it is safe for the type implementing Sync to be referenced from multiple threads. In other words, any type T is Sync if &T (a reference to T) is Send, meaning the reference can be sent safely to another thread. Similar to Send, primitive types are Sync, and types composed entirely of types that are Sync are also Sync.
-
Any type composed entirely of Send types is automatically marked as Send as well. Almost all primitive types are Send, aside from raw pointers, which we’ll discuss in Chapter 19.
-
-
doc.rust-lang.org
-
As you might suspect, Mutex<T> is a smart pointer. More accurately, the call to lock returns a smart pointer called MutexGuard, wrapped in a LockResult that we handled with the call to unwrap. The MutexGuard smart pointer implements Deref to point at our inner data; the smart pointer also has a Drop implementation that releases the lock automatically when a MutexGuard goes out of scope, which happens at the end of the inner scope in Listing 16-12. As a result, we don’t risk forgetting to release the lock and blocking the mutex from being used by other threads because the lock release happens automatically.
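A loose Python analogy (not the Rust API): the with statement gives a similar "cannot forget to unlock" property, because the lock is released when the block exits, even if an exception is raised.
```python
import threading

lock = threading.Lock()
counter = 0

def increment():
    global counter
    with lock:        # acquired here...
        counter += 1
    # ...and released automatically when the block exits, even on exceptions
```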
-
- Dec 2017
-
blog.discordapp.com
-
How Discord Scaled Elixir to 5,000,000 Concurrent Users
Is this across the entire set of clusters or is this per single node or set of nodes for a given "guild"?
-
- Jan 2017