- Dec 2023
-
horaceguy.pages.dev
-
There is another way to declare a route with FastAPI, using asyncio:
```
import asyncio

from fastapi import FastAPI

app = FastAPI()

@app.get("/asyncwait")
async def asyncwait():
    duration = 0.05
    await asyncio.sleep(duration)
    return {"duration": duration}
```
-
-
guicommits.com
-
Use Python asyncio.as_completed
There will be moments when you don't need to wait for every single task to finish before processing results.
We do this by using asyncio.as_completed, which returns an iterator yielding the awaitables in the order they complete. -
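A minimal sketch of the as_completed pattern described above, with asyncio.sleep standing in for real IO (the task names and delays are made up for illustration):

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for a real IO call (HTTP request, DB query, ...)
    await asyncio.sleep(delay)
    return name

async def main():
    tasks = [fetch("slow", 0.3), fetch("fast", 0.1), fetch("medium", 0.2)]
    # Results arrive in completion order, not submission order
    for future in asyncio.as_completed(tasks):
        result = await future
        print(f"done: {result}")  # fast, then medium, then slow

asyncio.run(main())
```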
When to use Python Async
Async only makes sense if you're doing IO.
There's ZERO benefit in using async for CPU-bound code like this:
```
import asyncio

async def sum_two_numbers_async(n1: int, n2: int) -> int:
    return n1 + n2

async def main():
    await sum_two_numbers_async(2, 2)
    await sum_two_numbers_async(4, 4)

asyncio.run(main())
```
Your code might even get slower by doing that, due to the event loop overhead.
That's because Python async only optimizes IDLE time!
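To see what "optimizing idle time" means, here is a small sketch (the delays are made up): two IO-style waits run concurrently in roughly the time of the longest one, not the sum:

```python
import asyncio
import time

async def io_call(delay: float) -> None:
    # While this coroutine is idle, the event loop runs the other one
    await asyncio.sleep(delay)

async def main():
    start = time.perf_counter()
    await asyncio.gather(io_call(0.2), io_call(0.2))
    elapsed = time.perf_counter() - start
    # ~0.2s total, not 0.4s: the waits overlap
    print(f"{elapsed:.2f}s")

asyncio.run(main())
```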
-
If you want 2 or more functions to run concurrently, you need asyncio.create_task.
Creating a task triggers the async operation, and it needs to be awaited at some point.
For example:
```
task = asyncio.create_task(my_async_function('arg1'))
result = await task
```
As we're creating many tasks, we need asyncio.gather, which waits for all tasks to be done. -
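Putting create_task and gather together, a minimal sketch (the square coroutine and its delay are made up for illustration):

```python
import asyncio

async def square(n: int) -> int:
    await asyncio.sleep(0.01)  # stand-in for real IO
    return n * n

async def main() -> list[int]:
    # create_task schedules each coroutine on the event loop right away
    tasks = [asyncio.create_task(square(n)) for n in range(4)]
    # gather waits for all of them and preserves submission order
    return await asyncio.gather(*tasks)

print(asyncio.run(main()))  # [0, 1, 4, 9]
```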
they think async is parallel, which is not true
-
-
www.bitecode.dev
-
The code isn't that different from your typical asyncio script:
```
import re
import time

import httpx
import trio

urls = [
    "https://www.bitecode.dev/p/relieving-your-python-packaging-pain",
    "https://www.bitecode.dev/p/hype-cycles",
    "https://www.bitecode.dev/p/why-not-tell-people-to-simply-use",
    "https://www.bitecode.dev/p/nobody-ever-paid-me-for-code",
    "https://www.bitecode.dev/p/python-cocktail-mix-a-context-manager",
    "https://www.bitecode.dev/p/the-costly-mistake-so-many-makes",
    "https://www.bitecode.dev/p/the-weirdest-python-keyword",
]

title_pattern = re.compile(r"<title[^>]*>(.*?)</title>", re.IGNORECASE)

user_agent = (
    "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/116.0"
)

async def fetch_url(url):
    start_time = time.time()
    async with httpx.AsyncClient() as client:
        headers = {"User-Agent": user_agent}
        response = await client.get(url, headers=headers)
        match = title_pattern.search(response.text)
        title = match.group(1) if match else "Unknown"
        print(f"URL: {url}\nTitle: {title}")
    end_time = time.time()
    elapsed_time = end_time - start_time
    print(f"Time taken for {url}: {elapsed_time:.4f} seconds\n")

async def main():
    global_start_time = time.time()
    # That's the biggest API difference
    async with trio.open_nursery() as nursery:
        for url in urls:
            nursery.start_soon(fetch_url, url)
    global_end_time = time.time()
    global_elapsed_time = global_end_time - global_start_time
    print(f"Total time taken for all URLs: {global_elapsed_time:.4f} seconds")

if __name__ == "__main__":
    trio.run(main)
```
Because it doesn't create nor schedule coroutines immediately (notice that nursery.start_soon(fetch_url, url) is not nursery.start_soon(fetch_url(url))), it will also consume less memory. But the most important part is the nursery:
```
# That's the biggest API difference
async with trio.open_nursery() as nursery:
    for url in urls:
        nursery.start_soon(fetch_url, url)
```
The with block scopes all the tasks, meaning everything that is started inside that context manager is guaranteed to be finished (or terminated) when it exits. First, the API is better than expecting the user to wait manually, as with asyncio.gather: in trio you cannot start concurrent coroutines without a clear scope, so it doesn't rely on the coder's discipline. But under the hood, the design is also different. The whole bunch of coroutines you group and start can be canceled easily, because trio always knows where things begin and end. As soon as things get complicated, code with a curio-like design becomes radically simpler than code with an asyncio-like design.
-
-
tonybaloney.github.io
-
Both multiprocessing processes and interpreters have their own import state. This is drastically different from threads and coroutines. When you await an async function, you don't need to worry about whether that coroutine has imported the required modules. The same applies to threads.
For example, you can import something in your module and reference it from inside the thread function:
```python
import threading
from super.duper.module import cool_function

def worker(info):
    # This already exists in the interpreter state
    cool_function()

info = {'a': 1}
thread = threading.Thread(target=worker, args=(info, ))
```
-