- Sep 2023
-
chromestatus.com
-
gist.github.com
-
-
permit streams to be transferred between workers, frames and anywhere else that postMessage() can be used. Chunks can be anything that is cloneable by postMessage(). Initially, chunks enqueued in such a stream will always be cloned, i.e. all data will be copied. Future work will extend the Streams APIs to support transferring objects (i.e. zero copy).
```js
const rs = new ReadableStream({
  start(controller) {
    controller.enqueue('hello');
  }
});
const w = new Worker('worker.js');
w.postMessage(rs, [rs]);
```

```js
onmessage = async (evt) => {
  const rs = evt.data;
  const reader = rs.getReader();
  const { value, done } = await reader.read();
  console.log(value); // logs 'hello'.
};
```
-
- Jul 2023
-
developer.chrome.com
-
```html
<meta http-equiv="Accept-CH" content="DPR, Viewport-Width, Width">
...
<picture>
  <!-- serve WebP to Chrome and Opera -->
  <source media="(min-width: 50em)" sizes="50vw"
          srcset="/image/thing-200.webp 200w, /image/thing-400.webp 400w,
                  /image/thing-800.webp 800w, /image/thing-1200.webp 1200w,
                  /image/thing-1600.webp 1600w, /image/thing-2000.webp 2000w"
          type="image/webp">
  <source sizes="(min-width: 30em) 100vw"
          srcset="/image/thing-crop-200.webp 200w, /image/thing-crop-400.webp 400w,
                  /image/thing-crop-800.webp 800w, /image/thing-crop-1200.webp 1200w,
                  /image/thing-crop-1600.webp 1600w, /image/thing-crop-2000.webp 2000w"
          type="image/webp">
  <!-- serve JPEG XR to Edge -->
  <source media="(min-width: 50em)" sizes="50vw"
          srcset="/image/thing-200.jpgxr 200w, /image/thing-400.jpgxr 400w,
                  /image/thing-800.jpgxr 800w, /image/thing-1200.jpgxr 1200w,
                  /image/thing-1600.jpgxr 1600w, /image/thing-2000.jpgxr 2000w"
          type="image/vnd.ms-photo">
  <source sizes="(min-width: 30em) 100vw"
          srcset="/image/thing-crop-200.jpgxr 200w, /image/thing-crop-400.jpgxr 400w,
                  /image/thing-crop-800.jpgxr 800w, /image/thing-crop-1200.jpgxr 1200w,
                  /image/thing-crop-1600.jpgxr 1600w, /image/thing-crop-2000.jpgxr 2000w"
          type="image/vnd.ms-photo">
  <!-- serve JPEG to others -->
  <source media="(min-width: 50em)" sizes="50vw"
          srcset="/image/thing-200.jpg 200w, /image/thing-400.jpg 400w,
                  /image/thing-800.jpg 800w, /image/thing-1200.jpg 1200w,
                  /image/thing-1600.jpg 1600w, /image/thing-2000.jpg 2000w">
  <source sizes="(min-width: 30em) 100vw"
          srcset="/image/thing-crop-200.jpg 200w, /image/thing-crop-400.jpg 400w,
                  /image/thing-crop-800.jpg 800w, /image/thing-crop-1200.jpg 1200w,
                  /image/thing-crop-1600.jpg 1600w, /image/thing-crop-2000.jpg 2000w">
  <!-- fallback for browsers that don't support picture -->
  <img src="/image/thing.jpg" width="50%">
</picture>
```
-
-
blog.logrocket.com
-
pwa-workshop.js.org
-
-
developer.mozilla.org
- Jun 2023
-
developer.chrome.com
-
-
-
sergiodxa.com
-
-
```js
/*
 * Response from cache
 */
self.addEventListener('fetch', event => {
  const response = self.caches.open('example')
    .then(cache => cache.match(event.request))
    .then(response => response || fetch(event.request));

  event.respondWith(response);
});

/*
 * Response to SSE by text
 */
self.addEventListener('fetch', event => {
  const { headers } = event.request;
  const isSSERequest = headers.get('Accept') === 'text/event-stream';

  if (!isSSERequest) {
    return;
  }

  event.respondWith(new Response('Hello!'));
});

/*
 * Response to SSE by stream
 */
self.addEventListener('fetch', event => {
  const { headers } = event.request;
  const isSSERequest = headers.get('Accept') === 'text/event-stream';

  if (!isSSERequest) {
    return;
  }

  const responseText = 'Hello!';
  const responseData = Uint8Array.from(responseText, x => x.charCodeAt(0));
  const stream = new ReadableStream({ start: controller => controller.enqueue(responseData) });
  const response = new Response(stream);

  event.respondWith(response);
});

/*
 * SSE chunk data
 */
const sseChunkData = (data, event, retry, id) =>
  Object.entries({ event, id, data, retry })
    .filter(([, value]) => ![undefined, null].includes(value))
    .map(([key, value]) => `${key}: ${value}`)
    .join('\n') + '\n\n';

/*
 * Success response to SSE from SW
 */
self.addEventListener('fetch', event => {
  const { headers } = event.request;
  const isSSERequest = headers.get('Accept') === 'text/event-stream';

  if (!isSSERequest) {
    return;
  }

  const sseChunkData = (data, event, retry, id) =>
    Object.entries({ event, id, data, retry })
      .filter(([, value]) => ![undefined, null].includes(value))
      .map(([key, value]) => `${key}: ${value}`)
      .join('\n') + '\n\n';

  const sseHeaders = {
    'content-type': 'text/event-stream',
    'Transfer-Encoding': 'chunked',
    'Connection': 'keep-alive',
  };

  const responseText = sseChunkData('Hello!');
  const responseData = Uint8Array.from(responseText, x => x.charCodeAt(0));
  const stream = new ReadableStream({ start: controller => controller.enqueue(responseData) });
  const response = new Response(stream, { headers: sseHeaders });

  event.respondWith(response);
});

/*
 * Result
 */
self.addEventListener('fetch', event => {
  const { headers, url } = event.request;
  const isSSERequest = headers.get('Accept') === 'text/event-stream';

  // Process only SSE connections
  if (!isSSERequest) {
    return;
  }

  // Headers for SSE response
  const sseHeaders = {
    'content-type': 'text/event-stream',
    'Transfer-Encoding': 'chunked',
    'Connection': 'keep-alive',
  };
  // Function for formatting a message as an SSE response chunk
  const sseChunkData = (data, event, retry, id) =>
    Object.entries({ event, id, data, retry })
      .filter(([, value]) => ![undefined, null].includes(value))
      .map(([key, value]) => `${key}: ${value}`)
      .join('\n') + '\n\n';
  // Map with server connections, where key is a url and value is an EventSource
  const serverConnections = {};
  // For each url, open only one server connection and reuse it for subsequent requests
  const getServerConnection = url => {
    if (!serverConnections[url]) {
      serverConnections[url] = new EventSource(url);
    }

    return serverConnections[url];
  };
  // On message from the server, forward it to the browser
  const onServerMessage = (controller, { data, type, retry, lastEventId }) => {
    const responseText = sseChunkData(data, type, retry, lastEventId);
    const responseData = Uint8Array.from(responseText, x => x.charCodeAt(0));
    controller.enqueue(responseData);
  };
  const stream = new ReadableStream({
    start: controller => getServerConnection(url).onmessage = onServerMessage.bind(null, controller)
  });
  const response = new Response(stream, { headers: sseHeaders });

  event.respondWith(response);
});
```
-
-
-
```js
self.addEventListener('fetch', event => {
  const { headers, url } = event.request;
  const isSSERequest = headers.get('Accept') === 'text/event-stream';

  // We process only SSE connections
  if (!isSSERequest) {
    return;
  }

  // Response headers for SSE
  const sseHeaders = {
    'content-type': 'text/event-stream',
    'Transfer-Encoding': 'chunked',
    'Connection': 'keep-alive',
  };
  // Function formatting data for SSE
  const sseChunkData = (data, event, retry, id) =>
    Object.entries({ event, id, data, retry })
      .filter(([, value]) => ![undefined, null].includes(value))
      .map(([key, value]) => `${key}: ${value}`)
      .join('\n') + '\n\n';
  // Table with server connections, where key is a url and value is an EventSource
  const serverConnections = {};
  // For each url, we open only one connection to the server and use it for subsequent requests
  const getServerConnection = url => {
    if (!serverConnections[url]) serverConnections[url] = new EventSource(url);

    return serverConnections[url];
  };
  // When we receive a message from the server, we forward it to the browser
  const onServerMessage = (controller, { data, type, retry, lastEventId }) => {
    const responseText = sseChunkData(data, type, retry, lastEventId);
    const responseData = Uint8Array.from(responseText, x => x.charCodeAt(0));
    controller.enqueue(responseData);
  };
  const stream = new ReadableStream({
    start: controller => getServerConnection(url).onmessage = onServerMessage.bind(null, controller)
  });
  const response = new Response(stream, { headers: sseHeaders });

  event.respondWith(response);
});
```
-
-
learn.microsoft.com
-
cloudflare.tv
-
jeffy.info
-
astro-sw-demo.netlify.app
-
- May 2023
- Mar 2023
-
www.builder.io
-
-
blog.bitsrc.io
-
alistapart.com
-
www.youtube.com
-
-
www.youtube.com
-
- Dec 2022
-
bugs.webkit.org
-
- Nov 2022
-
www.aquib.dev
-
-
vite-pwa-org.netlify.app
-
stackoverflow.com
-
ponyfoo.com
- Sep 2022
-
web.dev
-
- Aug 2022
-
developers.cloudflare.com
-
workers.js.org
-
- Jul 2022
-
tanstack.com
- Feb 2022
-
localforage.github.io
-
-
-
- Dec 2021
-
github.com
-
```js
// main.js
const { RemoteReadableStream, RemoteWritableStream } = RemoteWebStreams;
(async () => {
  const worker = new Worker('./worker.js');
  // create a stream to send the input to the worker
  const { writable, readablePort } = new RemoteWritableStream();
  // create a stream to receive the output from the worker
  const { readable, writablePort } = new RemoteReadableStream();
  // transfer the other ends to the worker
  worker.postMessage({ readablePort, writablePort }, [readablePort, writablePort]);

  const response = await fetch('./some-data.txt');
  await response.body
    // send the downloaded data to the worker
    // and receive the results back
    .pipeThrough({ readable, writable })
    // show the results as they come in
    .pipeTo(new WritableStream({
      write(chunk) {
        const results = document.getElementById('results');
        results.appendChild(document.createTextNode(chunk)); // tadaa!
      }
    }));
})();
```

```js
// worker.js
const { fromReadablePort, fromWritablePort } = RemoteWebStreams;
self.onmessage = async (event) => {
  // create the input and output streams from the transferred ports
  const { readablePort, writablePort } = event.data;
  const readable = fromReadablePort(readablePort);
  const writable = fromWritablePort(writablePort);

  // process data
  await readable
    .pipeThrough(new TransformStream({
      transform(chunk, controller) {
        controller.enqueue(process(chunk)); // do the actual work
      }
    }))
    .pipeTo(writable); // send the results back to main thread
};
```
-
-
stackoverflow.com
-
What you're trying to do is known as the "Application Shell" architectural pattern.
The trick is to have your service worker's `fetch` handler check to see whether an incoming request is a navigation (`event.request.mode === 'navigate'`), and if so, respond with the cached App Shell HTML (which sounds like `/index.html` in your case). A generic way of doing this would be:
```js
self.addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith(caches.match('/index.html'));
  } else {
    // Your other response logic goes here.
  }
});
```
This will cause your service worker to behave in a similar fashion to how your web server is already configured.
-
-
developers.google.com
-
developers.cloudflare.com
-
Fetch and modify response properties, which are immutable, by creating a copy first.
```js
/**
 * @param {string} headerNameSrc Header to get the new value from
 * @param {string} headerNameDst Header to set based off of value in src
 */
const headerNameSrc = "foo" //"Orig-Header"
const headerNameDst = "Last-Modified"

async function handleRequest(request) {
  /**
   * Response properties are immutable. To change them, construct a new
   * Response and pass modified status or statusText in the ResponseInit
   * object. Response headers can be modified through the headers `set` method.
   */
  const originalResponse = await fetch(request)

  // Change status and statusText, but preserve body and headers
  let response = new Response(originalResponse.body, {
    status: 500,
    statusText: "some message",
    headers: originalResponse.headers,
  })

  // Change response body by adding the foo prop
  const originalBody = await originalResponse.json()
  const body = JSON.stringify({ foo: "bar", ...originalBody })
  response = new Response(body, response)

  // Add a header using set method
  response.headers.set("foo", "bar")

  // Set destination header to the value of the source header
  const src = response.headers.get(headerNameSrc)
  if (src != null) {
    response.headers.set(headerNameDst, src)
    console.log(
      `Response header "${headerNameDst}" was set to "${response.headers.get(
        headerNameDst,
      )}"`,
    )
  }
  return response
}

addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})
```
-
-
solidstudio.io
-
- Sep 2021
-
developers.google.com
-
Ensure there's only one version of your site running at once. That last one is pretty important. Without service workers, users can load one tab to your site, then later open another. This can result in two versions of your site running at the same time. Sometimes this is ok, but if you're dealing with storage you can easily end up with two tabs having very different opinions on how their shared storage should be managed. This can result in errors, or worse, data loss.
I wonder how we can identify issues like this when they occur.
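One way to surface this (a minimal sketch, not from the annotated article; file names and the version tag are made up): have the new service worker announce itself when it activates, and have every tab listen for `controllerchange` so it can reload or warn the user before two versions keep writing to the same storage.

```js
// page.js (hypothetical file name): react when a different service worker takes over this tab
let refreshing = false;
navigator.serviceWorker.addEventListener('controllerchange', () => {
  if (refreshing) return;      // guard against reload loops
  refreshing = true;
  window.location.reload();    // or show a "new version available" banner instead
});

// sw.js: broadcast the active version so open tabs can compare notes
const VERSION = 'v2';          // hypothetical version tag
self.addEventListener('activate', (event) => {
  event.waitUntil(
    self.clients.matchAll({ type: 'window' }).then((clients) =>
      clients.forEach((client) => client.postMessage({ type: 'SW_ACTIVATED', version: VERSION }))
    )
  );
});
```

Tabs could also coordinate over a `BroadcastChannel` to decide which one runs a storage migration.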
-
- Mar 2021
-
www.jackfranklin.co.uk
-
However, if these timeouts are moved into a web worker, they should run to time and not get de-prioritised by the browser.
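A minimal sketch of that idea (file names are made up): keep the interval inside a dedicated worker and let it message the page, instead of scheduling the timer on the main thread.

```js
// timer-worker.js (hypothetical file name): the interval lives off the main thread
setInterval(() => {
  postMessage({ tick: Date.now() });
}, 1000);

// main.js: the page only reacts to the worker's ticks
const timerWorker = new Worker('timer-worker.js');
timerWorker.onmessage = (event) => {
  console.log('tick from worker at', event.data.tick);
};
```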
-
- Aug 2020
-
covid-19.iza.org
-
Immigrant Key Workers: Their Contribution to Europe’s COVID-19 Response. COVID-19 and the Labor Market. (n.d.). IZA – Institute of Labor Economics. Retrieved August 7, 2020, from https://covid-19.iza.org/publications/dp13178/
-
-
www.sciencedirect.com
-
Quinn, A. E., Trachtenberg, A. J., McBrien, K. A., Ogundeji, Y., Souri, S., Manns, L., Rennert-May, E., Ronksley, P., Au, F., Arora, N., Hemmelgarn, B., Tonelli, M., & Manns, B. J. (2020). Impact of payment model on the behaviour of specialist physicians: A systematic review. Health Policy, 124(4), 345–358. https://doi.org/10.1016/j.healthpol.2020.02.007
-
- May 2020
-
psyarxiv.com
-
Johnson, S. U., Ebrahimi, O. V., & Hoffart, A. (2020, May 20). Level and Predictors of PTSD Symptoms Among Health Workers and Public Service Providers During the COVID-19 Outbreak. https://doi.org/10.31234/osf.io/w8c6p
-
- Apr 2020
-
www.technologyreview.com
-
Rotman, D. (2020, April 8). Stop covid or save the economy? We can do both. MIT Technology Review. https://www.technologyreview.com/2020/04/08/998785/stop-covid-or-save-the-economy-we-can-do-both/
-
- Nov 2018
-
serviceworke.rs
-
-
developer.mozilla.org
- Oct 2018
-
github.com
-
-
developer.mozilla.org
-
stackoverflow.com
-
developer.mozilla.org
-
-
A completely different Service Workers’ story

This section was added on Feb 8th.

Apple followed the Service Worker API, but it creates an entirely different story of what it is and what we can do with it in the future. The main differences appear when Apple says:

“To keep only the stored information that is useful to the user, WebKit will remove unused service worker registrations after a period of a few weeks. Caches that do not get opened after a few weeks will also be removed. Web Applications must be resilient to any individual cache, cache entry or service worker being removed.”

https://webkit.org/blog/8090/workers-at-your-service/

That is a huge change! In Chrome, Firefox, Samsung Internet, and other browsers, a service worker registration is not going to be unregistered automatically, and we can rely on it being there in the future. That’s why an installed PWA will be able to work offline in the future. But with Apple’s idea of a service worker, there is no guarantee that the service worker or the cache will be available in the future. It might be, if the user comes back to the web app within “a few weeks.” I know, the web app should work anyway while online, but we can’t guarantee one of the key concepts of PWAs: offline access.
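Given that eviction policy, one defensive pattern is a cache-falling-back-to-network fetch handler that re-populates the cache on every network hit, so an evicted entry is restored the next time the user is online (a sketch; the cache name is made up):

```js
// sw.js: defensive caching sketch; 'app-shell-v1' is a hypothetical cache name
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.open('app-shell-v1').then(async (cache) => {
      const cached = await cache.match(event.request);
      if (cached) return cached;
      // The entry (or the whole cache) may have been evicted by the browser:
      // fall back to the network and re-populate for the next offline visit.
      const response = await fetch(event.request);
      if (response.ok) cache.put(event.request, response.clone());
      return response;
    })
  );
});
```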
-