- Dec 2020
-
stackoverflow.com stackoverflow.com
-
class Session extends Map {
  set(id, value) {
    if (typeof value === 'object') value = JSON.stringify(value);
    sessionStorage.setItem(id, value);
  }
  get(id) {
    const value = sessionStorage.getItem(id);
    try {
      return JSON.parse(value);
    } catch (e) {
      return value;
    }
  }
}
-
I think that Web Storage is one of the most exciting improvements of the modern web. But storing only strings as values in the key map is, I think, a limitation.
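A quick sketch of how the snippet above behaves. The in-memory sessionStorage shim is my own addition so the example runs outside a browser; in a real page the native sessionStorage would be used instead, and the class is repeated here only to make the sketch self-contained.

```javascript
// Minimal in-memory shim standing in for the browser's sessionStorage
// (assumption: we are running outside the browser, e.g. in Node).
const sessionStorage = {
  _data: {},
  setItem(k, v) { this._data[k] = String(v); },
  getItem(k) { return k in this._data ? this._data[k] : null; },
};

// The class from the annotation above, reformatted.
class Session extends Map {
  set(id, value) {
    // Objects are serialized before hitting the string-only store.
    if (typeof value === 'object') value = JSON.stringify(value);
    sessionStorage.setItem(id, value);
  }
  get(id) {
    const value = sessionStorage.getItem(id);
    try {
      return JSON.parse(value); // deserialize if it was JSON...
    } catch (e) {
      return value;             // ...otherwise return the raw string
    }
  }
}

const session = new Session();
session.set('user', { name: 'Ada', role: 'admin' });
console.log(session.get('user').name); // the object survives the round trip
session.set('greeting', 'hola');
console.log(session.get('greeting')); // plain strings come back unchanged
```

Note the trade-off the try/catch hides: a stored string that happens to be valid JSON (e.g. "5") will come back parsed, not as the original string.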
-
-
developer.mozilla.org developer.mozilla.org
-
The Web Storage API provides mechanisms by which browsers can store key/value pairs, in a much more intuitive fashion than using cookies.
-
-
-
This is the accepted way to handle problems related to authentication, because user data has a couple of important characteristics: you really don't want to accidentally leak it between two sessions on the same server, and generating the store on a per-request basis makes that very unlikely; and it's often used in lots of different places in your app, so a global store makes sense.
-
-
www.securistore.co.za www.securistore.co.za
-
Small Unit
Size: 1.3m x 2m
-
- Nov 2020
-
www.theatlantic.com www.theatlantic.com
-
The real heart of the matter of selection, however, goes deeper than a lag in the adoption of mechanisms by libraries, or a lack of development of devices for their use. Our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing. When data of any sort are placed in storage, they are filed alphabetically or numerically, and information is found (when it is) by tracing it down from subclass to subclass. It can be in only one place, unless duplicates are used; one has to have rules as to which path will locate it, and the rules are cumbersome. Having found one item, moreover, one has to emerge from the system and re-enter on a new path.
Bush emphasises the importance of retrieval in the storage of information. He talks about technical limitations, but in this paragraph he stresses that retrieval is made more difficult by the "artificiality of systems of indexing", in other words, our default file-cabinet metaphor for storing information.
Information in such a hierarchical architecture is found by descending into the hierarchy and climbing back out again. Moreover, the information we're looking for can only be in one place at a time (unless we introduce duplicates).
Having found our item of interest, we need to ascend back up the hierarchy to make our next descent.
-
So much for the manipulation of ideas and their insertion into the record. Thus far we seem to be worse off than before—for we can enormously extend the record; yet even in its present bulk we can hardly consult it. This is a much larger matter than merely the extraction of data for the purposes of scientific research; it involves the entire process by which man profits by his inheritance of acquired knowledge. The prime action of use is selection, and here we are halting indeed. There may be millions of fine thoughts, and the account of the experience on which they are based, all encased within stone walls of acceptable architectural form; but if the scholar can get at only one a week by diligent search, his syntheses are not likely to keep up with the current scene.
Retrieval is the key activity we're interested in. Storage only matters in as much as we can retrieve effectively. At the time of writing (1945) large amounts of information could be stored (extend the record), but consulting that record was still difficult.
-
-
hypothes.is hypothes.is
-
Many engineers and tech companies are struggling to come up with an effective system for energy storage. One innovation that may work is ARES, which stands for Advanced Rail Energy Storage.
-
- Oct 2020
-
hypothes.is hypothes.is
-
Gravity Storage is a system that utilizes the power of gravity to store the electricity supply in the form of potential energy. As a storage media, the technology uses water and rocks, which are largely available on the earth. https://allsustainablesolutions.com/gravity-storage-the-new-innovation-for-clean-energy-supply/
-
Energy Storage
Storage options include batteries, thermal, or mechanical systems. There are many types of energy storage; this list serves as an informational resource for anyone interested in getting to know some of the most common technologies available.
-
-
docs.microsoft.com docs.microsoft.com
-
To request tokens for Azure Storage
That is, to request a token if the app is not running in the Azure cloud with a managed identity:
Acquire a token from Azure AD for authorizing requests from a client application
Request an access token in Azure Active Directory B2C (and the other chapters in the Authorization protocols section)
-
- Sep 2020
-
kwokchain.com kwokchain.com
-
This impacts monetization and purchasing at companies. Paying for a new design tool because it has new features for designers may not be a top priority. But if product managers, engineers, or even the CEO herself think it matters for the business as a whole—that has much higher priority and pricing leverage.
If a tool benefits the entire team, vs. just the designer, it becomes an easier purchase decision.
-
- Jul 2020
-
www.digitaltrends.com www.digitaltrends.com
-
A key strength of OnlyOffice is its cloud-based storage options, which let you connect your Google Drive, Dropbox, Box, OneDrive, and Yandex.Disk accounts.
-
-
edpb.europa.eu edpb.europa.eu
-
If there is no other lawful basis justifying the processing (e.g. further storage) of the data, they should be deleted by the controller.
-
-
www.youtube.com www.youtube.com
-
Supporting Open Science Data Curation, Preservation, and Access by Libraries. (2020, June 25). https://www.youtube.com/watch?v=SbmGWHpzAHs&feature=youtu.be
-
- Jun 2020
-
www.howtogeek.com www.howtogeek.com
-
However, when you use an SD card as internal storage, Android formats the SD card in such a way that no other device can read it. Android also expects the adopted SD card to always be present, and won’t work quite right if you remove it.
-
- May 2020
-
docs.aws.amazon.com docs.aws.amazon.com
-
Your Amazon Athena query performance improves if you convert your data into open source columnar formats, such as Apache Parquet
S3 performance: use columnar formats
-
-
www.amazonaws.cn www.amazonaws.cn
-
Available Internet Connection | Theoretical Min. Days to Transfer 100TB at 80% Network Utilization | When to Consider AWS Snowball?
T3 (44.736Mbps)               | 269 days                                                           | 2TB or more
100Mbps                       | 120 days                                                           | 5TB or more
1000Mbps                      | 12 days                                                            | 60TB or more
when snowball
1000Mbps 12 days 60TB
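As a sanity check, the naive arithmetic (100 TB pushed through the link at 80% utilization, decimal units) lands close to the table's figures; AWS's published numbers are slightly higher, presumably to account for protocol overhead:

```javascript
// Days needed to transfer `tb` terabytes over a link of `mbps`
// megabits/second at the given utilization (decimal units assumed).
function transferDays(tb, mbps, utilization = 0.8) {
  const bits = tb * 1e12 * 8;                       // terabytes -> bits
  const bitsPerDay = mbps * 1e6 * utilization * 86400; // usable bits per day
  return bits / bitsPerDay;
}

console.log(transferDays(100, 1000).toFixed(1));   // ~11.6 days at 1000 Mbps
console.log(transferDays(100, 44.736).toFixed(0)); // ~259 days on a T3
```

Once the transfer time stretches into months, shipping a Snowball appliance clearly wins.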
-
-
www.termsfeed.com www.termsfeed.com
-
One of the GDPR's principles of data processing is storage limitation. You must not store personal data for longer than you need it in connection with a specified purpose.
-
-
www.ikea.com www.ikea.com
-
-
www.revnote.io www.revnote.io
-
100MB storage
Quite little...
-
- Apr 2020
-
aws.amazon.com aws.amazon.com
-
When you create a DynamoDB table, auto scaling is the default capacity setting, but you can also enable auto scaling on any table that does not have it active
-
-
keepass.info keepass.info
-
Data Erasure and Storage Time The personal data of the data subject will be erased or blocked as soon as the purpose of storage ceases to apply. The data may be stored beyond that if the European or national legislator has provided for this in EU regulations, laws or other provisions to which the controller is subject. The data will also be erased or blocked if a storage period prescribed by the aforementioned standards expires, unless there is a need for further storage of the data for the conclusion or performance of a contract.
-
- Mar 2020
-
code.djangoproject.com code.djangoproject.com
-
I would like to make an appeal to core developers: all design decisions involving involuntary session creation MUST be made with great caution. In a high-load project, avoiding creating a session for non-authenticated users is a vital strategy with a critical influence on application performance. It doesn't really make a big difference whether you use a database backend, or Redis, or whatever else; eventually, your load will be high enough, and scaling further won't help anymore, so that either network access to the session backend or its "INSERT" performance becomes a bottleneck. In my case, it's an application with 20-25 ms response time under a 20000-30000 RPM load. Having to create a session for each session-less request would be critical enough to decide not to upgrade Django, or to fork and rewrite the corresponding components.
-
- Feb 2020
-
www.igindustrialplastics.com www.igindustrialplastics.com
-
- Jan 2020
-
www.statista.com www.statista.com
-
Size of the warehouse management systems (WMS) market worldwide, from 2015 to 2024
-
-
ambainc.org ambainc.org
-
nmsp.cals.cornell.edu nmsp.cals.cornell.edu
-
extension.umn.edu extension.umn.edu
- Dec 2019
-
www.2ndquadrant.com www.2ndquadrant.com
-
Practical highlights, in my opinion:
- It's important to know about data padding in PG.
- Be conscious of column ordering when modelling data tables, but don't be purist about it; do it on a best-effort basis.
- Savings of up to 25% of wasted storage are impressive, but always keep in mind the scope of the system. For me, the gains are not worth it in the short term. Whenever a system grows, it is possible to migrate data to more storage-efficient tables, but mind the operational burden.
Here follow my own commands from trying out the article's points. I added pg_column_size(row()) to each projection to have clear absolute sizes.

-- How does the row function work?
SELECT
  pg_column_size(row()) AS empty,
  pg_column_size(row(0::SMALLINT)) AS byte2,
  pg_column_size(row(0::BIGINT)) AS byte8,
  pg_column_size(row(0::SMALLINT, 0::BIGINT)) AS byte16,
  pg_column_size(row(''::TEXT)) AS text0,
  pg_column_size(row('hola'::TEXT)) AS text4,
  0 AS term;

-- My own take on that
SELECT
  pg_column_size(row()) AS empty,
  pg_column_size(row(uuid_generate_v4())) AS uuid_type,
  pg_column_size(row('hola mundo'::TEXT)) AS text_type,
  pg_column_size(row(uuid_generate_v4(), 'hola mundo'::TEXT)) AS uuid_text_type,
  pg_column_size(row('hola mundo'::TEXT, uuid_generate_v4())) AS text_uuid_type,
  0 AS term;

CREATE TABLE user_order (
  is_shipped  BOOLEAN NOT NULL DEFAULT false,
  user_id     BIGINT NOT NULL,
  order_total NUMERIC NOT NULL,
  order_dt    TIMESTAMPTZ NOT NULL,
  order_type  SMALLINT NOT NULL,
  ship_dt     TIMESTAMPTZ,
  item_ct     INT NOT NULL,
  ship_cost   NUMERIC,
  receive_dt  TIMESTAMPTZ,
  tracking_cd TEXT,
  id          BIGSERIAL PRIMARY KEY NOT NULL
);

SELECT a.attname, t.typname, t.typalign, t.typlen
FROM pg_class c
JOIN pg_attribute a ON (a.attrelid = c.oid)
JOIN pg_type t ON (t.oid = a.atttypid)
WHERE c.relname = 'user_order'
  AND a.attnum >= 0
ORDER BY a.attnum;

-- What is it about the pg_class, pg_attribute and pg_type tables? For future investigation.
-- SELECT sum(t.typlen)
-- SELECT t.typlen
SELECT a.attname, t.typname, t.typalign, t.typlen
FROM pg_class c
JOIN pg_attribute a ON (a.attrelid = c.oid)
JOIN pg_type t ON (t.oid = a.atttypid)
WHERE c.relname = 'user_order'
  AND a.attnum >= 0
ORDER BY a.attnum;

-- Whoa! I need to master mocking data directly into the db.
INSERT INTO user_order (
  is_shipped, user_id, order_total, order_dt, order_type,
  ship_dt, item_ct, ship_cost, receive_dt, tracking_cd
)
SELECT
  true, 1000, 500.00,
  now() - INTERVAL '7 days', 3,
  now() - INTERVAL '5 days', 10, 4.99,
  now() - INTERVAL '3 days', 'X5901324123479RROIENSTBKCV4'
FROM generate_series(1, 1000000);

-- New item to learn: pg_relation_size.
SELECT
  pg_relation_size('user_order') AS size_bytes,
  pg_size_pretty(pg_relation_size('user_order')) AS size_pretty;

SELECT * FROM user_order LIMIT 1;

SELECT
  pg_column_size(row(0::NUMERIC)) - pg_column_size(row()) AS zero_num,
  pg_column_size(row(1::NUMERIC)) - pg_column_size(row()) AS one_num,
  pg_column_size(row(9.9::NUMERIC)) - pg_column_size(row()) AS nine_point_nine_num,
  pg_column_size(row(1::INT2)) - pg_column_size(row()) AS int2,
  pg_column_size(row(1::INT4)) - pg_column_size(row()) AS int4,
  pg_column_size(row(1::INT2, 1::NUMERIC)) - pg_column_size(row()) AS int2_one_num,
  pg_column_size(row(1::INT4, 1::NUMERIC)) - pg_column_size(row()) AS int4_one_num,
  pg_column_size(row(1::NUMERIC, 1::INT4)) - pg_column_size(row()) AS one_num_int4,
  0 AS term;

SELECT
  pg_column_size(row(''::TEXT)) - pg_column_size(row()) AS empty_text,
  pg_column_size(row('a'::TEXT)) - pg_column_size(row()) AS len1_text,
  pg_column_size(row('abcd'::TEXT)) - pg_column_size(row()) AS len4_text,
  pg_column_size(row('abcde'::TEXT)) - pg_column_size(row()) AS len5_text,
  pg_column_size(row('abcdefgh'::TEXT)) - pg_column_size(row()) AS len8_text,
  pg_column_size(row('abcdefghi'::TEXT)) - pg_column_size(row()) AS len9_text,
  0 AS term;

SELECT
  pg_column_size(row(''::TEXT, 1::INT4)) - pg_column_size(row()) AS empty_text_int4,
  pg_column_size(row('a'::TEXT, 1::INT4)) - pg_column_size(row()) AS len1_text_int4,
  pg_column_size(row('abcd'::TEXT, 1::INT4)) - pg_column_size(row()) AS len4_text_int4,
  pg_column_size(row('abcde'::TEXT, 1::INT4)) - pg_column_size(row()) AS len5_text_int4,
  pg_column_size(row('abcdefgh'::TEXT, 1::INT4)) - pg_column_size(row()) AS len8_text_int4,
  pg_column_size(row('abcdefghi'::TEXT, 1::INT4)) - pg_column_size(row()) AS len9_text_int4,
  0 AS term;

SELECT
  pg_column_size(row(1::INT4, ''::TEXT)) - pg_column_size(row()) AS int4_empty_text,
  pg_column_size(row(1::INT4, 'a'::TEXT)) - pg_column_size(row()) AS int4_len1_text,
  pg_column_size(row(1::INT4, 'abcd'::TEXT)) - pg_column_size(row()) AS int4_len4_text,
  pg_column_size(row(1::INT4, 'abcde'::TEXT)) - pg_column_size(row()) AS int4_len5_text,
  pg_column_size(row(1::INT4, 'abcdefgh'::TEXT)) - pg_column_size(row()) AS int4_len8_text,
  pg_column_size(row(1::INT4, 'abcdefghi'::TEXT)) - pg_column_size(row()) AS int4_len9_text,
  0 AS term;

SELECT
  pg_column_size(row()) - pg_column_size(row()) AS empty_row,
  pg_column_size(row(''::TEXT)) - pg_column_size(row()) AS no_text,
  pg_column_size(row('a'::TEXT)) - pg_column_size(row()) AS min_text,
  pg_column_size(row(1::INT4, 'a'::TEXT)) - pg_column_size(row()) AS two_col,
  pg_column_size(row('a'::TEXT, 1::INT4)) - pg_column_size(row()) AS round4;

SELECT
  pg_column_size(row()) - pg_column_size(row()) AS empty_row,
  pg_column_size(row(1::SMALLINT)) - pg_column_size(row()) AS int2,
  pg_column_size(row(1::INT)) - pg_column_size(row()) AS int4,
  pg_column_size(row(1::BIGINT)) - pg_column_size(row()) AS int8,
  pg_column_size(row(1::SMALLINT, 1::BIGINT)) - pg_column_size(row()) AS padded,
  pg_column_size(row(1::INT, 1::INT, 1::BIGINT)) - pg_column_size(row()) AS not_padded;

SELECT a.attname, t.typname, t.typalign, t.typlen
FROM pg_class c
JOIN pg_attribute a ON (a.attrelid = c.oid)
JOIN pg_type t ON (t.oid = a.atttypid)
WHERE c.relname = 'user_order'
  AND a.attnum >= 0
ORDER BY t.typlen DESC;

DROP TABLE user_order;

CREATE TABLE user_order (
  id          BIGSERIAL PRIMARY KEY NOT NULL,
  user_id     BIGINT NOT NULL,
  order_dt    TIMESTAMPTZ NOT NULL,
  ship_dt     TIMESTAMPTZ,
  receive_dt  TIMESTAMPTZ,
  item_ct     INT NOT NULL,
  order_type  SMALLINT NOT NULL,
  is_shipped  BOOLEAN NOT NULL DEFAULT false,
  order_total NUMERIC NOT NULL,
  ship_cost   NUMERIC,
  tracking_cd TEXT
);

-- And what about other varying-size types, such as JSONB?
SELECT
  pg_column_size(row('{}'::JSONB)) - pg_column_size(row()) AS empty_jsonb,
  pg_column_size(row('{}'::JSONB, 0::INT4)) - pg_column_size(row()) AS empty_jsonb_int4,
  pg_column_size(row(0::INT4, '{}'::JSONB)) - pg_column_size(row()) AS int4_empty_jsonb,
  pg_column_size(row('{"a": 1}'::JSONB)) - pg_column_size(row()) AS basic_jsonb,
  pg_column_size(row('{"a": 1}'::JSONB, 0::INT4)) - pg_column_size(row()) AS basic_jsonb_int4,
  pg_column_size(row(0::INT4, '{"a": 1}'::JSONB)) - pg_column_size(row()) AS int4_basic_jsonb,
  0 AS term;
-
- Oct 2019
-
www.lifewire.com www.lifewire.com
-
Best Overall: SanDisk Extreme PRO 128 GB Drive. The SanDisk PRO gives you blistering speeds, offering 420 MB/s on the reading front and 380 MB/s on the writing end, which is 3–4x faster than what a standard USB 3.0 drive will offer. The sleek, aluminum casing is both super durable and very eye-catching, so you can bring it with you to your business meetings and look professional as well. The onboard AES, 128-bit file encryption gives you top-of-the-line security for your sensitive files.
-
-
engineering.linkedin.com engineering.linkedin.com
-
It is an append-only, totally-ordered sequence of records ordered by time.
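A toy sketch (my own, not LinkedIn's implementation) of what "append-only, totally-ordered sequence of records" means in practice:

```javascript
// A toy append-only log: records can only be appended, never modified
// or reordered, and each record gets a monotonically increasing offset.
class AppendOnlyLog {
  constructor() {
    this.records = []; // ordered by append time
  }
  // Append returns the record's offset: its position in the total order.
  append(payload) {
    const offset = this.records.length;
    this.records.push({ offset, time: Date.now(), payload });
    return offset;
  }
  // Readers consume independently, each from its own offset onward.
  readFrom(offset) {
    return this.records.slice(offset);
  }
}

const log = new AppendOnlyLog();
log.append('user-created');
log.append('user-renamed');
console.log(log.readFrom(1).map(r => r.payload)); // the records after offset 0
```

The offset is what makes the ordering "total": any two readers replaying from the same offset see exactly the same sequence.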
-
- Apr 2019
-
www.producthunt.com www.producthunt.com
-
When you get started, you get signed up by default for the FREE Gaia storage provided by Blockstack PBC. Yes, that's right, you get FREE encrypted storage.
-
- Feb 2018
-
-
The NSA must be psyched about this.
-
- Sep 2017
-
uhra.herts.ac.uk uhra.herts.ac.uk
-
In 2005, the figure had risen to 1%. They are now responsible for more carbon-dioxide emissions per year than Argentina or the Netherlands and, if current trends hold, their emissions will have grown four-fold by 2020, reaching 670m tonnes
How is information, for example, a conversation accounted for in this model? As we go forward and find more efficient ways to store and convey information in fewer 1s and 0s, must we constantly reevaluate this relationship? Passive vs Active storage of information seems to be key here as well.
-
- Jul 2016
-
www.clir.org www.clir.org
-
unprecedented accumulation of contemporary data
this is the storage question everyone always goes to first when we use the word "data" in libraries. Is there possibly another question we should ask first?
-
- Apr 2015
-
thegrid.io thegrid.io
-
Do I own my content on The Grid? Yes, you own your content. The engine AutoDesigns your site, publishes it, and stores it on Github. Your source content will live in a Github repository that you can access and download anytime.
Is access private/public?
-
- Sep 2014
-
www.aerospike.com www.aerospike.com
-
Fast restart. If a server is temporarily taken down, this capability restores the index from a saved copy, eliminating delays due to index rebuilding.
This point seems to be in direct contradiction to the claim above that "Indexes (primary and secondary) are always stored in DRAM for fast access and are never stored on Solid State Drives (SSDs) to ensure low wear."
-
Unlike other databases that use the linux file system that was built for rotational drives, Aerospike has implemented a log structured file system to access flash – raw blocks on SSDs – directly.
Does this really mean to suggest that Aerospike bypasses the linux block device layer? Is there a kernel driver? Does this mean I can't use any filesystem I want and know how to administrate? Is the claim that the "linux file system" (which I take to mean, I guess, the virtual file system layer) was "built for rotational drives" even accurate? We've had ram disks for a long, long time. And before that we've had log-structured filesystems, too, and even devices that aren't random access, like tape drives. Seems like dubious claims all around.
-
- Jan 2014
-
blogs.msdn.com blogs.msdn.com
-
There are three kinds of storage locations: stack locations, heap locations, and registers.
-