48 Matching Annotations
  1. Sep 2024
  2. Jan 2024
  3. Oct 2023
    1. decade ago, these plants provided “low carbon” electricity in comparison to the grid at the time, but now in many cases, emit more carbon than local grids. Countries that already have decarbonized grids, France, Sweden, and Scotland, for example, will not benefit from a continuous system that uses natural gas to begin with.

      What would the green premium be for biomethane in this scenario? As in methane from biogenic sources. Supply issues aside, obvs.

  4. Sep 2023
  5. Jul 2022
    1. // NB: Since line terminators can be the multibyte CRLF sequence, care must be taken to ensure we work for calls where `tokenPosition` is some start minus 1, where that "start" is some line start itself.

      I think this satisfies the threshold of "minimum viable publication". So write this up and reference it here.

      Full impl.:

      getLineStart(tokenPosition, anteTerminators = null) {
        if (tokenPosition > this._edge && tokenPosition != this.length) {
          throw new Error("random access too far out"); // XXX
        }
      
        // NB: Since line terminators can be the multibyte CRLF sequence, care
        // must be taken to ensure we work for calls where `tokenPosition` is some
        // start minus 1, where that "start" is some line start itself.
        for (let i = this._lineTerminators.length - 1; i >= 0; --i) {
          let current = this._lineTerminators[i];
          if (tokenPosition >= current.position + current.content.length) {
            if (anteTerminators) {
              anteTerminators.push(...this._lineTerminators.slice(0, i));
            }
            return current.position + current.content.length;
          }
        }
      
        return 0;
      }
      

      (Inlined for posterity, since this comes from an uncommitted working directory.)
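
      A quick usage sketch of the edge case the NB comment describes. The terminator offsets and the `_edge` value below are invented for illustration (not data from the working tree), and the method body is repeated so the sketch runs on its own:

      const doc = {
        _edge: 1000,
        length: 30,
        _lineTerminators: [
          { position: 8,  content: "\r\n" },  // line 2 starts at offset 10
          { position: 18, content: "\r\n" },  // line 3 starts at offset 20
        ],
        // Same body as inlined above.
        getLineStart(tokenPosition, anteTerminators = null) {
          if (tokenPosition > this._edge && tokenPosition != this.length) {
            throw new Error("random access too far out"); // XXX
          }
          for (let i = this._lineTerminators.length - 1; i >= 0; --i) {
            let current = this._lineTerminators[i];
            if (tokenPosition >= current.position + current.content.length) {
              if (anteTerminators) {
                anteTerminators.push(...this._lineTerminators.slice(0, i));
              }
              return current.position + current.content.length;
            }
          }
          return 0;
        },
      };

      console.log(doc.getLineStart(20));     // 20: a line start maps to itself
      console.log(doc.getLineStart(20 - 1)); // 10: "start minus 1" lands inside the
                                             // CRLF pair and resolves to the previous
                                             // line's start, not to 20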

  6. Mar 2022
  7. Nov 2021
    1. Saving Your Wallet With Lifecycle Rules

      Of course, storing multiple copies of objects uses way more space, especially if you’re frequently overwriting data. You probably don’t need to store these old versions for the rest of eternity, so you can do your wallet a favor by setting up a Lifecycle rule that will remove the old versions after some time. Under Management > Life Cycle Configuration, add a new rule. The two options available are moving old objects to an infrequent access tier, or deleting them permanently after
    1. S3 object versioning

      Many of the strategies to be discussed for data durability require S3 object versioning to be enabled for the bucket (this includes S3 object locks and replication policies). With object versioning, anytime an object is modified, it results in a new version, and when the object is deleted, it only results in the object being given a delete marker. This allows an object to be recovered if it has been overwritten or marked for deletion. However, it is still possible for someone with sufficient privileges to permanently delete all objects and their versions, so this alone is not sufficient. When using object versioning, deleting old versions permanently is done with the call s3:DeleteObjectVersion, as opposed to the usual s3:DeleteObject, which means that you can apply least-privilege restrictions to deny someone from deleting the old versions. This can help mitigate some issues, but you should still do more to ensure data durability.

      Life cycle policies

      Old versions of objects will stick around forever, and each version is an entire object, not a diff of the previous version. So if you have a 100MB file that you change frequently, you’ll have many copies of this entire file. AWS acknowledges in the documentation “you might have one or more objects in the bucket for which there are millions of versions”. In order to reduce the number of old versions, you use lifecycle policies.

      Audit tip: It should be considered a misconfiguration if you have object versioning enabled and no lifecycle policy on the bucket. Every versioned S3 bucket should have a `NoncurrentVersionExpiration` lifecycle policy to eventually remove objects that are no longer the latest version. For data durability, you may wish to set this to 30 days. If this data is being backed up, you may wish to set this to as little as one day on the primary data and 30 days on the backup. If you are constantly updating the same objects multiple times per day, you may need a different solution to avoid unwanted costs.

      Audit tip: In 2019, I audited the AWS IAM managed policies and found some issues, including what I called Resource policy privilege escalation. In a handful of cases AWS had attempted to create limited policies that did not allow `s3:Delete*`, but still allowed some form of `s3:Put*`. The danger here is the ability to call `s3:PutBucketPolicy` in order to grant an external account full access to an S3 bucket to delete the objects and versions within it, or `s3:PutLifecycleConfiguration` with an expiration of 1 day for all objects, which will delete all objects and their versions in the bucket.

      Storage classes

      With lifecycle policies, you have the ability to transition objects to less expensive storage classes. Be aware that there are many constraints, specifically around the size of the object and how long you have to keep it before transitioning or deleting it. Objects in the S3 Standard storage class must be kept there for at least 30 days before they can be transitioned. Further, once an object is in S3 Intelligent-Tiering, S3 Standard-IA, or S3 One Zone-IA, it must be kept there for 30 days before deletion. Objects in Glacier must be kept for 90 days before deleting, and objects in Glacier Deep Archive must be kept for 180 days. So if you had plans of immediately transitioning all non-current object versions to Glacier Deep Archive to save money, and then deleting them after 30 days, you will not be able to.
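
      For reference, a minimal sketch of such a rule applied programmatically with the AWS SDK for JavaScript v3 rather than in the console. The bucket name, rule ID, and retention periods are assumptions for illustration, not values from the article:

      import {
        S3Client,
        PutBucketLifecycleConfigurationCommand,
      } from "@aws-sdk/client-s3";

      const s3 = new S3Client({});

      // Park noncurrent versions in an infrequent-access tier after 30 days,
      // then expire them after 60 days (respecting the 30-day minimum stay in
      // Standard-IA noted above). All values are illustrative.
      await s3.send(new PutBucketLifecycleConfigurationCommand({
        Bucket: "example-versioned-bucket",
        LifecycleConfiguration: {
          Rules: [
            {
              ID: "expire-noncurrent-versions",
              Status: "Enabled",
              Filter: { Prefix: "" }, // apply to the whole bucket
              NoncurrentVersionTransitions: [
                { NoncurrentDays: 30, StorageClass: "STANDARD_IA" },
              ],
              NoncurrentVersionExpiration: { NoncurrentDays: 60 },
            },
          ],
        },
      }));
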
    1. take advantage of LVM snapshots. Take snapshots before and after an upgrade. In case the system ends up in an unrecoverable state, roll back to the last snapshot from a system rescue LiveCD. A useful program for this, as well as for regular system backups, is timeshift
  8. Jun 2021
    1. This is where off-site backups come into play. For this purpose, I recommend Borg backup. It has sophisticated features for compression and encryption, and allows you to mount any version of your backups as a filesystem to recover the data from. Set this up on a cronjob as well, as frequently as you feel the need to make backups, and send them off-site to another location, which itself should have storage facilities following the rest of the recommendations from this article. Set up another cronjob to run borg check and send you the results on a schedule, so that their conspicuous absence may indicate that something fishy is going on. I also use Prometheus with Pushgateway to make a note every time that a backup is run, and set up an alarm which goes off if the backup age exceeds 48 hours. I also have periodic test alarms, so that the alert manager’s own failures are noticed.

      Solution for human failures and existential threats:

      • Borg backup on a cronjob
      • Prometheus with Pushgateway (sketched below)
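
      A sketch of the Pushgateway piece, assuming the backup script pushes a timestamp gauge over plain HTTP at the end of each run. The host name, job label, and metric name are made up here, and a real setup may use a client library instead:

      // Run at the end of the backup job: record "a backup just succeeded"
      // by pushing a gauge to the Pushgateway's text endpoint.
      const body =
        "# TYPE backup_last_success_timestamp_seconds gauge\n" +
        `backup_last_success_timestamp_seconds ${Date.now() / 1000}\n`;

      const res = await fetch("http://pushgateway.example:9091/metrics/job/borg_backup", {
        method: "POST",
        body,
      });
      if (!res.ok) throw new Error(`push failed: ${res.status}`);

      // A Prometheus alert can then fire once the metric goes stale, e.g.:
      //   time() - backup_last_success_timestamp_seconds > 48 * 3600
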
  9. Mar 2021
  10. Oct 2020
    1. Creating backup plans automatically

      [Backup-4] Lack of statement (No procedure to enable the backup plan)

      Background: If a backup plan is created with “Backup and Recovery Assistant”, you need to take steps to activate the backup plan on the Acronis website after creating it. Without enabling it, WPH backup data will not be acquired. Issue: The IG does not include this description. Request: Add a procedure.

    2. Nevertheless, if you need to update the release version to reflect settings as changed by Acronis, use this tool to install the latest Acronis agent version.

      [Backup-3] Lack of statement (No alert for executing the script.)

      Background: There are two ways to update the Acronis agent: a) wph-acronis-agent-maintenance --install-latest (upgrade to the latest agent released by Acronis); b) wph-acronis-agent-maintenance --install-release (upgrade to the latest agent supported by WPH). With method a, the agent may be updated to a version not supported by WPH. The IG has instructions to upgrade the agent using method a. Issue: The IG does not warn about this for method a. Request: Add a warning sentence in the two related places.

    3. Acronis agent version maintenance

      [Backup-2] Lack of statement (It is not described as an optional procedure)

      Issue: “Acronis agent version maintenance” is not marked as optional even though the work is optional. Request: Write (optional) in the title.

    4. Updating an Acronis account

      [Backup-1] Lack of statement (It is not described as an optional procedure)

      Issue: “Updating an Acronis account” is not marked as optional even though the work is optional. Request: Write (optional) in the title.

  11. Jun 2020
  12. May 2020
  13. Mar 2020
  14. Jan 2020
    1. How efficient is deduplication?

      tarsnap: a standalone encrypted backup application

  15. Dec 2019
    1. It is possible to do a successful file system migration by using rsync as described in this article and updating the fstab and bootloader as described in Migrate installation to new hardware. This essentially provides a way to convert any root file system to another one.
    2. rsync provides a way to do a copy of all data in a file system while preserving as much information as possible, including the file system metadata. It is a procedure of data cloning on a file system level where source and destination file systems do not need to be of the same type. It can be used for backing up, file system migration or data recovery.
    1. I am familiar with using rsync to back up various files from my system, but what is the best way to completely restore a machine?
    1. If you want to keep several days worth of backups, your storage requirements will grow dramatically with this approach. A tool called rdiff-backup, based on rsync, gets around this issue.
    2. Agreed, I use rdiff-backup because I found my rsync backups were getting cluttered with old files, and sometimes the lack of versioned backups was problematic. I'm a big fan of rdiff-backup. I don't think it actually leverages rsync, as such, but librsync. It's a great tool.
    3. I think that rsync is great but tools like dar, attic, bup, rdiff-backup or obnam are better. I use obnam, as it does "snapshot backups" with deduplication.
    4. I run the script daily, early every morning, as a cron job to ensure that I never forget to perform my backups.
    5. There are many options for performing backups. Most Linux distributions are provided with one or more open source programs specially designed to perform backups. There are many commercial options available as well. But none of those directly met my needs so I decided to use basic Linux tools to do the job.
    1. CloneZilla works perfectly. It produces small image files, has an integrity check, and works fast. If you want to use a third device as the image repository, you should choose device-image when creating the image of the first disk and then image-device when you restore it to the second disk. If you want to use only two disks, you should use device-device mode. Optionally you may want to generate new UUIDs and a new SSH key (if an SSH server is installed), and change the hostname.
    1. While there are so many tools to back up your systems, I find this method super easy and convenient, at least to me. Also, this method is way better than disk cloning with the dd command, because it doesn’t matter if your hard drive is a different size or uses a different filesystem.
  16. Jul 2019
  17. Jun 2019
    1. Barman (Backup and Recovery Manager) is an open-source administration tool for disaster recovery of PostgreSQL servers
  18. Oct 2017
  19. Jan 2014
    1. An effective data management program would enable a user 20 years or longer in the future to discover, access, understand, and use particular data [3]. This primer summarizes the elements of a data management program that would satisfy this 20-year rule and are necessary to prevent data entropy.

      Who cares most about the 20-year rule? This is an ideal that appeals to some, but in practice even the most zealous adherents can't picture what this looks like in any concrete way, except in the most traditional ways: physical paper journals in libraries are tangible examples of the 20-year rule.

      Until we have a digital equivalent for data, I don't blame people looking for tenure or jobs for not caring about this ideal if we can't provide a clear picture of how to achieve it widely at an institutional level. For digital materials, I think the picture people have in their minds is of tape backup. Maybe this is generational? Only once new generations arrive that were not widely exposed to cassette tapes, DVDs, and other physical media that "old people" remember will it be possible to have a new ideal that people can see in their mind's eye.