14 Matching Annotations
  1. May 2020
    1. Not necessarily. Hosting companies tend to keep your backups in the same place as your primary files. You don’t carry around a copy of your birth certificate along with the actual one – you keep the real one safe at home for emergencies. So why not do the same for your backups? CodeGuard provides safe, offsite backup that is 100% independent from your hosting provider.
    1. involve a combination of Local backup for fast backup and restore, along with Off-site backup for protection against local disasters
    2. Recent backups are retained locally, to speed data recovery operations.
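      A rough sketch of that two-tier setup, assuming nightly cron jobs and rsync; the paths and the off-site host are hypothetical:

      ```bash
      # /etc/cron.d/backup -- example schedule; paths and host are placeholders.
      # 01:00 -- local copy on a second disk, for fast restores.
      0 1 * * * root rsync -a --delete /srv/data/ /mnt/backup-disk/data/
      # 02:00 -- off-site copy over SSH, for protection against local disasters.
      0 2 * * * root rsync -a --delete /mnt/backup-disk/data/ backup@offsite.example.com:data/
      ```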
    1. After the initial backup, future backups are differential, both in the files that are transferred and the files that are stored on your behalf.

      I guess git can help with the differential part of the backup, but how exactly does it? In order for it to help with transfer from the subject server, wouldn't it have to keep the git repo on that server? Otherwise wouldn't it have to transfer everything to the remote cloud git repo so that git can do the diff there?

      Makes me wonder if simple rsync wouldn't be more efficient than all this.
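      For what it's worth, here is roughly how rsync alone could cover both halves without any repository on the source server: its delta algorithm exchanges checksums over the wire, so only changed data is transferred, and --link-dest hard-links unchanged files to the previous snapshot, so only changed files take up new space. Paths below are made up:

      ```bash
      #!/usr/bin/env bash
      # Differential transfer + differential storage with plain rsync.
      # /srv/data and /mnt/backup/snapshots are example paths.
      src=/srv/data/
      dest=/mnt/backup/snapshots
      today=$(date +%F)

      # Hard-link unchanged files against the newest previous snapshot;
      # each snapshot then looks complete, but only changed files use space.
      prev=$(ls -1d "$dest"/20* 2>/dev/null | tail -n 1)
      rsync -a --delete ${prev:+--link-dest="$prev"} "$src" "$dest/$today/"
      ```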

  2. Apr 2020
    1. Of course, just because your users want a copy of their data doesn't necessarily mean that they're abandoning your product. Many users just feel safer having a local copy of their data as a backup.
  3. Dec 2019
    1. For any type of full backup on an active server, it's recommended that you snapshot the filesystem first, then run your backup script on the contents of the snapshot. Your filesystem has to be on LVM to use snapshots, though, and you need enough free physical extents (un-partitioned space) to absorb the amount of data expected to change during the snapshot window (more for very busy servers, less for not-so-busy servers). Look here... http://www.tldp.org/HOWTO/LVM-HOWTO/snapshots_backup.html
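      A sketch of that sequence, assuming a volume group named vg0 with a logical volume named root and a couple of gigabytes of free extents (names, sizes, and paths are examples):

      ```bash
      #!/usr/bin/env bash
      # Snapshot first, then back up the snapshot, not the live filesystem.
      # vg0/root, the 2G snapshot size, and the paths are hypothetical.

      # Freeze a point-in-time view; 2G must absorb writes made while it exists.
      lvcreate --size 2G --snapshot --name rootsnap /dev/vg0/root

      # Back up the consistent, read-only snapshot contents.
      mkdir -p /mnt/rootsnap
      mount -o ro /dev/vg0/rootsnap /mnt/rootsnap
      rsync -a /mnt/rootsnap/ /mnt/backup/root/

      # Release the snapshot so it stops consuming free extents.
      umount /mnt/rootsnap
      lvremove -f /dev/vg0/rootsnap
      ```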
    1. Both /proc and /sys are virtual filesystems which reflect the state of the system and allow you to change several runtime parameters (and sometimes do more dangerous things, like writing directly to memory or to a device). You should never back them up or restore them.
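      In practice that means excluding them explicitly from any whole-system backup; a minimal example with rsync (the destination path is a placeholder, and /dev, /run, and /tmp are commonly skipped for the same reason):

      ```bash
      # Keep the mount points but skip the virtual/runtime contents.
      rsync -aAXH \
        --exclude="/proc/*" --exclude="/sys/*" \
        --exclude="/dev/*" --exclude="/run/*" --exclude="/tmp/*" \
        / /mnt/backup/root/
      ```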
    1. You might think that a one-line configuration file is not worth backing up. However, if it took you three hours to figure out how to set that configuration, it will probably take you three hours again in six months' time.
    1. I am not concerned with any need to exactly restore the system if there is a crash. I have had three previous computers; none of them failed, but I kept manual backups on external hard drives, and I now use those drives as a source for any material on them that I still need.
    1. So if you create one backup per night, for example with a cronjob, then this retention policy gives you 512 days of retention. This is useful, but it can require too much disk space, which is why we have included a non-linear distribution policy. In short, we keep only the oldest backup in the range 257-512 days, and likewise in the range 129-256, and so on. This exponential distribution of backups in time retains more backups in the short term and fewer in the long term; it keeps only 10 or 11 backups but spans a retention of 257-512 days.
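      A sketch of a pruning script for this policy, assuming one snapshot directory per night named YYYY-MM-DD and GNU date (the layout and paths are assumptions; it only prints what it would delete):

      ```bash
      #!/usr/bin/env bash
      # Exponential retention: keep the oldest backup in each age bucket
      # 1, 2, 3-4, 5-8, ..., 257-512 days; prune the rest (10-11 survivors).
      root=${1:-/srv/backups}        # example location
      now=$(date +%s)
      declare -A keep                # bucket index -> kept backup

      for dir in "$root"/*/; do      # YYYY-MM-DD names glob oldest-first
        ts=$(date -d "$(basename "$dir")" +%s 2>/dev/null) || continue
        age=$(( (now - ts) / 86400 ))
        (( age < 1 )) && continue                     # always keep today's
        (( age > 512 )) && { echo "prune $dir"; continue; }
        b=0                                           # bucket = ceil(log2(age))
        while (( age > (1 << b) )); do b=$((b + 1)); done
        if [[ -z ${keep[$b]} ]]; then
          keep[$b]=$dir              # first seen in glob order = oldest
        else
          echo "prune $dir"          # swap echo for rm -rf once verified
        fi
      done
      ```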
    1. But just creating the backups will not save your business. You need to make regular backups and keep the most recent copies at a remote location, one that is not in the same building or even within a few miles of your business location, if at all possible. This helps to ensure that a large-scale disaster does not destroy all of your backups.
    2. No backup regimen would be complete without testing. You should regularly test recovery of random files or entire directory structures to ensure not only that the backups are working, but that the data in the backups can be recovered for use after a disaster. I have seen too many instances where a backup could not be restored for one reason or another and valuable data was lost because the lack of testing prevented discovery of the problem.
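      A sketch of an automated spot check along these lines, assuming the snapshot layout above; it restores one randomly chosen file into a scratch directory and compares it against the live copy (paths are examples, and a mismatch can also just mean the file changed since the backup ran):

      ```bash
      #!/usr/bin/env bash
      # Restore-test one random file from the latest backup. Example paths.
      backup=/mnt/backup/snapshots/latest
      live=/srv/data
      scratch=$(mktemp -d)

      # Pick a random file and "restore" it to the scratch area.
      file=$(find "$backup" -type f | shuf -n 1)
      rel=${file#"$backup"/}
      cp "$file" "$scratch/restored"

      # Verify byte-for-byte against the live copy; a difference may also
      # mean the file simply changed after the backup was taken.
      if cmp -s "$scratch/restored" "$live/$rel"; then
        echo "OK: $rel restored and verified"
      else
        echo "CHECK: $rel differs or failed to restore" >&2
      fi
      rm -rf "$scratch"
      ```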