Google Photos Automatic Backups⚓
This process has been deprecated in favor of Rclone. Instructions on using Rclone to back up Google Photos automatically can be found here.
The idea behind this is that, with the number of priceless photos living in Google Photos, all it takes is something like a hacked account to lose them all. That's not an option, so automatically backing these photos up is the way forward. I'll be leveraging the Google Photos API to download photos to the Yunohost VM, and will then shuttle them to Backblaze B2.
The following are the steps taken to accomplish this.
The idea to do this came from Jake Wharton (who, ironically, works for Google), who created a sync tool for this exact purpose; however, having it run in Docker only complicated things. I ended up coming across another sync tool, gitmoo-goog, that accomplishes the same thing by running in a loop and calling the Google Photos API for new data every 45 seconds.
The following is taken from the README.md file, with the addition of how to extract the bz2 tar file.
The gitmoo-goog tool uses the Google Photos API to continuously download all photos to a local device. It can be used as a daemon to keep in sync with a Google Photos account.
Extract the bz2 tar file (the exact archive name will match the release download):
- tar -xjf gitmoo-goog.tar.bz2

Change to executable:
- chmod +x gitmoo-goog
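The unpack steps above can be sketched end to end. Since the real release archive's filename depends on the download, this sketch builds a stand-in archive first; only the `tar -xjf` and `chmod +x` lines mirror the actual procedure:

```shell
# Demo of the unpack steps using a stand-in archive; the real file comes
# from the gitmoo-goog releases page and its name will differ.
cd "$(mktemp -d)"
printf '#!/bin/sh\necho ok\n' > gitmoo-goog       # stand-in for the real binary
tar -cjf gitmoo-goog.tar.bz2 gitmoo-goog && rm gitmoo-goog
tar -xjf gitmoo-goog.tar.bz2    # -j selects bzip2 decompression
chmod +x gitmoo-goog            # mark the extracted binary executable
./gitmoo-goog
```

The `-j` flag is what handles the bz2 compression; `tar -xzf` would be the equivalent for a gzipped archive.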
Google Photos API⚓
Perform the following in order to enable the Google Photos API:
- Go to the Google API Console.
- From the menu bar, select a project or create a new project.
- To open the Google API Library, from the Navigation menu, select APIs & Services > Library.
- Search for "Google Photos Library API". Select the correct result and click Enable.
Although the API is now enabled, the script also requires an OAuth 2.0 client ID. The following outlines how to obtain one:
- Go to the Google API Console and select your project.
- From the menu, select APIs & Services > Credentials.
- On the Credentials page, click Create Credentials > OAuth client ID.
- Select your Application type. Choose Other and click Create.
- Download the credentials.json client configuration file.
The .json file will likely download with a long, generated filename. It will need to be renamed to credentials.json and moved to the appropriate folder in order to work properly.
- Copy the downloaded credentials.json to the same folder as the gitmoo-goog executable.
- Run ./gitmoo-goog
- The script will prompt: "Go to the following link in your browser then type the authorization code:" followed by a URL of the form https://accounts.google.com/o/oauth2/auth?access_type=...
- The link provided, when opened in a browser, will present a security warning. Accept the warning and click Allow.
- Copy the authorization code into the terminal and hit Enter.
- The script will begin running at this point, but should be cancelled with Ctrl + C.
```
Usage of ./gitmoo-goog:
  -album string
        download only from this album (use google album id)
  -folder string
        backup folder
  -force
        ignore errors, and force working
  -logfile string
        log to this file
  -loop
        loops forever (use as daemon)
  -max int
        max items to download (default 2147483647)
  -pagesize int
        number of items to download per API call (default 50)
  -throttle int
        time, in seconds, to wait between API calls (default 5)
  -folder-format string
        time format used for folder paths based on https://golang.org/pkg/time/#Time.Format
        (default "2006/January")
  -use-file-name
        use file name when uploaded to Google Photos (default off)
  -download-throttle int
        rate in KB/sec, to limit downloading of items (default off)
  -concurrent-downloads int
        number of concurrent item downloads (default 5)
```
Running in a Loop⚓
In order to run this in a continuous loop to look for new content, use the following:
This will start the process in the background, making an API call every 45 seconds, looping forever over all items and saving them to the backup folder. The logfile will be saved to the path given with -logfile.
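An invocation matching that description would look something like the following; the folder and logfile names here are placeholders, not the original values, but all of the flags appear in the usage output above:

```shell
# Run in the background, polling the API every 45 seconds, forever.
# "archive" and "gitmoo.log" are placeholder names.
./gitmoo-goog -folder archive -logfile gitmoo.log -loop -throttle 45 &
```

The trailing `&` is what puts the process in the background; pairing it with `nohup` or a systemd unit would keep it alive across logouts.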
Naming and Folder Permissions⚓
Files are created as follows:
The .json file for each item holds the metadata from the Google Photos API.
The default permissions may need to be changed. In order to do this, use the following:
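As an example of adjusting those permissions (the folder here is a temporary placeholder, and the 755 mode is an assumption; substitute the actual backup folder and the mode your setup needs):

```shell
# Placeholder directory standing in for the backup folder.
BACKUP_DIR="$(mktemp -d)"
# Recursively set directories/files readable; 755 is an example mode.
chmod -R 755 "$BACKUP_DIR"
stat -c '%a' "$BACKUP_DIR"
```

A `chown -R user:group` on the same path may also be needed if a different service account reads the backups.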
Google Takeout is also utilized to take a complete export of all photos and videos every other month. At the time of this writing, the job runs on the 14th of every odd month (e.g., May 14th, July 14th).
These exports are broken up into 50 GB tar.gz archives (two in total at the time of this writing). They are stored as-is in AWS using the S3 Glacier Deep Archive storage class.
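For reference, an archive can be placed directly into that tier at upload time with the AWS CLI; the bucket, prefix, and archive names below are placeholders:

```shell
# Placeholder bucket/object names; DEEP_ARCHIVE is the S3 storage-class
# value corresponding to Glacier Deep Archive.
aws s3 cp takeout-001.tar.gz s3://photo-archive-bucket/takeout/ \
  --storage-class DEEP_ARCHIVE
```

Uploading straight into DEEP_ARCHIVE avoids paying the S3 Standard rate before a lifecycle transition kicks in, at the cost of hours-long retrieval times when the archive is needed.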