Export/Import SqlBak job in a .json file

Starting from SqlBak app version 3.2.9, you can export a job to a .json file and then import it using the SqlBak.Job.Cli.exe utility. This functionality is intended primarily for automated deployment of the SqlBak application and backup job creation. A simple tutorial is available in the following blog post: How to deploy SqlBak app automatically.

Below you can find detailed instructions on how to export and import a job, along with some general tips on creating a universal job template for automated deployment of a backup job.

Export job’s settings to a .json file

To export a job, click the export button in the job information window.

A file containing a complete description of the job will be downloaded.

Import job’s settings from a .json file

You can import the downloaded .json file using the SqlBak.Job.Cli.exe utility. By default, it is located in the C:\Program Files (x86)\Pranas.NET\SqlBak\ directory.

Use the following command to import the .json file with the job’s configuration. Note that a new backup job will be created on the server where this command is executed.

SqlBak.Job.Cli.exe --import-job --secret-key=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --file=sqlbak_job_1234.json

The --import-job command has two required parameters:

  • --secret-key – can be found on the Download page after you log in to sqlbak.com
  • --file – the path to the .json job configuration file to import

In addition, this command accepts two optional parameters:

  • --code-page, -cp – the encoding of the file specified in the --file parameter. The default value is UTF-8
  • --timeout, -to – the maximum command execution time in seconds; if the job has not been created by then, an error is generated. The default value is 120

As a rule, this command takes several seconds to execute. If the job creation fails, the command returns a non-zero exit code.

Creating a universal template for automated job creation

Once settings are exported into a .json file, the file can be imported to a new server. The structure and fields of a .json file are self-explanatory and consistent with the user interface.

However, there are several features that will be discussed below.

Environment variables usage

In the .json configuration, you can use environment variables that will be automatically substituted upon import.

The syntax for environment variables is:

$(%environment_variable_name%)

An environment variable can be either a system variable or one created by the user in the script that calls the --import-job command.
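The substitution can be pictured as a simple search-and-replace over the template text. The following Python sketch is an illustration only, based on the documented $(%...%) syntax; the function name and regex are assumptions, not part of SqlBak:

```python
import os
import re

def expand_placeholders(text: str) -> str:
    """Replace every $(%NAME%) token with the value of the
    environment variable NAME (hypothetical re-implementation)."""
    return re.sub(
        r"\$\(%(\w+)%\)",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        text,
    )

os.environ["COMPUTERNAME"] = "SERVER01"  # normally set by Windows itself
print(expand_placeholders("/backups/$(%COMPUTERNAME%)"))  # → /backups/SERVER01
```

A placeholder whose variable is not set is left unchanged here; whether the real importer does the same is not documented.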

Examples of using environment variables
Substitution of the computer name in the path of storing backups in the destination
If you export a template and then import it on several different servers, all backups will be sent to the same directory in the destination. To avoid this, substitute something unique into the path field. For example, it can be the system name of the server, which is stored in the COMPUTERNAME operating system environment variable.
....................
....................
"destinations": [
    {
      "id": 25602,
      "destination_type": "amazon_s3",
      "is_emergency": false,
      "is_verify_after_upload": true,
      "full_backups_keep_months": 6,
      "full_backups_keep_days": 0,
      "inc_backups_keep_months": 0,
      "inc_backups_keep_days": 0,
      "store_backup_types": [
        "database_full",
        "folder",
        "database_incremental"
      ],
      "path": "/backups/$(%COMPUTERNAME%)",
      "upload_threads_count": 4
    }
  ],
....................
....................
Setting a custom database name

If you need to back up only a specific database, specify the DATABASE_NAME environment variable in the .json file in the databases_backup_settings -> databases section.

"databases_backup_settings": {
  "databases": [
    "$(%DATABASE_NAME%)"
  ],
  "excluded_databases": [],
  "is_all_non_system_databases": false
},

Now you need to create an environment variable DATABASE_NAME that will contain the name of the database and then run the import.

set DATABASE_NAME=AdventureWorks
SqlBak.Job.Cli.exe --import-job --secret-key=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --file=sqlbak_job_1234.json
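The effect of the two commands above can be sketched end-to-end in Python. This is only an illustration of what the importer effectively does with the template; the substitution logic itself is an assumption based on the documented $(%...%) syntax:

```python
import json
import os
import re

# Fragment of the exported template with a placeholder, as shown above
template = '''{
  "databases_backup_settings": {
    "databases": ["$(%DATABASE_NAME%)"],
    "excluded_databases": [],
    "is_all_non_system_databases": false
  }
}'''

# Equivalent of `set DATABASE_NAME=AdventureWorks` in the calling script
os.environ["DATABASE_NAME"] = "AdventureWorks"

# Substitute $(%NAME%) tokens with environment variable values
expanded = re.sub(
    r"\$\(%(\w+)%\)",
    lambda m: os.environ.get(m.group(1), m.group(0)),
    template,
)

config = json.loads(expanded)
print(config["databases_backup_settings"]["databases"])  # → ['AdventureWorks']
```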

Backup Destinations

In SqlBak, one backup destination can be used in several jobs at once. For now, automated deployment can only use previously configured destinations.

First, create a backup job at sqlbak.com and configure all the needed destinations there; then you can refer to them in the template by ID.
....................
....................
"destinations": [
    {
      "id": 25602,
      "destination_type": "amazon_s3",
      "is_emergency": false,
....................
....................

DBMS Connection

The DBMS connection in SqlBak is configured separately from a job. When configuring a backup job, you have to specify an already configured DBMS connection. You can create a DBMS connection automatically via SqlBak.Job.Cli.exe. In the .json configuration, you refer to the configured connection using two fields: dbms_type and name.

{
  "job_type": "backup",
  "name": "New Backup job",
  "dbms_connection": {
    "dbms_type": "sql_server_local",
    "name": "."
  },
....................
....................
  • dbms_type – can take one of the following values: sql_server_local, mysql, postgresql, sql_azure, sql_server_amazon, sql_server_remote, mysql_phpmyadmin
  • name – the connection name, which matches the server address. This can be an IP address, a domain name, or localhost; for a local SQL Server, a dot (.) can also be specified.
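A small sketch that checks a dbms_connection fragment against the values listed above before importing. The validate_connection helper is hypothetical and not part of the SqlBak CLI:

```python
# Allowed dbms_type values, as listed in the documentation above
DBMS_TYPES = {
    "sql_server_local", "mysql", "postgresql", "sql_azure",
    "sql_server_amazon", "sql_server_remote", "mysql_phpmyadmin",
}

def validate_connection(conn: dict) -> bool:
    """Return True if the fragment names a known DBMS type
    and a non-empty address (hypothetical pre-import check)."""
    return conn.get("dbms_type") in DBMS_TYPES and bool(conn.get("name"))

# A local SQL Server connection, as in the template above
print(validate_connection({"dbms_type": "sql_server_local", "name": "."}))     # → True
print(validate_connection({"dbms_type": "oracle", "name": "db.example.com"}))  # → False
```

Running such a check before calling --import-job can catch a typo in the template earlier than the 120-second import timeout would.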
