B2 CLI Guide
This article covers the calls available in the B2 Command-Line Tool (CLI), beyond the list of commands in the installation articles. We will
be referencing the output from the b2 help call in the CLI rather than the B2 docs, and will go into a bit more detail than the individual help
guides for each call.
b2 authorize-account
b2 cancel-all-unfinished-large-files
b2 cancel-large-file
b2 clear-account
b2 copy-file-by-id
b2 create-bucket
b2 create-key
b2 delete-bucket
b2 delete-file-version
b2 delete-key
b2 download-file-by-id
b2 download-file-by-name
b2 get-account-info
b2 get-bucket
b2 get-file-info
b2 get-download-auth
b2 get-download-url-with-auth
b2 hide-file
b2 list-buckets
b2 list-keys
b2 list-parts
b2 list-unfinished-large-files
b2 ls
b2 make-url
b2 make-friendly-url
b2 sync
Deletions at the destination:
Sync Exclusions
REGEX Formatting
File Comparisons
Example of compareVersions/compareThreshold
Server-Side Encryption
b2 update-bucket
b2 upload-file
b2 update-file-legal-hold
b2 update-file-retention
Note: The calls in the CLI will be a bit different than their typical counterparts as listed in the B2 docs. Aside from different mechanics
and behavior, the name of the call is entered into the CLI a little differently than the name in the B2 docs: there is no “-” or “_” immediately
after “b2” (just a space), and the rest of the command name uses hyphens instead of underscores.
*Below, each call’s synopsis shows optional portions of the call in [square brackets] and required portions without brackets.
All calls have [-h], which brings up a short guide on how to use the command in the CLI.
All calls can include the optional --verbose argument, which will display the step-by-step process carried out by the API. If there is an error or issue along the
way, it will be shown in the verbose output.
b2 authorize-account
Required information:
Notes:
There are two ways to call this command:
1. Type in the call with all of the arguments and press enter to authenticate.
2. Type in the call without any of the arguments and press enter. The CLI will ask you to input each argument individually.
a. When inputting the applicationKey, it will not be displayed in the terminal or command window.
b. This method is useful if you are sharing your screen with others and you do not want them to see your applicationKey.
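For reference, the first method looks like this on one line (the angle-bracket values are placeholders for your own credentials):
b2 authorize-account <applicationKeyId> <applicationKey>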
Output:
When successful, there will be no output.
If it fails, it will tell you “ERROR: unable to authorize account: Invalid authorization token. Server said: (bad_auth_token)”.
Link to screenshot
b2 cancel-all-unfinished-large-files
Required information:
bucketName - Name of the bucket whose unfinished large files will be canceled.
Notes:
This will list the fileId for all unfinished large files and then will cancel and delete all parts of them.
Output:
Input Output
b2 cancel-all-unfinished-large-files mattttata
Link to screenshot
b2 cancel-large-file
Required information:
fileId - The fileId of the unfinished large file that will be canceled.
This can be found with b2 list-unfinished-large-files.
Notes:
This will list the fileId for the unfinished large file and then will cancel and delete all parts of it.
Input Output
b2 cancel-large-file 4_z4015dbad2609f66a7780071b_f201b37dab303575e_d20210426_m191704_c000_v0001158_t0041
Link to screenshot
b2 clear-account
b2 clear-account [-h]
No required information.
Notes:
This will just invalidate the current session’s authorization token. This is essentially logging out of the b2 cli.
Output:
b2 copy-file-by-id
b2 copy-file-by-id [-h] [--metadataDirective {copy,replace}] [--contentType CONTENTTYPE] [--range RANGE] [--info INFO] [--destinationServerSideEncryption {SSE-B2}] [--destinationServerSideEncryptionAlgorithm {AES256}] sourceFileId destinationBucketName b2FileName
Required information:
Notes:
This call can be used to copy a file in one bucket into the same or another bucket within the same account. It cannot be used to move a file
from one account to another.
When entering the b2FileName, a prefix can be added as a folder path. If the folder path does not exist yet, it will be created when the call is
sent.
Optional arguments:
The option [--metadataDirective {copy,replace}] will allow you to either copy or replace the metadata.
By default, it copies the file info and content-type. You can replace those by setting the [--metadataDirective {copy,replace}] to replace.
For this call, [--contentType CONTENTTYPE] and [--info INFO] should only be provided (and MUST be provided) if [--metadataDirective
{copy,replace}] is set to replace.
[--contentType CONTENTTYPE] - This is to set the content-type to something other than the original metadata. If the content-type is to
stay the same, it should still be stated in the call. Here is a list of possible content-types: https://www.backblaze.com/b2/docs/content-types.
html
[--info INFO] - These are variables and values chosen and defined by the user.
This is to be formatted as VARIABLE=VALUE. Example below.
If more than one VARIABLE=VALUE is to be specified, an additional --info entry must be made. Example below.
If --metadataDirective is left out, it will simply default to copy.
The option [--range RANGE] allows you to specify a part of a file to be copied.
The option [--destinationServerSideEncryption {SSE-B2}] will set Server-Side Encryption for the file at the file level.
This does not require Default Server-Side Encryption enabled on the bucket.
This will default to setting [--destinationServerSideEncryptionAlgorithm {AES256}] to AES256, so this is not a required argument to include
when setting --destinationServerSideEncryption SSE-B2.
Output:
Input Output
b2 copy-file-by-id 4_zd0c5ab2db6c916ea7780071b_f103f3baad8578773_d20210417_m200400_c000_v0001034_t0021 mattttata newfolder/bird.jpeg
Link to screenshot
For multiple --info entries, the call will look like this:
Link to screenshot
Link to screenshot
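As a hypothetical example, a call that replaces the metadata and passes two --info entries could look like this (the source fileId, bucket name, content type and info values are placeholders):
b2 copy-file-by-id --metadataDirective replace --contentType image/jpeg --info color=blue --info rating=5 <sourceFileId> mattttata newfolder/bird.jpeg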
b2 create-bucket
Required information:
Notes:
In the B2 docs for the CLI, the required bucketType argument shows [allPublic | allPrivate] wrapped in square
brackets, but the choice between allPublic and allPrivate is required. You must choose one or the other, not both.
Optional arguments:
The following optional arguments need values in JSON format. There are some rules for JSON formatting.
If adding more than one VARIABLE:VALUE pair, the pairs must be separated with a comma.
If a VALUE is to be treated as an integer (number), it shouldn’t be in double quotes. If a VALUE is to be treated as a string (text), it should be
in double quotes.
[--bucketInfo BUCKETINFO] - These are user chosen and user defined values. They can be named anything and be anything the user wants.
BUCKETINFO Format:
The BUCKETINFO part must be replaced with a JSON formatted as ‘{“VARIABLE”: “VALUE”}’
If a VALUE is to be treated as an integer (number), it shouldn’t be in double quotes. If a VALUE is to be treated as a string (text), it should be in
double quotes
Example:
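A hypothetical call that sets one string value and one integer value in the bucket info could look like this (bucket name and values are placeholders):
b2 create-bucket --bucketInfo '{"project": "photos", "year": 2021}' my-new-bucket allPrivate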
CORSRULES Format:
The CORSRULES part must be replaced with a JSON formatted as ‘[{“VARIABLE”: “VALUE”, “VARIABLE2”: “VALUE2”}]’
Since this JSON requires multiple VARIABLE:VALUE pairs, it must have those square brackets [ ] in the wrapping.
Example:
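As a sketch, assuming the --corsRules option that the CORSRULES value belongs to, a single rule allowing downloads from any HTTPS origin could look roughly like this (the rule name and values are placeholders; see the B2 CORS rules documentation for the full list of fields):
b2 create-bucket --corsRules '[{"corsRuleName": "downloadFromAnyOrigin", "allowedOrigins": ["https"], "allowedHeaders": ["range"], "allowedOperations": ["b2_download_file_by_id", "b2_download_file_by_name"], "maxAgeSeconds": 3600}]' my-new-bucket allPublic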
LIFECYCLERULES Format:
The LIFECYCLERULES part must be replaced with a JSON formatted as ‘[{“VARIABLE”: “VALUE”, “VARIABLE2”: “VALUE2”}]’
Since this JSON requires multiple VARIABLE:VALUE pairs, it must have those square brackets [ ] in the wrapping.
Example:
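A hypothetical lifecycle rule that hides files under logs/ after 30 days and deletes hidden versions one day later could look like this (bucket name, prefix and day counts are placeholders):
b2 create-bucket --lifecycleRules '[{"daysFromUploadingToHiding": 30, "daysFromHidingToDeleting": 1, "fileNamePrefix": "logs/"}]' my-new-bucket allPrivate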
The option [--defaultServerSideEncryption {SSE-B2,none}] will set Server-Side Encryption for the bucket.
If the value chosen is SSE-B2 it will enable Server-Side Encryption for the bucket. This will not encrypt non-encrypted objects that are already
in the bucket.
If the value chosen is none it will disable Server-Side Encryption for the bucket. This will not decrypt encrypted objects that are already in the
bucket.
The option [--defaultServerSideEncryptionAlgorithm {AES256}] doesn’t have an effect on the call as enabling Server-Side Encryption with --
defaultServerSideEncryption SSE-B2 will set --defaultServerSideEncryptionAlgorithm to AES256 automatically.
Example:
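For example, enabling default Server-Side Encryption when creating a bucket could look like this (the bucket name is a placeholder):
b2 create-bucket --defaultServerSideEncryption SSE-B2 my-encrypted-bucket allPrivate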
Output:
The call will give back the newly created bucket’s ID. Adding the optional arguments will not change the output.
Input Output
b2 create-key
b2 create-key [-h] [--bucket BUCKET] [--namePrefix NAMEPREFIX] [--duration DURATION] keyName capabilities
Required information:
Notes:
When making this call, the current application key must possess both read and write key capabilities. Although the specific
permission is called writeKeys, it covers both reading and writing keys.
Permissions:
The list of possible permissions as stated on https://www.backblaze.com/b2/docs/b2_create_key.html is not the full list of permissions that you
can list in this call. You are able to list any and all possible permissions, ranging from the ones required for Object Lock and Server-Side Encryption
to listAllBucketNames, which is required to use the S3 Compatible API when creating an application key restricted to a single bucket with [--
bucket BUCKET].
Optional arguments:
[--bucket BUCKET] - Restricts the new application key so that it can only access the specified bucket.
[--namePrefix NAMEPREFIX] restricts file access to files whose names start with the prefix or file path.
[--duration DURATION] will set a timer for the application key that starts after the key is created. When the timer runs out, the key will be deleted
and will no longer be usable. If this is left out of the call, the key will not expire and will not be deleted until the user manually deletes it through
the CLI or the web UI.
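For example, a key restricted to one bucket, limited to a photos/ prefix, and expiring after one day (86400 seconds) could look like this (the key name, bucket, prefix and capability list are placeholders):
b2 create-key --bucket my-bucket --namePrefix photos/ --duration 86400 my-restricted-key listFiles,readFiles,writeFiles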
Output:
The call will give back both the keyId and applicationKey. Optional arguments will not change the output of this call.
Input Output
b2 delete-bucket
Required information:
Notes:
The bucket that will be deleted must be empty and free of any files.
The placeholder file called “.DS_Store” left over from the cli command b2 sync with optional [--delete] and [--allowEmptySource] will count as a
file that prevents the bucket from being deleted. It can easily be deleted in the web UI or with the b2 delete-file-version so that you can
delete the bucket.
Input Output
b2 delete-bucket mattyooo When successful, this call will not give any response.
b2 delete-bucket himatt
This call failed because the bucket had files in it.
Link to screenshot
b2 delete-file-version
Required information:
Notes:
Each version of the same file has its own unique fileId, so if there is a specific version of a file that you want to delete, use that version’s fileId.
Optional arguments:
Specifying the [fileName] is more efficient than leaving it out. If you omit the [fileName], it requires an initial query to B2 to get the file name,
before making the call to delete the file. This extra query requires the readFiles capability.
Output:
Input Output
b2 delete-file-version 4_zb065ab9da639e6ba7780071b_f114197c886ccb92e_d20210419_m214832_c000_v0001081_t0051
Link to screenshot
b2 delete-key
Required information:
Output:
Input Output
b2 delete-key 0002afe34c8bd690000000066
Link to screenshot
b2 download-file-by-id
Required information:
This call will download the file to the location that the CLI is currently in. However, a full file path plus file name can take the place of
localFileName.
When entering the b2FileName, a prefix can be added as a folder path if the file is nested in one or more subfolders.
When entering the localFileName a prefix can be added as a folder path. If the folder path does not exist yet, a new folder or new set of
folders will be created when the call is sent.
Optional arguments:
The argument [--noProgress] will simply do the same thing but will not show a progress bar for the download.
Output:
The output will give back a progress bar and will give back information about the downloaded file if successful.
Input Output
b2 download-file-by-id 4_z80359b0d1649c6da7780071b_f11653d1f8a863c74_d20210414_m183136_c000_v0001072_t0002 download.jpeg
Link to screenshot
b2 download-file-by-id 4_z80359b0d1649c6da7780071b_f11653d1f8a863c74_d20210414_m183136_c000_v0001072_t0002 /Users/matto/desktop/download.jpeg
Link to screenshot
b2 download-file-by-name
Required information:
bucketName - Name of the bucket that the file will be downloaded from.
b2FileName - The full path plus file name that will be downloaded.
localFileName - User chosen local file name.
Does not have to match the original.
Notes:
This call will download the file to the location that the CLI is currently in. However, a full file path can be added as a prefix to the localFileName.
When entering the b2FileName, a prefix can be added as a folder path if the file is nested in one or more subfolders.
When entering the localFileName a prefix can be added as a folder path. If the folder path does not exist yet, a new folder or new set of
folders will be created when the call is sent.
Optional arguments:
The argument [--noProgress] will simply do the same thing but will not show a progress bar for the download.
Output:
The output will give back a progress bar and will give back information about the downloaded file if successful.
Input Output
b2 download-file-by-name matt-cli-bucket pasta.jpeg pastadl.jpeg
Link to screenshot
b2 download-file-by-name matt-cli-bucket dog.jpeg dog.jpeg
Link to screenshot
b2 get-account-info
b2 get-account-info [-h]
No required information.
Notes:
Despite the name of the call, this will not give back any information about the Backblaze account. This will actually only give back information
about the applicationKey, the current session’s authentication token, its permissions, apiUrl and downloadUrl.
Input Output
b2 get-account-info
Link to screenshot
b2 get-bucket
Required information:
Notes:
This is a lot like b2 get-file-info in that it returns information about the bucket.
Optional arguments:
[--showSize] - This will add totalSize to the bottom of the list and will display the total size of the bucket in bytes.
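For example (the bucket name is a placeholder):
b2 get-bucket --showSize my-bucket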
Output:
Input Output
Link to screenshot
b2 get-file-info
Required information:
Notes:
This will give general information about the file stored in B2.
Input Output
b2 get-file-info 4_zd0c5ab2db6c916ea7780071b_f106e7bb75a1ae557_d20210420_m210722_c000_v0001079_t0016
Link to screenshot
b2 get-download-auth
Notes:
This will return an authorization token that can be passed to b2_download_file_by_name (the B2 Native API call, not the CLI version) through the
Authorization header, allowing another user or machine to download a file or files from the bucket.
Optional Arguments:
[--prefix PREFIX] - You can add a file path in the bucket that will restrict the authorization token to only download files from that specific folder
path.
The prefix does not have to match an existing folder or set of folders within the bucket. If done this way, the call will still return an authorization
token that is only allowed to download objects within the non-existent folder or set of folders. In other words, this token will not have access to
any objects in the bucket if the prefix does not exist.
[--duration DURATION] - Specifies how long the auth token is valid for in seconds.
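For example, a token limited to the photos/ folder and valid for one day (86400 seconds) could be requested like this (the bucket name and prefix are placeholders):
b2 get-download-auth --prefix photos/ --duration 86400 my-bucket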
Output:
All variations of this call will only return the authorization token.
Input Output
Link to screenshot
b2 get-download-url-with-auth
Required information:
bucketName - Name of bucket the url will download the file from.
fileName - Name of the file including file prefix/path if applicable.
Notes:
Similar to [--prefix PREFIX] in b2 get-download-auth, the fileName does not need to exist for the CLI to return a url. When the URL is
accessed in an attempt to download a file that does not exist in the bucket but is specified with fileName, the browser will give back a 404 error.
Optional Argument:
[--duration DURATION] - Specifies how long the authorized URL is valid for in seconds.
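For example, a URL valid for one hour (3600 seconds) could be requested like this (the bucket and file names are placeholders):
b2 get-download-url-with-auth --duration 3600 my-bucket photos/cat.jpg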
Output:
All variations of this call will only return the authorized download URL.
Input Output
Link to screenshot
b2 hide-file
Required information:
bucketName - Name of the bucket the file to be hidden is stored in.
fileName - Name of the file including file prefix/path if applicable.
Notes:
This will upload a hide marker that is treated as a new version of the same file; the original version will still exist, but the file will no longer appear in normal file listings.
Input Output
Link to screenshot
b2 list-buckets
No required information.
Notes:
Unlike the B2 docs for this call, optional arguments such as bucketId, bucketName and bucketTypes are not available in the CLI. In the API, those
options filter the response down to matching buckets; in the CLI, all buckets will be given back in the response.
Optional arguments:
The CLI version has an optional argument [--json] which formats the list of buckets in a machine-readable output and will also include more
bucket information.
Input Output
b2 list-buckets
Link to screenshot
b2 list-buckets --json
Link to screenshot
b2 list-keys
b2 list-keys [-h] [--long]
No required information.
Notes:
This will list all keyIds created in the account. It will NOT list the applicationKey secrets.
Optional argument:
The argument [--long], on top of listing all keyIds, will also list bucket restriction, prefix restriction, duration, as well as all capabilities/permissions.
Output:
Input Output
b2 list-keys
Link to screenshot
b2 list-keys --long
Link to screenshot
b2 list-parts
Required information:
largeFileId - File ID of the large file whose individual parts you want to list.
This can be found with b2 list-unfinished-large-files.
Notes:
This will only work for a large file upload that has not been finished or canceled.
Input Output
b2 list-parts 4_z4015dbad2609f66a7780071b_f208a668d46a15aad_d20210426_m205757_c000_v0001080_t0027
Link to screenshot
This call resulted in an ERROR because the fileId belonged to a
file that has already finished uploading.
b2 list-unfinished-large-files
Required information:
bucketName - Name of the bucket whose unfinished large files will be listed.
Output:
It will list the fileId, the fileName, contentType, and lastModified for each unfinished large file.
Input Output
b2 list-unfinished-large-files mattta
Link to screenshot
b2 ls
Required information:
With bucketName alone, this call will list the contents at the top level of the bucket.
Optional arguments:
If there are desired contents nested in a folder, the optional arguments [--recursive] and/or [folderName] will need to be included in the call.
When making the call with [--recursive] without a specified [folderName], it will list everything nested in each folder at the top level.
When making the call with [folderName], it must either be a single folder name or a full folder path. It will list just the contents in the top level
of the specified folder or ending folder of the full folder path.
When making the call with both [--recursive] and [folderName], it will list the contents at the top level and everything nested in the specified
folder or the ending folder of the full folder path.
If included, the [folderName] must come after bucketName.
If you need further information about the contents in the bucket or bucket/folder, include the argument [--long]. This will display the file ID,
upload date/time, file size and file name.
Bucket contents for the examples below:
matt-cli-bucket
/example/
animals/
dog.jpeg
pasta.jpeg
Output:
Input Output
b2 ls matt-cli-bucket
Link to screenshot
b2 ls matt-cli-bucket example
Link to screenshot
b2 ls matt-cli-bucket example/animals
Link to screenshot
b2 ls --recursive matt-cli-bucket
Link to screenshot
b2 ls matt-cli-bucket --long
b2 make-url
Required information:
fileId - File ID for the file that the URL will link to.
Notes:
This will create a download link to the file with the fileId in the URL. The URL will be the same as the “Native URL” in the file info when
browsing the bucket through the web UI. Here is an example.
If the bucket is private, this will successfully create a download link to the file but the link cannot be accessed without authorization. Here is an
example of what you would see if you try to access a file from a private bucket.
Output:
Input Output
b2 make-url 4_z327a4f1ee3e42c586bcd0619_f11811702e1d33ee9_d20191231_m172906_c000_v0001064_t0027
Link to screenshot
b2 make-friendly-url
Required information:
Notes:
This will create a download link to the file with the bucketName and fileName in the URL. The URL will be the same as the “Friendly URL”
in the file info when browsing the bucket through the web UI. Here is an example.
If the bucket is private, this will successfully create a download link to the file but the link cannot be accessed without authorization. Here is an
example of what you would see if you try to access a file from a private bucket.
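For example (bucket and file names are placeholders, and the download cluster number in the returned URL varies by account):
b2 make-friendly-url my-bucket photos/cat.jpg
This would return a URL along the lines of https://f000.backblazeb2.com/file/my-bucket/photos/cat.jpg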
Output:
Input Output
Link to screenshot
b2 sync
b2 sync [-h] [--noProgress] [--dryRun] [--allowEmptySource] [--excludeAllSymlinks] [--threads THREADS] [--compareVersions {none,modTime,size}] [--compareThreshold MILLIS] [--excludeRegex REGEX] [--includeRegex REGEX] [--excludeDirRegex REGEX] [--excludeIfModifiedAfter TIMESTAMP] [--destinationServerSideEncryption {SSE-B2}] [--destinationServerSideEncryptionAlgorithm {AES256}] [--skipNewer | --replaceNewer] [--delete | --keepDays DAYS] source destination
Required information:
source - The source folder that contents will be copied from.
destination - The destination folder to which the contents will be copied.
Notes:
One of the locations has to be a local folder and the other has to be a B2 location. Since this call can work either to or from a
bucket, the B2 location’s path must start with b2:// to denote a network path to the bucket.
When entering the destination, a prefix can be added as a folder path. If the folder path does not exist yet, a new folder or new set of folders
will be created when the call is sent.
Optional arguments:
This call has a LOT of optional arguments, so these will be split into several sections.
[--noProgress] will simply do the same thing but will not show a progress bar for the upload or download.
[--dryRun] will simulate the syncing process without actually uploading or downloading data.
--delete will delete whatever data already exists in the destination folder. If this argument is not included, the contents of the source will be
copied and added to the destination.
--keepDays DAYS will delete versions of any file older than the specified age in DAYS.
[--allowEmptySource] allows the source to be an empty folder; when paired with --delete, the contents of the destination will be deleted if the source is indeed empty.
Together, --allowEmptySource and --delete are a helpful way to clear a bucket entirely: you can sync an empty local folder to the bucket with both options. More information on using b2 sync to delete the contents of a bucket can be found here: https://help.backblaze.com/hc/en-us/articles/225556127-How-Can-I-Easily-Delete-All-Files-in-a-Bucket-
When you use these two optional arguments to delete the contents of a folder or the bucket itself, it will also upload a placeholder file called “.DS_Store”.
[--skipNewer | --replaceNewer] - Files at the source that have a newer modification time are always copied to the destination. If the
destination file is newer, the default is to report an error and stop.
--skipNewer will tell the sync job to ignore files with a newer modification time at the destination.
--replaceNewer will tell the sync job to replace files with a newer modification time at the destination with the files with an older
modification time at the source.
To make the destination EXACTLY match the source, you can use the options --replaceNewer and --delete together.
To make the destination match the source, but retain previous versions for 30 days:
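Hypothetical calls for these two scenarios could look like the following (the local path and bucket path are placeholders):
b2 sync --delete --replaceNewer ~/photos b2://my-bucket/photos
b2 sync --keepDays 30 --replaceNewer ~/photos b2://my-bucket/photos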
Sync Exclusions
[--excludeRegex REGEX] can be used to specify a “regular expression” to exclude certain files, file types or folders from being uploaded or
downloaded. The regular expression should be formatted a certain way.
[--includeRegex REGEX] can be used to have --excludeRegex make exceptions for the rules specified in its REGEX.
[--excludeDirRegex REGEX] - Folders excluded by --excludeDirRegex will not be included even if it matches a REGEX specified by --
includeRegex.
REGEX Formatting
'(.*/file.txt)' '(.*/folder)'
'(.*/file1.txt)|(.*/file2.txt)' '(.*/folder1)|(.*/folder2)'
To target file by extension: To also target items at the top level of the folder
'(.*/.txt)' '(.*/*folder)'
'(.*/*file.txt)'
'(.*/*.txt)'
'(.*\file.txt)' '(.*\folder)'
'(.*\file1.txt)|(.*\file2.txt)' '(.*\folder1)|(.*\folder2)'
To target file by extension: To also target items at the top level of the folder
'(.*\.txt)' '(.*\*folder)'
'(.*\*file.txt)'
'(.*\*.txt)'
Notes:
Folders and files can be specified in the same REGEX input.
Folder and file names can be partial and everything with that partial name will be targeted.
By default, REGEX will not target folders or files in the top level of the source location you are syncing.
In order to have REGEX work for folders or files in the top level of the source location, there are a couple ways to achieve this. Some
examples:
Targeting the top level items per REGEX item
Adding the wildcard right after the forward slash will make the sync not target those items at the top level of the source as well
as other places the REGEX shows up in the source.
'(.*/*cars)|(.*/*.DS_Store)|(.*/food)' will target ALL folders called cars and ALL files called .DS_Store. That goes
for everything at the top level as well. Since (.*/food) does not have the * right after the forward slash, it will not be excluded
if it is at the top level of the source.
'(.*/*cat.jpg)' will target ALL cat.jpg files (both top and sub level).
'[(.*/cars)|(.*/.DS_Store)|(.*/food)]' will target only TOP LEVEL folders called cars, TOP LEVEL files called .DS_Store
and TOP LEVEL folders called food.
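For example, a sync that skips every .DS_Store file anywhere in the source could look like this (the local path and bucket are placeholders, and the regular expression is just one way to write the rule):
b2 sync --excludeRegex '(.*\.DS_Store)' ~/photos b2://my-bucket/photos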
[--excludeIfModifiedAfter TIMESTAMP] - You can specify --excludeIfModifiedAfter to selectively ignore file versions (including hide markers)
which were synced after the given time (for a local source) or to ignore only specific file versions (for a b2 source). Ignored files or file versions will not be
taken into consideration during sync. The time should be given as a seconds timestamp (e.g. "1367900664"). If you need millisecond precision,
put it after the decimal point (e.g. "1367900664.152").
File Comparisons
By default, a file is the same if the name and modification time are the same as a file that already exists in the destination. If a file is the same, it
will not be synced.
[--compareVersions {none,modTime,size}] will tell the sync job to determine whether a file is the same based on your choice for this optional
argument.
[--compareThreshold MILLIS] is to be used when --compareVersions is set to either modTime or size. This will tell --compareVersions to do a
fuzzy comparison of files within a threshold of all files that already exist in a bucket based on what --compareVersions is set to. If a file’s
modification time or size is within what is set with --compareThreshold, it will be considered the same and will not be synced.
Example of compareVersions/compareThreshold
Say we have a bucket with a file dog.jpeg that is 10 KB (10,000 bytes), plus a few other files that don’t change. If the original file is changed and is
now 10.05 KB (10,050 bytes), a sync job with b2 sync will determine that this file is different because of its difference in size.
If this 50-byte difference is not big enough for the user to want to upload via b2 sync, it can be ignored by setting --compareVersions to size
and --compareThreshold to 100 (bytes). An example of the call is below.
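A call matching the example above could look like this (the local path and bucket are placeholders):
b2 sync --compareVersions size --compareThreshold 100 ~/photos b2://my-bucket/photos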
Server-Side Encryption
[--destinationServerSideEncryption {SSE-B2}] - When the destination is a B2 bucket, this will encrypt all new files that get uploaded to it. This will
not encrypt anything that already exists in the bucket.
If the intention is to encrypt everything in the bucket with this call, it is better to create a new bucket and then sync the local folder to
the new bucket, or even to create a new folder within the bucket and resync the local source to that new folder. Everything newly uploaded will
be encrypted.
[--destinationServerSideEncryptionAlgorithm {AES256}] - This option is not necessary, as calling the sync job with --
destinationServerSideEncryption SSE-B2 will set the algorithm to AES256 automatically.
Note: The output of the sync job when using --destinationServerSideEncryption will look exactly the same as a standard sync job.
Output:
Input Output
Link to screenshot
Link to screenshot
This excludes objects with a specific file path. Can be full or partial paths.
Also works for files.
Link to screenshot
This does the same as the above but --includeRegex is used to make an exception for a specific folder. Also works for files.
Link to screenshot
The destination in this example has a couple files that are unchanged, and a file called bread.jpeg that is 40,100 bytes. The source currently has
that bread.jpeg file but it was modified and is now 39,300 bytes.
Link to screenshot
If we run a sync with --compareVersions set to size and a --compareThreshold larger than the 800-byte difference, the sync job no longer wants
to pick up the modified bread.jpeg.
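Such a call could look like this (the local path and bucket are placeholders):
b2 sync --compareVersions size --compareThreshold 1000 ~/local-folder b2://my-bucket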
b2 update-bucket
Required information:
Notes:
In the B2 docs for the CLI, the required bucketType argument shows [allPublic | allPrivate] wrapped in square
brackets, but the choice between allPublic and allPrivate is required. You must choose one or the other, not both.
Optional arguments:
The following optional arguments need values in JSON format. There are some rules for JSON formatting.
If adding more than one VARIABLE:VALUE pair, the pairs must be separated with a comma.
If a VALUE is to be treated as an integer (number), it shouldn’t be in double quotes. If a VALUE is to be treated as a string (text), it should be
in double quotes.
[--bucketInfo BUCKETINFO] - These are user chosen and user defined values. They can be named anything and be anything the user wants.
BUCKETINFO Format:
The BUCKETINFO part must be replaced with a JSON formatted as ‘{“VARIABLE”: “VALUE”}’
If a VALUE is to be treated as an integer (number), it shouldn’t be in double quotes. If a VALUE is to be treated as a string (text), it should be in
double quotes
CORSRULES Format:
The CORSRULES part must be replaced with a JSON formatted as ‘[{“VARIABLE”: “VALUE”, “VARIABLE2”: “VALUE2”}]’
Since this JSON requires multiple VARIABLE:VALUE pairs, it must have those square brackets [ ] in the wrapping.
For allowedHeaders, allowedOrigins and allowedOperations, the values must be wrapped in [“ ”], like this: “allowedHeaders”: [“range”]
[--lifecycleRules LIFECYCLERULES] - This is to set Lifecycle Rules. For more information: https://www.backblaze.com/b2/docs/lifecycle_rules.
html
LIFECYCLERULES Format:
The LIFECYCLERULES part must be replaced with a JSON formatted as ‘[{“VARIABLE”: “VALUE”, “VARIABLE2”: “VALUE2”}]’
Since this JSON requires multiple VARIABLE:VALUE pairs, it must have those square brackets [ ] in the wrapping.
The option [--defaultServerSideEncryption {SSE-B2,none}] will set Server-Side Encryption for the bucket.
If the value chosen is SSE-B2 it will enable Server-Side Encryption for the bucket. This will not encrypt non-encrypted objects that are already
in the bucket.
If the value chosen is none it will disable Server-Side Encryption for the bucket. This will not decrypt encrypted objects that are already in the
bucket.
The option [--defaultServerSideEncryptionAlgorithm {AES256}] doesn’t have an effect on the call as enabling Server-Side Encryption with --
defaultServerSideEncryption SSE-B2 will set --defaultServerSideEncryptionAlgorithm to AES256 automatically.
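For example, enabling default Server-Side Encryption on an existing bucket while keeping it private could look like this (the bucket name is a placeholder):
b2 update-bucket --defaultServerSideEncryption SSE-B2 my-bucket allPrivate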
Output:
Input Output
Link to screenshot
Link to screenshot
Link to screenshot
Link to screenshot
b2 upload-file
b2 upload-file [-h] [--noProgress] [--quiet] [--contentType CONTENTTYPE] [--minPartSize MINPARTSIZE] [--sha1 SHA1] [--threads THREADS] [--info INFO] [--destinationServerSideEncryption {SSE-B2}] [--destinationServerSideEncryptionAlgorithm {AES256}] bucketName localFilePath b2FileName
Required information:
Notes:
Unlike the normal B2 docs call, the CLI version of this call will not require the authorization token nor the upload URL input. It will handle the b2
get-upload-url call and plug in the URL and upload auth token automatically.
When entering the b2FileName, a prefix can be added as a folder path. If the folder path does not exist yet, a new folder or new set of folders
will be created when the call is sent.
Optional arguments:
[--noProgress] will simply do the same thing but will not show a progress bar for the upload.
[--quiet] will do the same thing but will not list either of the URLs for the object in the bucket.
[--contentType CONTENTTYPE] will set the content-type for the file uploaded. If this is left out of the upload call, it will default according to the
file’s extension.
[--sha1 SHA1] allows you to tell the call the sha1 of the file to be uploaded. Without this argument, the call will automatically calculate it for you.
This is useful to make the processing and computing for many upload calls take less time in total. If making this call one at a time, it’s easier to let
the call do it for you.
[--info INFO] - These are variables and values chosen and defined by the user.
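For example, an upload that sets the content type and one custom info value could look like this (the bucket, paths and values are placeholders):
b2 upload-file --contentType image/jpeg --info author=matt my-bucket ./bird.jpeg photos/bird.jpeg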
Large Files:
If the file being uploaded is 200MB or larger, the CLI will default to treating the file as a large file and upload it in multiple threads. If not specified
by the optional [--threads THREADS], the CLI will default to 10 threads. Using the optional [--threads THREADS] argument on a file smaller than
200MB will not have an effect and the CLI will ignore that argument. You can set the size of the parts with [--minPartSize MINPARTSIZE]. If left
out, the CLI will choose the “recommendedPartSize” for you.
A file (larger than 200MB) will be broken up into parts and each part will be uploaded on its own thread to reduce the amount of time taken to
upload the file in whole.
When uploading a large file in parts, the CLI output will not look any different than it does when uploading a small file under 200MB.
If you want to test uploading a large file but don't have one large enough, you can create a dummy file of any size by running a command like the one
below.
Example:
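One way to do this on macOS or Linux, which may differ from the command in the original example, is with dd; this writes a 300 MB file of zeros (the file name and size are placeholders):
dd if=/dev/zero of=bigfile.bin bs=1048576 count=300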
Output:
Input Output
Link to screenshot
Link to screenshot
b2 update-file-legal-hold
Required information:
Notes:
This call will turn a legal hold (part of Object Lock) on or off for a specified file; while the hold is on, the file can’t be deleted.
In order to use this command, the bucket itself needs to have fileLockEnabled=true. Required capabilities for the application key are:
writeFileLegalHolds and (if the file name is not provided) readFiles.
Optional arguments:
Specifying the [fileName] is more efficient than leaving it out. If you omit the [fileName], it requires an initial query to B2 to get the file name
before making the call to update the legal hold. This extra query requires the readFiles capability.
Output:
If the API accepts the call and is successful, it will return nothing.
If the API doesn’t accept the call or fails, it will return the error and prompt that pertains to the issue (example below)
Input Output
b2 update-file-legal-hold 4_z521acf4e63241c987b5d0619_f111ad32d4d6d1cd7_d20210624_m202933_c000_v0001076_t0026 on
Link to screenshot
b2 update-file-legal-hold 4_z223abf4eb334ec486bbd0619_f1022ec06996c501e_d20220710_m202519_c000_v0001059_t0031_u01657484719094 on
Link to screenshot
b2 update-file-retention
Required information:
fileId - File ID for the file we want to update file retention for.
{governance,compliance,none} - Choose between governance, compliance or none.
Notes:
This call will set a retention period (Object Lock) on a specified file, and the file can’t be deleted until that period ends.
In order to use this command, the bucket itself needs to have fileLockEnabled=true. Required capabilities for the application key are:
writeFileRetentions and (if the file name is not provided) readFiles.
Optional arguments:
[--retainUntil TIMESTAMP] will lock the file from being deleted from a bucket until the specified time. The timestamp must be an integer
representing milliseconds since “epoch”.
In addition to the above, if the type of file retention is set to governance, in order to disable or shorten file retention, the user must be using an
application key with the capability bypassGovernance and the call must also pass the [--bypassGovernance] argument in the call.
If the type of file retention is set to compliance, the user will not be able to remove or shorten file retention for the file. The file must reach the
end of the retention period set before it can be deleted.
A date in epoch format is an integer that represents milliseconds elapsed from January 1, 1970 to a specified date.
For example, if you want a file to be locked until January 1, 2024 (00:00 UTC), you would set --retainUntil 1704067200000, where 1704067200000 is
the number of milliseconds from January 1, 1970 to January 1, 2024. An online epoch converter is a useful tool to calculate a date converted to epoch.
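For example, setting governance-mode retention on a file until January 1, 2024 could look like this (the fileId is a placeholder):
b2 update-file-retention --retainUntil 1704067200000 <fileId> governance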
Output:
If the API accepts the call and is successful, it will return nothing.
If the API doesn’t accept the call or fails, it will return the error and prompt that pertains to the issue
Input Output
Since a successful call doesn’t pass anything back, the example to the right
includes b2 get-file-info after.
Link to screenshot