API Documentation¶
- class borg.archiver.Archiver(lock_wait=None)[source]¶
- do_debug_dump_archive_items(args)[source]¶
dump (decrypted, decompressed) archive items metadata (not: data)
- get_args(argv, cmd)[source]¶
usually, just returns argv, except if we deal with an ssh forced command for borg serve.
- helptext = {'patterns': "\nExclusion patterns support four separate styles, fnmatch, shell, regular\nexpressions and path prefixes. If followed by a colon (':') the first two\ncharacters of a pattern are used as a style selector. Explicit style\nselection is necessary when a non-default style is desired or when the\ndesired pattern starts with two alphanumeric characters followed by a colon\n(i.e. `aa:something/*`).\n\n`Fnmatch <https://docs.python.org/3/library/fnmatch.html>`_, selector `fm:`\n\n These patterns use a variant of shell pattern syntax, with '*' matching\n any number of characters, '?' matching any single character, '[...]'\n matching any single character specified, including ranges, and '[!...]'\n matching any character not specified. For the purpose of these patterns,\n the path separator ('\\' for Windows and '/' on other systems) is not\n treated specially. Wrap meta-characters in brackets for a literal match\n (i.e. `[?]` to match the literal character `?`). For a path to match\n a pattern, it must completely match from start to end, or must match from\n the start to just before a path separator. Except for the root path,\n paths will never end in the path separator when matching is attempted.\n Thus, if a given pattern ends in a path separator, a '*' is appended\n before matching is attempted.\n\nShell-style patterns, selector `sh:`\n\n Like fnmatch patterns these are similar to shell patterns. The difference\n is that the pattern may include `**/` for matching zero or more directory\n levels, `*` for matching zero or more arbitrary characters with the\n exception of any path separator.\n\nRegular expressions, selector `re:`\n\n Regular expressions similar to those found in Perl are supported. Unlike\n shell patterns regular expressions are not required to match the complete\n path and any substring match is sufficient. It is strongly recommended to\n anchor patterns to the start ('^'), to the end ('$') or both. 
Path\n separators ('\\' for Windows and '/' on other systems) in paths are\n always normalized to a forward slash ('/') before applying a pattern. The\n regular expression syntax is described in the `Python documentation for\n the re module <https://docs.python.org/3/library/re.html>`_.\n\nPrefix path, selector `pp:`\n\n This pattern style is useful to match whole sub-directories. The pattern\n `pp:/data/bar` matches `/data/bar` and everything therein.\n\nExclusions can be passed via the command line option `--exclude`. When used\nfrom within a shell the patterns should be quoted to protect them from\nexpansion.\n\nThe `--exclude-from` option permits loading exclusion patterns from a text\nfile with one pattern per line. Lines empty or starting with the number sign\n('#') after removing whitespace on both ends are ignored. The optional style\nselector prefix is also supported for patterns loaded from a file. Due to\nwhitespace removal paths with whitespace at the beginning or end can only be\nexcluded using regular expressions.\n\nExamples:\n\n# Exclude '/home/user/file.o' but not '/home/user/file.odt':\n$ borg create -e '*.o' backup /\n\n# Exclude '/home/user/junk' and '/home/user/subdir/junk' but\n# not '/home/user/importantjunk' or '/etc/junk':\n$ borg create -e '/home/*/junk' backup /\n\n# Exclude the contents of '/home/user/cache' but not the directory itself:\n$ borg create -e /home/user/cache/ backup /\n\n# The file '/home/user/cache/important' is *not* backed up:\n$ borg create -e /home/user/cache/ backup / /home/user/cache/important\n\n# The contents of directories in '/home' are not backed up when their name\n# ends in '.tmp'\n$ borg create --exclude 're:^/home/[^/]+\\.tmp/' backup /\n\n# Load exclusions from file\n$ cat >exclude.txt <<EOF\n# Comment line\n/home/*/junk\n*.tmp\nfm:aa:something/*\nre:^/home/[^/]\\.tmp/\nsh:/home/*/.thumbnails\nEOF\n$ borg create --exclude-from exclude.txt backup /\n"}¶
- class borg.archiver.ToggleAction(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]¶
argparse action to handle “toggle” flags easily
toggle flags are in the form of --foo, --no-foo.
the --no-foo argument still needs to be passed to the add_argument() call, but it simplifies the --no detection.
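The idea can be sketched as a tiny argparse action (a hypothetical reimplementation for illustration; the real class's internals may differ):

```python
import argparse

class ToggleAction(argparse.Action):
    """Store True for --foo and False for --no-foo (illustrative sketch)."""
    def __call__(self, parser, namespace, values, option_string=None):
        # the option string that was actually used tells us the polarity
        setattr(namespace, self.dest, not option_string.startswith("--no-"))

parser = argparse.ArgumentParser()
# both spellings are registered in one add_argument() call, as noted above
parser.add_argument("--progress", "--no-progress", dest="progress",
                    action=ToggleAction, nargs=0, default=False)

print(parser.parse_args(["--progress"]).progress)     # True
print(parser.parse_args(["--no-progress"]).progress)  # False
```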
- borg.archiver.sig_info_handler(signum, stack)[source]¶
search the stack for info about the currently processed file and print it
- class borg.upgrader.AtticKeyfileKey(repository)[source]¶
backwards compatible Attic key file parser
- FILE_ID = 'ATTIC KEY'¶
- classmethod find_key_file(repository)[source]¶
copy of attic’s `find_key_file`_
this has two small modifications:
- it uses the above `get_keys_dir`_ instead of the global one, assumed to be borg’s
- it uses `repository.path`_ instead of `repository._location.canonical_path`_ because we can’t assume the repository has been opened by the archiver yet
- class borg.upgrader.AtticRepositoryUpgrader(*args, **kw)[source]¶
- convert_cache(dryrun)[source]¶
convert caches from attic to borg
those are all hash indexes, so we need to s/ATTICIDX/BORG_IDX/ in a few locations:
- the files and chunks cache (in $ATTIC_CACHE_DIR or $HOME/.cache/attic/<repoid>/), which we could just drop, but if we’d want to convert, we could open it with the Cache.open(), edit in place and then Cache.close() to make sure we have locking right
- static convert_keyfiles(keyfile, dryrun)[source]¶
convert key files from attic to borg
replacement pattern is s/ATTIC KEY/BORG_KEY/ in get_keys_dir(), that is $ATTIC_KEYS_DIR or $HOME/.attic/keys, and moved to $BORG_KEYS_DIR or $HOME/.config/borg/keys.
no need to decrypt to convert. we need to rewrite the whole key file because magic string length changed, but that’s not a problem because the keyfiles are small (compared to, say, all the segments).
- convert_repo_index(dryrun, inplace)[source]¶
convert some repo files
those are all hash indexes, so we need to s/ATTICIDX/BORG_IDX/ in a few locations:
- the repository index (in $ATTIC_REPO/index.%d, where %d is the Repository.get_index_transaction_id()), which we should probably update, with a lock, see Repository.open(), which i’m not sure we should use because it may write data on Repository.close()...
- static convert_segments(segments, dryrun=True, inplace=False, progress=False)[source]¶
convert repository segments from attic to borg
replacement pattern is s/ATTICSEG/BORG_SEG/ in files in $ATTIC_REPO/data/**.
luckily the magic string length didn’t change so we can just replace the 8 first bytes of all regular files in there.
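Since ATTICSEG and BORG_SEG are the same length, the in-place conversion amounts to overwriting the first 8 bytes of each file. A minimal sketch (hypothetical helper name; the real method also supports a dry run and a slower copy-based, non-inplace mode):

```python
import os, tempfile

ATTIC_MAGIC = b"ATTICSEG"
BORG_MAGIC = b"BORG_SEG"  # same length, so the bytes can be overwritten in place

def convert_segment_inplace(path):
    """Overwrite the first 8 bytes of a segment file if it carries the
    attic magic (hypothetical helper; the real method does more)."""
    with open(path, "r+b") as fd:
        if fd.read(8) == ATTIC_MAGIC:
            fd.seek(0)
            fd.write(BORG_MAGIC)
            return True
    return False

# demonstrate on a throwaway file
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(ATTIC_MAGIC + b"payload")
print(convert_segment_inplace(f.name))  # True
with open(f.name, "rb") as fd:
    print(fd.read(8))                   # b'BORG_SEG'
os.unlink(f.name)
```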
- find_attic_keyfile()[source]¶
find the attic keyfiles
the keyfiles are loaded by KeyfileKey.find_key_file(). that finds the keys with the right identifier for the repo.
this is expected to look into $HOME/.attic/keys or $ATTIC_KEYS_DIR for key files matching the given Borg repository.
it is expected to raise an exception (KeyfileNotFoundError) if no key is found. whether that exception is from Borg or Attic is unclear.
this is split in a separate function in case we want to use the attic code here directly, instead of our local implementation.
- upgrade(dryrun=True, inplace=False, progress=False)[source]¶
convert an attic repository to a borg repository
those are the files that need to be upgraded here, from most important to least important: segments, key files, and various caches, the latter being optional, as they will be rebuilt if missing.
we nevertheless do them in reverse order, as we prefer to do the fast stuff first, to improve interactivity.
- class borg.upgrader.Borg0xxKeyfileKey(repository)[source]¶
backwards compatible borg 0.xx key file parser
- class borg.upgrader.BorgRepositoryUpgrader(path, create=False, exclusive=False, lock_wait=None, lock=True)[source]¶
- class borg.archive.Archive(repository, key, manifest, name, cache=None, create=False, checkpoint_interval=300, numeric_owner=False, progress=False, chunker_params=(19, 23, 21, 4095), start=datetime.datetime(2016, 3, 9, 16, 42, 5, 494555), end=datetime.datetime(2016, 3, 9, 16, 42, 5, 494565))[source]¶
- class borg.archive.ArchiveChecker[source]¶
- class borg.archive.ChunkBuffer(key, chunker_params=(12, 16, 14, 4095))[source]¶
- BUFFER_SIZE = 1048576¶
- class borg.archive.RobustUnpacker(validator)[source]¶
A restartable/robust version of the streaming msgpack unpacker
- class borg.fuse.FuseOperations(key, repository, manifest, archive, cached_repo)[source]¶
Export archive as a fuse filesystem
- class borg.locking.ExclusiveLock(path, timeout=None, sleep=None, id=None)[source]¶
An exclusive Lock based on mkdir fs operation being atomic.
If possible, try to use the contextmanager here like: with ExclusiveLock(...) as lock:
...This makes sure the lock is released again if the block is left, no matter how (e.g. if an exception occurred).
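The mkdir-based idea can be sketched as follows (a hypothetical simplification without the timeout, sleep, and stale-lock handling of the real class):

```python
import os, tempfile

class MkdirLock:
    """Exclusive lock relying on os.mkdir() being atomic: whoever
    creates the directory first holds the lock (illustrative sketch)."""
    def __init__(self, path):
        self.path = path

    def __enter__(self):
        os.mkdir(self.path)  # raises FileExistsError if already locked
        return self

    def __exit__(self, *exc):
        os.rmdir(self.path)  # runs even if the with-block raised
        return False

lockdir = os.path.join(tempfile.mkdtemp(), "demo.lock")
with MkdirLock(lockdir):
    print(os.path.isdir(lockdir))  # True while the lock is held
print(os.path.exists(lockdir))     # False after release
```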
- class borg.locking.LockRoster(path, id=None)[source]¶
A Lock Roster to track shared/exclusive lockers.
Note: you usually should call the methods with an exclusive lock held, to avoid conflicting access by multiple threads/processes/machines.
- exception borg.locking.NotMyLock[source]¶
Failed to release the lock {} (was/is locked, but not by me).
- class borg.locking.TimeoutTimer(timeout=None, sleep=None)[source]¶
A timer for timeout checks (can also deal with no timeout, give timeout=None [default]). It can also compute and optionally execute a reasonable sleep time (e.g. to avoid polling too often or to support thread/process rescheduling).
- class borg.locking.UpgradableLock(path, exclusive=False, sleep=None, timeout=None, id=None)[source]¶
A Lock for a resource that can be accessed in a shared or exclusive way. Typically, write access to a resource needs an exclusive lock (1 writer, no one is allowed to read) and read access to a resource needs a shared lock (multiple readers are allowed).
If possible, try to use the contextmanager here like: with UpgradableLock(...) as lock:
...This makes sure the lock is released again if the block is left, no matter how (e.g. if an exception occurred).
- borg.shellpattern.translate(pat)[source]¶
Translate a shell-style pattern to a regular expression.
The pattern may include “**<sep>” (<sep> stands for the platform-specific path separator; “/” on POSIX systems) for matching zero or more directory levels and “*” for matching zero or more arbitrary characters with the exception of any path separator. Wrap meta-characters in brackets for a literal match (i.e. “[?]” to match the literal character “?”).
This function is derived from the “fnmatch” module distributed with the Python standard library.
Copyright (C) 2001-2016 Python Software Foundation. All rights reserved.
TODO: support {alt1,alt2} shell-style alternatives
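The core of such a translation can be sketched like this (simplified for illustration: no `[...]` character classes or bracket-escaping, unlike the real implementation):

```python
import re

def translate(pat, sep="/"):
    """Translate a shell-style pattern into a regular expression:
    '**<sep>' matches zero or more directory levels, '*' matches
    anything except the path separator (illustrative sketch)."""
    out = []
    i, n = 0, len(pat)
    while i < n:
        if pat[i:i + 3] == "**" + sep:
            out.append("(?:[^%s]*%s)*" % (re.escape(sep), re.escape(sep)))
            i += 3
        elif pat[i] == "*":
            out.append("[^%s]*" % re.escape(sep))  # '*' stops at separators
            i += 1
        else:
            out.append(re.escape(pat[i]))
            i += 1
    return "".join(out) + r"\Z"

rx = re.compile(translate("home/**/junk/*"))
print(bool(rx.match("home/user/junk/file")))      # True
print(bool(rx.match("home/junk/file")))           # True: '**/' matches zero levels
print(bool(rx.match("home/user/junk/sub/file")))  # False: '*' stops at '/'
```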
- class borg.repository.LoggedIO(path, limit, segments_per_dir, capacity=90)[source]¶
- COMMIT = b'@\xf4<%\t\x00\x00\x00\x02'¶
- LoggedIO.crc_fmt = <Struct object at 0x7ff633a4c6c0>¶
- LoggedIO.get_segments_transaction_id()[source]¶
Verify that the transaction id is consistent with the index transaction id
- LoggedIO.header_fmt = <Struct object at 0x7ff633a4c618>¶
- LoggedIO.header_no_crc_fmt = <Struct object at 0x7ff633a4c688>¶
- LoggedIO.put_header_fmt = <Struct object at 0x7ff633a4c650>¶
- class borg.repository.Repository(path, create=False, exclusive=False, lock_wait=None, lock=True)[source]¶
Filesystem based transactional key value store
On disk layout: dir/README dir/config dir/data/<X / SEGMENTS_PER_DIR>/<X> dir/index.X dir/hints.X
- Repository.DEFAULT_MAX_SEGMENT_SIZE = 5242880¶
- Repository.DEFAULT_SEGMENTS_PER_DIR = 10000¶
- Repository.check(repair=False, save_space=False)[source]¶
Check repository consistency
This method verifies all segment checksums and makes sure the index is consistent with the data stored in the segments.
- class borg.remote.RemoteRepository(location, create=False, lock_wait=None, lock=True, args=None)[source]¶
- RemoteRepository.extra_test_args = []¶
- class borg.remote.RepositoryCache(repository)[source]¶
A caching Repository wrapper
Caches Repository GET operations using a local temporary Repository.
- class borg.remote.RepositoryNoCache(repository)[source]¶
A not caching Repository wrapper, passes through to repository.
Just to have same API (including the context manager) as RepositoryCache.
- class borg.remote.RepositoryServer(restrict_to_paths)[source]¶
- rpc_methods = ('__len__', 'check', 'commit', 'delete', 'destroy', 'get', 'list', 'negotiate', 'open', 'put', 'rollback', 'save_key', 'load_key', 'break_lock')¶
Compute hashtable sizes with nice properties:
- prime sizes (for small to medium sizes)
- 2 prime-factor sizes (for big sizes)
- fast growth for small sizes
- slow growth for big sizes
- Note:
- this is just a tool for developers. Within borgbackup, it is just used to generate the hash_sizes definition for _hashindex.c.
- class borg.hash_sizes.Policy¶
Policy(upto, grow)
- grow¶
Alias for field number 1
- upto¶
Alias for field number 0
- borg.hash_sizes.eratosthenes()[source]¶
Yields the sequence of prime numbers via the Sieve of Eratosthenes.
- borg.hash_sizes.two_prime_factors(pfix=65537)[source]¶
Yields numbers with 2 prime factors pfix and p.
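An incremental sieve that yields primes indefinitely could look like this (one possible implementation; the real function may differ):

```python
from itertools import count, islice

def eratosthenes():
    """Yield the primes 2, 3, 5, 7, ... via an incremental
    Sieve of Eratosthenes (illustrative sketch)."""
    composites = {}  # next known composite -> primes that divide it
    for q in count(2):
        if q not in composites:
            yield q                      # q is prime
            composites[q * q] = [q]      # first composite worth tracking
        else:
            for p in composites.pop(q):  # move each prime to its next multiple
                composites.setdefault(q + p, []).append(p)

print(list(islice(eratosthenes(), 10)))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```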
A basic extended attributes (xattr) implementation for Linux and MacOS X
- exception borg.helpers.ErrorWithTraceback[source]¶
like Error, but show a traceback also
- traceback = True¶
- exception borg.helpers.ExtensionModuleError[source]¶
The Borg binary extension modules do not seem to be properly installed
- class borg.helpers.FnmatchPattern(pattern)[source]¶
Shell glob patterns to exclude. A trailing slash means to exclude the contents of a directory, but not the directory itself.
- PREFIX = 'fm'¶
- class borg.helpers.Location(text='')[source]¶
Object representing a repository / archive location
- archive = None¶
- env_re = re.compile('(?:::(?P<archive>[^/]+)?)?$')¶
- file_re = re.compile('(?P<proto>file)://(?P<path>[^:]+)(?:::(?P<archive>[^/]+))?$')¶
- host = None¶
- path = None¶
- port = None¶
- proto = None¶
- scp_re = re.compile('((?:(?P<user>[^@]+)@)?(?P<host>[^:/]+):)?(?P<path>[^:]+)(?:::(?P<archive>[^/]+))?$')¶
- ssh_re = re.compile('(?P<proto>ssh)://(?:(?P<user>[^@]+)@)?(?P<host>[^:/#]+)(?::(?P<port>\\d+))?(?P<path>[^:]+)(?:::(?P<archive>[^/]+))?$')¶
- user = None¶
- class borg.helpers.Manifest(key, repository)[source]¶
- MANIFEST_ID = b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'¶
- class borg.helpers.PathPrefixPattern(pattern)[source]¶
Literal files or directories listed on the command line for some operations (e.g. extract, but not create). If a directory is specified, all paths that start with that path match as well. A trailing slash makes no difference.
- PREFIX = 'pp'¶
- class borg.helpers.PatternBase(pattern)[source]¶
Shared logic for inclusion/exclusion patterns.
- PREFIX = NotImplemented¶
- class borg.helpers.PatternMatcher(fallback=None)[source]¶
- class borg.helpers.ProgressIndicatorEndless(step=10, file=<_io.TextIOWrapper name='<stderr>' mode='w' encoding='ANSI_X3.4-1968'>)[source]¶
- class borg.helpers.ProgressIndicatorPercent(total, step=5, start=0, same_line=False, msg='%3.0f%%', file=<_io.TextIOWrapper name='<stderr>' mode='w' encoding='ANSI_X3.4-1968'>)[source]¶
- class borg.helpers.ShellPattern(pattern)[source]¶
Shell glob patterns to exclude. A trailing slash means to exclude the contents of a directory, but not the directory itself.
- PREFIX = 'sh'¶
- class borg.helpers.Statistics[source]¶
- summary = ' Original size Compressed size Deduplicated size\n{label:15} {stats.osize_fmt:>20s} {stats.csize_fmt:>20s} {stats.usize_fmt:>20s}'¶
- borg.helpers.dir_is_cachedir(path)[source]¶
Determines whether the specified path is a cache directory (and therefore should potentially be excluded from the backup) according to the CACHEDIR.TAG protocol (http://www.brynosaurus.com/cachedir/spec.html).
- borg.helpers.dir_is_tagged(path, exclude_caches, exclude_if_present)[source]¶
Determines whether the specified path is excluded by being a cache directory or containing user-specified tag files. Returns a list of the paths of the tag files (either CACHEDIR.TAG or the matching user-specified files).
- borg.helpers.format_file_size(v, precision=2)[source]¶
Format file size into a human friendly format
- borg.helpers.int_to_bigint(value)[source]¶
Convert integers larger than 64 bits to bytearray
Smaller integers are left alone
- borg.helpers.load_excludes(fh)[source]¶
Load and parse exclude patterns from file object. Lines empty or starting with ‘#’ after stripping whitespace on both line ends are ignored.
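The line filtering it describes can be sketched as follows (hypothetical helper name; the real load_excludes additionally turns each surviving line into a pattern object via parse_pattern()):

```python
def parse_exclude_lines(lines):
    """Drop empty lines and '#' comments after stripping whitespace
    on both line ends (illustrative sketch)."""
    patterns = []
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#"):
            patterns.append(line)
    return patterns

text = """\
# Comment line
/home/*/junk

  *.tmp
sh:/home/*/.thumbnails
"""
print(parse_exclude_lines(text.splitlines()))
# ['/home/*/junk', '*.tmp', 'sh:/home/*/.thumbnails']
```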
- borg.helpers.log_multi(*msgs, level=20)[source]¶
log multiple lines of text, each line by a separate logging call for cosmetic reasons
each positional argument may be a single line or multiple lines (separated by newlines) of text.
- borg.helpers.normalized(func)[source]¶
Decorator for the Pattern match methods, returning a wrapper that normalizes OSX paths to match the normalized pattern on OSX, and returning the original method on other platforms
- borg.helpers.parse_pattern(pattern, fallback=<class 'borg.helpers.FnmatchPattern'>)[source]¶
Read pattern from string and return an instance of the appropriate implementation class.
- borg.helpers.posix_acl_use_stored_uid_gid(acl)[source]¶
Replace the user/group field with the stored uid/gid
- borg.helpers.remove_surrogates(s, errors='replace')[source]¶
Replace surrogates generated by fsdecode with ‘?’
- borg.helpers.safe_decode(s, coding='utf-8', errors='surrogateescape')[source]¶
decode bytes to str, with round-tripping “invalid” bytes
- borg.helpers.safe_encode(s, coding='utf-8', errors='surrogateescape')[source]¶
encode str to bytes, with round-tripping “invalid” bytes
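The round-tripping both helpers describe relies on Python's surrogateescape error handler: undecodable bytes become lone surrogates on decode and are restored verbatim on encode.

```python
raw = b"valid \xff\xfe invalid"  # not valid UTF-8
s = raw.decode("utf-8", errors="surrogateescape")
print(ascii(s))  # 'valid \udcff\udcfe invalid' -- invalid bytes became lone surrogates
back = s.encode("utf-8", errors="surrogateescape")
print(back == raw)  # True: the original bytes are restored verbatim
```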
- borg.helpers.update_excludes(args)[source]¶
Merge exclude patterns from files with those on command line.
- borg.helpers.yes(msg=None, false_msg=None, true_msg=None, default_msg=None, retry_msg=None, invalid_msg=None, env_msg=None, falsish=('No', 'NO', 'no', 'N', 'n', '0'), truish=('Yes', 'YES', 'yes', 'Y', 'y', '1'), defaultish=('Default', 'DEFAULT', 'default', 'D', 'd', ''), default=False, retry=True, env_var_override=None, ofile=None, input=<built-in function input>)[source]¶
Output <msg> (usually a question) and let user input an answer. Qualifies the answer according to falsish, truish and defaultish as True, False or <default>. If it didn’t qualify and retry_msg is None (no retries wanted), return the default [which defaults to False]. Otherwise let user retry answering until answer is qualified.
If env_var_override is given and this var is present in the environment, do not ask the user, but just use the env var contents as answer as if it was typed in. Otherwise read input from stdin and proceed as normal. If EOF is received instead an input or an invalid input without retry possibility, return default.
param msg: introducing message to output on ofile, no newline is added [None]
param retry_msg: retry message to output on ofile, no newline is added [None]
param false_msg: message to output before returning False [None]
param true_msg: message to output before returning True [None]
param default_msg: message to output before returning a <default> [None]
param invalid_msg: message to output after an invalid answer was given [None]
param env_msg: message to output when using input from env_var_override [None], needs to have 2 placeholders for answer and env var name, e.g.: “{} (from {})”
param falsish: sequence of answers qualifying as False
param truish: sequence of answers qualifying as True
param defaultish: sequence of answers qualifying as <default>
param default: default return value (defaultish answer was given or no-answer condition) [False]
param retry: if True and input is incorrect, retry. Otherwise return default. [True]
param env_var_override: environment variable name [None]
param ofile: output stream [sys.stderr]
param input: input function [input from builtins]
return: boolean answer value, True or False
- class borg.cache.Cache(repository, key, manifest, path=None, sync=True, do_files=False, warn_if_unencrypted=True, lock_wait=None)[source]¶
Client Side cache
- exception Cache.EncryptionMethodMismatch[source]¶
Repository encryption method changed since last access, refusing to continue
- Cache.sync()[source]¶
Re-synchronize chunks cache with repository.
Maintains a directory with known backup archive indexes, so it only needs to fetch infos from repo and build a chunk index once per backup archive. If out of sync, missing archive indexes get added, outdated indexes get removed and a new master chunks index is built by merging all archive indexes.
- class borg.key.AESKeyBase(repository)[source]¶
Common base class shared by KeyfileKey and PassphraseKey
Chunks are encrypted using 256bit AES in Counter Mode (CTR)
Payload layout: TYPE(1) + HMAC(32) + NONCE(8) + CIPHERTEXT
To reduce payload size, only 8 bytes of the 16-byte nonce are saved in the payload; the first 8 bytes are always zeros. This does not affect security but limits the maximum repository capacity to only 295 exabytes!
- PAYLOAD_OVERHEAD = 41¶
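The 295 exabyte figure follows directly from the 8-byte counter: 2^64 AES blocks of 16 bytes each can be produced before the counter wraps.

```python
# 64-bit counter -> 2**64 AES-CTR blocks of 16 bytes before wrapping
capacity = 2**64 * 16
print(capacity)            # 295147905179352825856
print(capacity // 10**18)  # 295 (decimal exabytes)
```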
- exception borg.key.UnsupportedPayloadError[source]¶
Unsupported payload type {}. A newer version is required to access this repository.
logging facilities
The way to use this is as follows:
each module declares its own logger, using:
from .logger import create_logger
logger = create_logger()
then each module uses logger.info/warning/debug/etc according to the level it believes is appropriate:
logger.debug('debugging info for developers or power users')
logger.info('normal, informational output')
logger.warning('warn about a non-fatal error or sth else')
logger.error('a fatal error')
... and so on. see the logging documentation for more information
console interaction happens on stderr, that includes interactive reporting functions like help, info and list
...except input() is special, because we can’t control the stream it is using, unfortunately. we assume that it won’t clutter stdout, because interaction would be broken then anyway
what is output on INFO level is additionally controlled by commandline flags
- borg.logger.create_logger(name=None)[source]¶
lazily create a Logger object with the proper path, which is returned by find_parent_module() by default, or is provided via the commandline
this is really a shortcut for:
logger = logging.getLogger(__name__)
we use it to avoid errors and provide a more standard API.
We must create the logger lazily, because this is usually called from module level (and thus executed at import time - BEFORE setup_logging() was called). By doing it lazily we can do the setup first, we just have to be careful not to call any logger methods before the setup_logging() call. If you try, you’ll get an exception.
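One way to get the described laziness is a proxy that defers the logging.getLogger() call until the first logging method is used (a hypothetical sketch, not the actual implementation):

```python
import logging

class LazyLogger:
    """Defer logging.getLogger() until first use, so module-level
    `logger = create_logger()` is safe even before setup_logging()
    has run (illustrative sketch)."""
    def __init__(self, name):
        self.__name = name
        self.__real = None

    def __getattr__(self, attr):
        # only called for attributes we don't have, i.e. logging methods
        if self.__real is None:
            self.__real = logging.getLogger(self.__name)
        return getattr(self.__real, attr)

def create_logger(name="demo.module"):
    return LazyLogger(name)

logger = create_logger()             # cheap: no real Logger object yet
logger.info("created on first use")  # getLogger() happens only here
```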
- borg.logger.find_parent_module()[source]¶
find the name of the first module calling this module
if we cannot find it, we return the current module’s name (__name__) instead.
- borg.logger.setup_logging(stream=None, conf_fname=None, env_var='BORG_LOGGING_CONF', level='info', is_serve=False)[source]¶
setup logging module according to the arguments provided
if conf_fname is given (or the config file name can be determined via the env_var, if given): load this logging configuration.
otherwise, set up a stream handler logger on stderr (by default, if no stream is provided).
if is_serve == True, we configure a special log format as expected by the borg client log message interceptor.
- borg.platform_linux.acl_get()¶
Saves ACL Entries
If numeric_owner is True, the user/group field is not preserved, only uid/gid
- borg.platform_linux.acl_set()¶
Restore ACL Entries
If numeric_owner is True, the stored uid/gid is used instead of the user/group names
- borg.platform_linux.acl_use_local_uid_gid()¶
Replace the user/group field with the local uid/gid if possible
- class borg.hashindex.ChunkKeyIterator¶
- class borg.hashindex.NSKeyIterator¶
- class borg.compress.CNONE¶
none - no compression, just pass through data
- ID = b'\x00\x00'¶
- compress()¶
- decompress()¶
- name = 'none'¶
- class borg.compress.Compressor¶
compresses using a compressor with the given name and parameters; decompresses everything we can handle (autodetect)
- compress()¶
- decompress()¶
- class borg.compress.CompressorBase¶
base class for all (de)compression classes, also handles compression format auto detection and adding/stripping the ID header (which enable auto detection).
- ID = b'\xff\xff'¶
- compress()¶
- decompress()¶
- detect()¶
- name = 'baseclass'¶
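The ID-header scheme can be sketched as follows (simplified stand-ins for the real classes; the actual detection and registration logic may differ):

```python
import zlib

class CompressorBase:
    """Each compressed blob starts with a 2-byte ID so the right
    decompressor can be auto-detected (illustrative sketch)."""
    ID = b"\xff\xff"

    @classmethod
    def detect(cls, data):
        return data.startswith(cls.ID)

class CNONE(CompressorBase):
    ID = b"\x00\x00"

    def compress(self, data):
        return self.ID + data   # pass-through: just prepend the header

    def decompress(self, data):
        return data[2:]         # strip the 2-byte header

class ZLIB(CompressorBase):
    ID = b"\x08\x00"

    @classmethod
    def detect(cls, data):
        # zlib streams need no added header: they start with 0x78 themselves
        return data[:1] == b"\x78"

    def compress(self, data):
        return zlib.compress(data)

    def decompress(self, data):
        return zlib.decompress(data)

def get_decompressor(data, classes=(CNONE, ZLIB)):
    """Auto-detect which class produced `data` (illustrative)."""
    for cls in classes:
        if cls.detect(data):
            return cls()
    raise ValueError("no decompressor found for this data")

blob = CNONE().compress(b"hello")
print(get_decompressor(blob).decompress(blob))  # b'hello'
```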
- class borg.compress.LZ4¶
raw LZ4 compression / decompression (liblz4).
- Features:
- lz4 is super fast
- wrapper releases CPython’s GIL to support multithreaded code
- buffer given by caller, avoiding frequent reallocation and buffer duplication
- uses safe lz4 methods that never go beyond the end of the output buffer
- But beware:
- this is not very generic, the given buffer MUST be large enough to handle all compression or decompression output (or it will fail).
- you must not do method calls to the same LZ4 instance from different threads at the same time - create one LZ4 instance per thread!
- ID = b'\x01\x00'¶
- compress()¶
- decompress()¶
- name = 'lz4'¶
- class borg.compress.LZMA¶
lzma compression / decompression
- ID = b'\x02\x00'¶
- compress()¶
- decompress()¶
- name = 'lzma'¶
- class borg.compress.ZLIB¶
zlib compression / decompression (python stdlib)
- ID = b'\x08\x00'¶
- compress()¶
- decompress()¶
- classmethod detect()¶
- name = 'zlib'¶
- borg.compress.get_compressor()¶
- class borg.chunker.Chunker¶
- chunkify()¶
Cut a file into chunks.
Parameters: - fd – Python file object
- fh – OS-level file handle (if available), defaults to -1 which means not to use OS-level fd.
- borg.chunker.buzhash()¶
- borg.chunker.buzhash_update()¶
A thin OpenSSL wrapper
This could be replaced by PyCrypto maybe?
- class borg.crypto.AES¶
A thin wrapper around the OpenSSL EVP cipher API
- decrypt()¶
- encrypt()¶
- iv¶
- reset()¶
- borg.crypto.bytes_to_int¶
- borg.crypto.bytes_to_long¶
- borg.crypto.long_to_bytes¶
- borg.crypto.num_aes_blocks()¶
Return the number of AES blocks required to encrypt/decrypt length bytes of data. Note: this is only correct for modes without padding, like AES-CTR.
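For padding-free modes the computation is just a round-up to the 16-byte AES block size; a sketch:

```python
def num_aes_blocks(length):
    """Number of 16-byte AES blocks needed for `length` bytes of data,
    rounded up (illustrative; only correct for padding-free modes
    such as AES-CTR, as noted above)."""
    return (length + 15) // 16

print(num_aes_blocks(0))   # 0
print(num_aes_blocks(16))  # 1
print(num_aes_blocks(17))  # 2
```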