PAR2 testing

Recently I was experimenting with PAR2. I’m planning to use it for bitrot protection of my backups.

Test set of files:

total 1527580 bytes

A set of par2 files was created by PyPar2 (default settings). The test set has 1990 data blocks of 768 bytes each (par2 metrics). The parity files have 5% redundancy (100 recovery blocks) and a total size of 1219776 bytes (1191.2 KiB). The combined size of all 7 parity files was 80% of the original data.
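The gap between 5% redundancy and 80% on-disk overhead can be sanity-checked from the numbers above: only 76800 bytes of the parity set is actual recovery data, and the rest is presumably par2 metadata (per-block checksums and packet headers, duplicated across the 7 files). A quick check in Python:

```python
# Back-of-the-envelope check of the PyPar2 run described above.
data_bytes = 1527580        # total size of the test set
block_size = 768            # par2 block size chosen by PyPar2
recovery_blocks = 100       # 5% of 1990 data blocks
parity_total = 1219776      # size of all 7 .par2 files combined

raw_parity = recovery_blocks * block_size   # the actual recovery data
metadata = parity_total - raw_parity        # everything else in the files

print(raw_parity)                               # 76800
print(metadata)                                 # 1142976
print(round(parity_total / data_bytes * 100))   # 80
```

So with such a small block size, over 90% of the parity set is not recovery data at all, which explains the poor storage efficiency.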

For bitrot protection, the par2 block size should align with the medium's block size. Modern hard drives claim they won't return wrong data: a rotten sector is caught by the disk's own ECC and the whole sector reads back as zeros. (Hypothesis - not tested yet)
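If that hypothesis holds, one practical consequence is to choose a block size that is a whole multiple of the drive's physical sector size (4096 bytes on most modern disks); par2cmdline's `-s<bytes>` option sets the block size explicitly. A small helper sketching that choice (the function name and the 4096-byte default are my assumptions, not par2 defaults):

```python
def aligned_block_size(target, sector=4096):
    # Round a desired par2 block size up to a whole number of physical
    # sectors, so a single rotten sector damages at most one par2 block.
    return max(sector, ((target + sector - 1) // sector) * sector)

print(aligned_block_size(768))     # 4096
print(aligned_block_size(10000))   # 12288
```

The result would then be passed as `par2 create -s4096 ...` (or whatever multiple comes out).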

For a single big file (1944256512 bytes, 1.9GB), default settings (5% redundancy) produced 98349904 bytes (98.3MB) of parity files, so for a single big file storage efficiency is much better. In this scenario the data was divided into 2000 blocks (roughly 972128 bytes per block).
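The per-block size and the actual overhead follow directly from the reported numbers:

```python
# Sanity check of the single-big-file run.
file_bytes = 1944256512     # the 1.9GB input file
parity_bytes = 98349904     # total size of the generated parity files
blocks = 2000               # data block count chosen by par2

print(file_bytes // blocks)                        # 972128
print(round(parity_bytes / file_bytes * 100, 1))   # 5.1
```

With ~972 kB blocks, per-block metadata is negligible and the parity set lands right at the requested ~5%.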

Par2 seems to be single-threaded.
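Since each par2 process only uses one core, one workaround is to protect files with several par2 processes running in parallel. A minimal sketch, assuming the `par2` binary is on PATH; `create_parity` and its `par2_cmd` parameter (included only so the command can be swapped out) are hypothetical names of mine:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def create_parity(files, redundancy=5, par2_cmd="par2", workers=4):
    """Run one single-threaded par2 process per file, in parallel.

    Threads suffice here: each worker just blocks on an external
    process, so the parallelism happens at the process level.
    """
    def one(path):
        cmd = [par2_cmd, "create", f"-r{redundancy}", f"{path}.par2", path]
        return subprocess.run(cmd).returncode

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(one, files))
```

This only helps when there are several independent files to protect; a single big file still runs on one core.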

Experiments and results