Adding or subtracting '0' is the most efficient method of converting between binary digit values and character-coded decimal digits.
Subtracting 'A' or 'a' (and then adding 10) is a well-known and efficient conversion from character-encoded hexadecimal to binary.
Adding or subtracting ' ' (space) is the basis of classic base-64 style encodings such as uuencode, a precursor of MIME's Base64 (though you do need to special-case binary 0, which is conventionally coded as a backtick instead of as a space so it survives trailing-blank stripping).
Adding or subtracting 32 used to be very common magic for converting between upper- and lower-case ASCII. So common that it became a problem when dealing with EBCDIC, and then later with ISO-8859-* and Unicode. So common that the bug was hard to find, because programmers would read the 32, know that it meant upper/lower-case conversion, and then be puzzled that letters weren't being converted properly...
The characters '1' through '9' have been in consecutive coding positions since the ITA2 code of 1930. Any program that is not required to work with Baudot or Murray or older codes may take that as fact. Any program written for the ASCII / ANSI / ISO / Unicode line may assume that the upper-case "Latin" (English) characters are consecutive, and that the lower-case "Latin" (English) characters are consecutive: this is fundamental standardization, no worse than assuming that all of the MATLAB operator characters are present in the character set. As best I know, MATLAB has never been supported on any EBCDIC-based system where that assumption does not hold.