Hashing is lossy compression. You can't recover the input of a hash from its output.
That's also why it could never work as encryption: how would you decrypt a message when most of it has been destroyed? :)
---
Consider a SHA hash. You can hash a 1 GB file down to a 32-byte digest (for SHA-256). Wow! Why don't we just send people hashes and save all that bandwidth? The problem is that astronomically many different files all hash to the same digest, and you can't know which one was the original.
Ciphertext, by contrast, is usually about the same size as the plaintext, because it has to preserve all of the information.
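A minimal sketch using Python's standard `hashlib` (the inputs here are just placeholders): the digest is the same fixed size no matter how large the input is.

```python
import hashlib

small = b"hello"
big = b"x" * 10_000_000  # 10 MB; it could just as well be 1 GB

# Both digests are exactly 32 bytes (64 hex characters).
print(hashlib.sha256(small).hexdigest())
print(hashlib.sha256(big).hexdigest())

# The 10 MB of input is simply gone: no function maps the
# 32-byte digest back to the data that produced it.
```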
---
> What specific algorithm makes it possible to scramble data into an unrecoverable form, yet still be usable for its intended purpose? Is it something like a checksum?
Yes. Simply counting the number of 1 bits in a file is a very primitive hash. It's easy to see why it's lossy: you've thrown away where those bits were, and everything about the 0 bits.
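Here's a sketch of that bit-counting hash (the function name is my own):

```python
def popcount_hash(data: bytes) -> int:
    # Count the 1 bits across all bytes; their positions are discarded.
    return sum(bin(b).count("1") for b in data)

print(popcount_hash(b"ab"))  # 6
print(popcount_hash(b"ba"))  # 6 -- same bits in a different order: a collision
```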
Multiplying the bytes together and reducing modulo some number is another primitive one. Again, it's painfully obvious why it's lossy, and just as obvious what it's useful for: if some of the bytes get corrupted, the final value will *probably* change.
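A sketch of that multiply-and-mod idea (the `+ 1` tweak and the modulus 251 are arbitrary choices of mine, not a standard algorithm):

```python
def product_hash(data: bytes, mod: int = 251) -> int:
    # Multiply the bytes together, reducing mod a small prime as we go.
    # The mod "eats" information: infinitely many inputs land on each
    # of the 251 possible outputs.
    h = 1
    for b in data:
        h = (h * (b + 1)) % mod  # +1 so a zero byte doesn't wipe out the product
    return h

print(product_hash(b"hello world"))  # some value in 0..250
print(product_hash(b"hello worlc"))  # one corrupted byte -> *probably* different
```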
Modern hashes, especially secure ones like SHA, are far more elaborate, but at their core they still rely on operations like sums, products, and mods, which "eat" information.
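To make "eating" information concrete, here's a toy illustration of my own (not how SHA itself is specified): the modular addition used inside such functions is many-to-one, so its output alone can never tell you its inputs.

```python
MASK = 0xFFFFFFFF  # keep results in 32 bits, as hash functions do

a, b = 0xDEADBEEF, 0x12345678
print(hex((a + b) & MASK))  # 0xf0e21567

c, d = a + 1, b - 1  # a different pair of inputs...
print(hex((c + d) & MASK))  # ...0xf0e21567 again: the sum can't be inverted
```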
---
A non-digital example: suppose I "hashed" a book by giving you the last word of each page. That list would identify the book very effectively, but you obviously can't reconstruct the whole book from it, and it's clear why not. There's no special "scrambling" going on; you're just throwing away data.