General software, Operating Systems, and Programming discussion.
Everything from software questions, OSes, simple HTML to scripting languages, Perl, PHP, Python, MySQL, VB, C++ etc.
I'm writing an encryption/decryption program pair, and have hit a brick wall. The first program (encrypt) writes the encrypted output to 'flatfile' and the random key to 'key'. The second program reads the information in from key, reads in from flatfile, and is supposed to use key to decrypt flatfile and put the output back out into flatfile.
First program runs fine (except for cout'ing the final output), second doesn't.
Damn, it's been long since I've done something in C++.
Ok, I've looked at the program; it seems the decrypt program is reading flatfile wrong.
Something about the "comma" character in the file is causing the >> operator to split up the data.
I tried opening with ios::binary, but still had that problem.
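Without seeing the exact code it's hard to be sure, but the usual cause of this symptom is that formatted extraction with >> skips whitespace, so any encrypted byte that happens to equal a space, tab, or newline is silently dropped. ios::binary doesn't change that; it only disables newline translation. Unformatted reads (get() or read()) preserve every byte. A minimal sketch, with hypothetical function names:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Formatted extraction: operator>> skips whitespace, so encrypted
// bytes that happen to be spaces or newlines are silently lost.
std::string read_formatted(std::istream& in) {
    std::string out;
    char c;
    while (in >> c) out += c;   // skips ' ', '\t', '\n'
    return out;
}

// Unformatted read: get() returns every byte, including whitespace.
std::string read_raw(std::istream& in) {
    std::string out;
    char c;
    while (in.get(c)) out += c; // preserves all bytes
    return out;
}
```

The same applies on output: use put() or write() rather than <<, so the round trip is byte-exact.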
Darn, sorry can't help you much more, I haven't done much file IO work when I used C++. Maybe someone more experienced can help out.
You will experience a strong urge to do good; but it will pass.
You say you are getting "weird, weird compile errors", but I just compiled it with MS VC++ 6.0 Pro, and it compiles fine. Do you mean compiler errors or something else? If it is a true compile error, which compiler are you using? If not, what are you seeing?
Your encryption process is highly suspect, by the way. You are multiplying an 8-bit unsigned char by an integer and storing the result back in an unsigned 8-bit char, with whatever truncation your compiler applies. The multiplication will inevitably overflow 8-bit storage for some values, and the result is loss of the significant bits. When you try to decode it, it will translate incorrectly, since the decoder will assume the more significant bits were zero. Your attempt to read and output the results in your first program is probably evidence of this effect.
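A quick way to see the truncation. The function names here are illustrative, not the original code, and the key value is arbitrary:

```cpp
#include <cassert>

// Multiplying a byte by a key and storing the result back in 8 bits
// keeps only the low 8 bits of the product; the high bits are gone.
unsigned char encrypt_byte(unsigned char c, int key) {
    return static_cast<unsigned char>(c * key);   // high bits truncated
}

// Dividing by the key only recovers the original when the product
// never exceeded 255 -- i.e. it wrongly assumes no truncation occurred.
unsigned char naive_decrypt(unsigned char e, int key) {
    return static_cast<unsigned char>(e / key);
}
```

For example, 200 * 7 = 1400, which is stored as 1400 mod 256 = 120, and 120 / 7 = 17 rather than 200 -- the round trip fails exactly as described above.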
Originally posted by in2deep You say you are getting "weird, weird compile errors", but I just compiled it with MS VC++ 6.0 Pro, and it compiles fine. Do you mean compiler errors or something else? If it is a true compile error, which compiler are you using? If not, what are you seeing?
That is called 'Paft using gcc on Linux and having a hell of a time when it works just fine on MS Visual Studio 6'. It compiles fine now, but I have some more issues, which you address below.
Your encryption process is highly suspect, by the way. You are multiplying an 8-bit unsigned char by an integer and storing the result back in an unsigned 8-bit char, with whatever truncation your compiler applies. The multiplication will inevitably overflow 8-bit storage for some values, and the result is loss of the significant bits. When you try to decode it, it will translate incorrectly, since the decoder will assume the more significant bits were zero. Your attempt to read and output the results in your first program is probably evidence of this effect.
..and there's the other issue. I can't decrypt the data after it's been encrypted because of that exact problem (drives me up a wall). So this leaves me with - what do I do about that? Even if I designated them long or double, it would STILL truncate on the way back into a char and I would STILL run into that issue. So should I do something like a pair of matrices ((hello Algebra 2)), or...? I will NOT be using xor encryption, since that can so easily be reverse-engineered.
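One way out, if you want to keep the multiply idea: pick an odd key, and multiplication mod 256 becomes a bijection on bytes. The truncation then stops losing information -- the high bits are *meant* to be discarded -- and a modular inverse undoes the step exactly. This is a sketch of that number-theoretic fix, with illustrative names; it repairs the round trip but is still trivially breakable as a cipher:

```cpp
#include <cassert>

// With an odd key, x -> x * key (mod 256) is invertible on bytes.
unsigned char mul_encrypt(unsigned char c, unsigned char key) {
    return static_cast<unsigned char>(c * key);   // implicit mod 256
}

// Find key^-1 mod 256 by search; a 256-element ring makes brute
// force perfectly adequate. Even keys share a factor of 2 with 256
// and have no inverse, so we return 0 for them.
unsigned char mod_inverse(unsigned char key) {
    for (int i = 1; i < 256; i += 2)
        if (static_cast<unsigned char>(key * i) == 1)
            return static_cast<unsigned char>(i);
    return 0;
}

// Multiplying by the inverse undoes the encryption step exactly.
unsigned char mul_decrypt(unsigned char e, unsigned char key) {
    return static_cast<unsigned char>(e * mod_inverse(key));
}
```

For instance, 7 * 183 = 1281 = 5 * 256 + 1, so 183 is the inverse of 7 mod 256, and the truncated product decrypts back to the original byte.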
So trade that typical for something colorful, and if it's crazy live a little crazy!
As far as I know, (I am not a crypto specialist), public/private key polynomial encryption is still regarded as the pinnacle at the moment, although it is creaking a bit.
All simple encryption, (XOR, number rotators, factors, bitswappers and combinations thereof), would be cracked in seconds by any of the free cracker tools easily available from the web. Even more sophisticated techniques like Diffie-Hellman etc. fall in seconds to today's powerful desktops using simple brute force.
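To illustrate how quickly the simple schemes fall: a single-byte XOR key can be recovered by trying all 256 keys and scoring each candidate plaintext for English-looking characters. The function names and the crude letters-and-spaces score below are illustrative; real crackers use letter-frequency statistics, but the structure is the same:

```cpp
#include <cassert>
#include <string>

// XOR each byte with the key; applying it twice restores the input.
std::string xor_apply(const std::string& data, unsigned char key) {
    std::string out = data;
    for (char& c : out) c = static_cast<char>(c ^ key);
    return out;
}

// Crude English score: count lowercase letters and spaces.
int english_score(const std::string& s) {
    int n = 0;
    for (unsigned char c : s)
        if ((c >= 'a' && c <= 'z') || c == ' ') ++n;
    return n;
}

// Try every possible key and keep the best-scoring candidate --
// 256 trials is nothing, hence "cracked in seconds".
int crack_xor_key(const std::string& cipher) {
    int best_k = 0, best_score = -1;
    for (int k = 0; k < 256; ++k) {
        int s = english_score(xor_apply(cipher, static_cast<unsigned char>(k)));
        if (s > best_score) { best_score = s; best_k = k; }
    }
    return best_k;
}
```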
Unless you are a maths genius, forget trying to create your own encryption system if you want anything to remain encrypted for long.