Anton Blanchard authored
I tested how long it took to do a dd from /dev/random on ppc64 before and
after this patch, while doing a ping flood from another machine.

before:

# /usr/bin/time dd if=/dev/random of=/dev/zero count=1k
0+51 records in
Command terminated by signal 2
0.00user 0.00system 19:18.46elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k

I gave up after 19 minutes.

after:

# /usr/bin/time dd if=/dev/random of=/dev/zero count=1k
0+1024 records in
0.00user 0.00system 0:33.38elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k

Just over 33 seconds. Better.

From: Arnd Bergmann <arnd@arndb.de>

I noticed that only i386 and x86-64 are currently using a high resolution
timer source when adding randomness. Since many architectures have a
working get_cycles() implementation, it seems rather straightforward to use
that. Has this been discussed before, or can anyone comment on the
implementation below?

This patch attempts to take into account the size of cycles_t, which is
either 32 or 64 bits wide but independent of the architecture's word size.
The behavior should be nearly identical to the old one on i386, x86-64 and
all architectures without a time stamp counter, while finding more entropy
on the other architectures.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
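The cycles_t handling described above can be illustrated with a small
userspace sketch. This is not the actual drivers/char/random.c change;
get_cycles_stub(), pool, mix_word() and add_timer_sample() are hypothetical
stand-ins for the kernel's get_cycles(), entropy pool and mixing code. It
only shows the idea of folding a cycle-counter sample into the timing
entropy while accounting for a 32- or 64-bit cycles_t.

/*
 * Minimal sketch, assuming a 64-bit cycles_t and a toy 4-word pool.
 * Build: cc -O2 -std=gnu11 sketch.c -o sketch
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

typedef uint64_t cycles_t;          /* assumed 64-bit here; may be 32-bit */

static uint32_t pool[4];            /* toy stand-in for the entropy pool */

/* Toy mixing step: rotate a 32-bit word and XOR it into the pool. */
static void mix_word(uint32_t w)
{
	static unsigned i;
	unsigned r = i & 31;

	pool[i & 3] ^= (w << r) | (w >> ((32 - r) & 31));
	i++;
}

/* Stand-in for get_cycles(): a monotonic clock read, not a real TSC. */
static cycles_t get_cycles_stub(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (cycles_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/*
 * Fold one timing sample into the pool.  If cycles_t is wider than
 * 32 bits, mix both halves so no timer bits are discarded.
 */
static void add_timer_sample(void)
{
	cycles_t c = get_cycles_stub();

	mix_word((uint32_t)c);
	if (sizeof(cycles_t) > sizeof(uint32_t))
		mix_word((uint32_t)(c >> 32));
}

int main(void)
{
	for (int i = 0; i < 8; i++)
		add_timer_sample();

	printf("pool: %08x %08x %08x %08x\n",
	       pool[0], pool[1], pool[2], pool[3]);
	return 0;
}

The runtime sizeof() check mirrors the point made in the message: on
architectures where cycles_t is only 32 bits wide, one word is mixed per
sample, while 64-bit counters contribute both halves, so behavior stays
close to the old i386/x86-64 path while extracting more timer bits
elsewhere.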
4c746d40