    powerpc/64: enhance memcmp() with VMX instruction for long bytes comparison · d58badfb
    This patch adds VMX primitives to do memcmp() when the compare size
    is equal to or greater than 4K bytes. The KSM feature can benefit
    from this.
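
    The in-kernel implementation is hand-written assembly in memcmp_64.S;
    purely as an illustration of the technique, a user-space C sketch
    using Altivec/VMX intrinsics might look like the following
    (memcmp_vmx_sketch and its alignment assumptions are mine, not part
    of the patch):
    ------
    #include <altivec.h>
    #include <stddef.h>
    #include <string.h>

    /*
     * Compare 16 bytes per iteration with VMX and fall back to plain
     * memcmp() on the first mismatching chunk to compute the result.
     * Assumes both buffers are 16-byte aligned (vec_ld() requires
     * aligned addresses); build with -maltivec.
     */
    static int memcmp_vmx_sketch(const unsigned char *s1,
                                 const unsigned char *s2, size_t n)
    {
            size_t i = 0;

            while (i + 16 <= n) {
                    vector unsigned char v1 = vec_ld(0, s1 + i);
                    vector unsigned char v2 = vec_ld(0, s2 + i);

                    /* vec_all_eq() is true only if all 16 lanes match. */
                    if (!vec_all_eq(v1, v2))
                            return memcmp(s1 + i, s2 + i, 16);
                    i += 16;
            }

            /* Handle tail bytes when n is not a multiple of 16. */
            return memcmp(s1 + i, s2 + i, n - i);
    }
    ------
    The 4K threshold in the kernel presumably amortizes the fixed cost of
    saving and restoring the vector unit state around the comparison.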
    
    Test results with the following test program:
    ------
    # cat tools/testing/selftests/powerpc/stringloops/memcmp.c
    #include <malloc.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include "utils.h"
    #define SIZE (1024 * 1024 * 900)
    #define ITERATIONS 40
    
    int test_memcmp(const void *s1, const void *s2, size_t n);
    
    static int testcase(void)
    {
            char *s1;
            char *s2;
            unsigned long i;
    
            s1 = memalign(128, SIZE);
            if (!s1) {
                    perror("memalign");
                    exit(1);
            }
    
            s2 = memalign(128, SIZE);
            if (!s2) {
                    perror("memalign");
                    exit(1);
            }
    
            /* Fill both buffers with identical data so every compare
             * must return zero. */
            for (i = 0; i < SIZE; i++) {
                    s1[i] = i & 0xff;
                    s2[i] = i & 0xff;
            }

            for (i = 0; i < ITERATIONS; i++) {
                    int ret = test_memcmp(s1, s2, SIZE);

                    if (ret) {
                            printf("return %d at [%lu]! should have returned zero\n", ret, i);
                            abort();
                    }
            }

            free(s1);
            free(s2);
            return 0;
    }
    
    int main(void)
    {
            return test_harness(testcase, "memcmp");
    }
    ------
    Without this patch (but with the first patch in the series,
    "powerpc/64: Align bytes before fall back to .Lshort in powerpc64
    memcmp()."):
    	4.726728762 seconds time elapsed                                          ( +-  3.54%)
    With the VMX patch:
    	4.234335473 seconds time elapsed                                          ( +-  2.63%)
    That is roughly a 10% improvement.
    
    Testing an unaligned version with differing offsets (shifting s1 and
    s2 by independent random offsets within 16 bytes) achieves an even
    larger improvement than 10% (a sketch of such a variant follows).
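
    A minimal sketch of such a variant, reusing the harness above
    (testcase_unaligned and the refill loop are illustrative, not the
    code behind the quoted numbers):
    ------
    /*
     * Shift each buffer by an independent random offset within 16 bytes,
     * refill so the shifted regions still hold identical data, then
     * compare. Relies on the includes and SIZE from the program above.
     */
    static int testcase_unaligned(char *s1, char *s2)
    {
            size_t off1 = rand() % 16;
            size_t off2 = rand() % 16;
            char *p1 = s1 + off1;
            char *p2 = s2 + off2;
            size_t j;

            for (j = 0; j < SIZE - 16; j++) {
                    p1[j] = j & 0xff;
                    p2[j] = j & 0xff;
            }

            return test_memcmp(p1, p2, SIZE - 16);
    }
    ------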
    Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>