cloning website keeps __before_traverse__ from original website
@kazuhiko , since @vpelletier told me you knew websites :)
Each time a website is cloned, it keeps all the before-traverse hooks from the original website. We can see, for example, that this one has 8 before-traverse hooks.
This seems harmless, but it has a small yet measurable impact on performance.
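To illustrate the accumulation, here is a minimal, self-contained simulation of the pattern. The Site class, its register_hook/clone methods, and the (priority, name) key shape are hypothetical stand-ins for illustration only, not the real ERP5/Zope code; they only mimic the way __before_traverse__ is copied along with everything else when an object is cloned:

```python
import copy

class Site:
    """Hypothetical stand-in for a website object (illustration only)."""
    def __init__(self):
        # mimic Zope's before-traverse registry: a dict keyed by (priority, name)
        self.__before_traverse__ = {}

    def register_hook(self, name, hook, priority=99):
        self.__before_traverse__[(priority, name)] = hook

    def clone(self):
        # a naive clone copies every attribute, including the
        # accumulated hooks -- this is the reported problem
        return copy.deepcopy(self)

site = Site()
site.register_hook('site_0', lambda request: None)
for i in range(1, 10):
    site = site.clone()
    # each clone registers its own hook under a new name,
    # but the hooks inherited from the clone source are never removed
    site.register_hook('site_%d' % i, lambda request: None)

print(len(site.__before_traverse__))  # the 10th clone carries 10 hooks
```

Each traversal then has to walk all of these hooks, which is where the slowdown below comes from.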
If I run ab -n 1000 against a website with 1 before-traverse hook, I get:
Requests per second: 74.15 [#/sec] (mean)
Time per request: 13.485 [ms] (mean)
After cloning the website 10 times, the last clone has 10 before-traverse hooks, and the ab timing becomes:
Requests per second: 68.14 [#/sec] (mean)
Time per request: 14.676 [ms] (mean)
If I clone 100 times, giving a website with 100 before-traverse hooks, even though this becomes unrealistic, the result is:
Requests per second: 39.64 [#/sec] (mean)
Time per request: 25.227 [ms] (mean)
So I guess it would be better to also clean up the stale hooks on existing websites. It is especially this part of the approach that I am not sure about: I integrated it into the existing code in WebSite._edit,
but shouldn't this rather be done in checkConsistency / fixConsistency?
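As a sketch of what such a cleanup could look like, here is a minimal, self-contained version. The cleanup_hooks function, the own_id parameter, and the dict literal are hypothetical and for illustration only; real code would go through the proper Zope before-traverse registration API rather than touching the dict directly, and could be called from WebSite._edit or from fixConsistency:

```python
# a website cloned 3 times ends up with 4 hooks; only its own should stay
hooks = {
    (99, 'site_0'): 'hook for original',
    (99, 'site_1'): 'hook for clone 1',
    (99, 'site_2'): 'hook for clone 2',
    (99, 'site_3'): 'hook for clone 3',
}

def cleanup_hooks(before_traverse, own_id):
    # hypothetical cleanup: keep only hooks registered under this
    # website's own id; everything else was inherited from the clone
    # source and is stale
    stale = [key for key in before_traverse if key[1] != own_id]
    for key in stale:
        del before_traverse[key]
    return len(stale)

removed = cleanup_hooks(hooks, 'site_3')
print(removed, sorted(hooks))  # 3 stale hooks removed, only our own remains
```

Doing this in fixConsistency would have the advantage that already-cloned websites can be repaired in place, while checkConsistency could report the stale hooks without touching them.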