I came here from the old Google group, so I'd like to post my question here.
Basically my task seems easy: I want to simulate 100-200 concurrent/parallel browsing sessions. Currently I'm using mitmdump 0.18.2 with the -nc switches and a dump file. I've noticed that if I run multiple mitmdump processes in parallel, they consume all of the RAM (4 GB) quite quickly. As I've read in several threads, this is probably because with this method all of the flows get loaded into memory 100-200 times, once per process. With version 2 it also consumed the swap. With version 3.0 RC2 memory usage dropped by roughly 1 GB, so I can run more processes, but this still isn't a real solution.
Is it possible to drive the replay from an inline script, e.g. by multiplying each request as needed from a single process? I've also read about the @concurrent decorator, but I still have no idea how that would reduce memory usage.
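Conceptually, what I'm after is something like the plain-Python sketch below (hypothetical names, not mitmproxy's actual API): load the dump once, keep a single copy of the flows in memory, and fan the replays out to many concurrent workers instead of starting 100-200 separate mitmdump processes that each load the whole file.

```python
import concurrent.futures

def load_flows():
    # Placeholder for reading the dump file ONCE; each "flow" here
    # is just a stand-in dict, not a real mitmproxy flow object.
    return [{"url": f"http://example.test/page/{i}"} for i in range(5)]

def replay(flow):
    # In a real script this would re-issue the recorded request.
    return flow["url"]

flows = load_flows()   # single shared copy of the dump in memory
N_CLIENTS = 100        # desired number of parallel "browsers"

# Each worker replays references to the SAME flow objects, so memory
# stays roughly constant regardless of N_CLIENTS.
with concurrent.futures.ThreadPoolExecutor(max_workers=N_CLIENTS) as pool:
    results = list(pool.map(replay, flows * N_CLIENTS))

print(len(results))
```

The key point is that `flows * N_CLIENTS` only multiplies references, not the flow data itself, so the dump is held in memory once no matter how many concurrent replays run.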