Mitmdump - large memory consumption

I’m running mitmdump in upstream mode and use it to block specific requests. After starting mitmdump and browsing several websites, mitmdump consumes over 1.5G of memory. I don’t see that it is leaking (I restart it frequently). I’m curious if there are settings or a custom script I could use to lower the memory usage. I have also seen that it frequently consumes a lot of CPU. Given that the bulk of the traffic is just being forwarded to an upstream proxy (and some of it dropped), I’d expect this to be a very lightweight setup.

The command line is: /usr/bin/python /usr/local/bin/mitmdump -v -U -s lib/

The script simply uses the request hook to check URL patterns and decide whether the request should be dropped:

def request(context, flow):
    # some logic to decide if the request should be dropped …
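For reference, a minimal standalone sketch of that kind of URL-pattern check. The pattern list and the should_drop helper are hypothetical stand-ins for the poster's elided logic, and the actual drop mechanism depends on your mitmproxy version's script API, so it is only indicated in a comment:

```python
import re

# Hypothetical block list -- substitute your own URL patterns.
BLOCK_PATTERNS = [re.compile(p) for p in (r"doubleclick\.net", r"/ads/")]

def should_drop(url):
    """Return True if the URL matches any blocked pattern."""
    return any(p.search(url) for p in BLOCK_PATTERNS)

# In the inline script, the request hook would call this, e.g.:
# def request(context, flow):
#     if should_drop(flow.request.url):
#         ...  # drop/kill the flow per your mitmproxy version's API

print(should_drop("http://ad.doubleclick.net/pixel"))   # True
print(should_drop("http://example.com/index.html"))     # False
```

Keeping the patterns pre-compiled at module load (rather than compiling per request) keeps the per-request cost down when traffic volume is high.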

I’m on the current release (0.17.1) and have manually applied the patch outlined here to get the upstream proxy authentication working. I’m running on Ubuntu 14.04.5 LTS.

Output from top:

KiB Mem:  4046856 total,  2008332 used,  2038524 free,   22448 buffers
KiB Swap:       0 total,        0 used,        0 free.  294620 cached Mem

  PID USER     PR NI    VIRT    RES   SHR S  %CPU %MEM   TIME+ COMMAND
14808 ubuntu   20  0 2027404 758640  6592 S 124.4 18.7 2:25.57 mitmdump

The top output doesn’t display well here, but mitmdump is using about 2G of virtual memory and 124% CPU. I’ve thought about testing the tip of the tree from source, but I’m a bit afraid of introducing new issues. Any suggestions would be appreciated.

@cortesi recently fixed a couple of memory leaks in mitmdump, which should fix the issue you are running into here. We are very close to shipping the next release, so I would recommend that you try out master and see how that works. 🙂
On master, you can also send SIGUSR1/SIGUSR2 to mitmproxy to get a bunch of useful debug info.
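As a generic illustration of that signal-driven debug-dump pattern (this is not mitmproxy's actual handler, and the dump path is hypothetical), Python's stdlib faulthandler can write every thread's stack trace to a file when a signal arrives:

```python
import faulthandler
import os
import signal

# Hypothetical dump path; mitmproxy's own handler emits its own debug info.
DUMP_PATH = "/tmp/mitm_debug_dump.txt"

with open(DUMP_PATH, "w") as dump:
    # On SIGUSR1, dump the stack trace of all threads to the file.
    faulthandler.register(signal.SIGUSR1, file=dump, all_threads=True)
    os.kill(os.getpid(), signal.SIGUSR1)  # same effect as: kill -USR1 <pid>
    faulthandler.unregister(signal.SIGUSR1)

print(open(DUMP_PATH).read())  # contains a "(most recent call first):" stack dump
```

From a shell you would trigger the equivalent against a live process with kill -USR1 and the process ID; stack dumps like this make it much easier to see where a process is burning CPU or holding memory.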