
Re: Little Snitch memory leak

I can confirm this problem as well. It may have to do with pending network connections that have not yet been confirmed or denied by the user. For example, suppose one is logged into Mountain Lion several times over VNC (multiple sessions). Each user may then be prompted to allow or deny a particular request, and that prompt might go unnoticed for a while. Connection attempts that occur while user input is pending could accumulate, and in order to guarantee that LS asks the user about each one individually, it may need to buffer them all in memory.
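To make that concrete, here is a rough C sketch of the kind of pending-request buffer I am picturing, with the hard cap and timeout that I suspect are missing. Everything here (the names, the cap, the auto-deny policy) is my own guess for illustration, not anything from the actual Little Snitch source:

```c
#include <stdlib.h>
#include <time.h>

/* Hypothetical pending-request record; the real LS structures are unknown. */
typedef struct {
    int    request_id;
    time_t created_at;
} PendingRequest;

#define MAX_PENDING     256   /* hard cap on buffered prompts */
#define PENDING_TTL_SEC 60    /* auto-deny after this timeout */

static PendingRequest queue[MAX_PENDING];
static size_t pending_count = 0;

/* Drop (auto-deny) requests that have waited too long, so unanswered
 * prompts can never accumulate without bound. */
static void expire_stale(time_t now) {
    size_t kept = 0;
    for (size_t i = 0; i < pending_count; i++) {
        if (now - queue[i].created_at < PENDING_TTL_SEC)
            queue[kept++] = queue[i];
        /* else: treat the request as denied and release its state */
    }
    pending_count = kept;
}

/* Returns 0 on success, -1 if the cap forces an immediate deny. */
int enqueue_request(int request_id) {
    time_t now = time(NULL);
    expire_stale(now);
    if (pending_count >= MAX_PENDING)
        return -1;  /* bounded: deny rather than buffer forever */
    queue[pending_count].request_id = request_id;
    queue[pending_count].created_at = now;
    pending_count++;
    return 0;
}
```

With a scheme like this, an unattended VNC session can hold at most MAX_PENDING small records for PENDING_TTL_SEC each, rather than an unbounded backlog.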

That used to be my explanation. However, I've since set the preferences to automatically time out when there is no confirm/deny feedback within the expected interval, so in principle that buildup of queued requests can't happen. Furthermore, I've individually checked the few sessions I'm logged into, and I did not see any open dialog boxes that would suggest a stale connection attempt. And yes, I can confirm over 3 GB of memory usage for one LS user process, 2 GB for another, and 0.5 GB for yet another. That is a huge amount of memory! Maybe there's a way to break down the memory usage, or LS could report where it is allocating its memory, so that if this happens again in this or a future release we can see the real cause. But I can say that this high memory usage is consistent for me (it happens all the time), and I'm using 3.1.1.

Suffice it to say, it's a real problem that I continue to face. A good way for the development team to reproduce it would be to have more than one user logged on simultaneously and generate a variety of connection attempts; some kind of UI breakdown of memory usage might then clarify the cause. It may not be a memory leak per se: LS may simply not be putting reasonable bounds on a certain set of allocations, waiting for some future event before it frees that memory. But the resulting memory usage is at least one order of magnitude too high, and I'd say even two. Compromises should be made before allocating this much memory. And if a leak is hard to avoid because the code is complex, or because kernel extensions and daemons face development or transition challenges, then the daemon(s) should simply restart themselves every once in a while, at the very least when they notice how much memory they're using! Whatever state is deemed worth 3 GB of storage just doesn't make for a valuable user experience. A sketch of what I mean by a self-restart follows.
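By "restart themselves" I mean something as simple as this: the daemon periodically checks its own resident size via the Mach task_info call and exits once it crosses a threshold, relying on launchd to relaunch it (assuming a KeepAlive job definition). The 512 MB limit and the function names are purely my illustrative assumptions; MACH_TASK_BASIC_INFO is available from OS X 10.8 on, as far as I know:

```c
#include <mach/mach.h>
#include <stdint.h>
#include <stdlib.h>

/* Threshold is a guess; tune it for the process in question. */
#define RSS_LIMIT_BYTES ((uint64_t)512 * 1024 * 1024)  /* 512 MB */

/* Read our own resident set size via Mach; returns 0 on failure. */
static uint64_t current_rss(void) {
    struct mach_task_basic_info info;
    mach_msg_type_number_t count = MACH_TASK_BASIC_INFO_COUNT;
    if (task_info(mach_task_self(), MACH_TASK_BASIC_INFO,
                  (task_info_t)&info, &count) != KERN_SUCCESS)
        return 0;
    return info.resident_size;
}

/* Call periodically (e.g., from a timer). Exiting lets launchd
 * relaunch the daemon with a clean slate instead of letting it
 * hold gigabytes of stale state indefinitely. */
void memory_watchdog_tick(void) {
    if (current_rss() > RSS_LIMIT_BYTES)
        exit(EXIT_FAILURE);  /* launchd KeepAlive restarts us */
}
```

A restart like this obviously loses whatever pending state the daemon held, but as I said above, whatever state costs gigabytes to keep is almost certainly not worth keeping.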
