An archive of community.esquilo.io as of Saturday January 26, 2019.

Memory use by HTTPC instances

robertjensen

I have one Esquilo Air sending alerts to another. The nut that sends the alerts is shown below. If I throttle the rate at which alerts are sent, the memory use remains modest. If I allow the alerts to be sent every second, the memory use continues to grow for several seconds until the indicator at the lower right of the IDE is more than half full. When I remove the source of the alerts, the memory use will slowly drop back to a very low level.

If I set the alert interval to 750 ms, failures eventually occur once the memory use becomes excessive.

The alerts are being received correctly, so I don't believe that HTTPC timeouts are the problem. I tried setting the timeout to 500 ms, but that did not change the behavior.

These observations make me think that HTTPC instances are being generated faster than garbage collection is disposing of the out-of-scope instances. Is that the explanation? If so, is there a way around it if, for instance, I really need to send alerts at one-second intervals and have other memory demands? I can simply throttle the rate for this application, but I'm curious. I tried to create an instance of the HTTPC class as a global and make ERPC calls within the Checker function, but that didn't work. Perhaps I did it wrong.

In the code below, the commented-out line that updates the variable "last" can be enabled to throttle the alert rate to once every 4 seconds, or left out so the alerts are not throttled at all. This is a very small nut to be using up a lot of memory when the source of the alerts is constant.

require("HTTPC");
require("Timer");
require("system");
require("GPIO");

motion <- GPIO(2);           // motion-detector input
motion.pulldown(true);

failed <- 0;                 // count of failed alerts
last <- 0;                   // time of the last alert, used for throttling

function Checker(){
    if(motion.ishigh() && time()>last+4){        // something that we are watching for
//        last = time();
        try{
            HTTPC("192.168.1.167").erpc("Alert");
            print("Alert sent\n");
        }
        catch(err){
            failed += 1;
            print("Alert failed "+failed+"\n");
        }
    }
}

checkTimer <- Timer(Checker);
checkTimer.interval(1000);   // run Checker every 1000 ms

softwarejanitor

This is just a guess... I'm not familiar with the internal architecture of Squirrel or the Esquilo implementation, but from my experience with other dynamic languages, it sounds like you are outrunning the garbage collection routines. In many language VM implementations, garbage collection runs on a schedule, and when the system is busy it may even reduce the frequency of garbage collection runs to gain some performance. So, figuratively speaking, when activity is high you may be creating garbage faster than it can be taken out.

I don't know whether it is possible to make the Esquilo perform garbage collection more frequently. Another thing to look at might be re-using your HTTPC object instead of tearing it down and building a new one each time your checker is called. That might reduce the amount of memory that is being used and then freed, and it might run faster too. I can't imagine that creating an HTTPC object is a low-overhead operation.
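
Something along these lines is roughly what I mean by re-use (just a sketch, untested; it assumes a single HTTPC instance stays valid across erpc() calls, and the host address is copied from your nut). Squirrel's base library also has a collectgarbage() function, but I don't know whether Esquilo exposes it.

require("HTTPC");
require("Timer");
require("GPIO");

motion <- GPIO(2);
motion.pulldown(true);

// Build the client once at load time instead of constructing a new
// HTTPC object on every alert.
client <- HTTPC("192.168.1.167");

function Checker(){
    if(motion.ishigh()){
        try{
            client.erpc("Alert");     // reuse the same instance each time
            print("Alert sent\n");
        }
        catch(err){
            print("Alert failed\n");
        }
    }
}

checkTimer <- Timer(Checker);
checkTimer.interval(1000);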

robertjensen

Thanks for the response. I can declare the HTTPC object outside of the service function (as suggested) and make the erpc calls within the function. Re-using the object works fine until the erpc call fails for any reason. Once that happens, another attempt to re-use the object hangs the system. I can keep things running by redefining the HTTPC object whenever the erpc call fails. This works but seems a little messy.
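
The messy-but-working version looks roughly like this (a sketch only; the host address is from my nut, and sendAlert is just an illustrative name):

require("HTTPC");

alertHost <- "192.168.1.167";
client <- HTTPC(alertHost);

function sendAlert(){
    try{
        client.erpc("Alert");
        print("Alert sent\n");
    }
    catch(err){
        // A failed call seems to leave the object unusable, so discard it
        // and build a fresh one; the old instance is left for the GC.
        client = HTTPC(alertHost);
        print("Alert failed, new HTTPC object created\n");
    }
}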

I wonder if there is some way of resetting the HTTPC object instead of discarding it and declaring a new one.

For my current purposes, I can throttle the intervals of making the erpc calls and avoid filling memory with garbage. As you know, it's always a little bit unsatisfying to simply find something that works when you suspect that there might be a more elegant and general solution.

softwarejanitor

If redefining the HTTPC object after an error works then I'd probably figure out the cleanest way of implementing that for now unless someone else has a better suggestion.