I want to add a few bits of information here. The problem has only one direct cause: not enough memory for new session allocation. The real question is what consumes the memory, and obviously the answer is different in each situation. Let me say a few words about my setup first:

I am hosting on Virtuozzo-based VPSes, and there you can see the problem more clearly. What is the difference? On VMware you have something resembling a real machine, with RAM and a swap file; they might not correspond exactly to the real thing, but effectively VMware (and any hardware-virtualization-based solution) can be treated as a real machine. With Virtuozzo you get only RAM: part of it may actually live in the host's page file, but from inside the VPS you cannot really see the difference. So I am using VPSes with as little memory as 286 MB, and that is all the memory I can use. On VMware with memory set to 286 MB you may also have some additional swap space, so your usable memory would be more (how much more depends on the paging file size, of course).

On a 286 MB machine I can host some 15-20 applications with a fair amount of visitors. I am using my own framework and components, including my own database engine. I am sure no memory leaks occur (after years and years of testing), but 286 MB of memory in total is very thin ice. You will get "new session failed" from time to time; the question is how to avoid being stuck with it until a restart (of IIS or the machine). The answer is adjusting the memory limits of the COM+ applications, and maybe some other recycling options. Make sure the limits are such that a worker process will be recycled when things get ugly, and keep in mind the total amount of memory used by all the COM+ applications you are running. If the sum of the memory limits is greater than the usable memory, you can end up in a situation where no single worker process is over its limit, but together they occupy all the available memory, and none will recycle to free some precious megabytes.
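The sum-of-limits check above can be sketched as a quick calculation. This is just an illustration: the pool limits, the 286 MB figure and the 32 MB headroom are example numbers, not recommendations, and `limits_are_safe` is a hypothetical helper name.

```python
# Sketch: verify that the combined per-pool memory limits leave headroom,
# so at least one worker process will always hit its limit and recycle
# before the whole box runs out of memory.

def limits_are_safe(pool_limits_mb, usable_ram_mb, headroom_mb=32):
    """Return True if the sum of per-pool limits plus headroom fits in RAM."""
    return sum(pool_limits_mb) + headroom_mb <= usable_ram_mb

# 286 MB VPS, three worker processes:
print(limits_are_safe([96, 96, 96], 286))  # False: 288 MB of limits alone exceeds RAM
print(limits_are_safe([80, 80, 80], 286))  # True: 240 MB + 32 MB headroom fits
```

The point is simply that the limits must be chosen together, not one pool at a time.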

Another thing to check is the ASP caching options: how many files are cached in memory and how many script engines are cached (this also happens in memory). The defaults are quite big, and the effect resembles a memory leak, especially if you have many ASP files and many sites (the caching is configured at a global level, but is done in the worker processes).
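As a sketch of where these knobs live: on IIS 7 and later the ASP caches are configured in the `system.webServer/asp` section and can be lowered with `appcmd`; on IIS 6 the same settings are the `AspScriptFileCacheSize` and `AspScriptEngineCacheMax` metabase properties, settable with `adsutil.vbs`. The values below are illustrative assumptions for a small server, not recommendations.

```shell
REM IIS 7+: shrink the ASP script file cache and script engine cache server-wide
%windir%\system32\inetsrv\appcmd.exe set config /section:asp /cache.scriptFileCacheSize:100 /cache.scriptEngineCacheMax:50

REM IIS 6 equivalent via the metabase (run from the AdminScripts folder)
cscript adsutil.vbs set W3SVC/AspScriptFileCacheSize 100
cscript adsutil.vbs set W3SVC/AspScriptEngineCacheMax 50
```

Lowering these trades some CPU (recompiling templates) for memory, which is usually the right trade on a 286 MB box.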

So, by fine-tuning the COM+ applications and the cache options you can get your server running, if not absolutely smoothly, then at least in a way that guarantees it will automatically recover, with possibly only a few requests per day (out of, say, a hundred thousand) returning "new session failed". If you keep the number of COM+ applications low (2-3 at most), you can keep track of them and choose appropriate memory limits, perhaps set up time-interval recycling for the applications that are not sensitive to session losses, and so on.
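On IIS 7 and later, both kinds of recycling described above map onto application pool settings and can be set with `appcmd`. The pool name and the numbers here are illustrative assumptions; the private memory limit is given in kilobytes.

```shell
REM Recycle the worker process when its private bytes exceed ~90 MB (value in KB)
%windir%\system32\inetsrv\appcmd.exe set apppool "AspClassicPool" /recycling.periodicRestart.privateMemory:92160

REM For applications that can tolerate session loss, also recycle on a fixed schedule
%windir%\system32\inetsrv\appcmd.exe set apppool "AspClassicPool" /recycling.periodicRestart.time:06:00:00
```

The memory-based limit is the safety net; the scheduled recycle just keeps long-running pools from drifting upward between incidents.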

There is one other thing to consider: .NET applications on the same server. Without attention they can kill everything else and grab all the server resources. The problem is garbage collection: an ASP.NET (or any .NET) application will not release its memory just because some classic ASP or PHP application needs it; it will keep the memory occupied even when it is not using it. Eventually it will free it, but it may keep it for hours, just so. The solutions: avoid mixing .NET and non-.NET apps on the same server; if you cannot avoid it, put the .NET apps in separate COM+ pools and set strict memory limits, as low as possible.
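A minimal sketch of the isolation step on IIS 7+, assuming hypothetical names (`DotNetPool`, `Default Web Site/mydotnetapp`) and an illustrative ~64 MB cap (in KB):

```shell
REM Give the .NET apps their own pool with a tight private-memory cap,
REM so the GC cannot hold the whole server's memory hostage
%windir%\system32\inetsrv\appcmd.exe add apppool /name:"DotNetPool"
%windir%\system32\inetsrv\appcmd.exe set apppool "DotNetPool" /recycling.periodicRestart.privateMemory:65536
%windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/mydotnetapp" /applicationPool:"DotNetPool"
```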

Well, I hope this is helpful. I have a lot of experience managing servers with little memory available. It is not very difficult, but one needs to pay attention and not hope the problem will go away with a patch or some other magical solution.
