RTOSs aren't always built to guarantee proper resource management;
they're built to guarantee bounded response times for certain events or
operations. As long as programs free the resources they allocate
before termination, there are generally no big resource management
problems besides memory fragmentation issues (which are worked around
in several ways).
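One common workaround, for instance, is the fixed-size block pool: every allocation is the same size, so freeing can never fragment the heap, and both allocate and free run in constant time. A minimal sketch in C (the sizes and names here are illustrative, not taken from any particular RTOS):

    #include <stddef.h>

    #define BLOCK_SIZE  64   /* illustrative: every allocation is one block */
    #define BLOCK_COUNT 32

    static unsigned char pool[BLOCK_COUNT][BLOCK_SIZE];
    static void *free_list[BLOCK_COUNT];
    static size_t free_top;

    /* Call once at startup: push every block onto the free stack. */
    void pool_init(void)
    {
        for (free_top = 0; free_top < BLOCK_COUNT; free_top++)
            free_list[free_top] = pool[free_top];
    }

    /* O(1) allocate: pop a block, or NULL if the pool is exhausted. */
    void *pool_alloc(void)
    {
        return free_top ? free_list[--free_top] : NULL;
    }

    /* O(1) free: push the block back; no coalescing, so no fragmentation. */
    void pool_free(void *p)
    {
        free_list[free_top++] = p;
    }

A real RTOS would guard these with interrupt masking or a lock; the point is only that both operations have a known, bounded cost.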
An example of a significant flaw in an RTOS would be failing to service
a timer interrupt within its specified time bound, causing too much
radiation to be released into a cancer patient's body.
That would certainly be bad: being a little late in turning off the
radiation is not good. It could be worse, though. Failing completely at
that point because the system has run out of resources such as memory
could be literally fatal. Of course any sane system would have secondary
safeguards and checks against resources running low, but in terms of
ensuring that an action is performed successfully within a certain time,
guaranteeing the necessary resources is just as vital as scheduling,
perhaps even more so.
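Concretely, that means anything the deadline-critical action needs should be claimed before the deadline can ever arise, not allocated on the critical path. A sketch of the idea in C (the radiation-shutdown names are made up for illustration):

    #include <stdlib.h>

    /* Everything the emergency path will ever need, reserved at init.
       If this fails we refuse to start treatment at all, rather than
       discovering the shortage when the timer fires. */
    static struct shutdown_ctx {
        char log_buf[256];
        /* other pre-reserved state */
    } *emergency;

    int init_treatment(void)
    {
        emergency = malloc(sizeof *emergency);
        return emergency ? 0 : -1;   /* fail early, before any deadline exists */
    }

    /* Deadline handler: no allocation, no unbounded work. */
    void radiation_off_handler(void)
    {
        emergency->log_buf[0] = '\0';   /* uses only pre-reserved memory */
        /* ... drive the hardware off ... */
    }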
Note that RTOS code to handle programmer laziness or error is more code
to audit for RT issues, and an increase in footprint and overall
system complexity.
Not really: it is typically not difficult for the system to reclaim memory
from a terminated application, and doing so can remove potentially much
larger complexity from the application. It also makes the overall system
MUCH easier to validate for long-term stability.
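The usual way to make that cheap is for the allocator to tag every block with its owning task, so that reclamation on termination is a simple list walk. A sketch, assuming a hypothetical kernel allocator (k_alloc/k_reclaim are made-up names):

    #include <stdlib.h>

    #define MAX_TASKS 8

    struct alloc_hdr {
        struct alloc_hdr *next;   /* per-task list of live allocations */
        int owner;                /* task id, recorded at allocation time */
    };

    static struct alloc_hdr *live[MAX_TASKS];

    void *k_alloc(int task, size_t n)
    {
        struct alloc_hdr *h = malloc(sizeof *h + n);
        if (!h) return NULL;
        h->owner = task;
        h->next = live[task];
        live[task] = h;
        return h + 1;   /* caller's memory starts after the header */
    }

    /* Called by the kernel when a task terminates.  (An explicit
       k_free would also unlink from the list; omitted here.) */
    void k_reclaim(int task)
    {
        struct alloc_hdr *h = live[task];
        while (h) {
            struct alloc_hdr *next = h->next;
            free(h);
            h = next;
        }
        live[task] = NULL;
    }

The bookkeeping is then audited once, in the kernel, instead of being re-implemented (and re-validated) in every application.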
As a footnote, a low-priority task used to reclaim memory may never
actually run in a highly loaded system, which is exactly when you're
most likely to need the extra memory!
In which case you've failed from an RT point of view anyway. Even if the
application has to release the resources itself, it still needs to spend
the CPU cycles to do it, and if those cycles are all in use then something
has been starved. RT principles can be applied to resource reclamation by
the OS just as much as to anything else. If the RT system has insufficient
CPU resources to meet the RT requirements of its components then it has
failed. It makes no difference where the memory freeing occurs; it still
has to happen somewhere.
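Concretely, reclamation can be given a CPU budget like any other RT activity, e.g. a fixed number of blocks freed per tick, so its worst-case cost per period is known and schedulable. A sketch (names are illustrative):

    #include <stdlib.h>

    #define RECLAIM_BUDGET 8   /* blocks freed per tick: worst case is known */

    struct block { struct block *next; /* payload follows */ };

    static struct block *pending;   /* blocks queued for deferred freeing */

    /* Runs from the periodic tick with a fixed budget, so reclamation
       is accounted for in the schedule like any other RT activity
       instead of being dumped on an application's critical path. */
    void reclaim_tick(void)
    {
        int n;
        for (n = 0; n < RECLAIM_BUDGET && pending != NULL; n++) {
            struct block *next = pending->next;
            free(pending);
            pending = next;
        }
    }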
No, it's a design tradeoff. Memory that is malloc-ed may still be in
use by different parts of the system. In many circumstances the OS
does not know who is using what, and keeps itself out of the way for
safety's sake.
I agree that shared memory is different. While malloc() could be used to
allocate shared memory in an unprotected environment, it makes more sense
to have a separate mechanism so that the OS can tell the difference. Good
design should keep shared objects small in number and preferably in size.
They should not be treated like normal "local" objects.
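For example, a dedicated shared-memory interface can carry a reference count, so the OS knows exactly when the last user is gone, which plain malloc() gives it no way to see. A sketch with hypothetical names:

    #include <stdlib.h>

    /* Distinct from malloc(): the OS sees every attach and detach,
       so it knows when a region is truly unused. */
    struct shm_region {
        int refs;
        size_t size;
        /* payload follows */
    };

    struct shm_region *shm_create(size_t size)
    {
        struct shm_region *r = malloc(sizeof *r + size);
        if (r) { r->refs = 1; r->size = size; }
        return r;
    }

    void shm_attach(struct shm_region *r) { r->refs++; }

    /* Only the last detach releases the memory; if a task dies while
       attached, the kernel just calls shm_detach on its behalf. */
    void shm_detach(struct shm_region *r)
    {
        if (--r->refs == 0)
            free(r);
    }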
When people write clean, conformant code that frees what it allocates,
porting a program to an RTOS can be easy. When that's not done, it can
be a real PITA.
But the flaw that example highlights is in the design of the RTOS. I can
understand this sort of thing for really small systems, like embedded
systems on 8-bit processors where you have to cut corners, but there is
nothing in RT technology itself that warrants it.
Lawrence