I don't think any of these approaches are immune to system crashes, but
should be good enough to prevent single processes, whether launched by a
user or automatically by the system, from running in more copies than are
wanted.
As I concluded at the end of my post.
I'd normally use a shell script or programmed equivalent to launch a more
complex set of processes. Its first action would be to assume a crash had
occurred and do a full clean-up: if the system had shut down normally the
clean-up would still run but would not find anything to do.
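Something like this minimal Python sketch is what I mean (untested; the
paths and the "myapp" command are placeholders for whatever the launcher
actually coordinates): unconditionally remove anything a crashed run might
have left behind, then start the real work.

    #!/usr/bin/env python3
    # Launcher sketch: assume a crash happened and clean up before starting.
    import pathlib
    import shutil
    import subprocess

    LOCKFILE = pathlib.Path("/var/run/myapp/myapp.pid")   # hypothetical path
    SCRATCH = pathlib.Path("/var/tmp/myapp-scratch")      # hypothetical path

    def clean_up():
        # If the last run shut down cleanly, neither of these exists and
        # this is a no-op; if it crashed, remove whatever it left behind.
        LOCKFILE.unlink(missing_ok=True)
        shutil.rmtree(SCRATCH, ignore_errors=True)

    def main():
        clean_up()
        SCRATCH.mkdir(parents=True, exist_ok=True)
        # "myapp" stands in for the real set of processes being launched.
        subprocess.run(["myapp", "--scratch", str(SCRATCH)], check=True)

    if __name__ == "__main__":
        main()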
That may make sense for a lot of systems.
Yes, that would work too: though it sounds as if the mere existence of a
small 'heartbeat' process could obviate the need for a lockfile:
Uh, something still has to point to the location of the heartbeat process.
Either that needs a fixed but configurable port, or else the lockfile (or
some sort of file, anyway) is needed (itself in a fixed location) to point
to the port-du-jour.
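Roughly like this, say (names and the /tmp path invented): the heartbeat
process grabs whatever ephemeral port the OS hands it and advertises it in
a file at an agreed, fixed location; a newcomer reads that file and checks
whether anything is actually listening there.

    import pathlib
    import socket

    # Fixed, agreed location (made-up name) pointing to the port-du-jour.
    PORTFILE = pathlib.Path("/tmp/myapp-heartbeat.port")

    def start_heartbeat():
        # Run by the heartbeat process: bind an ephemeral port and
        # advertise it in the port file.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", 0))       # 0 = let the OS pick the port
        srv.listen()
        PORTFILE.write_text(str(srv.getsockname()[1]))
        return srv

    def heartbeat_alive():
        # Run by a newcomer: is anything listening on the advertised port?
        try:
            port = int(PORTFILE.read_text())
            with socket.create_connection(("127.0.0.1", port), timeout=1):
                return True
        except (FileNotFoundError, ValueError, OSError):
            return False   # no file, garbage in it, or nobody listening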
if the heartbeat process is running and agrees that the limit for your
process type hasn't been reached, your process can start.
"The limit for your process type"? That doesn't sound like you're thinking
of an app that should, by default, run as a single instance that "absorbs"
any further instances the user triggers by e.g. double-clicking documents,
more for efficiency reasons or to avoid race conditions with its data files
than for
any other reason. It sounds more like you're thinking of a situation
involving enforcing policy against users for reasons that have nothing to
do with those users' own wishes, say to limit their resource consumption on
a work computer that isn't theirs.
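(For that first kind of app, the "absorbing" I mean looks roughly like this
Python sketch -- Unix-only, with a made-up fixed socket path: the first copy
listens, and any later copy just hands its document over and exits.)

    import os
    import socket
    import sys

    SOCKET_PATH = "/tmp/myapp-instance.sock"   # hypothetical fixed path

    def open_document(path):
        print("opening", path)                  # stand-in for the real work

    def main(doc):
        try:
            # Hand the document to an already-running instance, if any.
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as c:
                c.connect(SOCKET_PATH)
                c.sendall(doc.encode())
            return                               # absorbed; this copy exits
        except (FileNotFoundError, ConnectionRefusedError):
            pass                                 # no live instance; become it

        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            srv.bind(SOCKET_PATH)
        except OSError:                          # stale socket from a crash
            os.unlink(SOCKET_PATH)
            srv.bind(SOCKET_PATH)
        srv.listen()
        open_document(doc)
        while True:                              # absorb later invocations
            conn, _ = srv.accept()
            with conn:
                open_document(conn.recv(4096).decode())

    if __name__ == "__main__":
        main(sys.argv[1])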
Enforcing policy like that is a whole different kettle of fish, but it's a
kettle of fish best
handled at the OS level much of the time:
* The resource use at the guy's own cubicle box is his own business. If he
squanders it and then can't get his work done, he can be let go for poor
productivity or whatever.
* The resource use on shared machines, e.g. a network server supplying
services to a whole office block, can be managed by that machine's OS
having per-user quotas set up, if the users have accounts on it, or by
the server software enforcing quotas. The latter is similar to how a
publicly-exposed web service without authentication might prevent one
user from hogging too many resources -- per-connecting-IP resource quotas
past which it slows down or times out, intentional latency high enough to
limit the damage a rampaging bot client can do with rapid-fire sequential
requests but low enough not to be a nuisance to a human user with human
reaction times, etc. (a toy version of such a throttle is sketched just
below this list).
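Purely as illustration, with made-up numbers, that per-IP throttle could be
a little token bucket: requests past the quota get delayed rather than
refused, enough to starve rapid-fire requests but too little for a human
clicking at human speed to notice.

    import time
    from collections import defaultdict

    RATE = 5.0     # tokens replenished per second, per IP (made-up figure)
    BURST = 10.0   # bucket capacity (made-up figure)

    buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def throttle(ip):
        # Return how many seconds to wait before serving this request.
        b = buckets[ip]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
        b["last"] = now
        if b["tokens"] >= 1.0:
            b["tokens"] -= 1.0
            return 0.0                          # under quota: serve at once
        # Over quota: delay long enough to blunt rapid-fire requests, short
        # enough not to bother a human with human reaction times.
        return (1.0 - b["tokens"]) / RATE

    # Usage in a request handler: time.sleep(throttle(client_ip)) before serving.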
Some situations that are likely on the wild 'net, such as the rampaging
bot, are much less likely on the company LAN, of course. And if something
that egregious ever does occur, whoever's responsible can be fired.
In a workplace environment, with software customized for that particular
workplace, you can generally go much further in deciding things that users
should and should not do than with software intended for a general
audience, including people running it on their own hardware, on their own
time, and
paying their own utility bills.
Even then, it's often better to audit rather than strictly ration use, and
then hold employees accountable for unnecessary and excessive usage based
on the audit reports. Of all the different kinds of bureaucratic red tape
out there, the machine-enforced variety is easily the worst, because it's
typically *impossible* to circumvent without going through the proper
channels, even in the direst emergency with a deadline looming and, with
characteristically bad timing, the pointy-haired single point of failure
who holds the needed pad of permission slips home sick. In all
other situations, "contrition is easier than permission" should be a
possible approach, on pain of losing your job if your corner-cutting was
frivolous rather than done out of good-faith perceived necessity. Though it
certainly should not be the default.
(Of course, because machine-enforced red tape *is* so hard to circumvent,
the bureaucrats *really* love it...)
Another thing to note is that all of the possibly-limited computing
resources -- CPU, disk, memory, bandwidth -- are so cheap these days that a
company can easily afford to have internal servers with 10x or more
capacity than the likely peak load from normal employee use of its
services, such that it would take truly exceptional circumstances (demand
10x bigger than normal, deliberate bad faith, or a major malware
infestation) for it to be unable to meet demand. The result is that the
cost of enforcing quotas (or possibly even of auditing usage) could
actually exceed the benefit: the cost of enforcement has to factor in the
eventuality of someone legitimately needing more than their quota, with a
deadline, and the relevant permission slip being slow or difficult to
obtain; and the cost of both has to factor in the added system complexity
and accompanying bugs. Bugs in enforcement are quite likely to lock people
who are *under* quota out of the system, since half of all errors can be
expected to be in that direction.
The other limited resource is money, to pay for electricity and (external)
bandwidth whose use may go up. But with efficient hardware an employee
would have to cause very big jumps in server loads to cost noticeable
amounts of marginal hydro-bill dollars, and the firewall can work both ways
to make excessive use of external bandwidth unlikely. Business connections
tend to be non-metered anyway, so while congestion can be a problem,
overuse won't directly cost money. (It might indirectly do so, if
revenue-generating public-facing services are knocked out. Those should
probably be on a separate pipe from the one feeding the offices' internet
connectivity, routed differently enough that congesting one won't impair
the other. That's equally important in reverse, so that if the web server's
under a DDoS or unusually high legitimate demand it won't cripple the
office workers who need to deal with the problem by cutting them off from
email, Wikipedia, Google, et al.)