Adding a Par construct to Python?

J

jeremy

But that's exactly what you said you wanted people to be able to do:

"with my suggestion they could potentially get a massive speed up just by
changing 'for' to 'par' or 'map' to 'pmap'."

I am finding this conversation difficult because it seems to me you don't
have a consistent set of requirements.


How will 'par' be any different? It won't magically turn code with
deadlocks into bug-free code.


A compiler directive is just as clear about the programmer's intention as
a keyword. Possibly even more so.

#$ PARALLEL-LOOP
for x in seq:
    do(x)

Seems pretty obvious to me. (Not that I'm suggesting compiler directives
are a good solution to this problem.)


The problem is that "as simple as possible" is Not Very Simple. There's
no getting around the fact that concurrency is inherently complex. In
some special cases, you can keep it simple, e.g. parallel-map with a
function that has no side-effects. But in the general case, no, you can't
avoid dealing with the complexity, at least a little bit.
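The side-effect-free special case mentioned above can be made concrete with a thread pool (a sketch in modern Python; `pure` is just an illustrative function name):

```python
from concurrent.futures import ThreadPoolExecutor

def pure(x):
    # No side effects: the result depends only on the argument.
    return x * x + 1

data = [1, 2, 3, 4]
with ThreadPoolExecutor(max_workers=4) as pool:
    # Safe to run in parallel: iterations share no mutable state,
    # so execution order cannot change the result.
    results = list(pool.map(pure, data))
# results == [2, 5, 10, 17]
```

As soon as the mapped function touches shared mutable state, this simplicity evaporates and the locking question returns.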


It's *already* easy to explain. And having explained it, you still need
to do something about it. You can't just say "Oh well, I've had all the
pitfalls explained to me, so now I don't have to actually do anything
about avoiding those pitfalls". You still need to actually avoid them.
For example, you can choose one of four tactics:

(1) the loop construct deals with locking;

(2) the caller deals with locking;

(3) nobody deals with locking, therefore the code is buggy and risks
deadlocks; or

(4) the caller is responsible for making sure he never shares data while
looping over it.

I don't think I've missed any possibilities. You have to pick one of
those four.


So now you want a second keyword as well.

Hi Steven,
You wrote: "I am finding this conversation difficult because it seems to me you don't have a consistent set of requirements."

I think that my position has actually been consistent throughout this
discussion about what I would like to achieve. However, I have learned
more about the inner workings of Python than I knew before, which has
made it clear that it would be difficult to implement (in CPython at
least). And also I never intended to present this as a fait accompli -
the intention was to start a debate as we have been doing. You also
wrote
"So now you want a second keyword as well."

I actually described the 'sync' keyword in my second email before
anybody else contributed.

I *do* actually know a bit about concurrency and would never imply
that *any* for loop could be converted to a parallel one. The
intention of my remark "with my suggestion they could potentially get
a massive speed up just by changing 'for' to 'par' or 'map' to
'pmap'." is that it could be applied in the particular circumstances
where there are no dependencies between different iterations of the
loop.

Regarding your implementation strategies, mine would be related to
this one:
(4) the caller is responsible for making sure he never shares data while
looping over it.

However in my world there is no problem with threads sharing data as
long as they do not change the values. So the actual rule would be
something like:

5. The caller is responsible for making sure that one iteration of the
parallel loop never tries to write to a variable that another
iteration may read, unless separated by a 'sync' event.

This shows why the sync event is needed - to avoid race conditions on
shared variables. It is borrowed from the BSP paradigm - although that
is a distributed memory approach. Without the sync clause, rule 5 would
just be the standard way of defining a parallelisable loop.
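Rule 5 can be emulated in today's Python with a barrier: each "iteration" runs in its own thread, writes only to its own slot before the sync point, and only reads other iterations' results after it. This is a sketch; `threading.Barrier` stands in for the proposed (hypothetical) 'sync' keyword:

```python
import threading

N = 4
phase1 = [0] * N              # written before the sync, one slot per iteration
phase2 = [0] * N
sync = threading.Barrier(N)   # stands in for the proposed 'sync' keyword

def iteration(i):
    phase1[i] = i * 10        # each iteration writes only its own slot
    sync.wait()               # 'sync': all phase-1 writes are now complete
    # After the barrier it is safe to *read* what the others wrote.
    phase2[i] = sum(phase1)

threads = [threading.Thread(target=iteration, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# phase2 == [60, 60, 60, 60]
```

Without the barrier, a thread could read phase1 while another was still writing it, which is exactly the race rule 5 forbids.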

Jeremy

P.S. I have a couple of additional embellishments to share at this
stage:

1. Parallel slackness - this is a concept where one creates many more
threads than there are available cores to cover up for a load
imbalance. So you might use this in a case where the iterations you
wish to parallelise take a variable amount of time. I think my par
concept needs a mechanism for the user to specify how many threads to
create - overriding any decision made by Python, e.g.
par 100 i in list:  # Create 100 threads no matter how many cores are available.

2. Scope of the 'sync' command. It was pointed out to me by a
colleague that I need to define what happens with sync when there are
nested par loops. I think that it should be defined to apply to the
innermost par construct which encloses the statement.
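The parallel-slackness idea in embellishment 1 can be approximated today by oversubscribing a thread pool relative to the core count (a sketch; the 8x factor is an arbitrary illustration, not a recommendation):

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

def variable_task(i):
    # Simulate iterations that take varying amounts of time.
    time.sleep(0.01 * (i % 5))
    return i

cores = os.cpu_count() or 1
# "Parallel slackness": request far more workers than cores, so a few
# slow iterations don't leave the remaining cores idle.
with ThreadPoolExecutor(max_workers=cores * 8) as pool:
    results = list(pool.map(variable_task, range(20)))
# results == list(range(20)): map preserves input order.
```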
 
P

Paul Boddie

Can you explain how you can tell that there are no side-effects from
x*sqrt(x)+3 ? What if I have this?

class Funny(object):
    def __add__(self, other):
        global parrot
        parrot += 1
        return 5 + other

x = Funny()

Yes, in general you need whole-program analysis with Python to know if
there are any side-effects or not. That said, with a process forking
mechanism where modified "globals" are not global beyond each process,
you should be able to guard against side-effects more effectively.

Paul
 
I

Iain King

Let me clarify what I think par, pmap, pfilter and preduce would mean
and how they would be implemented.

[...]

Just for fun, I've implemented a parallel-map function, and done a couple
of tests. Comments, criticism and improvements welcome!

import threading
import Queue
import random
import time

def f(arg):  # Simulate a slow function.
    time.sleep(0.5)
    return 3*arg-2

class PMapThread(threading.Thread):
    def __init__(self, clients):
        super(PMapThread, self).__init__()
        self._clients = clients
    def run(self):
        # Pull work items until the queue is exhausted.
        while True:
            try:
                data = self._clients.get_nowait()
            except Queue.Empty:
                break
            target, where, func, arg = data
            result = func(arg)
            target[where] = result

class VerbosePMapThread(PMapThread):
    def __init__(self, clients):
        super(VerbosePMapThread, self).__init__(clients)
        print "Thread %s created at %s" % (self.getName(), time.ctime())
    def start(self):
        super(VerbosePMapThread, self).start()
        print "Thread %s starting at %s" % (self.getName(), time.ctime())
    def run(self):
        super(VerbosePMapThread, self).run()
        print "Thread %s finished at %s" % (self.getName(), time.ctime())

def pmap(func, seq, verbose=False, numthreads=4):
    size = len(seq)
    results = [None]*size
    if verbose:
        print "Initiating threads"
        thread = VerbosePMapThread
    else:
        thread = PMapThread
    datapool = Queue.Queue(size)
    for i in xrange(size):
        # One work item per element: (output list, index, function, argument).
        datapool.put( (results, i, func, seq[i]) )
    threads = [thread(datapool) for i in xrange(numthreads)]
    if verbose:
        print "All threads created."
    for t in threads:
        t.start()
    # Block until all threads are done.
    while any(t.isAlive() for t in threads):
        time.sleep(0.25)
        if verbose:
            print results
    return results

And here's the timing results:

20.490942001342773


I was going to write something like this, but you've beat me to it :)
Slightly different though; rather than have pmap collate everything
together then return it, have it yield results as and when it gets
them and stop iteration when it's done, and rename it to par to keep
the OP happy and you should get something like what he initially
requests (I think):

total = 0
for score in par(f, data):
    total += score
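The yield-as-ready behaviour Iain describes can be sketched with concurrent.futures; the name `par` here is just the OP's proposed spelling, not a real builtin:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def par(func, data, workers=4):
    # Yield each result as soon as it finishes, not in input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(func, x) for x in data]
        for fut in as_completed(futures):
            yield fut.result()

def f(x):
    return 3 * x - 2

total = 0
for score in par(f, [1, 2, 3, 4]):
    total += score
# total == 1 + 4 + 7 + 10 == 22, whatever order the results arrive in
```

Because addition is order-insensitive here, consuming results as they arrive is safe; an order-sensitive consumer would need the map-style variant instead.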


Iain
 
P

Paul Boddie

I was going to write something like this, but you've beat me to it :)
Slightly different though; rather than have pmap collate everything
together then return it, have it yield results as and when it gets
them and stop iteration when it's done, and rename it to par to keep
the OP happy and you should get something like what he initially
requests (I think):

total = 0
for score in par(f, data):
    total += score

It depends on whether you want the outputs to correspond to the inputs
in the resulting sequence. If pmap is supposed to behave like map, you
want the positions of each input and corresponding output to match. As
I wrote earlier, in pprocess you distinguish between these cases by
employing maps (for input/output correspondence) and queues (for
"first ready" behaviour).

I don't recall whether the Map class in pprocess blocks for all the
data to be returned, or whether you can consume any outputs that are
at the start of the output sequence (and then block until output
arrives at the next available position), but this would be a logical
enhancement.

Paul
 
L

Luis Zarrabeitia

Even if you decided to accept the penalty and add locking to
refcounts, you still have to be prepared for context switching at any
time when writing C code, which means in practice you have to lock any
object that's being accessed--that's in addition to the refcount lock.

While I agree that the GIL greatly simplifies things for the interpreter, I
don't understand this statement. In practice, you should lock all critical
sections if you expect your code to be used in a multithreading environment.
That can't be different from what Java, C# or any other languages do,
including C++. Why is that so expensive in python extensions, that it is used
as an argument against removing the GIL?
 
C

Carl Banks

While I agree that the GIL greatly simplifies things for the interpreter, I
don't understand this statement. In practice, you should lock all critical
sections if you expect your code to be used in a multithreading environment.
That can't be different from what Java, C# or any other languages do,
including C++. Why is that so expensive in python extensions, that it is used
as an argument against removing the GIL?

I wasn't really arguing that locking individual objects was a
significant penalty in computer time, only in programmer time. The
locks on reference counts are what's expensive.

Also, I'm not using it as an argument against removing the GIL. I
want to remove the GIL. I'm only pointing out that removing the GIL
is not easy, and once it's removed there is a cost.


Carl Banks
 
L

Luis Zarrabeitia

I wasn't really arguing that locking individual objects was a
significant penalty in computer time, only in programmer time.  The
locks on reference counts are what's expensive.

Also, I'm not using it as an argument against removing the GIL.  I
want to remove the GIL.  I'm only pointing out that removing the GIL
is not easy, and once it's removed there is a cost.

Ah, all right then. Thanks for the clarification.
 
A

Aahz

While I agree that the GIL greatly simplifies things for the
interpreter, I don't understand this statement. In practice, you should
lock all critical sections if you expect your code to be used in a
multithreading environment. That can't be different from what Java, C#
or any other languages do, including C++. Why is that so expensive in
python extensions, that it is used as an argument against removing the
GIL?

Python is intended to be simple/easy to integrate with random C
libraries. Therefore you have to write explicit code from the C side in
order to drop the GIL.
 
L

Luis Zarrabeitia

Python is intended to be simple/easy to integrate with random C
libraries.  Therefore you have to write explicit code from the C side in
order to drop the GIL.

Erm, that doesn't really answer the question. If there were no GIL, the C
code called from python would be just as unsafe as the C code called from C.
And if "not thread-safe, you take care of the locking" notices are enough for
the original libraries (and a lot of other languages, and even python
constructs), the C extension could always grab the locks.

There must be another reason (i.e., the refcounts) to argue _for_ the GIL,
because this one just seems to be just an attempt to fix unsafe code when
called from python. And that was my point. Refcounts + interpreter simplicity
seem to imply the need for a GIL, but to make unsafe code safe without fixing
said code (or even thinking about it) is a weird goal... specially if it
became the only reason for a GIL. After all, one could argue for that goal in
almost all languages.
 
C

Carl Banks

Erm, that doesn't really answer the question. If there were no GIL, the C
code called from python would be just as unsafe as the C code called from C.
And if "not thread-safe, you take care of the locking" notices are enough for
the original libraries (and a lot of other languages, and even python
constructs), the C extension could always grab the locks.

The designers of Python made a design decision(**) that extension
writers would not have to take care of locking. They could have made
a different decision, they just didn't.

There must be another reason (i.e., the refcounts) to argue _for_ the GIL,
Why?


because this one just seems to be just an attempt to fix unsafe code when
called from python.

I think you are being unfair in calling it unsafe.

Suppose I were to call PyList_Append from a C extension. It's not
necessary for me to guard the list I'm calling it on with a lock,
because only the GIL thread is allowed to call most Python API or
otherwise access objects. But you seem to be suggesting that since I
didn't guard the list with a lock it is "unsafe", even though the GIL
is sufficient?

No, I totally disagree. The code is not "unsafe" and the GIL doesn't
"fix" it. The code is jsut

And that was my point. Refcounts + interpreter simplicity
seem to imply the need for a GIL, but to make unsafe code safe without fixing
said code (or even thinking about it) is a weird goal...

Why? Do you seriously not see the benefit of simplifying the work of
extension writers and core maintainers? You don't have to agree that
it's a good trade-off but it's a perfectly reasonable goal.

I highly suspect Aahz here would argue for a GIL even without the
refcount issue, and even though I wouldn't agree, there's nothing
weird or unreasonable about the argument, it's just a different
viewpoint.

specially if it
became the only reason for a GIL. After all, one could argue for that goal in
almost all languages.

"B-B-B-But other languages do it that way!" is not a big factor in
language decisions in Python.


Carl Banks

(**) - To be fair, Python didn't originally support threads, so there
was a lot of code that would have become unsafe had threading been
added without the GIL, and that probably influenced the decision to
use a GIL, but I'm sure that wasn't the only reason. Would Python have a
GIL if threading had been there from the start? Who knows.
 
L

Luis Alberto Zarrabeitia Gomez

Quoting Carl Banks said:
The designers of Python made a design decision(**) that extension
writers would not have to take care of locking. They could have made
a different decision, they just didn't.

Well, then, maybe that's the only decision of Python's so far that I may not agree with.
And I'm not criticizing it... but I'm questioning it, because I honestly don't
understand it.

1- refcounts are a _very_ strong reason to argue for the GIL. Add to that
simplifying CPython implementation, and you don't really need another one on top
of that, at least not to convince me, but
2- in [almost] every other language, _you_ have to be aware of the critical
sections when multithreading. Even in pure python, you have to use locks. Don't you
find at least a bit odd the argument that in the particular case you are writing
a C extension from python, you should be relieved of the burden of locking?
I think you are being unfair in calling it unsafe.

I think I was unfair calling it a "fix". It sounds like it was broken, my bad. I
should have used 'not multithread-ready'.
Suppose I were to call PyList_Append from a C extension. It's not
necessary for me to guard the list I'm calling it on with a lock,
because only the GIL thread is allowed to call most Python API or
otherwise access objects. But you seem to be suggesting that since I
didn't guard the list with a lock it is "unsafe", even though the GIL
is sufficient?

Certainly not. If you program under the assumption that you have a GIL, of
course it is not unsafe. Not your code, anyway. But, why is PyList_Append not
multithread-ready? (Or rather, why would PyList_Append require a _global_
lock to be multithread-ready, instead of a more local lock?) Of course, given
that the GIL exists, to use it is the easier solution, but for this particular
case, it feels like discarding multithreading just to avoid locking.
No, I totally disagree. The code is not "unsafe" and the GIL doesn't
"fix" it. The code is jsut

[I think there was a word missing from that sentence...]
Why? Do you seriously not see the benefit of simplifying the work of
extension writers and core maintainers? You don't have to agree that
it's a good trade-off but it's a perfectly reasonable goal.

I do agree that, at least for the core maintainers, it is a good trade-off and a
reasonable goal to keep CPython simple. And if that has, as a positive side
effect for extension writers, that their code becomes easier, so be it. What I
don't agree - rather, what I don't understand, is why it is presented in the
opposite direction.
I highly suspect Aahz here would argue for a GIL even without the
refcount issue, and even though I wouldn't agree, there's nothing
weird or unreasonable about the argument, it's just a different
viewpoint.

I consider it weird, at least. If I were to say that python should not allow
multithreading to simplify the lives of pure-python programmers, I hope I would
be shot down and ignored. But somehow I must consider perfectly natural the idea
of not allowing[1] multithreading when building C extensions, to simplify the
lives of extension programmers.
"B-B-B-But other languages do it that way!" is not a big factor in
language decisions in Python.

That's not what I said. We are not talking about the _language_, but about one
very specific implementation detail. Not even that, I'm talking about one of the
reasons presented in favor of that specific implementation detail (while
agreeing with the others). The fact that the reason I'm having trouble with is
valid for almost any other language, and none of them have a GIL-like construct
(while still being successful, and not being exceptionally hard to build native
modules for) just suggests that _that_ particular reason for that particular
implementation detail is not a very strong one, even if all other reasons are.
(**) - To be fair, Python didn't originally support threads, so there
was a lot of code that would have become unsafe had threading been
added without the GIL, and that probably influenced the decision to
use a GIL, but I'm sure that wasn't the only reason. Would Python have a
GIL if threading had been there from the start? Who knows.

(This is a very good remark. Maybe here lies the whole answer to my question.
We may be dragging the heavy chain of backward compatibility with existing
extension modules, that is just too costly to break.)

Regards,

Luis

[1] Ok, I know that is not exactly what the GIL does.
 
C

Carl Banks

I don't have any reply to this post except for the following excerpts:

2- in [almost] every other language, _you_ have to be aware of the critical
sections when multithreading. [snip]
That's not what I said. We are not talking about the _language_, but about one
very specific implementation detail. Not even that, I'm talking about one of the
reasons presented in favor of that specific implementation detail (while
agreeing with the others). The fact that the reason I'm having trouble with is
valid for almost any other language, and none of them have a GIL-like construct
(while still being successful, and not being exceptionally hard to build native
modules for) just suggests that _that_ particular reason for that particular
implementation detail is not a very strong one, even if all other reasons are.

No other language has nesting by indentation (while still being
reasonably successful)....
etc

Comparisons to other languages are useless here. In many cases Python
does things differently from most other languages and usually it's
better off for it.

The fact that other languages do something differently doesn't mean
that other way's better, in fact it really doesn't mean anything at
all.


Carl Banks
 
P

Paul Rubin

Carl Banks said:
Why? Do you seriously not see the benefit of simplifying the work of
extension writers and core maintainers? You don't have to agree that
it's a good trade-off but it's a perfectly reasonable goal.

I highly suspect Aahz here would argue for a GIL even without the
refcount issue, and even though I wouldn't agree, there's nothing
weird or unreasonable about the argument, it's just a different
viewpoint.

How about only setting a lock when a "safe" user extension is called.
There could also be an "unsafe" extension interface with no lock. The
more important stdlib functions would be (carefully) written with the
unsafe interface. Ordinary python code would not need a lock.
 
L

Lie Ryan

Carl said:

Nobody likes the GIL, but it just has to be there or things start crumbling...

Nobody would actually argue _for_ the GIL; they just know from experience
that the people who successfully removed the GIL in the past always produced
unsatisfactory solutions.

Among the problems are slowing down single-threaded code, locking issues, etc.

These are not really arguments _for_ the GIL, but a challenge for the next
person who tries to remove it to think about before dedicating the rest of
their life to removing the GIL, only to find that nobody likes the
solution...
 
L

Luis Alberto Zarrabeitia Gomez

Quoting Carl Banks said:
I don't have any reply to this post except for the following excerpts:

2- in [almost] every other language, _you_ have to be aware of the critical
sections when multithreading. [snip]
That's not what I said. We are not talking about the _language_, but about one
very specific implementation detail. Not even that, I'm talking about one of the
reasons presented in favor of that specific implementation detail (while
agreeing with the others).
[...]

No other language has nesting by indentation (while still being
reasonably successful)....
etc

Comparisons to other languages are useless here. In many cases Python
does things differently from most other languages and usually it's
better off for it.

You seem to have missed that I'm not talking about the language but about a
specific implementation detail of CPython. I thought that my poor choice of
words in that sentence was completely clarified by the paragraphs that followed,
but apparently it wasn't. In my "2-" point, maybe I should've said instead: "in
[almost] every language, INCLUDING (proper) PYTHON, you have to be aware of
critical sections when multithreading".
The fact that other languages do something differently doesn't mean
that other way's better, in fact it really doesn't mean anything at
all.

No, it doesn't mean that it's better, and I didn't say it was. But it _does_
show that it is _possible_. For an argument about how hard it would be to
write native extensions if there were no GIL, the fact that there are many
other GIL-less platforms[1] where it is not painful to write native extensions
is a valid counter-example. And that, of all those languages and platforms
(including python), the only one where I hear that explicit, granular locking
is too hard or whatever[2] is CPython's /native/ extensions, is what I found
weird.

Regards,

Luis

[1] Including the python implementations for .net and java.
[2] Really, this was my question and nothing more. I was not comparing, I was
trying to understand what was that "whatever" that made it so hard for CPython.
And your footnote in the previous message gave me a reasonable explanation.
 
R

Rhodri James

Here, I think, is the fatal flaw in your plan. As Steven pointed out,
concurrency isn't simple. All you are actually doing is making it
easier for 'amateur' programmers to write hard-to-debug buggy code,
since you seem to be trying to avoid making them think about how to
write parallelisable code at all.
I *do* actually know a bit about concurrency and would never imply
that *any* for loop could be converted to a parallel one. The
intention of my remark "with my suggestion they could potentially get
a massive speed up just by changing 'for' to 'par' or 'map' to
'pmap'." is that it could be applied in the particular circumstances
where there are no dependencies between different iterations of the
loop.

If you can read this newsgroup for a week and still put your hand on
your heart and say that programmers will check that there are no
dependencies before swapping 'par' for 'for', I want to borrow your
rose-coloured glasses. That's not to say this isn't the right solution,
but you must be aware that people will screw this up very, very
regularly, and making the syntax easy will only up the frequency of
screw-ups.
This shows why the sync event is needed - to avoid race conditions on
shared variables. It is borrowed from the BSP paradigm - although that
is a distributed memory approach. Without the sync clause, rule 5 would
just be the standard way of defining a parallelisable loop.

Pardon my cynicism but sync would appear to have all the disadvantages
of message passing (in terms of deadlock opportunities) with none of the
advantages (like, say, actual messages). The basic single sync you put
forward may be coarse-grained enough to be deadlock-proof, but I would
need to be more convinced of that than I am at the moment before I was
happy.
P.S. I have a couple of additional embellishments to share at this
stage: [snip]
2. Scope of the 'sync' command. It was pointed out to me by a
colleague that I need to define what happens with sync when there are
nested par loops. I think that it should be defined to apply to the
innermost par construct which encloses the statement.

What I said before about deadlock-proofing? Forget it. There's hours
of fun to be had once you introduce scoping, not to mention the fact
that your inner loops can't now be protected against common code in the
outer loop accessing the shared variables.
 
J

jeremy


Hi Rhodri,
If you can read this newsgroup for a week and still put your hand on
your heart and say that programmers will check that there are no
dependencies before swapping 'par' for 'for', I want to borrow your
rose-coloured glasses.

I think this depends on whether we think that Python is a language for
people who we trust to know what they are doing (like Perl) or whether
it is a language for people we don't trust to get things right (like
Java). I suspect it probably lies somewhere in the middle.

Actually the 'sync' command could lead to deadlock potentially:

par i in range(2):
    if i == 1:
        sync

In this case there are two threads (or virtual threads): one thread
waits for a sync, the other does not, hence deadlock.

My view about deadlock avoidance is that it should not be built into
the language - that would make everything too restrictive - instead
people should use design patterns which guarantee freedom from
deadlock.
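Jeremy's two-iteration deadlock can be reproduced today with a barrier standing in for the hypothetical 'sync'; a timeout is used here so the sketch terminates instead of hanging forever:

```python
import threading

sync = threading.Barrier(2)   # both iterations are expected to reach 'sync'
outcome = {}

def iteration(i):
    if i == 1:
        try:
            sync.wait(timeout=0.5)   # only i == 1 syncs: nobody else arrives
            outcome[i] = "synced"
        except threading.BrokenBarrierError:
            outcome[i] = "deadlock"  # would block forever without the timeout
    else:
        outcome[i] = "done"          # i == 0 never calls sync

threads = [threading.Thread(target=iteration, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# outcome == {0: "done", 1: "deadlock"}
```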

See http://www.wotug.org/docs/jeremy-martin/index.shtml

Jeremy
 
R

Rhodri James

I think this depends on whether we think that Python is a language for
people who we trust to know what they are doing (like Perl) or whether
it is a language for people we don't trust to get things right (like
Java). I suspect it probably lies somewhere in the middle.

So do I *in general*, but your design principle -- make it easy -- came
down firmly in the first camp and, as I said, I come down in the second
where parallel processing is concerned. I've spent enough years weeding
bugs out of my insufficiently carefully designed Occam programs to have
the opinion that "easy" is the very last thing you want to make it.
Actually the 'sync' command could lead to deadlock potentially:

par i in range(2):
    if i == 1:
        sync

Hmm. I was assuming you had some sort of implicit rendez-vous sync
at the end of the PAR. Yes, this does make it very easy for the
freshly-enabled careless programmer to introduce deadlocks and
never realise it.
 
A

Albert van der Horst

You'd think so, but you'd be wrong. You can't assume addition is always
commutative.


And how is reduce() supposed to know whether or not some arbitrary
function is commutative?

Why would it need to? A Python that understands the ``par''
keyword is supposed to know it can play some tricks with
optimizing reduce() if the specific function is commutative.
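Albert's point can be made concrete: a parallel reduce combines partial results from separate chunks, which silently regroups the operations, and that is only safe when the function tolerates regrouping. Here `chunked_reduce` is a hypothetical stand-in for what a 'par'-aware reduce might do:

```python
from functools import reduce

def chunked_reduce(fn, seq, chunks=2):
    # Reduce each chunk separately (as parallel workers would),
    # then combine the partial results in order.
    size = (len(seq) + chunks - 1) // chunks
    parts = [seq[i:i + size] for i in range(0, len(seq), size)]
    partials = [reduce(fn, p) for p in parts]
    return reduce(fn, partials)

data = [1, 2, 3, 4]
add = lambda a, b: a + b   # regrouping is safe
sub = lambda a, b: a - b   # regrouping changes the answer

chunked_reduce(add, data) == reduce(add, data)   # True  (both are 10)
chunked_reduce(sub, data) == reduce(sub, data)   # False (0 vs -8)
```

So a 'par'-aware reduce() could only apply such tricks when told, somehow, that the function permits it; it cannot detect that property for an arbitrary function.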

Groetjes Albert
 
S

sunnia

Ah, allright then. Thanks for the clarification.

 
