Number of digits in N!

Pascal Bourguignon

Willem said:
(e-mail address removed) wrote:
) Proginoskes wrote:
)> Oh, really?
)>
)> log 1 = 0
)> log 2 = 0.3010299957
)> log 3 = 0.4771212547
)> log 4 = 0.6020599914
)> log 5 = 0.6989700043
)> log 6 = 0.7781512504
)> log 7 = 0.8450980400
)> log 8 = 0.9030899871
)> log 9 = 0.9542425094
)> log 10 = 1.0
)>
)> (All are accurate except possibly for the last digit.)
)>
)> The sum is 6.559763033 (last 2 digits possibly wrong, but the rest are
)> correct).
)>
)> The answer is 7.
)>
)> Where did I use the fact that 10! = 3,628,800?
)
) this post is déjà vu.
) what's the difference? one still has to compute log(x) somehow. it is
) even easier to compute the factorial itself.

> Let's change that calculation slightly then.
>
> log 1 ~= 0.0
> log 2 ~= 0.3
> log 3 ~= 0.5
> log 4 ~= 0.6
> log 5 ~= 0.7
> log 6 ~= 0.8
> log 7 ~= 0.8
> log 8 ~= 0.9
> log 9 ~= 1.0
> log 10 ~= 1.0
>
> The sum is 6.6, so the answer is 7.

That sounds more convincing.

The more so if the way you compute these approximations of log doesn't
involve all the algorithmic complexity of computing the log function...



After all, computing factorial n involves only n multiplications,
while computing even a single log(i) involves several additions,
multiplications and divisions.
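
For concreteness, here is a minimal sketch of the two approaches in
Common Lisp (to match the CLISP transcript quoted later in the thread;
the function names are mine, and the log-sum version trusts
double-float rounding):

  ;; Digits of n! via the log sum: floor(sum of log10(k), k=2..n) + 1.
  (defun digits-by-log-sum (n)
    (1+ (floor (loop for k from 2 to n
                     sum (log (coerce k 'double-float) 10)))))

  ;; Digits of n! by computing the factorial outright.
  (defun digits-by-factorial (n)
    (let ((f 1))
      (loop for k from 2 to n do (setf f (* f k)))
      (length (prin1-to-string f))))

  ;; (digits-by-log-sum 10)   => 7
  ;; (digits-by-factorial 10) => 7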
 
Proginoskes

Proginoskes said:
> Yes, Pascal Bourguignon is only making himself look sillier and
> sillier.

If anyone cares, it can be shown that, if Reverse Polish Notation is
used for the method of calculation, 10! never shows up in the
stack of numbers.

RPN means you have a stack of numbers; you can push a number on top, or
pull the top one off. You are also allowed some functions f, which pull
off the top number x and push f(x) back on top of the stack. You can
do the same for two-argument operations.

For instance, 2*(3+6) would be coded like: 3 6 + 2 *

Step by step, this would give the following sequence of stacks (top =
left):

3
6 3
9 [here the top 2 numbers are added]
2 9
18 [here the top 2 numbers were multiplied]
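
For anyone who wants to play with it, a throwaway sketch of such an
evaluator in Common Lisp (the token representation, a list of numbers
and function objects, is my own choice):

  ;; Minimal RPN evaluator: numbers are pushed; an operator pops its
  ;; two arguments and pushes the result.
  (defun eval-rpn (tokens)
    (let ((stack '()))
      (dolist (tok tokens (first stack))
        (if (numberp tok)
            (push tok stack)
            (let* ((y (pop stack))
                   (x (pop stack)))
              (push (funcall tok x y) stack))))))

  ;; (eval-rpn (list 3 6 #'+ 2 #'*)) => 18, matching the trace above.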

Pascal seems to have given up this debate, and has moved on to
computational complexity.

--- Christopher Heckman
 
Proginoskes

Pascal said:
> That sounds more convincing.
>
> The more so if the way you compute these approximations of log doesn't
> involve all the algorithmic complexity of computing the log function...
>
> After all, computing factorial n involves only n multiplications,

True, but the stored numbers grow in size while you're doing it.
Multiplication of two integers cannot be done in constant time;
however, it can be done in quadratic time (in terms of the sum of the
sizes of the integers).
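
To see that growth concretely, a quick sketch (Common Lisp;
integer-length reports the bit size of each partial product):

  ;; Watch the partial products grow while computing 10000!.
  (let ((f 1))
    (loop for k from 2 to 10000
          do (setf f (* f k))
          when (zerop (mod k 2000))
            do (format t "~D! needs ~D bits~%" k (integer-length f))))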
> while computing even a single log(i) involves several additions,
> multiplications and divisions.

Of smaller numbers.

Pascal is comparing apples and oranges again.

--- Christopher Heckman
 
Jon Slaughter

Pascal Bourguignon said:
> Agreed. My point is that, the way the problem was stated, I don't
> consider a solution summing the logarithms to be valid, while a
> solution using the Stirling approximation is.

Why not? This is what I don't understand. Is it not a solution to the
problem in the sense that the results are the same? Is it not also a
faster and more robust algorithm? I agree that the Stirling solution
might be better (though it really is an approximation, so it's not
perfect). It all depends on the criteria. The direct method of first
computing n! is not a good one, IMO, except in rare cases.

> I'll spare you the 35 kbyte dump, but my computer has no problem in
> computing 10000! and counting the number of digits needed to
> represent it in base 10:
>
> LISP> (time (length (prin1-to-string (ext:! 10000))))
> Real time: 0.304857 sec.
> Run time: 0.296018 sec.
> Space: 617096 Bytes
> 35660

OK, now do the same with the sum method... now use n = 100000000; if
this is possible, then I think your algorithm is mistaken.

Why? Because the number of digits is ceil(sum(log(k), k = 2..100000000)).

It takes a few seconds to compute the sum, and the result is
756,570,557. If you represent a digit as a byte, then that's 756 MB of
memory. That's a lot of memory to spend on your method... and that
doesn't include the time it takes to multiply it all out. Even though
memory is cheap, all you have to do is add another factor of 10 to n to
see that your method is not scalable. Of course, maybe you plan on
using it only for small n. Even then I wouldn't use that method, since
it's slow, unless I also needed to compute n! for some other reason.
(With another factor of 10, it comes to about 8 GB.)

As you can see, for n ~= 10^m there are on the order of 10^m digits, so
adding another factor of 10 to n adds more than another factor of 10 to
the number of digits. The storage, and the time needed to fill it,
quickly outgrows any real machine; the direct method does not scale.
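
For what it's worth, the Stirling route needs neither the factorial nor
the per-term log sum. A minimal sketch, assuming only the leading terms
of Stirling's series (Common Lisp again; the function name is mine):

  ;; Digits of n! from Stirling's approximation:
  ;;   ln n! ~ n ln n - n + (1/2) ln(2 pi n)
  (defun digits-by-stirling (n)
    (let* ((x (coerce n 'double-float))
           (ln-fact (+ (- (* x (log x)) x)
                       (* 0.5d0 (log (* 2 pi x))))))
      (1+ (floor ln-fact (log 10d0)))))

  ;; (digits-by-stirling 10000)     => 35660, matching the dump above.
  ;; (digits-by-stirling 100000000) => 756570557, matching the sum.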
 
Logan Shaw

> Yes, they are.

Mathematically, they can be.

Computationally, they are not.

> Logarithms map certain rational numbers onto irrational numbers (and
> vice versa), right?

Most useful models of computation assume you are making transformations
from finite state to finite state[1]. If there are two different
representations for one number and one requires an irrational number
(infinite state) and another requires a rational number (finite state),
they are not equivalent representations. Mainly because one of them
cannot exist and the other can. :)

- Logan

[1] It has been a while since I took a class on the theory of
computation, but unless I'm mistaken, it would be correct to
say that even though the size of the state might be unbounded,
it is, at any given moment, finite. Any given computation
(of finite length, I guess[2]) maps finite state onto finite
state.

[2] Which is probably the only useful form of computation, making
the usual assumption that the goal of computation is to get to
the end state.
 
Logan Shaw

> Just like any algorithm will lead to an error sooner or later.

No. Some algorithms lose information as they perform transformations
on the state of the (hypothetical) machine. Others do not.

> The only reason it is an "approximation" is that the log-based
> algorithm of calculating factorial starts to suffer from rounding
> errors sooner than the straightforward algorithm.

No, the straightforward algorithm involves only rational numbers (which
can be represented with a finite amount of information) and thus never
needs to suffer from rounding errors. The log-based one must suffer from
rounding errors, since it involves irrational numbers (which require an
infinite amount of information to be represented directly).

- Logan
 
Phil Carmody

Andrey Tarasevich said:

Show the world either:
1) Where Christopher makes any reference to decimal representations of 10!
2) Your complete ignorance, and disregard for correctness.

Clue - (2) will be much easier for you, you're 99% of the way down
the path already.

Phil
 
Proginoskes

Logan said:
> No. Some algorithms lose information as they perform transformations
> on the state of the (hypothetical) machine. Others do not.

> No, the straightforward algorithm involves only rational numbers (which
> can be represented with a finite amount of information) and thus never
> needs to suffer from rounding errors. The log-based one must suffer from
> rounding errors, since it involves irrational numbers (which require an
> infinite amount of information to be represented directly).

I was going to say: "But a certain amount of error is okay; if each log
is correct to within 1/(20N), then the original problem [find the
number of digits in N!] can be solved exactly. (The sum is then within
1/20 of the actual value.)" However, if the sum of the logs turns out
to be very close to an integer, more precision may be needed.

This raises an interesting question (to which I do not know the
answer): Given any epsilon > 0, are there infinitely many N such that

{log(N!)} > 1 - epsilon ?

(Here, {x} denotes the fractional part of x, so {1.98} = 0.98.) If not,
then there is a fixed precision ((1-M)/(2*N), where M is the supremum
of {log(N!)}) which can be used for solving the "digits in N! problem"
for any N. (Basically, you calculate the value of log(i) to within
(1-M)/(2*N).)

And logs are base 10, of course.

--- Christopher Heckman
 
Phil Carmody

Pascal Bourguignon said:
> I've spoken of an isomorphism between fields, rings or groups, not
> just a bijection.

Are you so stupid that you don't realise that *every* bijection
induces an isomorphism? Therefore your isomorphism really is
dime-an-aleph-null.

Phil
 
makc.the.great

Willem said:
> [snip: the quoted table of logs, as above]

> Let's change that calculation slightly then.
>
> log 1 ~= 0.0
> log 2 ~= 0.3
> log 3 ~= 0.5
> log 4 ~= 0.6
> log 5 ~= 0.7
> log 6 ~= 0.8
> log 7 ~= 0.8
> log 8 ~= 0.9
> log 9 ~= 1.0
> log 10 ~= 1.0
>
> The sum is 6.6, so the answer is 7.
>
>
> SaSW, Willem

that looks fine, but think about how many multiplications it requires
per log(X) for the sum of logs to have an absolute error of no more
than 1. Obviously you will need an increasing number of multiplications
per log(X), and thus it is worse than doing a single multiplication per
X in the explicit factorial computation.
 
Stuart

jmcgill said:
> Of course. It was just an experiment, as I found the exercise of
> "creating the triangle" far too easy, and then found that of
> "formatting in a single pass" to be beyond my understanding of
> numbers. I had no intention of actually formatting a 6144-row Pascal's
> Triangle for printing on 100-meter-wide paper :)

And look at the trouble such a simple enquiring attitude can cause - I hope
you have learned your lesson ;-)

[Just in case you are relatively new to newsgroups - don't be put off by the
flamewars and try not to get sucked in!
Enjoy your programming!]

Regards
 
Dik T. Winter

> This raises an interesting question (to which I do not know the
> answer): Given any epsilon > 0, are there infinitely many N such that
>
> {log(N!)} > 1 - epsilon ?
>
> (Here, {x} denotes the fractional part of x, so {1.98} = 0.98.) If not,
> then there is a fixed precision ((1-M)/(2*N), where M is the supremum
> of {log(N!)}) which can be used for solving the "digits in N! problem"
> for any N. (Basically, you calculate the value of log(i) to within
> (1-M)/(2*N).)
>
> And logs are base 10, of course.

I think that cannot happen very often. It can only be true when N! is
close to a power of the base of the logarithm.
 
Chris Uppal

Proginoskes said:
> This raises an interesting question (to which I do not know the
> answer): Given any epsilon > 0, are there infinitely many N such that
>
> {log(N!)} > 1 - epsilon ?
>
> (Here, {x} denotes the fractional part of x, so {1.98} = 0.98.) If
> not, then there is a fixed precision ((1-M)/(2*N), where M is the
> supremum of {log(N!)}) which can be used for solving the "digits in
> N! problem" for any N. (Basically, you calculate the value of log(i)
> to within (1-M)/(2*N).)

I can't prove it either. But there are infinitely many distinct values
of {log(N!)} in (0, 1) so they must be dense around at least one point
in that range -- why not around 1.0 ?

(For what little it's worth, I plotted the first few million values,
and they appear to be scattered "randomly" over that interval, so
there's at least a hint that they /might/ be dense over all of (0, 1)
-- in which case you would have your result.)
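
A small piece of that experiment is easy to reproduce; a quick sketch
(Common Lisp, printing the first few fractional parts rather than
plotting them):

  ;; Print {log10(N!)} for small N; the values already look scattered.
  (let ((s 0d0))
    (loop for n from 2 to 25
          do (incf s (log (coerce n 'double-float) 10))
             (format t "N = ~2D   {log10(N!)} = ~,6F~%" n (- s (floor s)))))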

-- chris
 
Willem

(e-mail address removed) wrote:
) that looks fine, but think about how many multiplications it requires
) per log(X) for the sum of logs to have an absolute error of no more
) than 1. Obviously you will need an increasing number of
) multiplications per log(X), and thus it is worse than doing a single
) multiplication per X in the explicit factorial computation.

You are assuming that multiplication has a fixed cost, which it hasn't.


SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
 
Phil Carmody

Chris Uppal said:
> I can't prove it either. But there are infinitely many distinct values
> of {log(N!)} in (0, 1), so they must be dense around at least one point
> in that range -- why not around 1.0?

Repeat that argument with the Cantor set.

Phil
 
Proginoskes

Phil said:
> Repeat that argument with the Cantor set.

The Cantor set is dense at 1, isn't it?

The limit of 2/3, 2/3 + 2/3^2, 2/3 + 2/3^2 + 2/3^3, ... is 1, and all
of these fractions are in the Cantor set.
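
The partial sums are easy to check; a quick sketch (Common Lisp, exact
rational arithmetic; each partial sum has only 0s and 2s in its ternary
expansion, hence lies in the Cantor set):

  ;; Partial sums 0.2, 0.22, 0.222, ... (base 3) approach 1.
  (loop for m from 1 to 6
        for s = (loop for k from 1 to m sum (/ 2 (expt 3 k)))
        do (format t "~D term~:P: ~A ~~ ~,6F~%" m s (float s 1d0)))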

--- Christopher Heckman
 
Proginoskes

> that looks fine, but think about how many multiplications it requires
> per log(X) for the sum of logs to have an absolute error of no more
> than 1.

Actually, it's more complicated than that. If the absolute error is
< 1 and the sum turns out to be 0.8, the answer to the OP's question is
still unknown; it could just as easily be 2 digits as 1.

No fixed absolute error can guarantee the correct answer, because you
have to round up to the nearest integer, and the ceiling function is
discontinuous at all integers.

That's why I wanted to know whether the fractional part of log(N!) can
be bounded away from 0 and 1.
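
In the absence of such a bound, one can at least detect the dangerous
cases at run time and fall back to exact arithmetic. A hedged sketch
(Common Lisp; the function name and slack value are mine, and the slack
is assumed to dominate the accumulated double-float error):

  ;; If the log sum lands within SLACK of an integer, the rounding is
  ;; untrustworthy; recompute n! exactly and count its digits.
  (defun digits-of-factorial (n &key (slack 1d-6))
    (let ((s (loop for k from 2 to n
                   sum (log (coerce k 'double-float) 10))))
      (if (< (min (- s (floor s)) (- (ceiling s) s)) slack)
          (let ((f 1))
            (loop for k from 2 to n do (setf f (* f k)))
            (length (prin1-to-string f)))
          (1+ (floor s)))))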

--- Christopher Heckman
 
Phil Carmody

Proginoskes said:
> The Cantor set is dense at 1, isn't it?

I'd need to review 20-year-old lecture notes to find the definition of
"dense" to answer that.

> The limit of 2/3, 2/3 + 2/3^2, 2/3 + 2/3^2 + 2/3^3, ... is 1, and all
> of these fractions are in the Cantor set.

I accept that in the Cantor set 1, and 0, and everything else, is an
accumulation point. I do not remember how that relates to being dense
or not, though. If density requires simply one other point in every
open region about a point, then yes, it's dense.

I seem to remember reading that a positive-measure fat Cantor set
(remove 1*1/4, 2*1/16, 4*1/64, etc...) is nowhere dense, and would have
expected it to be more dense than the Cantor set.

I await enlightenment...

Phil
 
Proginoskes

Phil said:
> I'd need to review 20-year-old lecture notes to find the definition of
> "dense" to answer that.

> I accept that in the Cantor set 1, and 0, and everything else, is an
> accumulation point. I do not remember how that relates to being dense
> or not, though. If density requires simply one other point in every
> open region about a point, then yes, it's dense.

That's the definition I was going by.

--- Christopher Heckman
 
