fgets - design deficiency: no efficient way of finding last character read


John Reye

Hello,

The last character read from fgets(buf, sizeof(buf), inputstream) is:
'\n'
OR
any character x, when no '\n' was encountered in sizeof(buf)-1
consecutive chars, or when x is the last char of the inputstream

***How can one EFFICIENTLY determine if the last character is '\n'??
"Efficiently" means: don't use strlen!!!

I can only come up with the strlen method, which - to me - says that fgets
has a bad design.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char buf[6];
    FILE *fp = stdin;

    while (fgets(buf, sizeof(buf), fp)) {
        printf((buf[strlen(buf)-1] == '\n')
                   ? "Got a line which ends with newline: %s"
                   : "no newline: %s", buf);
    }
    return EXIT_SUCCESS;
}



A well-designed fgets function should return the number of characters
read, should it not??

Please surprise me by showing that there is a way of efficiently
determining the number of characters read. ;)
I've thought of ftell, but I think that does not work with stdin.

Because right now, I think that fgets really seems useless.
Why is the standard C library so inefficient?
Do I really have to go about designing my own library? ;)

Thanks for tips and pointers

Regards,
J.
 

Rupert Swarbrick

John Reye said:
A well-designed fgets function should return the number of characters
read, should it not??

Please surprise me by showing that there is a way of efficiently
determining the number of characters read. ;)
I've thought of ftell, but I think that does not work with stdin.

I'm intrigued. What application do you have where you read extremely
long lines from stdin using fgets? This seems an odd thing to do: I
can't think of any text-based formats where lines are extremely
long. For binary formats, use fread and (oh, look!):

FREAD(3)

....

RETURN VALUE
fread() and fwrite() return the number of items successfully read
or written (i.e., not the number of characters). If an error
occurs, or the end-of-file is reached, the return value is a
short item count (or zero).


It seems that the standard library isn't so badly designed after all...
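For instance (an illustrative sketch of mine, not the man page's), the
return value hands you the length directly:

#include <stdio.h>

int main(void)
{
    char buf[4096];
    size_t n;

    /* Sketch: with an item size of 1, fread's return value is the
       number of bytes actually read -- no separate length scan needed. */
    while ((n = fread(buf, 1, sizeof buf, stdin)) > 0)
        fwrite(buf, 1, n, stdout);
    return 0;
}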

Do I really have to go about designing my own library? ;)

No.


Rupert

 

Ben Pfaff

Rupert Swarbrick said:
I'm intrigued. What application do you have where you read extremely
long lines from stdin using fgets? This seems an odd thing to do: I
can't think of any text-based formats where lines are extremely
long.

It's fairly common for machine-generated HTML and XML (which are
text-based formats) to be single, very-long lines.
 

John Reye

Rupert Swarbrick said:
I'm intrigued. What application do you have where you read extremely
long lines from stdin using fgets?
Actually I was using fgets to read into a buffer. If the buffer is
not large enough to fit an entire line (i.e. one including '\n'), I
doubled the buffer and read the remaining chars. (stdin is just an
example that shows that I cannot abuse ftell to determine the length
read... you know: ftell-after-fgets minus ftell-before-fgets.)

I thought fgets would be a good function to use, since it
automatically stops when it encounters '\n'.
       fread() and fwrite() return the number of items successfully read
       or written (i.e., not the number of characters).  If an error
       occurs, or the end-of-file is reached, the return value is a
       short item count (or zero).

Yes... probably fread is a better way of handling it!
I want a buffer to hold the complete line, and then continue reading
lines.

***What is more efficient?
If I use fread, I'll probably overshoot beyond the '\n'.
Is it more efficient to rewind via fseek and fread the overshoot into
the beginning of the buffer, OR to copy the overshoot to the beginning
of the buffer and then fread the remainder?
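For concreteness, the copy variant I mean would be something like this
untested sketch (buf, bufsize, over and tail_len are just this
example's bookkeeping for the overshoot past the '\n'):

#include <stdio.h>
#include <string.h>

/* Sketch: keep the tail_len overshoot bytes that start at buf + over,
   move them to the front, then refill the rest of the buffer with
   fread. memmove is used because the two regions may overlap. */
size_t keep_tail_and_refill(char *buf, size_t bufsize,
                            size_t over, size_t tail_len, FILE *fp)
{
    memmove(buf, buf + over, tail_len);
    return tail_len + fread(buf + tail_len, 1, bufsize - tail_len, fp);
}

(The fseek variant isn't even an option when reading from a pipe or
terminal, since those streams are not seekable.)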

Thanks.
J.
 

John Reye

Ben Pfaff said:
It's fairly common for machine-generated HTML and XML (which are
text-based formats) to be single, very-long lines.

Correct, but I would not read those huge lines, because the '\n' is
not the logical divider.

I however want a nice routine that uses realloc to grow a buffer until
everything up to the '\n' fits. The C standard library does not have
anything like this, so I have to code it myself.
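Something like this untested sketch of what I have in mind (error
handling kept minimal; the caller frees *out):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: read one '\n'-terminated line into a growing buffer.
   Returns the line length, or 0 at EOF/error with *out set to NULL. */
size_t read_line(char **out, FILE *fp)
{
    size_t cap = 16, len = 0;
    char *buf = malloc(cap);

    *out = NULL;
    if (buf == NULL)
        return 0;
    while (fgets(buf + len, (int)(cap - len), fp)) {
        len += strlen(buf + len);       /* scans only the new chunk */
        if (len > 0 && buf[len - 1] == '\n')
            break;                      /* complete line */
        if (len == cap - 1) {           /* buffer full: double it */
            char *tmp = realloc(buf, cap *= 2);
            if (tmp == NULL)
                break;
            buf = tmp;
        }
    }
    if (len == 0) {
        free(buf);                      /* nothing read: EOF or error */
        return 0;
    }
    *out = buf;
    return len;
}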

I bet C++ has something useful that one could use. It seems that a lot
went into C++, making it the huge bloated monster that it is! But it
still seems worth a look, to relieve me from having to handle this
stuff at the basic level. Alternative: I need to develop my own
library of useful C routines.
 

Keith Thompson

John Reye said:
The last character read from fgets(buf, sizeof(buf), inputstream) is:
'\n'
OR
any character x, when no '\n' was encountered in sizeof(buf)-1
consecutive chars, or when x is the last char of the inputstream

***How can one EFFICIENTLY determine if the last character is '\n'??
"Efficiently" means: don't use strlen!!!

I can only come up with the strlen method, which - to me - says that fgets
has a bad design.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char buf[6];
    FILE *fp = stdin;

    while (fgets(buf, sizeof(buf), fp)) {
        printf((buf[strlen(buf)-1] == '\n')
                   ? "Got a line which ends with newline: %s"
                   : "no newline: %s", buf);
    }
    return EXIT_SUCCESS;
}



A well-designed fgets function should return the number of characters
read, should it not??

Please surprise me by showing that there is a way of efficiently
determining the number of characters read. ;)
I've thought of ftell, but I think that does not work with stdin.

Because right now, I think that fgets really seems useless.
Why is the standard C library so inefficient?
Do I really have to go about designing my own library? ;)

Have you measured the performance cost of calling strlen()?

I haven't done so myself, so the following is largely speculation,
but I strongly suspect that the time to call strlen() is going to
be *much* less than the time to read the data. Yes, an fgets-like
function could return additional information, either the length
of the string or a pointer to the end of it, and that would save a
little time, but I'm not convinced it would be a significant benefit.

And there would be some small but non-zero overhead in returning the
extra information. In a lot of cases, the caller isn't going to use
that information (perhaps it's going to traverse the string anyway).

*Measure* before you decide that fgets is "useless".
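For example, a crude harness along these lines (my sketch, CPU time
only) would give you a number:

#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    char buf[256];
    size_t total = 0;
    clock_t t0 = clock();

    /* Crude sketch: the loop is identical with and without the
       strlen() call, so the timing difference isolates its cost. */
    while (fgets(buf, sizeof buf, stdin)) {
#ifdef USE_STRLEN
        total += strlen(buf);           /* the cost under test */
#endif
    }
    fprintf(stderr, "bytes: %zu  cpu: %.3fs\n",
            total, (double)(clock() - t0) / CLOCKS_PER_SEC);
    return 0;
}

Build it once with -DUSE_STRLEN and once without, feed both the same
large file, and the difference approximates the strlen() overhead.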
 

Kaz Kylheku

John Reye said:
***How can one EFFICIENTLY determine if the last character is '\n'??
"Efficiently" means: don't use strlen!!!

There is no way to know where the last character of a string is unless
you know the length explicitly, or determine it implicitly (by scanning
the string for the null terminator).
I can only come up with the strlen method, which - to me - says that fgets
has a bad design.

The newline can be missing only in two situations. One is that the buffer isn't
large enough to hold the line. In that case, some non-newline character is
written into the next-to-last element of the buffer and a null terminator
into the last element. If you set the next-to-last byte to zero before
calling fgets, you can detect that this situation has happened by finding
a non-zero byte there.

The second situation is that the last line of the stream has been read,
but fails to be newline terminated.

If you want to detect this situation, you only need to check whether
end-of-file has been reached. That is to say, keep calling fgets until
it returns NULL. Then go back to the most recently retrieved line and
check whether the newline is there or not, with the help of strlen,
strchr(line, '\n'), etc.

So as you can see, you don't have to scan every single line.
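In code, the idea might look something like this (an untested sketch;
the empty branches are where a real program would act):

#include <stdio.h>
#include <string.h>

/* Sketch of the sentinel scheme: prime the next-to-last byte, then a
   full buffer shows up as that byte having been overwritten. */
void scan_lines(FILE *fp)
{
    char buf[128];
    enum { EMPTY, SHORT_READ, FULL_NL, FULL_NO_NL } last = EMPTY;

    buf[sizeof buf - 2] = '\0';              /* prime the sentinel */
    while (fgets(buf, sizeof buf, fp)) {
        if (buf[sizeof buf - 2] == '\0')
            last = SHORT_READ;               /* stopped early */
        else if (buf[sizeof buf - 2] == '\n')
            last = FULL_NL;                  /* exactly filled, with '\n' */
        else
            last = FULL_NO_NL;               /* over-long line chunk */
        buf[sizeof buf - 2] = '\0';          /* re-prime for next call */
    }
    /* Only now, at EOF, is a scan ever needed -- at most once per file: */
    if (last == FULL_NO_NL
        || (last == SHORT_READ && !strchr(buf, '\n'))) {
        /* the final line was not newline-terminated */
    }
}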
 

James Kuyper

John Reye said:
Hello,

The last character read from fgets(buf, sizeof(buf), inputstream) is:
'\n'
OR
any character x, when no '\n' was encountered in sizeof(buf)-1
consecutive chars, or when x is the last char of the inputstream

***How can one EFFICIENTLY determine if the last character is '\n'??

That's relatively easy - so long as you don't need to know where the
'\n' is.
"Efficiently" means: don't use strlen!!!

I can only come up with the strlen method, which - to me - says that fgets
has a bad design.

The following approach uses strchr() rather than strlen(), so it
technically meets your specification. However, I presume you would have
the same objections to strchr() as you do to strlen(). I'd like to point
out, however, that it uses strchr() only once per file, which seems
efficient enough for me. If you're doing so little processing per file
that a single call to strchr() per file adds significantly to the total
processing load, I'd be more worried about the costs associated with
fopen() and fclose() than those associated with strchr().

The key point is that a successful call to fgets() can fail to read in
an '\n' character only if fgets() meets the end of the input file, or
the end of your buffer, both of which can be checked for quite
efficiently. If it reaches the end of your buffer, there's one and only
one place where the '\n' character can be, if one was read in.
Therefore, it's only at the end of the file that a search is required.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char buf[6];
    FILE *fp = stdin;

    buf[(sizeof buf)-1] = 1;    // any non-zero value will do.
    while (fgets(buf, sizeof(buf), fp)) {
        const char *prefix =
            ((buf[(sizeof buf)-1] == '\0' && buf[(sizeof buf)-2] != '\n')
             || (feof(fp) && !strchr(buf, '\n'))) ? "no " : "";

        printf("Got a line which ends with %snewline: %s\n",
               prefix, buf);

        buf[(sizeof buf)-1] = 1;
    }
    return EXIT_SUCCESS;
}



A well-designed fgets function should return the number of characters
read, should it not??

Please surprise me by showing that there is a way of efficiently
determining the number of characters read. ;)
I've thought of ftell, but I think that does not work with stdin.

Because right now, I think that fgets really seems useless.
Why is the standard C library so inefficient?

Measure the inefficiency before deciding whether or not it's useless.
You may be surprised.
Do I really have to go about designing my own library? ;)

You don't need an entire library; a function equivalent to fgets() that
calls getc() and provides the information you're looking for wouldn't be
too difficult to write, and should compile fairly efficiently.
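A rough, untested sketch of such a function (the name and exact
contract are invented here):

#include <stdio.h>

/* Sketch: like fgets, but returns the number of characters stored,
   0 at EOF/error before anything was read. */
size_t user_fgets(char *buf, size_t size, FILE *fp)
{
    size_t len = 0;
    int c;

    while (len + 1 < size && (c = getc(fp)) != EOF) {
        buf[len++] = (char)c;
        if (c == '\n')
            break;              /* stop after the newline, as fgets does */
    }
    if (size > 0)
        buf[len] = '\0';
    return len;
}

With the length in hand, the newline test is the constant-time check
len > 0 && buf[len - 1] == '\n'.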
 

Rupert Swarbrick

Kaz Kylheku said:
The newline can be missing only in two situations. One is that the buffer isn't
large enough to hold the line. In that case, some non-newline character is
written into the next-to-last element of the buffer and a null terminator
into the last element. If you set the next-to-last byte to zero before
calling fgets, you can detect that this situation has happened by finding
a non-zero byte there.

The second situation is that the last line of the stream has been read,
but fails to be newline terminated.

If you want to detect this situation, you only need to check whether
end-of-file has been reached. That is to say, keep calling fgets until
it returns NULL. Then go back to the most recently retrieved line and
check whether the newline is there or not, with the help of strlen,
strchr(line, '\n'), etc.

So as you can see, you don't have to scan every single line.

Thanks for this. It neatly uses the O(1) access at the end of the string
and gets around the OP's problem brilliantly. I like it!

Rupert

 

John Reye

Thanks for your comment.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char buf[6];
    FILE *fp = stdin;

    buf[(sizeof buf)-1] = 1;    // any non-zero value will do.
    while (fgets(buf, sizeof(buf), fp)) {
        const char *prefix =
            ((buf[(sizeof buf)-1] == '\0' && buf[(sizeof buf)-2] != '\n')
             || (feof(fp) && !strchr(buf, '\n'))) ? "no " : "";

        printf("Got a line which ends with %snewline: %s\n",
               prefix, buf);

        buf[(sizeof buf)-1] = 1;
    }
    return EXIT_SUCCESS;
}


Thanks for that! It's really good! :)

You don't need an entire library; a function equivalent to fgets() that
calls getc() and provides the information you're looking for wouldn't be
too difficult to write, and should compile fairly efficiently.

Hmmm... I think fread() is more efficient than continuous getc().

Does this make sense?

For some context:
I think that when writing a getline function (that uses realloc)...
i.e. size_t getline(char **ptr_to_inner_buf, FILE *fp) ... where
ptr_to_inner_buf is set to an internal buffer that holds bytes until
'\n', or any char x if EOF...

then implementing that getline function by repeatedly calling getc() is
less efficient THAN using fread to get a number of bytes, scanning for
'\n' and placing a '\0' in the following byte. Before the next call to
fread, I could scan any overshoot (beyond '\n'... putting back the
char overwritten by '\0' via a tmp) for '\n', and if I find it... again
set '\0' and adjust ptr_to_inner_buf (see function declaration).
Otherwise I copy the overshoot to the very beginning of the buffer,
and fread the delta needed to fill the entire buffer.
If no '\n' in the buffer, I realloc and fread the delta. Etc. etc.
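The core step of that scheme, as a rough sketch (buf, cap and filled
are my names for the bookkeeping the real function would keep):

#include <stdio.h>
#include <string.h>

/* Sketch of one refill step: fill the free part of the buffer with
   one fread, then look for a newline in the new bytes with a single
   memchr pass instead of per-character getc() calls. Returns a
   pointer to the '\n', or NULL if more data (or a realloc) is needed. */
char *refill_and_scan(char *buf, size_t cap, size_t *filled, FILE *fp)
{
    size_t got = fread(buf + *filled, 1, cap - *filled, fp);
    char *nl = memchr(buf + *filled, '\n', got);

    *filled += got;
    return nl;
}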

So fread() is more efficient than continuous getc(). Or am I wrong?

Thanks.
 

John Reye

Even though James Kuyper showed a nice way of determining if the
string contains '\n', I still feel that fgets has a RETURN VALUE that
simply shouts "deficiency!".

char * fgets ( char * str, int num, FILE * stream );
Return Value
On success, the function returns the same str parameter. etc.

Why on earth return an identical pointer most of the time???
Returning a count of the number of bytes read would have been a far
better choice for the return value, wouldn't it?
 

James Kuyper

John Reye said:
Thanks for your comment.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char buf[6];
    FILE *fp = stdin;

    buf[(sizeof buf)-1] = 1;    // any non-zero value will do.
    while (fgets(buf, sizeof(buf), fp)) {
        const char *prefix =
            ((buf[(sizeof buf)-1] == '\0' && buf[(sizeof buf)-2] != '\n')
             || (feof(fp) && !strchr(buf, '\n'))) ? "no " : "";

        printf("Got a line which ends with %snewline: %s\n",
               prefix, buf);

        buf[(sizeof buf)-1] = 1;
    }
    return EXIT_SUCCESS;
}


Thanks for that! It's really good! :)

You don't need an entire library; a function equivalent to fgets() that
calls getc() and provides the information you're looking for wouldn't be
too difficult to write, and should compile fairly efficiently.

Hmmm... I think fread() is more efficient than continuous getc().

Does this make sense?

For some context:
I think that when writing a getline function (that uses realloc)...
i.e. size_t getline(char **ptr_to_inner_buf, FILE *fp) ... where
ptr_to_inner_buf is set to an internal buffer that holds bytes until
'\n', or any char x if EOF...

then implementing that getline function by repeatedly calling getc() is
less efficient THAN using fread to get a number of bytes, scanning for
'\n' and placing a '\0' in the following byte. Before the next call to
fread, I could scan any overshoot (beyond '\n'... putting back the
char overwritten by '\0' via a tmp) for '\n', and if I find it... again
set '\0' and adjust ptr_to_inner_buf (see function declaration).
Otherwise I copy the overshoot to the very beginning of the buffer,
and fread the delta needed to fill the entire buffer.
If no '\n' in the buffer, I realloc and fread the delta. Etc. etc.

It's not clear to me how what you're saying differs between an
implementation-provided fgets() and a user-written user_fgets()
replacement function that makes repeated calls to getc().
They both have to do pretty much the same things you mentioned. It is
true that fgets() could take advantage of OS-specific features that a
portable user_fgets() could not; but I didn't recognize any suggestion
of that possibility in what you were saying.
So fread() is more efficient than continuous getc(). Or am I wrong?

"The byte input functions" ( fgets, fread, fscanf, getc, getchar, scanf,
vfscanf, and vscanf - 7.21.1p5) "read characters from the stream as if
by successive calls to the fgetc function." (7.21.3p11)

The reason why the fgetc() function and the getc() function-like macro
both exist is because getc() can eliminate the function call overhead
nominally associated with fgetc(). I say "nominally" because a
sufficiently aggressive optimizer that is closely integrated with the C
standard library could remove that overhead even when using fgetc().

Typical implementations of getc() basically just move a pointer through a
buffer, triggering buffer refills when needed. As long as the file is
buffered, all the complicated stuff happens only during the refills.
Off-hand, I'd expect user_fgets() to be able to achieve similar
performance to that of fgets(), at least when reading buffered streams.
The execution time should be dominated by the calls to the OS-specific
function which actually fills the buffer, and the total number of such
calls should be the same in either case.

If it matters, I suppose you could try testing it. user_fgets()
shouldn't be very difficult to write; I might try it myself in the
unlikely event that I get enough spare time anytime soon.
 

James Kuyper

John Reye said:
Even though James Kuyper showed a nice way of determining if the
string contains '\n', I still feel that fgets has a RETURN VALUE that
simply shouts "deficiency!".

char * fgets ( char * str, int num, FILE * stream );
Return Value
On success, the function returns the same str parameter. etc.

Why on earth return an identical pointer most of the time???
Returning a count of the number of bytes read would have been a far
better choice for the return value, wouldn't it?

Many of the C standard library functions would have been more useful if
they'd returned a pointer to the end of a string or buffer, rather than
to its beginning. I chalk it up to inexperience (with C, that is) by the
people who invented C. A decent respect for the need to retain backwards
compatibility means that we can't undo those bad design decisions - but
that doesn't prevent the creation of new functions with similar
functionality and a more useful return value.
 

BartC

John Reye said:
Even though James Kuyper showed a nice way of determining if the
string contains '\n', I still feel that fgets has a RETURN VALUE that
simply shouts "deficiency!".

char * fgets ( char * str, int num, FILE * stream );
Return Value
On success, the function returns the same str parameter. etc.

Why on earth return an identical pointer most of the time???
Returning a count of the number of bytes read would have been a far
better choice for the return value, wouldn't it?

You're right. A quick test reading files using fgets() showed that a
following strlen() was adding 10-15% to runtime.

This is for a program doing nothing else except reading all the lines,
and for files already cached in memory, so the overhead will be smaller
in real programs, especially for the mainly small files that
line-oriented data tends to come in. So it's not that big a deal. And
you can easily write your own version.
 

BartC

William Ahern said:
The designer(s) of fgets() may have been backward looking instead of
forward looking; not intent on making a composable routine--which works
well with ad hoc buffer parsing code--but rather one which works
conveniently with the pre-existing string routines--i.e. read a string,
then pass that string to some other string routine which will lazily
determine string length while processing it.

Except that fgets() can return NULL on error. That makes it harder to use
the return value unchecked.
 

Eric Sosman

John Reye said:
Hello,

The last character read from fgets(buf, sizeof(buf), inputstream) is:
'\n'
OR
any character x, when no '\n' was encountered in sizeof(buf)-1
consecutive chars, or when x is the last char of the inputstream

***How can one EFFICIENTLY determine if the last character is '\n'??
"Efficiently" means: don't use strlen!!!

Kaz' method is pretty slick. However, the time for strlen() is
likely to be insignificant compared to the time for the I/O itself.
A well-designed fgets function should return the number of characters
read, should it not??

IMHO that would be a more useful return value than the one fgets()
actually delivers, but this is scarcely the only unfortunate choice
to be found in the Standard library. For example, strcpy() and strcat()
"know" where their output strings end and could return that information
instead of echoing back a value the caller already has. In another
thread we've just rehashed the gotchas of <ctype.h> for the umpty-
skillionth time. No doubt other folks have their own pet peeves.
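For instance, a copy routine in that spirit returns where it stopped
(POSIX later standardized essentially this as stpcpy; sketched here
under an invented name):

/* Sketch: like strcpy, but returns a pointer to the terminating '\0'
   it wrote -- the information strcpy throws away. */
char *copy_end(char *dst, const char *src)
{
    while ((*dst = *src++) != '\0')
        dst++;
    return dst;
}

Chaining such calls concatenates strings without re-scanning each
partial result from the start, the way repeated strcat() must.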

Tell me, though: Are you using a QWERTY keyboard, despite all its
drawbacks? Legend[*] has it that QWERTY was chosen *on purpose* to
slow down typists in the days when too much speed led to mechanical
jams. On today's keyboards that's not a problem -- So, are you still
using a nineteenth-century keyboard layout? If so, ponder your reasons
for not changing to something more modern, and see if those reasons
shed any light on why people still put up with the Standard Warts And
All Library.

[*] Wikipedia disputes the legend, but a Wikipedia page is only
as good as its most recent editor.
 

lawrence.jones

Eric Sosman said:
Legend[*] has it that QWERTY was chosen *on purpose* to
slow down typists in the days when too much speed led to mechanical
jams. On today's keyboards that's not a problem -- So, are you still
using a nineteenth-century keyboard layout? If so, ponder your reasons
for not changing to something more modern, and see if those reasons
shed any light on why people still put up with the Standard Warts And
All Library.

[*] Wikipedia disputes the legend, but a Wikipedia page is only
as good as its most recent editor.

Perhaps the best way to describe it is that the layout was chosen to
maximize speed given the mechanical limitations of the device. Typing
faster doesn't help if you constantly have to stop to clear jams. Think
of it as managing response time to optimize throughput. :)
 

Nobody

John Reye said:
So fread() is more efficient than continuous getc(). Or am I wrong?

Maybe, maybe not. getc() is allowed to be implemented as a macro, so
a getc() loop could end up as little more than memcpy().

However: if the C library is thread-safe (which may be a compiler option),
it will end up locking the stream for each call, which will definitely be
worse than a single fread().

In GNU libc 1.x, getc was a light-weight macro. This changed in 2.x due to
thread safety, but it has _unlocked versions of many of the stdio
functions, e.g. fgetc_unlocked:

// libio.h:

#define _IO_getc_unlocked(_fp) \
(_IO_BE ((_fp)->_IO_read_ptr >= (_fp)->_IO_read_end, 0) \
? __uflow (_fp) : *(unsigned char *) (_fp)->_IO_read_ptr++)

// bits/stdio.h:

# ifdef __USE_MISC
/* Faster version when locking is not necessary. */
__STDIO_INLINE int
getc_unlocked (FILE *__fp)
{
  return _IO_getc_unlocked (__fp);
}
# endif /* misc */

With the right switches (e.g. disabling thread safety or
-Dgetc=getc_unlocked) and sufficient optimisation, a getc() loop could
realistically be limited by memory bandwidth.
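A usage sketch under POSIX (flockfile, funlockfile and getc_unlocked
are POSIX interfaces, not ISO C):

#include <stdio.h>

/* Sketch: take the stream lock once, then run the lock-free
   getc_unlocked in the inner loop. */
long count_newlines(FILE *fp)
{
    long n = 0;
    int c;

    flockfile(fp);
    while ((c = getc_unlocked(fp)) != EOF)
        if (c == '\n')
            n++;
    funlockfile(fp);
    return n;
}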
 

Nobody

Eric Sosman said:
Tell me, though: Are you using a QWERTY keyboard, despite all its
drawbacks? Legend[*] has it that QWERTY was chosen *on purpose* to
slow down typists in the days when too much speed led to mechanical
jams. On today's keyboards that's not a problem -- So, are you still
using a nineteenth-century keyboard layout?

A related issue (which clearly isn't legend) is that nearly all computer
keyboards still have the staggered layout of a mechanical typewriter.

And unlike a completely different layout, eliminating the stagger would be
a fairly minor incompatibility (you'd still be using the same finger for
each letter).
 

Ben Pfaff

Eric Sosman said:
Tell me, though: Are you using a QWERTY keyboard, despite all its
drawbacks? Legend[*] has it that QWERTY was chosen *on purpose* to
slow down typists in the days when too much speed led to mechanical
jams. On today's keyboards that's not a problem -- So, are you still
using a nineteenth-century keyboard layout? If so, ponder your reasons
for not changing to something more modern, and see if those reasons
shed any light on why people still put up with the Standard Warts And
All Library.

The same topic came up here back in 2002. Here's a new copy of
what I posted back then:

Have you used a mechanical typewriter? I have. These things have
an array of letterforms on spokes[1] arranged in a half-circular
pattern in the body of the typewriter. When you hit a key, one of
them lunges forward to the place where the letter should go (the
"cursor position") and strikes the paper through the ribbon.

Now, if there's only one of these spokes in motion, there's no
problem. But there's a mutual exclusion problem: if more than one
of them is in motion at once, e.g., one going out and another
coming back, then they'll hit one another and you'll have to take
a moment to disentangle them by hand, which is annoying and
possibly messy. It's a race condition that you will undoubtedly
be bitten by quickly in real typing.

The problem is exacerbated if the letterforms for common digraphs
have adjacent spokes. This is because the closer two spokes are,
the easier they can hit one another: if the spokes are at
opposite ends of the array, then they can only hit at the point
where they converge at the cursor, but if they are adjacent then
they'll hit as soon as they start moving.

One solution, of course, is to introduce serialization through
use of locking: allow only one key to be depressed at a
time. Unfortunately, that reduces parallelism, because many
digraphs that you want to type in the real world do not have
adjacent spokes, even if you just put the keys in alphabetical
order.

The adopted solution, of using a QWERTY layout, is not a real
solution to the problem. Instead, it reduces the chances of the
race condition by putting keys for common digraphs, and therefore
their spokes, far away from each other. You can still jam the
mechanism and have to untangle the spokes, but it happens less
often, at least for English text. This in fact helps you to type
*faster*, not slower, because you don't have to stop so often to
deal with jammed-together spokes.

To conclude: mechanical QWERTY typewriters are at the same time
an example of optimization for the common case and inherently
flawed because of the remaining race condition. This is a great
example of a tradeoff that you should not make when you design a
program!

[1] I don't know any of the proper vocabulary here. I was about 8
years old when I used the one we had at home, and it was thrown
out as obsolete soon after.
 
