Non-constant constant strings


Ian Collins

Rick said:
In my experience, even working with Java, they are not very close. The IDE
has some of the same features, but the edit-and-continue abilities of Visual
Studio make it far superior. The debugger also completely outclasses GDB
when it comes to editing.

Unlike VS, generic IDEs don't have the debugger built in. The version
of NetBeans I use (Solaris Studio) uses the dbx debugger, which supports
fix-and-continue.
 

Rick C. Hodgin

Unlike VS, generic IDEs don't have the debugger built in. The version
of NetBeans I use (Solaris Studio) uses the dbx debugger, which supports
fix-and-continue.

When I contacted the GCC mailing list folks a few years back asking about
edit-and-continue, I was told there was no need for such a feature, that
it didn't really help productivity, and that there was no interest in
adding it. I later learned that Apple had introduced a branch that
included "fix-and-continue," but that it wasn't slated to be merged back
into the mainline. More recently I saw a slide saying it was scheduled to
be introduced at some future time, but I don't know when.

I have tried Solaris from time to time on my computers. I have never been
able to get it to work using native hardware abilities due to a lack of
driver support. It would fall back to lesser video modes, slower disk
settings, and so on.

I've never tried dbx. Will look into it. Thank you.

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

This technique was implemented in early versions of Adobe Reader. The
"data" was then "executed" with the resulting security breaches being
well-known.

You misunderstand. The executable file on disk will be loaded by a loader.
That loader will allocate memory for the various areas that are known at
compile time, loading those portions directly there. It will not require
pre-user-code startup initialization to copy data around; rather, by the
time main() is reached, the image will already be loaded and user code
begins running straight off.
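
As a minimal sketch of what such a loader does (assuming a POSIX host and an
ELF-like image whose segment table has already been parsed; the struct and
field names here are hypothetical, not any real loader's API):

    #include <sys/mman.h>
    #include <sys/types.h>

    /* One region of the on-disk image, as recorded at compile time. */
    struct segment { off_t file_off; void *vaddr; size_t len; int prot; };

    /* Map the region straight from the file with its final protection
       (e.g. PROT_READ for a CONST segment). Nothing is copied before
       main() runs; the mapping itself is the load. */
    static void map_segment(int fd, struct segment seg)
    {
        mmap(seg.vaddr, seg.len, seg.prot,
             MAP_PRIVATE | MAP_FIXED, fd, seg.file_off);
    }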

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

Nevertheless, that absolutely doesn't mean the input needs to be
"read-writeable" in all cases or that string literals can't be
read-only. In fact, the initialization of constant strings into the
CONST segment, in the fashion VC++ does, is done with an eye toward
security.

The data populated into the CONST segment came from a disk file. It wasn't
already there inside the DRAM chip, etched into silicon. It was written
there by the ABI loader, and the setting was changed to read-only
afterward.

Your proposed compiler will allow a process to write new
data into your string buffers, with the potential to overflow those
buffers, a clear security risk if an attacker can control the input
that modifies your strings.

This is a totally separate issue. What you're talking about are protocols
imposed above and beyond the fundamental abilities of the machine, created
and employed to prevent undesirable outcomes, such as unwanted attacks
through buffer overruns.

One simple way is to implement a string system which knows the length of
the allocated string, and simply will not write beyond it. Microsoft
accomplished this using the sprintf(buffer, length, format, ...) style
of preventing buffer overruns.
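
As a sketch of that idea (hypothetical names, not any shipping library): the
buffer carries its allocated length, and the write routine refuses to pass it.

    #include <string.h>

    struct bstring { char *data; size_t alloc; };   /* buffer + its length */

    /* Copy src into dst, never writing past dst->alloc bytes; always
       NUL-terminates. Returns nonzero if the source had to be truncated. */
    static int bstring_set(struct bstring *dst, const char *src)
    {
        if (dst->alloc == 0)
            return 1;                    /* no room at all */
        size_t n   = strlen(src);
        size_t max = dst->alloc - 1;
        size_t cnt = n > max ? max : n;
        memcpy(dst->data, src, cnt);
        dst->data[cnt] = '\0';
        return n > max;
    }
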
You have not demonstrated that you
understand that issue and that you have coded to prevent such
exploitation. I dare say the amount of code you will need to write to
cover such contingencies will be far more repetitious and wasteful
than copying some read-only data into read-write memory with a few
memcpy calls.

Not at all. It's a fundamental ability that is necessary for the security
of any system. It simply mandates that the author of the language create
a library of features which employ those abilities. This accomplishes two
things: (1) it provides a library of base abilities allowing for faster
new software development, and (2) it introduces security.

Since all source code in my offerings will be available to be downloaded
and studied, anyone who sees flaws in my implementations will be able to
point them out or submit a patch to correct them.
Computers take input, yes, but that input is typically read-only
(e.g., a keyboard or an A/D converter); the data is placed into
read-write buffers where it can be modified as needed, though many
times it isn't.

Even a keyboard is read-write fundamentally. Internal to the 8042 keyboard
controller, and later derivatives, are states of each key, signals that
communicate on a timer back-and-forth with the motherboard, and so on. All
of those contain read-write abilities. The keypress is just a lever, a
mechanism that signals a change, part of the overall "computer".
The process may be read-only code, not self-modifying and certainly in
embedded systems it will ALWAYS be read-only, including any strings it
might be programmed to output. The x86 architecture also goes to great
pains to protect the code segment against modification. PROCESS DATA
will almost certainly be read-write but it may not even be stored in
common with INPUT.

When embedded code goes into production it should be read-only, obviously.
When you're doing development on an embedded board, your productivity will
be 50x greater if you have read-write abilities on your code, so you can fix
issues without having to go offline, fix them, re-flash the device, reboot,
and get back to the point where you were when the error occurred.

Read-write is ALWAYS better for debugging. For release mode, appropriate
things can be moved to read-only. The purpose of debug mode is that we are
developers working on something "in progress." Once it's debugged, we move
to the production side, where we distribute copies that are not designed to
be edited. However, there are other considerations well beyond that as well.
Moreover, while a "computer" can be read-write, it is NEVER read-write
in all its memory for all time. Certain portions MUST be reserved as
read-only, and certain other special locations may even be write-only;
your argument is specious on these points. As a generalization of
"computer", your definitions and expectations fail miserably.

Modern x86-based computers contain segment selectors which identify a base
address, a length of data beginning at that address, and flags indicating
that segment selector's allowances, such as read-only, read-write, execute,
and so on.

On development machines, I guarantee you that there are segment selectors
inside the OS kernel that essentially set the base address to 0 and the
length to maximum, making everything read-write. In my operating system
I did this for kernel debugging. It allowed me to examine and fix
anything while the machine was running.

On this page:
https://github.com/RickCHodgin/libsf-full/blob/master/_exodus/source/common/equates.asp

You'll find this reference as a segment selector:
    _sALL_MEM EQU 8 * 8 ; All memory (for debugging)

Such features are desirable during development. Perhaps not at runtime,
at least not outside of a particular protocol.
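
For reference, a sketch of the logical fields packed into one 8-byte GDT
descriptor (the real encoding splits base and limit across the quadword; this
struct only names the pieces). The "8 * 8" in the equate above is simply the
byte offset of the ninth descriptor, since selectors index 8-byte entries.

    #include <stdint.h>

    struct gdt_descriptor_fields {
        uint32_t base;    /* linear base address of the segment */
        uint32_t limit;   /* segment length (bytes or 4 KiB pages) */
        uint8_t  access;  /* present, privilege ring, code/data, r/w bits */
        uint8_t  flags;   /* granularity, 16/32/64-bit default size */
    };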

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

What we have learned (decades ago) is that global variables can be tamed
in the form of "dynamically scoped" (a.k.a. "special") variables.

"The right tool for the right job," I always say. Well, I don't always say
that. It's not like I go around sounding off, "The right tool for the right
job! The right tool for the right job!" No. What I mean is this: there are
times when a particular thing is desirable, undesirable, times when it's an
overtly bad idea, and times when it is an exceedingly good idea, and all of
those states can stem from the same fundamental ability, device, or design.

We should all work together to improve one another. I'm very happy to learn
new things from people. I've learned a lot in this thread so far. But I
don't rush into each conversation waving my credentials so that everybody
knows what I'm talking about when I ask a question. So oftentimes I get far
more information than answers to my question. I get philosophy as to why the
question I'm asking is likely a bad one, and so on. It does not seem to
be a common consideration of man that when someone asks a question there might
actually be a valid reason why they asked that question; instead, the need
to mold people into the things we each know seems to be foremost in our
intents and purposes.

I had one person reply to me in comp.lang.c++ on the subject of this thread
who stated no fewer than three times in his four-paragraph response that I
didn't need to do what I was doing. I guess he was really trying to drive
home the point. :)

Coming from a deep assembly background, I understand what is happening at the
machine level. I also have a high-level background, as with Visual FoxPro's
xbase implementation, which is the basis used for Visual Basic and what became
.NET. All of those current abilities can be traced back to what we first saw
in Visual FoxPro 3.0, circa the mid-1990s. It's why Microsoft killed off
Visual FoxPro ... they had to. It was too powerful, too small, and too fast,
and it had no native ability to control the software ecosystem, due to its
historical licensing model of free unlimited runtime distribution, with each
machine carrying out all of the work afforded by its total abilities using a
13MB DLL install.

In any event ... my goals are to produce products which do not tie people's
hands, but rather open and expose everything so that people can share their
work, build atop one another's software (and not just at the compiler source
code level, but rather at the "this is my office suite" level, and so on).

I want people to get into the mindset of helping people, not profits; of
tearing down self-imposed walls which fight and rage against others,
invisible monopolistic fortresses around commodities like software, which
would have unlimited distribution at essentially no cost were it not for the
barriers imposed by men who say, "NO! YOU CANNOT HAVE IT UNLESS YOU PAY!!"

I'm trying to change the paradigm itself, top to bottom, front to back,
and it begins and ends with a focus upon Jesus Christ, and His teachings
that we should "love our neighbor as our self," because they are the true
foundations of love that permeate all such endeavors. It comes from Him,
to us, and we are to receive it, and give it back to Him, and unto others.

It's why the Liberty Software Foundation exists. It's why Village Freedom
Project exists. It's why I'm spending all of my time writing RDC and Visual
FreePro. It begins somewhere, with someone, focused on giving unto others
upon the foundation which is Jesus Christ. And I am trying very, very hard
to do this. And I am succeeding ... slowly. :)

Best regards,
Rick C. Hodgin
 

James Kuyper

On 01/24/2014 08:19 AM, Rick C. Hodgin wrote:
...
... So oftentimes I get far
more information than answers to my question. ...

I routinely give a lot more information with my answers than most people
- I believe it's important to help people see how they could determine
the answer themselves from other information. I hope I haven't wasted
too much of your time (and mine) telling you things you already knew. As
far as I can tell from your responses, most of the objective facts that
I talked about were previously unknown to you. The opinions I expressed
that you dismissed so readily are a different matter - they seem to have
already been familiar to you.
... I get philosophy as to why the
question I'm asking is likely a bad one, and so on. It does not seem to
be a common consideration of man that when someone asks a question there might
actually be a valid reason why they asked that question, ...

It's only certain types of question that provoke that response, and
legitimately so, since those types of questions are disproportionately
asked by people without valid reasons for asking them. People who ask
how to do something that can't be done, have to step back one level and
figure out why they wanted to do it, so they can choose a different
approach to achieving the same goal that actually is feasible. A helpful
responder will guide them in the discovery of that alternative approach.
People who ask how to do something that shouldn't be done, usually do so
because of a lack of understanding of the reasons why it shouldn't be
done, or a lack of appreciation for the validity of those reasons. A
helpful responder will educate them in those reasons. These legitimately
helpful responses can be extremely frustrating to someone who is
incorrectly certain that it can and should be done.
I had one person reply to me in comp.lang.c++ on the subject of this thread
who stated no fewer than three times in his four-paragraph response that I
didn't need to do what I was doing. I guess he was really trying to drive
home the point. :)

I presume he failed in that effort?
Coming from a deep assembly background, I understand what is happening at the
machine level. ...

Paying too much attention to what you think is happening at the machine
level can be a serious impediment to writing good code for a higher
level language like C. C code tells the compiler what the program should
do, and leaves it up to the compiler to decide how to do it; either you
trust your compiler, or you should replace it with one you do trust, or
you should go back to assembly language. If you find yourself constantly
trying to find ways to force the C compiler to generate assembly code
similar to what you would have produced if you were writing it yourself,
then you're approaching the whole thing wrong.
 

David Brown

When embedded code goes into production it should be read-only, obviously.
When you're doing development on an embedded board, your productivity will
be 50x greater if you have read-write abilities on your code, so you can fix
issues without having to go offline, fix them, re-flash the device, reboot,
and get back to the point where you were when the error occurred.

Read-write is ALWAYS better for debugging. For release mode, appropriate
things can be moved to read-only. The purpose of debug mode is that we are
developers working on something "in progress." Once it's debugged, we move
to the production side, where we distribute copies that are not designed to
be edited. However, there are other considerations well beyond that as well.

Please don't make such wild assertions and generalisations about a type
of development that you don't understand. It makes it hard to take your
other points seriously.

You sound as though you do development by throwing together a vague
first-draft of a program, then spend all your time doing live debugging.
In /professional/ development, a significant proportion of time is
spent doing specifications, planning, design, prototyping, simulation,
etc., before coding even starts. And you do your coding in ways aimed
at minimising the risk of errors - you want your code to be error-free
by design, not by luck during debugging sessions. For many projects, I
have not used any kind of debugger at all, because I haven't needed one
- and for other kinds of projects I don't use a debugger because it is
impossible to use one. (Note that this does not mean my programs are
often bug-free on first attempt, or that I don't do debugging - just
that I often don't use the kind of debugger tools you are talking about.)
 

Rick C. Hodgin

Please don't make such wild assertions and generalisations about a type
of development that you don't understand. It makes it hard to take your
other points seriously.

It's my experience. I haven't encountered any task I've done to date
that wasn't done faster when I could make immediate changes. YMMV.
You sound as though you do development by throwing together a vague
first-draft of a program, then spend all your time doing live debugging.

I have dyslexia and often make unexpected mistakes. They aren't due to
an error in thinking, but to the translation of my thoughts into the parts
that do the typing, or, when I'm speaking to someone, to using the wrong
words. It's a reality that often sparks humor in face-to-face conversation,
because I rarely catch myself using the wrong words; in my thinking I
was not using the wrong words.
In /professional/ development, a significant proportion of time is
spent doing specifications, planning, design, prototyping, simulation,
etc., before coding even starts. And you do your coding in ways aimed
at minimising the risk of errors - you want your code to be error-free
by design, not by luck during debugging sessions. For many projects, I
have not used any kind of debugger at all, because I haven't needed one
- and for other kinds of projects I don't use a debugger because it is
impossible to use one. (Note that this does not mean my programs are
often bug-free on first attempt, or that I don't do debugging - just
that I often don't use the kind of debugger tools you are talking about.)

David, I am a full-time professional software developer.

Best regards,
Rick C. Hodgin
 

Kaz Kylheku

It's my experience. I haven't encountered any task I've done to date
that wasn't done faster when I could make immediate changes. YMMV.

Since Lisp was invented in the 1950s, the state of the art has been that you
can modify the live program in memory by interactively replacing functions and
whatnot. Everything else is a sad limitation, particularly on systems that
have the resources to support it (i.e., anything more than an 8-bit
microcontroller). This, note, is not "self-modifying code", because
at no point is the instruction pointer or instruction stream of any thread
tweaked. Old versions of functions being executed continue to be executed
until every thread returns out of them; then they become garbage. Brand
new calls to functions through the symbolic function bindings call the new
functions. This is robust enough that you can do it on a deployed system (and
using optimized object code).
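
A rough C analogue of that mechanism, as a sketch: calls go through a symbolic
binding (here just a function pointer), so installing a new definition affects
only new calls, and no currently executing code is rewritten.

    #include <stdio.h>

    static int greet_v1(void) { return puts("hello, v1"); }
    static int greet_v2(void) { return puts("hello, v2"); }

    static int (*greet)(void) = greet_v1;   /* the "function binding" */

    int main(void)
    {
        greet();            /* existing binding calls v1 */
        greet = greet_v2;   /* "redefine" the function   */
        greet();            /* brand new calls reach v2  */
        return 0;
    }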

The MSVC "edit and continue", though, is a self-modifying code idiocy that
actually rewrites functions that are currently executing. Once you do this,
you cannot be sure whether any behavior is a consequence of this hack, or a
real bug in the program. It is certainly not robust enough that you could
confidently connect the debugger to a live system on a customer site, suspend
it, fix something, and then just resume it, expecting that executable to
continue running for another 300 days.

By the way, Microsoft could impress me with their "edit and continue" ability
by demonstrating a Windows update without a reboot (let alone two or three).
 

Rick C. Hodgin

Since Lisp was invented in the 1950s, the state of the art has been that you
can modify the live program in memory by interactively replacing functions and
whatnot. Everything else is a sad limitation, particularly on systems that
have the resources to support it (i.e., anything more than an 8-bit
microcontroller). This, note, is not "self-modifying code", because
at no point is the instruction pointer or instruction stream of any thread
tweaked. Old versions of functions being executed continue to be executed
until every thread returns out of them; then they become garbage. Brand
new calls to functions through the symbolic function bindings call the new
functions. This is robust enough that you can do it on a deployed system (and
using optimized object code).

The MSVC "edit and continue", though, is a self-modifying code idiocy that
actually rewrites functions that are currently executing.

MSVC does this in some cases. In other cases it writes new functions,
appending them to the end of the executable code area and updating its
pointers to indicate where the new function now exists.
Once you do this,
you cannot be sure whether any behavior is a consequence of this hack, or a
real bug in the program. It is certainly not robust enough that you could
confidently connect the debugger to a live system on a customer site, suspend
it, fix something, and then just resume it, expecting that executable to
continue running for another 300 days.

Yes. In those cases where there is concern you click Stop, Recompile, and
then Restart -- the same procedure you use today in every case on developer
platforms without the edit-and-continue ability.
By the way, Microsoft could impress me with their "edit and continue"
ability by demonstrating a Windows update without a reboot (let alone
two or three).

That would be a great feature.

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

I would also be interested in how you get the effect of an explicitly
(I'm guessing you would need a keyword like "nonconst" to make it
explicit) read-write variable x:
I think for these purposes it should be ro and rw, as in:

    ro static char helpmsg1[] = "...";
    rw static char helpmsg2[] = "...";

By default, the global compiler switch would be used:

    rdc -ro file.rdc

I would try to insist that the code work independently of
such a global compiler switch if I were working with a compiler
that did that and had any input into coding style.

It does. By default it's always rw. :)
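
In today's C the same per-variable choice is spelled with const; a sketch of
how ro/rw would map onto existing practice (the section names in the comments
are typical GCC/ELF placements, not mandated by the standard):

    static const char helpmsg1[] = "usage: ...";  /* "ro": .rodata, read-only */
    static       char helpmsg2[] = "usage: ...";  /* "rw": .data, read-write  */
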
In other
words, there is no default and it is mandatory that any
place you could use "rw" or "ro" you must use one of them.

I could add a -rx switch which forces every instance to be specified at
the point of definition. Not a bad idea, though I still think it's
better to have everything read-write except those things that specifically
need to not be read-write.
Even if the variables are NOT? What happened to all-or-nothing?

I meant executable code. I think I corrected that in a later post. :)
No, I'm talking about *EXECUTABLE* code. The program code being
run in the form the CPU uses to execute it.

Me too. I corrected my position to say that during debugging, by default,
all executable code is read-write, and that in release mode, by default,
all executable code is read-only. However, it's a switch that can be
set in either case.
Incidentally, you keep rambling on about INPUT ... PROCESS ...
OUTPUT. Well, in many of those processes, the INPUT is read-only.
The computer does not force the typist to untype what he typed on
a keyboard and retype it differently.

Pressing Caps Lock signals the keyboard indicator. Read/write. As I
mentioned to someone else, the old mechanical cash registers had hand
cranks out the side. These engaged the mechanism but weren't the
mechanism itself, just a part of it. The same is true of the switches
attached to labeled keys.
The computer does not force
the operator to unclick the mouse. Ancient mainframe computers
from the 1970s rarely read a deck of punch cards as input and
altered it before returning it - and many didn't have card punches
at all. Programs (and this includes compilers) often read files
without altering them (and this, hopefully, includes source
code).

You're describing hand cranks. The computer is the mechanism that can
read input, process it, and write output. The output does not have to
come back to its source. However, on computers, there are registers or
memory areas which can have values read, computed on, and written back.

It may not be able to do this at every point, but those limits are due to
added protocols which inhibit the ability, not because computers cannot
read input, process, and write output.
The code being run is often read-only. The first thing your
desktop x86 runs is from the BIOS, which is read-only.

That's a protocol placed atop the CPU's native ability to read/write. The
CPU can read-write at any point to any bit of accessible memory (if the
right code is set up). And even when running BIOS code it can attempt to
write to the BIOS ROM ranges. The write attempts there do not generate an
error ... they just fail to write anything, leaving the data as it was before.
Surprise! The x86 in protected mode is exactly the architecture
I'm talking about. The protection bits don't provide for a combination
of "writable" and "executable" at the same time.

Create two selectors that exist at the same time, one for executing and
one for writing. The debugger knows which one to use, and writes through
the data selector update the underlying executable code as well; just be
sure to flush the instruction prefetch queue.
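
On a modern OS the same two-views idea can be sketched with two mappings of
one shared memory object, one writable and one executable (assuming Linux and
memfd_create(); error handling trimmed for brevity):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = (size_t)getpagesize();
        int fd = memfd_create("code", 0);
        ftruncate(fd, (off_t)len);

        /* Two aliases of the same pages: the "write selector"... */
        unsigned char *wr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);
        /* ...and the "execute selector". */
        unsigned char *ex = mmap(NULL, len, PROT_READ | PROT_EXEC,
                                 MAP_SHARED, fd, 0);

        /* x86-64 encoding of: mov eax, 42 ; ret */
        unsigned char stub[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };
        memcpy(wr, stub, sizeof stub);        /* the "debugger" patches here */

        int (*fn)(void) = (int (*)(void))ex;  /* the CPU runs the alias */
        printf("%d\n", fn());
        return 0;
    }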

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

That, I suspect, will require re-writing a fair amount of existing
code to work with your compiler, because you'll discover that after
you're finished re-interpreting the code, you'll have several
function-local variables with the same name, and perhaps conflicting
types.

They would generate errors the user must fix. The purpose of my compiler
is new code, not to be 100% compatible with existing code.
Certain macros manage to function by defining an inner scope and
declaring temporary variables in that scope (sometimes with types
determined by the arguments to the macro). This will likely break.
Yes.
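
(The kind of macro meant above, as a sketch: it opens an inner scope and
declares a temporary whose type comes from the macro argument. GCC's typeof
is shown; the names are illustrative.)

    /* Classic swap macro: the do/while block is the inner scope. */
    #define SWAP(a, b) do {        \
        typeof(a) tmp_ = (a);      \
        (a) = (b);                 \
        (b) = tmp_;                \
    } while (0)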

I also think that this is going in the wrong direction. Being able
to limit the scope of variables is a GOOD thing.

I appreciate your input. There are lots of other tools walking in the
direction you're going. My advice: use one of those. :)

Best regards,
Rick C. Hodgin
 

Geoff

One simple way is to implement a string system which knows the length of
the allocated string, and simply will not write beyond it.

This is what std::string is for.
Microsoft
accomplished this using the sprintf(buffer, length, format, ...) style
of preventing buffer overruns.

I believe that's sprintf_s() and it's a kluge.
 

Geoff

You misunderstand. The executable file on disk will be loaded by a loader.
That loader will allocate memory for the various areas that are known at
compile time, loading those portions directly there. It will not require
pre-user-code startup initialization to copy data around; rather, by the
time main() is reached, the image will already be loaded and user code
begins running straight off.

No, I don't misunderstand. You are proposing to write code that
defeats the whole intent and design of x86 protected mode. You intend
to handle code like data and data like code and make the whole thing
read-write at once. This is fraught with danger and your compiler will
never be accepted as safe for use in systems exposed to malicious
input. By default your implementation is unsafe.
 

Rick C. Hodgin

This is what std::string is for.

To my knowledge, std::string is a C++ facility and requires a C++ compiler.
I have created an SDatum struct in RDC that does more or less the same
thing in C or C++. It's very straightforward:

    struct SDatum
    {
        char* data;      /* pointer to the string's allocated buffer */
        int   length;    /* allocated length of that buffer, in bytes */
    };

Various functions know and use SDatum in my language.
I believe that's sprintf_s() and it's a kluge.

Yes. It is not a good solution. sprintf_s() triggers an exception if the
length is exceeded, rather than simply truncating the result and providing
some error flag.

It does prevent buffer overruns though.
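
For what it's worth, the truncate-and-flag behavior already exists in C99:
snprintf() never overruns, truncates instead, and its return value (the
length that would have been written) serves as the error flag. A sketch:

    #include <stdio.h>

    static void demo(void)
    {
        char buf[8];
        int needed = snprintf(buf, sizeof buf, "value=%d", 123456);
        if (needed >= (int)sizeof buf) {
            /* output was truncated: handle it instead of trapping */
        }
    }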

Best regards,
Rick C. Hodgin
 

Ian Collins

Rick said:
I had one person reply to me in comp.lang.c++ on the subject of this thread
who stated no fewer than three times in his four-paragraph response that I
didn't need to do what I was doing. I guess he was really trying to drive
home the point. :)

With good reason...
Coming from a deep assembly background, I understand what is happening at the
machine level.

Maybe you do, maybe you just think you do, which can lead to all sorts of
incorrect assumptions. Once upon a time, back when the 386 was king, I
knew pretty much all there was to know about them after writing an
embedded kernel. I would make no such claims about the multi-core Xeons
that drive this system. Yes, I still know the assembly code, but as to
what really goes on in all of the complex subsystems on these
processors, I leave that to the machine!
 

Rick C. Hodgin

No, I don't misunderstand.

Yes, you do.
You are proposing to write code that
defeats the whole intent and design of x86 protected mode.

Not in the slightest, Geoff. No aspects of existing hardware architecture
security mechanisms will be defeated by my methodology. I am planning a new
virtual machine that runs within a protected environment as well, but one
that does not rely upon native instructions.

See these:
https://github.com/RickCHodgin/libsf/tree/master/documentation/vvm/OBED
https://github.com/RickCHodgin/libsf/blob/master/documentation/vvm/OBED/obed_draft_0.60.txt
https://github.com/RickCHodgin/libs...BED/obed_draft_0.60.assembly_instructions.txt
https://github.com/RickCHodgin/libsf/blob/master/documentation/vvm/OBED/obed_draft_0.60.png

During debugging, or when the release-mode setting is overridden to allow
the executable code to be read-write, the code will be handled differently
than it is today. But that happens within the safety and security of a
developer's debugging environment, or at the far end of a purposeful
decision made to override the default read-only / execute-only setting for
executable code.

Other than that, it will be loaded as it is today when running on native
hardware. When running within the VM it will always be read-write, but
through a protocol, never outside of that protocol.
You intend
to handle code like data and data like code and make the whole thing
read-write at once. This is fraught with danger and your compiler will
never be accepted as safe for use in systems exposed to malicious
input. By default your implementation is unsafe.

You (1) misunderstand, and/or (2) do not understand what an ABI loader
does today. The program is transferred from disk into memory; an exchange
takes place. There are certain ABIs the operating system knows how to
load itself, and other forms that must be set up manually by an artificial
environment. I will be creating a manual method, but one which puts into
practice all security features available to the OS today.

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

With good reason...


Maybe you do, maybe you just think you do, which can lead to all sorts of
incorrect assumptions. Once upon a time, back when the 386 was king, I
knew pretty much all there was to know about them after writing an
embedded kernel. I would make no such claims about the multi-core Xeons
that drive this system. Yes, I still know the assembly code, but as to
what really goes on in all of the complex subsystems on these
processors, I leave that to the machine!

As I say, there are high-level people and low-level people. Time will tell
whether I have it right. I'm going to suggest that whether I do or do not,
it will not affect you, nor most (if not all) people in this forum, in the
slightest though. :)

Best regards,
Rick C. Hodgin
 

Kaz Kylheku

As I say, there are high-level people and low-level people.

Another repetition of that, after it has been hinted that there may be a third
category. (At least you left the ", Kaz, " out this time, though: good!)

The unfortunate and ironic upshot of this is that you must think that you're
pegged into one of these two groups.

By the way, assembly language is not really a "background" any more than,
say, alcohol or cigarettes are a "background".
 

Seebs

It's my experience. I haven't encountered any task I've done to date
that wasn't done faster when I could make immediate changes. YMMV.

I think it does. Immediate changes don't necessarily allow me to verify
that they'd work if they'd been that way all along.

-s
 
