Non-constant constant strings


Rick C. Hodgin

Using multiple compilers and dialects to build one program, for the sake of
some issue of syntactic sugar in the source code (and saving a few bytes while
the program is running) is ... in the territory of Mel, the Real Programmer.

'"Even the initializer is optimized", he said proudly'

Yes, you optimized the initializers, just like Mel.

References:
http://en.wikipedia.org/wiki/The_Story_of_Mel
http://www.cs.utah.edu/~elb/folklore/mel.html


I already had a working solution that I was happy with. This entire pursuit
was a mental exercise. I've now satisfied my curiosity, learned a lot of
new things, and have a new tool in my arsenal (the ability to get GCC and
Visual C++ to share resources). And you do too.

I would say my efforts were well worth the price of your admission fee.
Enjoy the fruits of my labor ... may some use come of them. And if not,
then I thank you for your assistance in helping me arrive at this
solution.

Best regards,
Rick C. Hodgin
 

Kaz Kylheku

There are high level people, Kaz, and low-level people. Those languages
you love so well at the high level were written by people who toil down
in the low level.

Note that they represent a third category. They are neither "high level people"
nor "low level people".
 

Seebs

They didn't merely deprecate gets(); that would mean declaring that its
use is discouraged and it may be removed in a future standard. They
removed it completely (without going through an initial step of marking
it as obsolescent).

I believe the rationale is pretty much that everyone already considered it
deprecated, and indeed, I've used multiple implementations which would give
you diagnostics warning you that it was unsafe and should not be used, I
think even before the committee got around to doing anything.
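
A minimal sketch of the usual replacement, reading a line with fgets()
instead of gets() (the buffer name and size here are invented):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256];                        /* bounded buffer; the size is arbitrary */

    if (fgets(line, sizeof line, stdin) != NULL) {
        line[strcspn(line, "\n")] = '\0';  /* drop the trailing newline, if any */
        printf("read: %s\n", line);
    }
    return 0;
}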

-s
 

Kaz Kylheku

Yes. That is, it may no longer be declared in <stdio.h>, and a strictly
conforming program may define its own function named "gets".

However, "ioctl" also may not be in <stdio.h> (or any standard header)
and a strictly conforming program may define its own function named "ioctl".
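
A rough sketch of that second point, with a placeholder body; under strict
conformance the name "ioctl" is free for the program's own use:

#include <stdio.h>

/* The program's own "ioctl", nothing to do with the POSIX function of the
   same name; no standard header claims this identifier. */
static int ioctl(int request, int value)
{
    return request + value;   /* placeholder behaviour */
}

int main(void)
{
    printf("%d\n", ioctl(2, 3));   /* prints 5 */
    return 0;
}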
 

Seebs

There are high level people, Kaz, and low-level people. Those languages
you love so well at the high level were written by people who toil down
in the low level.

That's a vast oversimplification. Most programmers I know can work at higher
and lower levels, and switch between them.

And that turns out to give a big advantage, because if you try to think
about a language in terms of what you think you know about its
implementation, you'll generally get noticeably worse results. The most
successful C programmers are the ones who, even if they *do* know all sorts
of things about what happens under the hood, are able to disregard those
things and program to the abstract machine, without letting assumptions
about the nature of memory get in their way.
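
One small, invented illustration of the difference: peeking at a float's
bytes through a pointer cast leans on assumptions about memory and breaks
the aliasing rules, while memcpy() expresses the same intent in the abstract
machine's terms (and compilers optimize it to the same load anyway):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    float f = 1.0f;        /* assumes the usual 32-bit IEEE-754 float */
    uint32_t bits;

    /* The "I know how memory works" habit:
     *     bits = *(uint32_t *)&f;     breaks the aliasing rules
     * The abstract-machine version of the same idea:
     */
    memcpy(&bits, &f, sizeof bits);

    printf("0x%08lx\n", (unsigned long)bits);   /* 0x3f800000 on such targets */
    return 0;
}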

-s
 

Rick C. Hodgin

Note that they represent a third category. They are neither "high level people"
nor "low level people".

There's also a fourth category: high and low level people!

You'll remember that I'm not only creating RDC, but also Visual FreePro, a
high level xbase language with objects, multiple-inheritance, and so on.

http://www.visual-freepro.org

:)

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

So why don't you use Netbeans? The two are both cross platform and
pretty similar feature wise.

In my experience, even working with Java, they are not very close. The IDE
has some of the same features, but the edit-and-continue abilities of Visual
Studio make it far superior. The debugger also totally and completely buries
GDB in terms of editing.

I will give NetBeans and Eclipse props for native refactoring abilities,
along with general code reformatting. However, that asset is largely
negated with Whole Tomato's Visual Assist X add-on to Visual Studio.

When I code in Linux I generally use CodeLite by Eran Ifrah, and the GCC
toolchain for C/C++ work. For anything Java related I use NetBeans or
Eclipse. Mostly NetBeans.

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

That runs counter to almost all modern thinking in language design -
visibility and access should, by default, be restricted as much as
possible. That reduces coupling between modules, and makes for easier
development, debugging and maintainability. And frankly the need to
minimize coupling is an accepted principle that's as old as the hills
in computing (at least back to the mid seventies).

[He said rubbing his hands together, and in a particular voice]

"Yes! And that's exactly why my plan will succeed! We must go back to
the future!"

Those old timer developers knew what they were doing I say! At least that's
my assessment of them. Perhaps yours is different? Perhaps you conclude
that those early software pioneers were nothing but a bunch of wrong-doers
who, had they had your current level of insight, education, and expertise,
would've done it so differently? :)

Actually there are some areas I would agree with your assessment of them in
those regards. :)

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

That's a vast oversimplification. Most programmers I know can work at higher
and lower levels, and switch between them.

And that turns out to give a big advantage, because if you try to think
about a language in terms of what you think you know about its
implementation, you'll generally get noticeably worse results. The most
successful C programmers are the ones who, even if they *do* know all sorts
of things about what happens under the hood, are able to disregard those
things and program to the abstract machine, without letting assumptions
about the nature of memory get in their way.


Ding ding ding! Precisely, Oglevee! Exact-a-mundo (sp?)!! :)

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

Precisely, Oglevee!

Oops! Should be "Ogilvie" ... I guess I just don't know women's hair product
spellings as well as I should to be quoting them in text form after hearing
them decades ago on old commercial slogans. :)

Best regards,
Rick C. Hodgin
 

Seebs

Those old timer developers knew what they were doing I say! At least that's
my assessment of them. Perhaps yours is different? Perhaps you conclude
that those early software pioneers were nothing but a bunch of wrong-doers
who, had they had your current level of insight, education, and expertise,
would've done it so differently? :)

The famous example would be that _The Mythical Man-Month_ argued strongly for
transparency, but the 20th anniversary edition says unequivocally that this
was wrong, and that it is better for modules to be independent and not see
each other's internals.

I generally expect pioneers in a field to do lots of stuff that we later
find was ill-considered. Global variables seemed so reasonable at first, but
time has taught us that they are almost always the wrong tool for the job.
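
A tiny, made-up C sketch of the alternative: keep the state at file scope
with internal linkage and expose only functions, so other modules never
touch the variable directly:

/* counter.h -- the only thing other modules get to see (names invented) */
void counter_increment(void);
unsigned long counter_value(void);

/* counter.c -- the state itself has internal linkage, so no other
   translation unit can reach it. */
static unsigned long count;

void counter_increment(void)      { count++; }
unsigned long counter_value(void) { return count; }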

-s
 

Kaz Kylheku

That's a vast oversimplification. Most programmers I know can work at higher
and lower levels, and switch between them.

More importantly, it doesn't constitute an excuse for working at an
inappropriate level, or using clumsy programming techniques from a low
level language in a high level language.

I do agree that there are: high level people, Kaz, and low-level people.
 

Rick C. Hodgin

More importantly, it doesn't constitute an excuse for working at an
inappropriate level, or using clumsy programming techniques from a low
level language in a high level language.

I do agree that there are: high level people, Kaz, and low-level people.

I wrote my entire 32-bit operating system in x86 assembly. Does that mean
it was a total and complete waste of time? It took me years and is
incomplete to this day, yet I gained a comprehensive and fundamental
understanding of computer architecture and all related aspects of data
processing in the process ... was it worth it? Yes. Best education I
could've received. And, it was fun! :)

https://github.com/RickCHodgin/libsf-full/tree/master/_exodus

Best regards,
Rick C. Hodgin
 

Ben Bacarisse

[He said rubbing his hands together, and in a particular voice]

"Yes! And that's exactly why my plan will succeed! We must go back to
the future!"

Those old timer developers knew what they were doing I say!

Except for those useless text-based interfaces of course...

At least that's my assessment of them. Perhaps yours is different?

No, I agree. Peter Landin, Christopher Strachey, Tony Hoare, John
McCarthy, Edsger Dijkstra, Peter Naur, Steve Bourne, Kenneth
Iverson... the list goes on and on. They gave us languages like APL,
LISP, CPL, Algol (-60 and -68), CSP, and the functional paradigm -- all
of which seem to take a different view in terms of the benefit of
abstraction. Do we need to go back to the future further?

<snip>
 

glen herrmannsfeldt

That's a vast oversimplification. Most programmers I know can
work at higher and lower levels, and switch between them.

This question comes up often regarding HDL (Verilog) programming.

And that turns out to give a big advantage, because if you try
to think about a language in terms of what you think you know
about its implementation, you'll generally get noticably worse
results. The most successful C programmers are the ones who,
even if they *do* know all sorts of things about what happens
under the hood, are able to disregard those things and program
to the abstract machine, without letting assumptions about the
nature of memory get in their way.

Also, it isn't worth an hour of human time to save a microsecond
of CPU time.

For those programming in the days when memories were smaller,
clocks were slower, and CPUs cost millions of dollars, it takes
some getting used to.

Still, it is nice sometimes to have some fun, try out some of
the different ways to do something, and there are cases where it
really does matter. Go down to the inner loops of a complicated
numerical algorithm and get it to run a little faster.

Most often, readability is much more important than speed.
It takes a while to learn, though.

-- glen
 

glen herrmannsfeldt

(snip, someone wrote)
I wrote my entire 32-bit operating system in x86 assembly.
Does that mean it was a total and complete waste of time?

If you did it 30 years ago, no. If you do it now, yes.

It took me years and is incomplete to this day, yet I gained
a comprehensive and fundamental understanding of computer
architecture and all related aspects of data processing in the
process ... was it worth it? Yes. Best education I
could've received. And, it was fun! :)

Well, fun is different. Most fun things are, technically, a waste
of time but we do them anyway.

-- glen
 

glen herrmannsfeldt

(snip)
The famous example would be that _The Mythical Man-Month_ argued
strongly for transparency, but the 20th anniversary edition says
unequivocally that this was wrong, and that it is better for modules
to be independent and not see each other's internals.

I generally expect pioneers in a field to do lots of stuff that we
later find was ill-considered. Global variables seemed so
reasonable at first, but time has taught us that they are
almost always the wrong tool for the job.

Well, things have changed much over the years. It seems to me that
global variables likely were the right solution some years ago,
and maybe still are in a few cases.

It is easy to forget now the small and slow machines from not so
many years ago.

OK, global variables. I have used this example in other groups,
though not here, I believe, and in any case not recently.

Say you are writing a system to do weather simulation for
forecasting. It is likely that you will have arrays for the
temperature and pressure of the atmosphere that are used throughout.
Even more, much of the coding won't be useful for anything else.
Instead of passing those arrays as arguments through thousands
of subroutine calls, they might as well be global.

Remember, readability is the goal. If you know that they are used
in all routines, global doesn't hurt. (But be sure that the names
are consistent, and not confusing.)

Even so, you might need to use some routines that aren't so
special purpose. If you need to call a differential equation
solver, one written for general use, you might find it better
to pass them as arguments. Again, readability and reuse.
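
A toy sketch of that split (all names and sizes are invented): the
simulation-specific fields are global, while the general-purpose routine
takes its data as parameters so it can be reused elsewhere:

#define NLAT 180
#define NLON 360

/* Fields that essentially every routine in this one program touches:
   making them global saves threading them through thousands of calls. */
double temperature[NLAT][NLON];
double pressure[NLAT][NLON];

void advance_timestep(void)
{
    /* reads and updates temperature[][] and pressure[][] directly */
}

/* A general-purpose routine, by contrast, takes its data as arguments,
   so it is usable outside this simulation. */
double mean(const double *values, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += values[i];
    return n > 0 ? sum / n : 0.0;
}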

Also, some decisions last a long time. Design decisions from OS/360
from about 50 years ago still affect the way current z/OS works.
(And note that programs written in S/360 assembler many years ago
will still run on z/OS today.)

-- glen
 

Kaz Kylheku

(snip)

Well, things have changed much over the years. It seems to me that
global variables likely were the right solution some years ago,
and maybe still are in a few cases.

What we have learned (decades ago) is that global variables can be tamed in the
form of "dynamically scoped" (a.k.a. "special") variables.
 

Geoff

There are no computers anywhere in existence, be they men or machine, that
do not:

(1) Take some form of input
(2) Perform some operation on it
(3) Generate output

It is the fundamental definition of a "computer," ie "one who computes" or
"something that computes". They all follow this course:

(1) input
(2) process
(3) output

And in all cases, that absolutely means read-write. There are no exceptions.

Completely false suppositions. See below.

All computers are read-write by default. The only things which make them
read-only are imposed protocols which inhibit their otherwise unrestricted
operation.

Nevertheless, that absolutely doesn't mean the input needs to be
"read-writeable" in all cases or that string literals can't be
read-only. In fact, the initialization of constant strings into the
CONST segment, in the fashion VC++ does, is done with an eye toward
security. Your proposed compiler will allow a process to write new
data into your string buffers, with the potential to overflow those
buffers, a clear security risk if an attacker can control the input
that modifies your strings. You have not demonstrated that you
understand that issue and that you have coded to prevent such
exploitation. I dare say the amount of code you will need to write to
cover such contingencies will be far more repetitious and wasteful
than copying some read-only data into read-write memory with a few
memcpy calls.
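
For reference, the distinction as it usually shows up in C source (whether
the commented-out write actually faults depends on where the implementation
places literals; VC++ and GCC typically put them in a read-only section):

#include <stdio.h>

int main(void)
{
    char *msg = "hello";    /* the literal may live in read-only storage */
    char buf[] = "hello";   /* a writable copy, but only 6 bytes: writes must stay in bounds */

    /* msg[0] = 'H';           undefined behaviour; typically a fault */
    buf[0] = 'H';           /* fine: buf is the program's own array   */

    printf("%s %s\n", msg, buf);
    return 0;
}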

Computers take input, yes, but that input is typically read-only
(e.g., a keyboard or an A/D converter). The data is placed into
read-write buffers where it can be modified as needed, though many
times it isn't.

The process may be read-only code, not self-modifying, and certainly in
embedded systems it will ALWAYS be read-only, including any strings it
might be programmed to output. The x86 architecture also goes to great
pains to protect the code segment against modification. PROCESS DATA
will almost certainly be read-write but it may not even be stored in
common with INPUT.

Moreover, while a "computer" can be read-write, it is NEVER read-write
in all its memory for all time. Certain portions MUST be reserved as
read-only, and certain other special locations may even be write-only;
your argument is specious on these points. As a generalization of
"computer" your definitions and expectations fail miserably.
 

Geoff

Yeah ... it shouldn't be like that. In my ABI it won't. :) The strings
will be loaded directly from disk into the data segment they belong to. No
copying, no duplication ... just a data load.

This technique was implemented in early versions of Adobe Reader. The
"data" was then "executed" with the resulting security breaches being
well-known.
 
