Non-constant constant strings


Rick C. Hodgin

I'm annoyed I didn't think of it myself, being a big fan of compound
literals, but I had already got turned off by your telling everyone how
things could be so much better without extra commas being allowed and if
the standards committee had a brain between them. I stopped thinking
about the original issue. Fortunately Joe Keane didn't.

The comma responses came notably later in the thread, after my original question
was asked and even re-worded a few times. Those responses were also in direct
response to things I saw in this thread. For the record, I don't think I
knew about the trailing comma in C before this thread, though I do have a
faint memory of knowing about it in Java. I still think it's an overtly
silly feature ... but that is just my opinion. Nothing more.

Best regards,
Rick C. Hodgin
 

David Brown

The GUI is as far superior to text-based interfaces as anything is superior
to anything else. The time for text-based interfaces is past. It's time to
move toward the parallel nature of our growing world, and away from the
serial nature.

Speaking as someone who uses the command line (bash) constantly (I have
around 40 bash instances running on this machine at the moment), and
regularly uses ssh around different machines, that's just nonsense.

For most of my programming, eclipse is my tool of choice (though I
always use text-based makefiles rather than gui-based wizards and
options dialog boxes). For some tasks, gedit is a better choice. And
sometimes nano from the command line is faster and easier.

Each style of interface has its pros and cons, and trying to claim that
one is "far superior" to others without context is like claiming that
"cars are far superior to walking".
Everything in the future will be done in parallel.

That again is a wildly inaccurate generalisation. Post a video of how
you speed up your Usenet postings by using two keyboards in parallel,
one for each hand. If you can't do that, then please apply a bit of
thought and common sense before posting.

It is certainly true that for /some/ tasks, parallel processing is the
most likely future for increased speed and efficiency. Your PC already
has a "massively parallel compute engine" in its graphics card. But it
is very far from true that this will apply to "everything" - only a
small minority of tasks benefit significantly from parallelism.
In fact, I believe our
next big breakthrough in computers will be in a massively parallel compute
engine that basically replicates all processing simultaneously in many cores,
yet with subtle differences, allowing computed work to be derived with little
or no overhead -- i.e. quantum-like computing.

If you believe that, I've got a bridge to sell you.

We will see more "massively parallel" systems getting wider use (there
are already lots of these for specific tasks), but your PC is still
going to have trouble making good use of two cores for some time to
come. And "quantum computing" is a waste of time and money (except when
viewed as purely scientific research).
 

Rick C. Hodgin

I hit the GCC group a while back asking for the self() extension, which
allows recursion to the current function without it being explicitly
named or even explicitly populated with parameters. They said it was
not a good idea. I asked for something else (can't remember what)
and they said the same. So ... it was fuel for me to get started on my own.


I remembered what the other option was. It was edit-and-continue. I learned
a few months later that Apple had introduced what they called "fix-and-continue"
but that it wasn't scheduled to be added to the main line until much later.

I view edit-and-continue as the single greatest asset a developer could ever
possess in their toolkit. Why all compilers do not support this ability is
beyond me. And, for the record, the only reason I am using Visual Studio and
Visual C++ in 2014 is because VS has edit-and-continue. If GNU and GDB had
it I would've switched way back when. I do not like Microsoft at all, but in
my experience there has not been a better integrated developer environment
for C and C++ than VS.

Best regards,
Rick C. Hodgin
 

David Brown

How about this?
@ cat bar.c
char *bar[2] =
{
    "jjj",
    (char []) { "kkk" },
};

Schoolmaster Joe, brilliant! Fantastic! Thank you!

#define _rw(x) (char []) { x }

char* bar[] =
{
    "jjj",
    _rw("kkk"),
};

Works in GCC. Visual C++ is non-compliant though and doesn't support this
syntax. Still, a truly excellent solution!

Best regards,
Rick C. Hodgin

I don't use VC++ myself, but my understanding is that Microsoft gave up
on C standards long ago - I believe it supports C89 and perhaps C90, but
nothing newer. If you want to take advantage of modern C, pick a
different compiler.

Other than that, if your strings don't contain spaces, "funny"
characters, or duplications, then you might be able to make something
using token pasting and macros. But the only good alternative I can
think of relies on gcc extensions, which doesn't get you very far.
 

Rick C. Hodgin

Do you really mean this?
Yes.

I seem to remember that on some machines, in old versions of FORTRAN, you
could (via pass by reference) do the equivalent of
4 = 5;
and then many places thereafter, the constant 4 now had the value of 5.
This caused much confusion.

Understandably. I believe if you want something to be constant, you should
declare it as const, and then it is created at executable load time into an
area of memory which is explicitly marked read-only. It will signal a fault
when an attempt to write to it is made.
Literals SHOULD be constant.

I disagree. A literal should be a typed bit of text which explicitly conveys
a value of known type at compile time. It should not be read-only unless it
is explicitly cast with a const prefix. I would also change that to not
require the bulky keyword const, but to use C"text" or some other shorthand
equivalent.
A C quoted string is a literal, therefore
it reasonably should be constant. The main reason they were not given a
char const* type is that const was a new keyword for the language, and
this would break almost every program written. (There is also the
smaller problem that some functions take a char* parameter, return a
pointer into that string, and don't themselves modify the contents of
that parameter; since C doesn't have overloading, each such function
where it might make sense to use the returned pointer to modify the
string would need two versions: one taking const char* and returning
const char*, and one taking char* and returning char*.)

I'm not talking about changing C. C will be the way C is forever because
there is so much legacy code written in it. I'm talking about a new standard
for another version of C.
Note that C provides an easy method to change the literal (which is
defined as effectively const even if its type doesn't say so), into a
writable object with a statement as simple as
char mystr[] = "This is my string";
and if mystr is a global or static variable, this is normally a "no
cost" statement.

Also, since writing to a string literal is undefined behavior, and does
not require any diagnostics, an implementation is free to have a method of
supporting the strings being writable, and still be conforming. There is
a small cost to this, as programs might get a bit bigger to allow it,
since it disallows string folding (merging identical literals).

I think you must mean that something related to how strings are physically
packed together in memory by the compiler is undefined, and that therefore
overwriting a string boundary, for example, would cause undefined behavior.
If it's otherwise ... I'm confused, because writing to a string is a
fundamental component of data processing, and is something that should be
wholly defined. :)
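
To illustrate the distinction under discussion, here is a minimal sketch (the
names p and buf are just illustrative, not from the thread): writing through a
pointer to a string literal is the undefined case, while an array initialized
from a literal is an ordinary writable object.

#include <stdio.h>

int main(void)
{
    char *p    = "hello";   /* p points at the string literal itself        */
    char buf[] = "hello";   /* buf is a writable copy of the literal        */

    /* p[0] = 'H';             undefined behavior: the literal may sit in
                               read-only storage and the write may fault    */
    buf[0] = 'H';           /* fine: buf is a normal array object           */

    printf("%s\n", buf);    /* prints "Hello" */
    return 0;
}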

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

Speaking as someone who uses the command line (bash) constantly (I have
around 40 bash instances running on this machine at the moment), and
regularly uses ssh around different machines, that's just nonsense.

Are you using a GUI for each bash instance in a terminal window? A GUI
supports easy access to multiple terminal windows. A text-based interface
does not.
For most of my programming, eclipse is my tool of choice (though I
always use text-based makefiles rather than gui-based wizards and
options dialog boxes). For some tasks, gedit is a better choice.
And sometimes nano from the command line is faster and easier.

Try using Visual Studio with the Visual Assist X plugin. You'll be floored
at how much more productive you are. Edit-and-continue alone will shock you.
Each style of interface has its pros and cons, and trying to claim that
one is "far superior" to others without context is like claiming that
"cars are far superior to walking".

It's my experience. I have not found a better developer environment than
one based on a GUI, and one using edit-and-continue. Edit-and-continue
brings more to the table than I think most people realize.
That again is a wildly inaccurate generalisation. Post a video of how
you speed up your Usenet postings by using two keyboards in parallel,
one for each hand. If you can't do that, then please apply a bit of
thought and common sense before posting.

At some point a computer will "know you" and when you sit down at a computer
it will know, from prior patterns, that you like to read this, that, the
other thing, and it will already have those ready for you. It will know you
like to perform these searches on certain things, that you like to read so-
and-so's posts first, and it will do all of this for you.

These will be little "agents" acting on your behalf, preparing data for your
consumption, doing all of it in parallel.

That's what I'm referring to. And there is more as it specifically relates
to programming.
It is certainly true that for /some/ tasks, parallel processing is the
most likely future for increased speed and efficiency. Your PC already
has a "massively parallel compute engine" in its graphics card. But it
is very far from true that this will apply to "everything" - only a
small minority of tasks benefit significantly from parallelism.

Incorrect. In my opinion. I do not believe there will be anything that
remains completely serial for much longer. I believe there are some
physical hardware limitations being imposed upon us today because the hardware
people have not thought along those lines, or are pursuing the high-speed
core execution engine and do not want a complete paradigm shift. However, I
believe categorically that there will soon be a paradigm shift in computing.
We will break through the current model of one serial thread spawning another
big serial thread, and move into something much more conducive to processing
regular top-down code in parallel - processing both branches of an IF block,
for example, and then disregarding the results of the path not taken. In this
case, at the expense of electricity and heat generation, the process is sped
up as much as it can be because it doesn't have to wait for a prior result
before continuing on with the next one.

Several years ago I came up with a concept for a CPU. I mentioned it briefly
to Jerry Bautista at Intel when I went to visit him on the Terascale project.
I also submitted it to some folks at Tom's Hardware who "promised to pass it
along." It was a way to aggregate a series of "waiting threads" around the
current CS:EIP/RIP of the x86 engine, so that they could be kicked off in
advance to shift the components of reads and writes forward in time, and also
to handle future operations before the prior input results are known. The idea
was that everything in computing is essentially an equation. You can build a
lengthy equation and then fill in a few variables to determine the ultimate
path taken; you don't need to know the value of the variables ahead of time,
but rather can devise the equation on the fly and plug in the variables later.
In this way, all results of a block of code are already known; they're just
waiting on some prior input to determine which result is actually the correct
one.
If you believe that, I've got a bridge to sell you.

I do believe that. And I'm not buying a bridge from you. :)
We will see more "massively parallel" systems getting wider use (there
are already lots of these for specific tasks), but your PC is still
going to have trouble making good use of two cores for some time to
come. And "quantum computing" is a waste of time and money (except when
viewed as purely scientific research).

It will so long as we maintain the current CPU architecture, and the current
operating system architecture. What's needed is a new way of thinking. I
have that way of thinking ... I just need to get it all down in code, and it's
taking a long time to do that by myself.

Best regards,
Rick C. Hodgin
 

David Brown

Understandably. I believe if you want something to be constant, you should
declare it as const, and then it is created at executable load time into an
area of memory which is explicitly marked read-only. It will signal a fault
when an attempt to write to it is made.

An aim for a compiled language like C is to spot errors at compile time,
rather than at run time - it is better to give a compiler error when
trying to write to read-only data than to wait for a run time error
during testing.
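
For instance (a minimal sketch; the name greeting is just illustrative), a
const-qualified declaration turns this mistake into a compile-time diagnostic
rather than a run-time fault:

const char greeting[] = "hello";   /* read-only data */

void demo(void)
{
    /* greeting[0] = 'H'; */   /* uncommenting this line is rejected at
                                  compile time (assignment to const-qualified
                                  data), so the bug never reaches testing    */
}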

Also note that C is used on a huge variety of systems - including many
for which "signal a fault" or "memory area marked read only" makes no sense.
I disagree. A literal should be a typed bit of text which explicitly conveys
a value of known type at compile time. It should not be read-only unless it
is explicitly cast with a const prefix. I would also change that to not
require the bulky keyword const, but to use C"text" or some other shorthand
equivalent.

A literal conveys a specific value known at compile time - something
that is known at compile time should remain known at /all/ times. So
literals should be "absolute" constants - there should be no way to
change them.
 

BartC

A good 10-15 years of coding should be required for anyone who is to have
any input on the future direction of a language.

I don't agree. Sometimes five minutes is enough to form a strong opinion
about a new language, and not much longer to start having ideas on how to
fix it.

Regarding the trailing comma in C lists, it *is* an untidy, odd-looking
feature, but as has been said, can be useful, and is optional to use. There
are worse things to criticise.

(There are also a few things that I didn't use to like in C, but have since
adopted myself. Not many though..)
 

Rick C. Hodgin

An aim for a compiled language like C is to spot errors at compile time,
rather than at run time - it is better to give a compiler error when
trying to write to read-only data than to wait for a run time error
during testing.

Agreed. It's not always possible though. Some testing can only be done
at runtime because it relies upon external conditions or environments.
Also note that C is used on a huge variety of systems - including many
for which "signal a fault" or "memory area marked read only" makes no sense.

Agreed. There are, however, other systems which do support the greater
debugging features (like fault signaling) and on which a program can be
tested, so that when the same code is run on the weaker systems that cannot
catch such common program errors, the issue is moot because it has already
been caught on the better debugging platform.
A literal conveys a specific value known at compile time - something
that is known at compile time should remain known at /all/ times. So
literals should be "absolute" constants - there should be no way to
change them.

Ridiculous. A literal conveys something known explicitly at compile time,
but when used in conjunction with a variable name it only initially populates
the variable. A literal is an always-constant value only when it's used
directly in an expression; however, even that is not always the case, because
there exists the idea of self-modifying code. It's just that C doesn't
provide for those abilities natively.

A literal is a conveyed quantity initially, but what it initializes is
something that can change at any point thereafter when stored in a variable.

What you're referring to as "literal" by your definition is actually a
"constant". They're different. A literal is simply something conveyed
explicitly. A constant is something that cannot be changed (at least
legally, as per facilities given by the language).

Best regards,
Rick C. Hodgin
 

David Brown

Are you using a GUI for each bash instance in a terminal window? A GUI
supports easy access to multiple terminal windows. A text-based interface
does not.

Most of the bash instances have their own tabs within a terminal window
- so yes, they are command line text interfaces within a gui interface.
I've got other command line interfaces running on different machines
via ssh - most of these machines don't have a gui of any kind.

And yes, text-based interfaces easily support multiple windows. I use
"screen" extensively for that purpose.
Try using Visual Studio with the Visual Assist X plugin. You'll be floored
at how much more productive you are. Edit-and-continue alone will shock you.

I have absolutely zero interest in Visual Studio. I have only used it a
bit, but I saw nothing that enticed me from my current editors (Eclipse
when I want something big and powerful for heavy work, gedit for quick
and simple work, and nano for command line editing).

No, I would not be "floored" by anything VS has - disregarding any
Windows-specific development features (I don't use MS's tools, so they
are of no interest), for every feature VS has that Eclipse doesn't have,
Eclipse has two that VS doesn't. And since most of such complex
features are rarely used, there is no added value. On the other hand,
Eclipse works cross-platform and easily supports the wide range of
compilers I use - in fact, many toolchains I use come with Eclipse
pre-configured. VS, on the other hand, won't even run on my Windows
machine (MS has declared it to be out of date, even though it is fine
for the other Windows-only software I use) - and it certainly won't run
on my Linux machines.

And no, "edit and continue" would not "shock" me. I do much of my PC
development using Python, which is a language suitable for
interpretation and modifying code as you go along. Since my C
development is mainly for embedded systems, "edit and continue" would be
impossible (or at least very impractical), even if I thought the concept
were a good idea. If I stretch back to the time when I /did/ use MS
development tools - Visual Basic 3.0, which on Windows 3.1 was the only
practical RAD tool available until Delphi - it supported "edit and
continue". It did not "shock" me at the time, 20 years ago, but it
/did/ cause a lot of problems with inconsistent behaviour.
It's my experience. I have not found a better developer environment than
one based on a GUI, and one using edit-and-continue. Edit-and-continue
brings more to the table than I think most people realize.

I am sure you find VS useful for a lot of development. But that does
not mean it is useful for /all/ file editing purposes - can you honestly
say you never use "notepad" on your system?
At some point a computer will "know you" and when you sit down at a computer
it will know, from prior patterns, that you like to read this, that, the
other thing, and it will already have those ready for you. It will know you
like to perform these searches on certain things, that you like to read so-
and-so's posts first, and it will do all of this for you.

These will be little "agents" acting on your behalf, preparing data for your
consumption, doing all of it in parallel.

Doing multiple different small tasks in parallel is not a job for a
"massively parallel computing engine" - *nix systems have been doing
vast numbers of small tasks in parallel for 40 years, mostly with only 1
or a few cpu cores.
That's what I'm referring to. And there is more as it specifically relates
to programming.


Incorrect. In my opinion. I do not believe there will be anything that
remains completely serial for much longer.

Well, I doubt if anything I say will shake your beliefs - but you might
find reading some history will help you guess the future.
I believe there are some
physical hardware limitations being imposed upon us today because the hardware
people have not thought along those lines, or are pursuing the high-speed
core execution engine and do not want a complete paradigm shift.

People /have/ thought along these lines - and have done so for decades.
The /fact/ is that few tasks really benefit from series parallelism
(though many can benefit from some multithreading and doing a few things
at the same time - mostly in the form of waiting for different things at
the same time). The /fact/ is that even for tasks that can be naturally
parallelised, it is a difficult job prone to very hard to find errors,
and the returns are often not worth the effort.
However, I
believe categorically that there will soon be a paradigm shift in computing.

Yeah, they said that about neural networks, fuzzy logic, genetic
programming, artificial intelligence, self-programming computers,
quantum computing, etc., etc. And yet we still program in C, mostly
single threaded. Somewhere there is a pattern to be found...
We will break through the current model of one serial thread spawning another
big serial thread, and move into something much more conducive to processing
regular top-down code in parallel - processing both branches of an IF block,
for example, and then disregarding the results of the path not taken. In this
case, at the expense of electricity and heat generation, the process is sped
up as much as it can be because it doesn't have to wait for a prior result
before continuing on with the next one.

Several years ago I came up with a concept for a CPU. I mentioned it briefly
to Jerry Bautista at Intel when I went to visit him on the Terascale project.
I also submitted it to some folks at Tom's Hardware who "promised to pass it
along." It was a way to aggregate a series of "waiting threads" around the
current CS:EIP/RIP of the x86 engine, so that they could be kicked off in
advance to shift the components of reads and writes forward in time, and also
to handle future operations before the prior input results are known. The idea
was that everything in computing is essentially an equation. You can build a
lengthy equation and then fill in a few variables to determine the ultimate
path taken; you don't need to know the value of the variables ahead of time,
but rather can devise the equation on the fly and plug in the variables later.
In this way, all results of a block of code are already known; they're just
waiting on some prior input to determine which result is actually the correct
one.


I do believe that. And I'm not buying a bridge from you. :)


It will so long as we maintain the current CPU architecture, and the current
operating system architecture. What's needed is a new way of thinking. I
have that way of thinking ... I just need to get it all down in code, and it's
taking a long time to do that by myself.

It is easy to imagine a Utopia. But getting there from the real world
is where the /real/ problem lies.
 

James Kuyper

Interesting. To recap, my reasoning relates to the nature of software and
computers in general. They are just that: computers. They take input, process
it, and generate output. That, by definition, means read-write. The only

Just because something is possible, doesn't mean it's always a good
idea. One of the most valuable features of any high-level language is
that it makes it easier to AVOID doing some things that lower level
languages allow, because those things shouldn't be done. Identifying
some things as read-only, so that it's a detectable error if an attempt
is made to write to it, is a prime example of this.
cases where I would like to have something be read-only is when I explicitly
cast for it, indicating that this variable should not be modified. In that
way the compiler can help catch errors, as can the runtime environment.

The key thing that makes the error checking possible is the ability for
things to be marked read-only; whether that's the default or something
that must be explicitly specified is less important. Since developers
would otherwise often not bother to mark things read-only, making
read-only the default would actually increase the number of errors caught.

C takes a middle ground; objects of most types are read/write unless
declared 'const', but constants of all kinds, including string literals,
are read only - it's a constraint violation to even attempt to write to
most constants, but because of the way C deals with pointers, it's only
undefined behavior to attempt to write to a string literal. It could be
a constraint violation if string literals were given the type "const
char[n]", but 'const' was added to C too late; it would have broken too
much code to make that change. This seems to me to be a reasonable
arrangement.
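
A minimal sketch of that middle ground (the names n and s are just
illustrative): writing to a const-qualified object is a constraint violation
the compiler must diagnose, while writing to a string literal through a plain
char* compiles cleanly but is undefined behavior at run time.

const int n = 4;
char *s = "hello";

void demo(void)
{
    /* n = 5; */         /* constraint violation: the compiler must diagnose
                            assignment to a const-qualified object           */
    s[0] = 'H';          /* accepted by the compiler, but undefined behavior:
                            the string literal may live in read-only storage */
}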

You can easily create a modifiable string by using a string literal to
initialize a character array. This strikes me as a reasonable approach.
To make a modifiable number, you have to store it in a variable; why
should strings be any different?
 

Rick C. Hodgin

I have absolutely zero interest in Visual Studio. I have only used it a
bit, but I saw nothing that enticed me from my current editors (Eclipse
when I want something big and powerful for heavy work, gedit for quick
and simple work, and nano for command line editing).

No, I would not be "floored" by anything VS has - disregarding any
Windows-specific development features (I don't use MS's tools, so they
are of no interest), for every feature VS has that Eclipse doesn't have,
Eclipse has two that VS doesn't.

I've tried Eclipse. I prefer Netbeans. What is it about Eclipse that is
so great? I'm not aware of many things Eclipse can do that VS can't,
especially when the Whole Tomato Visual Assist X extension is added.
And since most of such complex features are rarely used, there is no
added value. On the other hand,
Eclipse works cross-platform and easily supports the wide range of
compilers I use - in fact, many toolchains I use come with Eclipse
pre-configured. VS, on the other hand, won't even run on my Windows
machine (MS has declared it to be out of date, even though it is fine
for the other Windows-only software I use) - and it certainly won't run
on my Linux machines.

Use an Oracle VirtualBox instance running an older version of Windows. Or
get an updated version of Visual Studio. Personally I use Visual Studio 2008
only and it runs fine on everything from Windows 2000 up.
And no, "edit and continue" would not "shock" me.

It has shocked everyone I've shown it to. The ability to fix an error and
keep on going in your debug environment ... people are floored by it.
I do much of my PC
development using Python, which is a language suitable for
interpretation and modifying code as you go along. Since my C
development is mainly for embedded systems, "edit and continue" would be
impossible (or at least very impractical), even if I thought the concept
were a good idea. If I stretch back to the time when I /did/ use MS
development tools, Visual Basic 3.0 (on Windows 3.1 it was the only
practical RAD tool available until Delphi), it supported "edit and
continue". It did not "shock" me at the time, 20 years ago, but it
/did/ cause a lot of problems with inconsistent behaviour.

Perhaps you were not using it correctly. Edit-and-continue has flaws. It
won't handle certain things. If you introduce an error it doesn't always
give you the correct error code or explanation, etc. But, on the whole it
is well tried and debugged.
I am sure you find VS useful for a lot of development. But that does
not mean it is useful for /all/ file editing purposes - can you honestly
say you never use "notepad" on your system?

I use Notepad++ on some things. I use Sammy Mitchell's The SemWare Editor
on many others. On Linux I use nano when I'm on ssh, but for most local
things I use Gedit.

Visual Studio's IDE lacks several features. Visual Assist X adds most of
those lacking features back in. It makes Visual Studio much more like
Eclipse or Netbeans. Refactoring is one of the biggest with Ctrl+Alt+R to
rename a symbol. It also provides many speedups, has an Alt+G "goto
definition" lookup, an Ctrl+Alt+F "find all references" and so on.

Plus, Visual Studio itself has a Code Definition window which constantly
shows you the code definition line for the current symbol. It too is one
of the greatest time savers there is. Especially when going back in to
maintain older code.
Doing multiple different small tasks in parallel is not a job for a
"massively parallel computing engine" - *nix systems have been doing
vast numbers of small tasks in parallel for 40 years, mostly with only
1 or a few cpu cores.

With the soon-to-be many cores (64+) it will be done truly in parallel.
Well, I doubt if anything I say will shake your beliefs - but you might
find reading some history will help you guess the future.

What history teaches me is that technologies progress until there is something
radical that changes the nature of the thing. Horse-selling businesses were
widespread, as were horse facilities, buggies, whips, farriers, and so on. When the
automobile came along it changed everything. Not all at first, but ultimately.

The same will happen with computers. People will keep building programming
stables, programming buggies, and programming whips, and working as programming
farriers, until finally they step away from those forms and move to the new
thing, which will be the massively parallel computing model.
People /have/ thought along these lines - and have done so for decades.
The /fact/ is that few tasks really benefit from series parallelism
(though many can benefit from some multithreading and doing a few things
at the same time - mostly in the form of waiting for different things at
the same time). The /fact/ is that even for tasks that can be naturally
parallelised, it is a difficult job prone to very hard to find errors,
and the returns are often not worth the effort.

That's because of the existing hardware. The algorithm I described does things
which are serial in nature... in parallel. It's a new thing.
Yeah, they said that about neural networks, fuzzy logic, genetic
programming, artificial intelligence, self-programming computers,
quantum computing, etc., etc. And yet we still program in C, mostly
single threaded. Somewhere there is a pattern to be found...

I did not say those things. I realize other people have done so. What I do
believe is that we will see a shift to a new CPU core that handles, in
parallel, what today is only serial code, in a way that changes the paradigm.
As such, a new language will be needed to describe it. That's what I'm building.
It is easy to imagine a Utopia. But getting there from the real world
is where the /real/ problem lies.

Agreed. It has been very hard on me to continue. I face exceeding opposition
whenever I describe these things. On top of which I profess Christianity as
the driving force for me doing this (God gave me certain abilities, and as a
believer I desire to give those abilities back to Him, and unto all men).

Both of these facets cause barriers between myself and others. It is a hard
thing for me to figure out how to get around. I believe it will require a
special type of person to do so: (1) a devout believer, (2) someone highly
skilled in software development, and software and hardware theory.

Best regards,
Rick C. Hodgin
 

James Kuyper

James Kuyper said:
On 01/21/2014 04:39 PM, Rick C. Hodgin wrote:
Try this, and then just always start at 1 instead of 0, and process until
you reach null:

const char *archiveFormats[] = {
    null
#if CPIO_SUPPORTED
    ,"cpio"
#endif
#if TAR_SUPPORTED
    ,"tar"
#endif
#if ZIP_SUPPORTED
    ,"ZIP"
#endif
#if APK_SUPPORTED
    ,"apk"
#endif
    ,null
};

As he said: ugly. The two extra nulls (and "null" needs to be defined)
seem far worse to me than the extra comma - they survive into the object
file, and even into the final executable, taking up extra space. The
extra comma disappears during translation phase 7 and has no impact on
the actual executable.

I don't understand the need for the prepended comma and the two nulls. I
would just have everything with a trailing comma, then have NULL as the
final element, which also serves as a useful sentinel instead of trying to
figure out the length of the list.
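
A minimal sketch of that suggestion, assuming the same feature-test macros as
above: every present entry ends with a trailing comma, so any combination of
the #if blocks can drop out, and the single NULL both closes the list and
serves as the sentinel.

#include <stddef.h>

const char *archiveFormats[] = {
#if CPIO_SUPPORTED
    "cpio",
#endif
#if TAR_SUPPORTED
    "tar",
#endif
#if ZIP_SUPPORTED
    "ZIP",
#endif
#if APK_SUPPORTED
    "apk",
#endif
    NULL            /* sentinel: iterate until NULL instead of counting */
};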

You're quite right, and I should have noticed that, but it was so
gratuitously ugly I didn't want to spend much time examining it.
 

Rick C. Hodgin

Just because something is possible, doesn't mean it's always a good
idea. One of the most valuable features of any high-level language is
that it makes it easier to AVOID doing some things that lower level
languages allow, because those things shouldn't be done. Identifying
some things as read-only, so that it's a detectable error if an attempt
is made to write to it, is a prime example of this.

I completely agree ... hence the use of "const" where appropriate.
The key thing that makes the error checking possible is the ability for
things to be marked read-only; whether that's the default or something
that must be explicitly specified is less important. Since developers
would otherwise often not bother to mark things read-only, making
read-only the default would actually increase the number of errors caught.

Ibid. :)
C takes a middle ground; objects of most types are read/write unless
declared 'const', but constants of all kinds, including string literals,
are read only - it's a constraint violation to even attempt to write to
most constants, but because of the way C deals with pointers, it's only
undefined behavior to attempt to write to a string literal. It could be
a constraint violation if string literals were given the type "const
char[n]", but 'const' was added to C too late; it would have broken too
much code to make that change. This seems to me to be a reasonable
arrangement.

That's why a new language is needed. C will never change. But something
can be very C-like, while looking to the future, as well as the way things
should've been (when peering back with 20/20 hindsight).
You can easily create a modifiable string by using a string literal to
initialize a character array. This strikes me as a reasonable approach.

It's reasonable for a small number of relatively stable unchanging items.
When you get into code that will be altered from time to time it is less
reasonable because it requires maintaining more than one thing.
To make a modifiable number, you have to store it in a variable; why
should strings be any different?

One way to look at it is that's what's been done with this syntax:
char* list[] = { "one", "two", "three" };

In this case I have created three variables which I can explicitly reference
in my code:
list[0]
list[1]
list[2]

They just don't have separate, individualized names, but are part of an array.

It's a quirk. No biggie. I might come across as very passionate about this,
because I've been told by many people over the course of my lifetime that I
do, but I honestly don't care. I know I will never change the minds of other
people. And I am content to proceed on with my own language. I have goals in
mind for which C is ill-equipped (from what I know about it, not knowing every
standard, but from coding samples I've seen over my programming life).

Best regards,
Rick C. Hodgin
 

Keith Thompson

Rick C. Hodgin said:
How about this?
@ cat bar.c
char *bar[2] =
{
    "jjj",
    (char []) { "kkk" },
};

Schoolmaster Joe, brilliant! Fantastic! Thank you!

#define _rw(x) (char []) { x }

char* bar[] =
{
    "jjj",
    _rw("kkk"),
};

Works in GCC. Visual C++ is non-compliant though and doesn't support this
syntax. Still, a truly excellent solution!

One thing to watch out for (I think I mentioned this before): the object
associated with a composite literal has a lifetime that depends on where
it appears. If it's at file scope, the lifetime is static, just like a
string literal, but if it appears at block scope (inside a function
definition) it has automatic storage duration.

That means that this:

const char *foo(void) { return "hello"; }

returns a pointer to a string that exists for the entire execution of
the program, but this:

char *bar(void) { return (char[]){"good-bye"}; }

returns a pointer to a string that ceases to exist as soon as the
function returns. (gcc doesn't warn about this.)

Also, identifiers starting with underscores (like _rw) are reserved to
the implementation. If you know the exact rules you can sometimes
define such identifiers safely, but it's easier just to avoid defining
your own identifiers with leading underscores.
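
Putting both cautions together, a minimal sketch (rw is just an illustrative
replacement for _rw): keep the macro name out of the reserved namespace and
use the compound literal at file scope, where its storage duration is static.

#define rw(x) (char []) { x }     /* no leading underscore */

/* At file scope the compound literal has static storage duration,
   so these pointers remain valid for the whole program.           */
char *bar[] = {
    "jjj",
    rw("kkk"),
};

/* Inside a function, the same macro would create an object with
   automatic storage duration - don't return or keep such a pointer
   beyond the enclosing block.                                      */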
 

Rick C. Hodgin

You're quite right, and I should have noticed that, but it was so
gratuitously ugly I didn't want to spend much time examining it.

LOL! It's one extra null (beyond the "useful sentinel"), and it has the solid
purpose of not needing the extra comma.

Here's an interesting idea: the concept of a -1 member, which does not
exist as data, but exists as a placeholder to allow the commas to always
be prepended rather than appended. This drop-in would also serve as a cue
that this list is variable in length:

const char* archiveFormats[] = {
#pragma index(-1)
    null
#if CPIO_SUPPORTED
    ,"cpio"
#endif
#if TAR_SUPPORTED
    ,"tar"
#endif
#if ZIP_SUPPORTED
    ,"ZIP"
#endif
#if APK_SUPPORTED
    ,"apk"
#endif
    ,null
};

That pragma allows data to exist in syntax before the actual definition of
the array begins at element 0, allowing the standard syntax to be satisfied,
yet without actual data being conveyed in the first null position. It also
extends the ability of an array definition to include automatic null place-
holders so that data can be inserted at a later time, as in:

char* archiveFormats[] = {
    "arc",      // Known formats supported at compile time
    "zip",
    "tar",
    // Room for future expansion
#pragma index(29)
    null
};
int archiveFormatsCount = elementsCount(archiveFormats);

Now the array has 30 elements, all except the first three are null, allowing
for run-time expansion by populating the list as extensions or plugins are
added.

In the alternative, use a linked list. :)
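
For what it's worth, standard C can already get this particular effect without
a pragma (a minimal sketch using the same 30-slot figure as above): give the
array an explicit size and the unnamed trailing elements are implicitly
initialized to null pointers.

char *archiveFormats[30] = {
    "arc",      /* known formats supported at compile time */
    "zip",
    "tar",
    /* elements 3..29 are implicitly NULL, leaving room for
       run-time expansion as extensions or plugins are added */
};
int archiveFormatsCount = sizeof archiveFormats / sizeof archiveFormats[0];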

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

Schoolmaster Joe, brilliant! Fantastic! Thank you!
#define _rw(x) (char []) { x }
char* bar[] =
{
    "jjj",
    _rw("kkk"),
};
Works in GCC. Visual C++ is non-compliant though and doesn't support this
syntax. Still, a truly excellent solution!

One thing to watch out for (I think I mentioned this before): the object
associated with a composite literal has a lifetime that depends on where
it appears. If it's at file scope, the lifetime is static, just like a
string literal, but if it appears at block scope (inside a function
definition) it has automatic storage duration.

Also, identifiers starting with underscores (like _rw) are reserved to
the implementation. If you know the exact rules you can sometimes
define such identifiers safely, but it's easier just to avoid defining
your own identifiers with leading underscores.

Thank you. :)

Well, it is my hope with _rw() that it makes it into the C2014 language
revamp. And even if not, it will be in RDC. :)

Best regards,
Rick C. Hodgin
 

James Kuyper

On 01/21/2014 06:51 AM, Rick C. Hodgin wrote:
....
I don't want to change the value of a constant. By definition one
should not be able to do that. :)

What I want is a way to change the value of my variable, the one
defined and stored as a literal string in source code, one that I
desire to be a variable, yet the one being interpreted by the
compiler as a constant instead of a variable with no apparent
override available to direct it where I would like it to go even
though those members in attendance were content to allow such an
ability through common extensions (nearly all of which were later
dropped it seems).

If you don't use the language in such a way as to declare a variable
containing your string, you shouldn't expect it to be variable.

char variable_string[] = "My variable string";

There's no denying that C's string oriented facilities are not as
convenient as those of some other languages; but an inability to declare
variable strings is not one of those inconveniences.
 

James Kuyper

The GUI is as far superior to text-based interfaces as anything is superior
to anything else. ...

You seem to habitually denigrate anything old as obsolete. In time,
you'll learn better, as you yourself start getting older. Knives are
among the oldest of weapons, dating back to the stone age and even
before. Rifles, artillery, or nuclear bombs are clearly greatly superior
to knives for the purposes for which those weapons were designed; yet
the military still trains soldiers how to fight with knives. Think about
that for a while, and when you've fully understood why that's a
reasonable decision for the military to make, you'll have a better
understanding of the disconnect between "old" and "obsolete".

GUIs are very useful, in some contexts, particularly when using things
that I don't use very often, and am therefore not very familiar with.
However, I greatly prefer a command line interface, where feasible, for
things that I do frequently enough to have memorized the available options.
 

Kaz Kylheku

Then there is a third group: insipid whiners about trailing commas being
allowed for convenience in a language they barely understand.

Kaz, I don't know about the history of C. Over time I've learned to use the
language and I have written a lot of code in it. I am currently working on a
large project written entirely in C. I understand most C code when I read it.
Much beyond a pointer to a pointer and my head starts to spin though. :)

I don't purport to be an expert in C standards. Never have been. Never will
be. I have no interest in becoming an expert in that. I came here asking for
something specific and no one had a solution for me, just workarounds. That
was, until this solution was introduced:
char* list[] = { "jjj", (char[]){"kkk"}}.

The following solves the problem, and is highly portable. Moreover, it has
basically the same structure and should compile to the same object code; it
just doesn't have the source-level "syntactic sugar":

char el0[] = "jjj";
char el1[] = "kkk";
char *list[] = { el0, el1 };

You do not have an actual "problem" here; you're just looking for how to do
something using the minimal number of keystrokes.

Also note that you could just waste a few bytes and do this, which also
works in pretty much any C dialect:

char list[][128] = {
    "everything",
    "is mutable now"
};

(This program isn't targeting tiny embedded systems where every byte
counts, is it? It's a code generation tool that runs on developer machines
and build servers. It runs for a few milliseconds and then terminates,
freeing all of its memory to the OS.)

The number of keystrokes you've wasted in this newsgroup outweighs
any savings by now from the syntactic sugar.

The approach you're taking of mutating global strings is a poor design for
what you're doing, based on a poor choice of programming language.

A high school kid working in, say, Lisp, Python, Ruby or Perl would have the
project done already.
That was THE solution I was looking for. Unfortunately Visual C++ doesn't
support it, but that's a totally separate issue.

Yeah, minor issue: can't actually use the solution in one of the development
environments you use, and have been mentioning from the start.
 
