Non-constant constant strings

Rick C. Hodgin

But adding new variables can move existing variables around.
Since they will no longer be where existing pointers expect them,
things can go bad pretty fast.

When you work within the edit-and-continue model, there are constraints
that allow for extra space to be created so you have room to add a certain
range of new variables, etc., before a full recompile is required. The
compiler will let you know if the change is more complex than edit-and-
continue will support. But, generally speaking, since it is designed to
work around this API model, there is typically a lot of room provided to
allow you to continue working.
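
To make the hazard concrete, here is a minimal C sketch of the stale-pointer
problem being described (the names are illustrative only, not from any real
session):

    int counter = 0;
    int *p = &counter;    /* p records counter's current address */

    /* If an edit exhausts the reserved slack space and the runtime has
       to relocate 'counter', p still holds the old address: reading *p
       now sees stale memory.  The set-aside space exists precisely so
       that small additions do not move existing variables.           */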

Of the 100+ edit-and-continue modifications I'd make in an average day of
coding, probably 5 of them actually require me to exit and recompile from
scratch -- and those are typically only large cumulative changes over many
edit-and-continue sessions while I'm debugging.

I've added entire functions, entire source code files, entire new variables
and constant declarations ... it all works up until the point where I get
beyond its default set-aside quantity of space. Then I simply click restart
and it automatically stops, recompiles, and restarts the debugger if there
were no errors. If there were errors it prompts me.
Consider calls other than the first call to a recursive routine.
It might know where variables are in the current instance, but
that will be wrong for the others.

Yes. In such a case, because the data which was already computed and
prepared came from another program (the one which existed before you made
your change), it will no longer match up and will fail. In such a case,
restart. When no such case exists, it's not an issue.
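
A hypothetical C sketch of the recursion case, for concreteness (illustrative
names; the comment marks the edit that would break the outer frames):

    int depth_sum(int n)
    {
        int local = n;       /* old layout: one local in each frame */
        /* Suppose, while stopped deep in the recursion, an edit adds
           another local here (say, 'int extra = 0;').  The new code
           expects a larger frame, but the frames already on the stack
           from the outer calls were laid out by the old code, so their
           saved data no longer matches up -- hence the restart.      */
        if (n <= 0)
            return 0;
        return local + depth_sum(n - 1);
    }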

Best regards,
Rick C. Hodgin
 
Kenny McCormack

Rick C. Hodgin said:
The purpose of edit-and-continue is to work with what is already there.
Microsoft uses what they call an "incremental linker" which is able to
link to the previously constructed ABI.

Another thing you need to be aware of is that every time you mention
"Microsoft", you inflame the regulars to a subtle, but unmistakable, rage.

They will never come out and say it in so many words, but every mention of
that hated (i.e., non-Unix) company makes them just that much less likely
to want to actually communicate with you. I.e., that much more likely to
just see your posts as opportunities for more endless nitpicking.

--
(The Republican mind, in a nutshell)
You believe things that are incomprehensible, inconsistent, impossible
because we have commanded you to believe them; go then and do what is
unjust because we command it. Such people show admirable reasoning. Truly,
whoever is able to make you absurd is able to make you unjust. If the
God-given understanding of your mind does not resist a demand to believe
what is impossible, then you will not resist a demand to do wrong to that
God-given sense of justice in your heart. As soon as one faculty of your
soul has been dominated, other faculties will follow as well. And from this
derives all those crimes of religion which have overrun the world.

(Alternative condensed translation)
"Those who can make you believe absurdities, can make you commit atrocities".
 
James Kuyper

It's completely obvious that the developer would be using his machine,
and that my comment meant somebody's computer ... you know ... out
there ... in the world.

It was completely obvious that your sentence contained a logic error of
some kind. What it would look like after the error was corrected was not
at all obvious.
Exactly! It makes even less sense ... so that's not what I would've meant.

Using a requirement that the correct interpretation had to make sense
would have required that I come up with a relevant response to his
question that bore some similarity to what you actually wrote. I
couldn't come up with any such interpretation at the time, and I still
don't know of any. That approach would never have led me to the meaning
you actually intended.
As my mother would say to me, "C'mon, man ... figure it out."

There are multiple possible corrections, depending upon how serious an
error you made in expressing yourself. I was giving you the benefit of
the doubt, and holding out for the possibility that you were making a
relevant statement VERY badly. I didn't want to automatically assume you
were making only a one-word mistake in a completely irrelevant response.
 
Rick C. Hodgin


I appreciate your advice and I apologize for my error. I can tell it's
greatly affected you. I will try to remember in the future to be more
clear. If I could return the favor and offer you some advice: lighten
up. We are human beings. There is no reason to be so picky over an
issue that at best was a brief question in someone's mind. You need
only read on to another post I wrote about local and remote debugging
to figure out what I must've meant, even with me leaving out the
"else's" part.

This will be my last post to you on the subject of any grammar errors
relating to my not using "else's." Good day.

Best regards,
Rick C. Hodgin
 
James Kuyper


I appreciate your advice and I apologize for my error. I can tell it's
greatly affected you.

It's true that I've been annoyed by your responses, but you're wrong
about the basis for that annoyance. The fact that your statement needed
to be corrected in order to say what you meant it to say was only a
minor annoyance. The fact that what you meant to say was irrelevant to
BartC's question, and that you've said nothing to address that issue, is
by far the biggest factor in my annoyance.
 
James Kuyper

On 02/04/2014 01:57 PM, Dr Nick wrote:
....
I do find it odd that Rick is, essentially, saying "this is something
really useful that some compilers/debuggers already do and wouldn't it
be great if more did" and everybody else is explaining how it's quite
impossible, and if it wasn't impossible no-one would implement it, and
if it was implemented no-one would use it.

I don't remember anyone saying that. I do remember people expressing
skepticism about the possibility of it working as well as Rick has
implied. Such skepticism is not at all inconsistent with the existence
of current compilers/debuggers that are actually capable of doing this.
Rick has indicated that some changes are too extreme to allow
"edit-and-continue" to actually continue; the skeptics are simply
suggesting that such changes have to be a lot more common than Rick has
implied.
 
glen herrmannsfeldt

Dr Nick said:
(snip)

(snip)
I do find it odd that Rick is, essentially, saying "this is something
really useful that some compilers/debuggers already do and wouldn't it
be great if more did" and everybody else is explaining how it's quite
impossible, and if it wasn't impossible no-one would implement it, and
if it was implemented no-one would use it.

I don't know about "everybody", but it seems to me that if your
program has been running for hours or days before it hits the bug,
it might be nice to fix it without having to restart. But that is
pretty rare.

It might also depend on how you think about bugs. Sometimes when I
know where a bug is, I will immediately know the cause, and wonder
how I missed it in the first place. Maybe even once in a while, I
will have two thoughts on how to do something, and then realize later
that I chose the wrong one. I try to think about my programs as a
system, knowing how parts have to work together.

When I started programming, it was often a day before I saw the
results. That was incentive to think ahead. I still try to think ahead,
but sometimes just try running something to see what it does.
(Such as if I know there are two choices, but it is too much work
to figure out which one is right.) Even so, I usually know that is
what I am doing.

For people who started when you know in a second if a program compiles
or not, maybe it is different. Write first, ask questions later.
Randomly change things until the program runs, while expecting
each to be the last bug. The first person I worked for, actually
writing programs, told me "All programmers are optimists."
If not, they would never get anything done.

-- glen
 
Rick C. Hodgin

For people who started when you know in a second if a program compiles
or not, maybe it is different. Write first, ask questions later.

The bulk of the development I do is user applications. The response is
almost immediate if there's an error. You launch the app, go to the screen
or function, do the thing, see the results, and so on.

Edit-and-continue definitely isn't for every type of development. The ABI
it uses is far less efficient than a properly optimized ABI. But, for
development and developer-based testing ... it's adequate, and it makes
fixing bugs much faster.

Best regards,
Rick C. Hodgin
 
glen herrmannsfeldt

James Kuyper said:
On 02/04/2014 01:57 PM, Dr Nick wrote:
...
I don't remember anyone saying that. I do remember people expressing
skepticism about the possibility of it working as well as Rick has
implied. Such skepticism is not at all inconsistent with the existence
of current compilers/debuggers that are actually capable of doing this.
Rick has indicated that some changes are too extreme to allow
"edit-and-continue" to actually continue; the skeptics are simply
suggesting that such changes have to be a lot more common than Rick has
implied.

It might even be different for different debugging styles.

Often enough when I find a bug, I find that other changes should also
be done to be consistent with the fix. Not necessarily needed, and
a quick patch might work, but that in the long run there is a better
way.

It is then, for me, better to fix it right once. Otherwise, it takes
two fixes (the quick patch, and the do-it-right later). Considering
the possible extra bugs a quick patch can introduce, I prefer the
single proper fix.

-- glen
 
Rick C. Hodgin

Rick has also stated that he uses that facility over a hundred times
each day ("the 100+ edit-and-continue modifications I'd make in an
average day of coding"), which certainly implies that the changes he's
making with it are relatively tiny. I think many people would argue
that such usage is likely an indication of a problem with the
development methodology.

I do a large percentage of my development from inside the debugger using
edit-and-continue. It allows me to test continually as I'm developing,
with a data environment that is already populated and stays populated,
and with only periodic changes being made. (It depends on the nature of
the development; the bulk of what I do is new development, not bug fixes.)

Best regards,
Rick C. Hodgin
 
BartC

Rick C. Hodgin said:
The bulk of the development I do is user applications. The response is
almost immediate if there's an error. You launch the app, go to the screen
or function, do the thing, see the results, and so on.

Edit-and-continue definitely isn't for every type of development.

Most of the C I do is associated with creating interpreters. Moreover, this
C is usually written in language X, which is auto-translated to C. When a
problem in the underlying code occurs, the program will generally be in the
middle of executing byte-code corresponding to some source code in language
Y that I'm trying to debug.

You can appreciate that debug/edit&continue reports in terms of C code and
line numbers will not be that useful! (It gets hairier when Y is used
to compile or implement a further language Z or, maybe, a new version of X.)
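
(For what it's worth, standard C offers a partial mitigation here: the #line
directive lets machine-generated C attribute diagnostics and debug-line info
back to the original source. A minimal sketch, with hypothetical names:

    /* C emitted by a hypothetical X-to-C translator */
    int compute(int y) { return y * 2; }

    int run(int y)
    {
    #line 120 "program.x"
        return compute(y);   /* errors and debug info here are
                                attributed to program.x:120    */
    }

It helps with line numbers, though not with the bytecode-level indirection
described above.)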

And quite often when things go wrong, it is due to something in a totally
different part of the underlying code, and at a considerable time before.

Maybe the way I use C is not typical (certainly my use of X isn't); but I'm
sure lots of developers will have their own reasons why such a system as you
use is not so useful to them.
 
Rick C. Hodgin

Maybe the way I use C is not typical (certainly my use of X isn't); but
I'm sure lots of developers will have their own reasons why such a
system as you use is not so useful to them.

Yeah. In such a case ... don't use it. It would be a complete waste
of time given the levels of indirection between source code and executable.

Best regards,
Rick C. Hodgin
 
David Brown

I do find it odd that Rick is, essentially, saying "this is something
really useful that some compilers/debuggers already do and wouldn't it
be great if more did" and everybody else is explaining how it's quite
impossible, and if it wasn't impossible no-one would implement it, and
if it was implemented no-one would use it.

I think people have been saying that it is not impossible to implement,
but it is severely limited. There are some types of changes that can
usefully be made while debugging, but a great deal that cannot be done -
or at least, cannot be done while keeping everything consistent. And I
am very sure that compiling a program in a way that will work with
edit-and-continue will greatly compromise the kinds of optimisations
that are possible for the compiler. I know that some people don't
enable compiler optimisation - but for my own part, I think C without
optimisation is a waste of time. If I don't need to worry about code
speed or size, I use a higher level language (usually Python - where the
language supports "edit-and-continue" development directly) for faster
development. And when I do need to think about speed and size (for my
embedded work), I need to enable compiler optimisations that would
hinder edit-and-continue debugging (even if it were practical with code
in flash).

I also think the real-world uses of edit-and-continue are rare, and
would not make a big difference for most development work. Rick appears
to have a development methodology that is centred around
edit-and-continue, while many (including me) find it an unusual system
that would not work for other people or other uses. There are times
when edit-and-continue could be useful, but they would be few, and the
time savings small (certainly totally different from the hyperbole used
by Rick).

So it is impossible to implement completely (at least for C), hard to
implement usefully (especially for toolchains where the IDE, debugger
and compiler are separate), and provides only modest productivity gains
for most developers. It does not surprise me that it is not widely
implemented (though at least two implementations have been mentioned
here) - for most debugger developers it is on the list of "nice to have
if we get the time, but not a high priority" features.
 
Rick C. Hodgin

I also think the real-world uses of edit-and-continue are rare, and
would not make a big difference for most development work.

Microsoft has included it in its languages since Visual Studio 98, and it
has been included in every release since. Visual Studio 2010 did not allow
edit-and-continue on 64-bit ABIs, but Visual Studio 2012 did. Microsoft
is continuing to support it and add functionality.

Edit-and-continue works on both un-optimized and optimized code. It's
just that making certain changes in source code can greatly alter the
resulting ABI when optimizations are enabled. For this reason, most
people do their debugging in debug mode (no optimizations) and then do
final testing in release mode (full optimizations).
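
A small C illustration of why optimized builds frustrate line-by-line
debugging (typical, though not guaranteed, behavior of mainstream compilers
at -O2 or equivalent):

    int sum_to(int n)
    {
        int total = 0;
        for (int i = 1; i <= n; i++)   /* at -O2 this whole loop is     */
            total += i;                /* commonly folded to n*(n+1)/2, */
        return total;                  /* so 'i' and 'total' may never  */
    }                                  /* exist to inspect or edit      */

Unoptimized, each line gets its own machine code and every variable lives
in a predictable slot, which is what edit-and-continue leans on.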
Rick appears
to have a development methodology that is centred around
edit-and-continue,
Correct.

while many (including me) find it an unusual system
that would not work for other people or other uses. There are times
when edit-and-continue could be useful, but they would be few, and the
time savings small (certainly totally different from the hyperbole used
by Rick).

Each developer must look at what they're doing. If you're writing a C-only
program and you need to debug it, then edit-and-continue will almost always
provide significant benefit. If you're doing raw data processing, then
it will help, but probably not as much.
So it is impossible to implement completely (at least for C), hard to
implement usefully (especially for toolchains where the IDE, debugger
and compiler are separate), and provides only modest productivity gains
for most developers. It does not surprise me that it is not widely
implemented (though at least two implementations have been mentioned
here) - for most debugger developers it is on the list of "nice to have
if we get the time, but not a high priority" features.

Apple saw enough value to add "fix and continue" to their C compiler
toolchain. Must not be a useless feature if they took the time to code,
debug, test, and use it (as it takes a significant overhaul to the existing
static compile-link-run model to implement).

My position: Everything in the future will be edit-and-continue. This
will be mostly limited to program logic fixes, and not data fixes. But
the advantages of being able to immediately fix an error are so worthwhile
that eventually all tools of major use will provide the ability.

GCC already has "fix and continue" on their roadmap, for example.

Best regards,
Rick C. Hodgin
 
BartC

Rick C. Hodgin said:
Apple saw enough value to add "fix and continue" to their C compiler
toolchain. Must not be a useless feature if they took the time to code,
debug, test, and use it (as it takes a significant overhaul to the existing
static compile-link-run model to implement).

There are two aspects: (1) avoiding a regular edit-compile-link cycle; (2)
avoiding the set-up or run-time needed to get to a test-point.

(1) comes about because C isn't really designed to be compiled and linked
quickly, and there are some complicated (and therefore slow) tools around.

I acknowledge that (2) can sometimes be of benefit, although what can be
done is restricted. MSDN says this about E&C on C#: "/The rule of thumb for
making E&C changes is that you can change virtually anything within a method
body, so larger scope changes affecting public or method level code are not
supported./"
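
The quoted rule is about C#, but the same rough boundary holds for native
E&C as well. A hedged C illustration (hypothetical function; not from any
vendor's documentation):

    int price_with_tax(int cents)
    {
        /* Edits confined to the body -- fixing the rate below, or
           adding a statement -- are the kind E&C typically accepts. */
        return cents + cents * 8 / 100;
    }

    /* Edits that change interfaces or layout -- altering this signature
       to price_with_tax(int cents, int rate), or adding a field to a
       struct that compiled code already uses -- typically force a full
       rebuild instead.                                                */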

I think that if compile-and-build was instantaneous, then you would have far
less need of such a feature (E&C). You don't always have an elaborate
set-up, or you can make other arrangements to take care of it (as I do).
My position: Everything in the future will be edit-and-continue. This
will be mostly limited to program logic fixes, and not data fixes. But
the advantages of being able to immediately fix an error are so worthwhile
that eventually all tools of major use will provide the ability.

You might have become too dependent on this feature. Even ordinary debugging
should only be used for 'intractable' problems, it is sometimes said.

In fact you might be using the wrong language, if you do a lot of
programming by trial and error. (Which is what I often do, but I know that C
isn't the best language to use that way: you spend time typing loads of
semicolons and punctuation, writing forward prototypes, creating convoluted
code to get around a missing syntax feature, only to tear it all down five
minutes later as you try something else!)
 
Rick C. Hodgin

There are two aspects: (1) avoiding a regular edit-compile-link cycle;
(2) avoiding the set-up or run-time needed to get to a test-point.

(1) comes about because C isn't really designed to be compiled and linked
quickly, and there are some complicated (and therefore slow) tools around.

In today's tools this is often the case. There is nothing inherent about
C that requires that it be a slow compile-and-link process. The linker is
not even needed; it's used as a standard component so that multiple
previously compiled objects can be linked together without requiring that
the entire thing be recompiled whenever one change is made. But that's a
design decision.
A rethinking of how that process could work (even with static-linked-in
objects) bypasses it. That's what Microsoft did with their incremental
linker (ilink.exe) which links in components atop the previously
constructed ABI.
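
For reference, the Microsoft toolchain switches involved are real options
(the file name here is hypothetical): /ZI emits edit-and-continue debug
information, /Od disables optimization, and /INCREMENTAL selects the
incremental linker:

    cl /ZI /Od main.c /link /INCREMENTAL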
I acknowledge that (2) can sometimes be of benefit, although what can be
done is restricted. MSDN says this about E&C on C#: "/The rule of thumb for
making E&C changes is that you can change virtually anything within a method
body, so larger scope changes affecting public or method level code are not
supported./"

C# is a managed language running in a virtual machine (like Java). It has
limitations imposed upon it by the virtual machine.
I think that if compile-and-build was instantaneous, then you would have
far less need of such a feature (E&C). You don't always have an elaborate
set-up, or you can make other arrangements to take care of it (as I do).

Visual Studio 2003's "apply changes" is almost instantaneous (on the types
of programming I do, typically a main .exe, a few DLLs).
You might have become too dependent on this feature. Even ordinary debugging
should only be used for 'intractable' problems, it is sometimes said.

Computers compute. I believe that we should have immediate results from
our efforts. When we're coding, everything up to the point where we are
coding should be available for us to see the results of immediately. The
computer can spawn, execute until failure, and then pause or discard the
computed results. This ability doesn't exist today in standard tool chains
so it may be hard to visualize ... but there is nothing preventing it from
happening except the fact that it's never been coded for.
In fact you might be using the wrong language, if you do a lot of
programming by trial and error. (Which is what I often do, but I know
that C isn't the best language to use that way: you spend time typing
loads of semicolons and punctuation, writing forward prototypes, creating
convoluted code to get around a missing syntax feature, only to tear it
all down five minutes later as you try something else!)

You have a few false premises. I don't do a lot of programming by trial and
error. I have discovered that it is much faster to code something and then
see immediate results, making any required changes if it's wrong, than to
spend time in a static in-my-head environment trying to get it perfect and
flawless before I ever compile. The computer can run things for me, and it
is very fast. With edit-and-continue, I can set a breakpoint at a particular
point in code, then step over my lines of code one by one and see the
immediate result on the computed data, so I don't need to spend as much time
writing and thinking through code. I can give it my best consideration, and
then run it and fix it on-the-fly.
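
For readers outside Visual Studio, the inspection half of that workflow
looks like this in gdb (standard commands; gdb has no edit-and-continue, so
the edit step still means a rebuild -- file and variable names hypothetical):

    $ gcc -g -O0 app.c -o app    # unoptimized build with debug info
    $ gdb ./app
    (gdb) break app.c:42         # stop at the line under test
    (gdb) run
    (gdb) next                   # step over one line at a time
    (gdb) print result           # inspect the computed data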

In my experience this model is notably faster than trying to get everything
perfect the first time before ever compiling, because it takes a long time
to think through every scenario, and it doesn't take a long time to test it.
And, as you see the changes come through line-by-line, it will spark other
considerations as you go, things the data makes you think of that you may
not have spontaneously thought of while you were in "coding the algorithm"
mode.

Most times I code something these days I get it exactly right. I may make
odd typing mistakes, or what have you, but in my head the algorithm was
exactly what I needed the first time out. That doesn't change the fact that
it's still faster to do this inside the debugger, where I can directly test
my changes in real-time, than to do it all in my head.

The computer is a tool. It should be helping people. If the environment
exists to let it have immediate execution on data, and a reset ability
should you need to start back over ... it only makes sense to use it. And
I'm not the only person who agrees. The GCC folk agree. Apple agrees.
And Microsoft has agreed for a long time.

I don't think you realize the project I'm currently working on, BartC.
If you are interested, please spend some time considering it:

http://www.visual-freepro.org

There's a wiki, videos, and you can see all of the source code. In one
paragraph, this is what I'm doing:

Creating a new virtual machine, writing the compilers for it,
creating the debugger for it, creating the IDE for it, coding all
algorithms related to this from the ground up, and providing
facilities for a Visual FoxPro-compatible language called Visual
FreePro.

I've been 18 months on this project since I started. I'm pressing forward
on it daily. It is a tremendous amount of design, coding, and so on. I
am currently about 50,000 lines of code into it, and I have about another
50,000 lines of code to go before it's where I want it to be. After that,
I will be porting it to my own operating system on x86, then later ARM,
then on to 64-bit x86, and 64-bit ARM.

My goals are to create new toolset alternatives to what currently exists,
and ones which, from inception, employ these features I go on about. They
exist in part today in various toolsets ... but what I'm trying to create
is a new ecosystem, a new community, something I call "The Village Freedom
Project" whereby all of the people world-wide will have free access to
these tools should they wish to use them. And I am doing all of this upon
the name of Jesus Christ, giving back the best of the talents, skills, and
abilities He gave me, unto Him, and unto mankind.

Nobody has to use my offering. I am doing this for Him, and for all of
those people who will want to use it. I am doing this because I recognize
from where my skills originate, and who it was who gave them to me in the
first place, and I desire to pass along my skills to you, and others, so
that each of you may gain from that which He first gave me.

Best regards,
Rick C. Hodgin
 
glen herrmannsfeldt

David Brown said:
On 04/02/14 19:57, Dr Nick wrote:

(snip on edit-and-continue)
I think people have been saying that it is not impossible to implement,
but it is severely limited. There are some types of changes that can
usefully be made while debugging, but a great deal that cannot be done -
or at least, cannot be done while keeping everything consistent.

Seems to me that one could write a C interpreter, or something close
to one, that would allow for the usual abilities of interpreted
languages. Among other things that could make C easier/faster to
debug would be full bounds checking.
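
(These days the closest mainstream approximation is a sanitizer rather than
an interpreter. A minimal sketch for GCC or Clang -- the overrun below is
caught and reported on the first out-of-bounds write:

    /* bounds.c */
    int main(void)
    {
        int a[4];
        for (int i = 0; i <= 4; i++)   /* off-by-one: i == 4 overruns a */
            a[i] = i;
        return a[0];
    }

    $ gcc -g -fsanitize=address bounds.c && ./a.out
)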
And I am very sure that compiling a program in a way that will work
with edit-and-continue will greatly compromise the kinds of
optimisations that are possible for the compiler.

Well, even more, does that change the bugs that need to be found?

If it takes one second to start the program from the beginning and
get to the point of the bug, is it really worth edit-and-continue?

For me, when I am debugging I want to be thinking as much as I can
about the program I am working on. What it could possibly be doing
wrong, and how to fix that. If I also have to think about other causes
for bugs, such as having the wrong pointer due to edit-and-continue,
then I can't completely think about the program.

As I wrote, I did years ago use a BASIC system that allowed for
edit and continue. (You could even continue at a different place.)
But later, due to changes in ORVYL, that feature was removed.
I know that some people don't enable compiler optimisation - but
for my own part, I think C without optimisation is a waste of time.
If I don't need to worry about code speed or size, I use
a higher level language (usually Python - where the language
supports "edit-and-continue" development directly) for faster
development. And when I do need to think about speed and size (for my
embedded work), I need to enable compiler optimisations that would
hinder edit-and-continue debugging (even if it were practical with code
in flash).
I also think the real-world uses of edit-and-continue are rare, and
would not make a big difference for most development work.

They might not be so rare, but in many cases other ways have been
found to get around them. Years ago, I was debugging a program that
took some time to get to each bug. First was to find input data that
would get there faster. Second was to use a compiler (WATFIV) that did
subscript checking when other compilers didn't. But so few programs
now run minutes or hours or days before a bug is found. (I suppose
operating systems being a big exception. I know that MS has debug
kernels for some versions of Windows. I can see that might be useful
sometimes.)
Rick appears to have a development methodology that is centred around
edit-and-continue, while many (including me) find it an unusual system
that would not work for other people or other uses. There are times
when edit-and-continue could be useful, but they would be few, and the
time savings small (certainly totally different from the hyperbole used
by Rick).

If you can get back to the point of the bug fast, it seems less useful.
In the case of GUI input, you need some system for saving and replaying
the GUI input (I have never used AppleScript, but I believe it is supposed
to do that) to allow for speedy debugging.
So it is impossible to implement completely (at least for C), hard to
implement usefully (especially for toolchains where the IDE, debugger
and compiler are separate), and provides only modest productivity gains
for most developers. It does not surprise me that it is not widely
implemented (though at least two implementations have been mentioned
here) - for most debugger developers it is on the list of "nice to have
if we get the time, but not a high priority" features.

And, with the added restrictions, might hide some bugs that could
otherwise be found.

-- glen
 
glen herrmannsfeldt

(snip)
You might have become too dependent on this feature. Even
ordinary debugging should only be used for 'intractable'
problems, it is sometimes said.
In fact you might be using the wrong language, if you do a lot of
programming by trial and error.

I try not to, but I know that others do. I still remember undergrad
days, having to write sort programs. (Quicksort, Shellsort, Heapsort)
where I bought Knuth Volume 3 (recently released), figured out how
to write the program, and quickly wrote and debugged them.

At the same time, others were running around the computer room
explaining the latest bug they found, and what might be a next
step to fix it. Pretty much, trial and error, without much thinking
in between.

Now, I remember recently I had a program where it needed either +1
or -1, and it was more work to figure out than to run both ways.
Even more, it was a program that only had to work once.
(Which is what I often do, but I know that C isn't the best
language to use that way: you spend time typing loads of
semicolons and punctuation, writing forward prototypes, creating
convoluted code to get around a missing syntax feature, only to
tear it all down five minutes later as you try something else!)

Once you avoid making lots of dumb mistakes, the bugs left tend
to be big, sometimes requiring big changes.

-- glen
 
glen herrmannsfeldt

(snip)
Computers compute. I believe that we should have immediate results from
our efforts. When we're coding, everything up to the point where we are
coding should be available for us to see the results of immediately.

Many problems I work on are not instantaneous because computers are
still not fast enough. (Actually, the problems scale up as computers
get faster.)

One problem I am interested in has an O(n**2) algorithm and n=3e9.
It isn't instantaneous, and if it can be done in a day, that is good.
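
For scale, a back-of-the-envelope check (assuming one unit of work per
element pair): n = 3e9 gives n**2 = 9e18 pairs. At 1e9 pairs per second
that is roughly 285 years; finishing in a day means sustaining on the
order of 1e14 pairs per second, which is why "done in a day" is already
a strong result.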

People doing weather forecasting have even more strict requirements.
If it takes 10 days to predict tomorrow's weather, it is useless.

Computers are most useful for things that take a long time.

-- glen
 
