What I meant was:
1) Load an editor program and debug it for a while.
2) Stop the program and start an edit.
3) Delete *every* line of source code for that program (except
system and compiler include files and system and compiler
libraries).
4) Using your choice of (a) manual typing, (b) cut-and-paste,
or (c) having the editor read in additional files,
add in the source code of a completely different
application.
5) Continue the program.
Assuming for the moment that there are some variables with the same
names and types in both programs (stuff from the C library implementation,
if nothing else), under what circumstances would values from the
program state be used in place of initial values from the compilation?
Your previous description led me to believe that the debugger
supplied all the necessary cognizance, even if it required running
time backwards, mind reading, artificial intelligence well
beyond what will be accomplished by the year 2200, and perhaps
a violation of temporal causation, which I expect never to be
possible.
I had assumed that, say, changing foo (an auto variable) from type
integer to double, and removing or adding the ! in front of it,
would cause the debugger to realize (with no hints) that:
1) Since this code is inside a loop and is on the 37th iteration,
but the first time it was executed was on the 33rd iteration (making an
error, but not an obvious one, so the program continued), it
has to back up to the 33rd iteration and change the program state to
accommodate that. This may require un-closing files and un-writing
data to them.
2) Since this function has called itself recursively 6 times and
you changed the size of foo (this is a 32-bit x86 machine, so sizeof
int < sizeof double), some reshuffling of the stack (Microsoft
code usually runs only on machines that use a stack) is necessary,
and the program has to examine every pointer in the program state
and adjust each one, if necessary, to point to the new location
of the variable it was pointing at originally.
3) The goal of the debugger in such a stack reshuffling is to get
pointer adjustments 90% correct where a pointer into a buffer has
overrun an array beyond the point allowed by the C standard.
So if the function you are running when the program stopped is deleted
or renamed to something else, and you *DON'T* change the program counter,
what's supposed to happen?
If you really mean that, it could be difficult remapping the old
breakpoints to the new program, especially if you deleted the
function they are in or quit calling it.
It only permanently saves the changes on a restart? That could be
a problem debugging the BSOD() function, which intentionally or
accidentally causes a Blue Screen Of Death.
Rick C. Hodgin said:
I don't know. I've never used an auto variable.
int i, len;
char* ptr;
for (i = 0; list != NULL; i++)

void foo(void)
{
    int i;
    char* s;  // Note I use the D language syntax of keeping the * with the type

int i;
int* iptr;
char* cptr;
i = 5;

int main(int argc, char* argv[])
{
    int i, j;

int main(int argc, char* argv[])
{
    for (int i = 0; list; i++)
    char* foonew = malloc(1024);

int main(void)
{
    int a, b;
    populate_my_variables(&a, &b);
I don't believe that.
Any variable defined anywhere between the { and } of a function...
Given the new-to-me information below, there is a certain ability
to add or resize variables declared within the function. It will
not go back and adjust prior nested recursive calls; those will
now be invalid upon returning. However, the original function with
the old declarations persists as what is now called "stale code".
It appears in purple, indicating that it does not represent the current
function. This allows you to make a change and continue debugging
from that point forward without restarting, which can be more useful
than constantly restarting on every change.
Some additional explanation:
01: int main(void)
02: {
03: int a = 5, b = 7;
04: populate_two_variables(&a, &b);
05: printf("A:%d B:%d\n", a, b);
06: }
Suppose I compile this and begin debugging, then break on line 4. I
manually edit the file and change line 3 to:
03: int a = 9, b = 13, c = 3;
I click the "apply changes" button and I've just added a new variable.
Edit-and-continue allows you to correct code on-the-fly, reset the
instruction pointer to re-initialize a variable or re-perform a logic
test, and then continue on as you would have. It saves you from
needing a constant series of restarts to accomplish the same
thing should you find items in your code that need to be modified.
This is the part which requires developer cognizance: recognizing
what's going on with the data, and what impact your changes will have
on the data that's already been set up.
On the whole, it is far far far more helpful than harmful. And in all
of my development, having this ability allows me to code with excessive
speed and flexibility, as compared to the code/compile/start/stop model,
which I am still forced to use in languages like Visual FoxPro, for
example.
Have you already worked on something along these lines? Because it all
sounds incredibly hairy. And if you do make some changes to a running
program (which will have a certain state, including the contents of hundreds
of megabytes of memory full of pointers to every other part, a few dozen
open files on disk, a part-finished video display, etc.), who decides
whether a continue operation is going to be viable, you or the debug
program?
It sounds at about the same level as patching the machine code of an
executable file, and that is hairy enough even when you're not doing it
when it's running!
I never had much trouble with such a model; my edit-compile-run cycles were
always engineered by me to never be more than a second or so (even in the
1980s). On larger applications, I arranged things so that most code was
actually interpreted bytecode. If one such module went wrong, it went back
to the application input loop so that I could try something else. So I was
developing inside the application itself, and could spend hours without
restarting!
There were also command scripts, auto-save etc to quickly reconstruct any
large test data that was relevant.
Anyway, the point is that I was using very simple tools, but various
different approaches to development.
On Saturday, February 1, 2014 7:32:01 PM UTC-5, Bart wrote:
This is done during development. You do not do this on a running system
on somebody's computer. You do this while you, the developer, are creating
the system. As you're writing code, you use edit-and-continue to speed up
the development and testing. But once you're ready to release the app to
users, you build it like normal, without edit-and-continue support, in
release mode, and you distribute that binary.
Remember, this is on Windows. It's very rare in Windows to build a
system from sources.
This is done during development. You do not do this on a running system
on somebody's computer.
That's another aspect: many errors only come up after the program is shipped
and running on a client's computer.
When it crashes, the client tends to be pretty irate about it. The error
message (the address where it crashed) is not terribly useful (and of course a
debugger would be no help here at all, not with the client in another
country).
This is where having the bulk of the commands (mine were interactive
applications) implemented as interpreted byte-code modules was hugely
advantageous: it showed what went wrong and on what line (which was reported
back to me), but the application itself was still running and the user could
carry on and do something else!
(There were still plenty of regular failures; the priority there was to
try and recover the user's data wherever possible.)
You mean, that you distribute an application as a binary, instead of
requiring ordinary users to start unravelling .tar.gz files and running
make and gcc?! I don't see why it would be expected of anyone to do that
anyway! (And I can just imagine what my customers would have said! Anyway,
my stuff wasn't open-source.)
(I'm not exactly oblivious of these things, but I would balk at having to
build applications from sources, because *it never works*. Not on Windows;
there is always something that goes wrong even after slavishly following any
instructions.
Agreed.
Even at this moment, I haven't been able to install a 64-bit version of gcc
(onto Windows) because an official binary distribution doesn't exist! And
the unofficial ones don't work. And there's no way I will attempt to build
from sources. Of course they don't exactly tell you in so many words: you
spend hours downloading something that looks promising, with a '64' in the
names, and it turns out to be just a library. Too many things are just too
Unix-centric. </rant>)
If it's not on a running system on somebody's computer, what does the
"continue" part of "edit-and-continue" refer to? I'd always assumed that
it was "continue running" with "on somebody's computer" (namely, your
own) inherently applied. If it isn't running on my computer, in what
sense does it qualify as debugging?
"YMMV" means "Your Mileage May Vary," indicating that your experience may be
different from mine. As I say, I can only convey my personal experience.
As I've noted previously, a lot of the regs in this ng (such as Mr. Seebs
above) are unable to distinguish between their own personal experiences and
opinions and global/universal truth.
Now, to be fair, I think they really are aware of the difference but they
post as if there was no difference. It's this callousness that is even
more annoying than it would be if they actually didn't know any better.
Picture this sequence of events: ....
(4) I set a breakpoint on some new line of code, launch the debugger
and it stops at that location in my program.
In all of the long explanation you gave, you didn't say anything to
change my understanding of what you meant by "edit-and-continue",
nor did you say anything to answer my question: What are you launching
that debugger on, if it isn't "a running system on somebody's computer"?
....
At this point you can:
(a) Fix the program locally on your developer machine, or
(b) Fix the problem on John Smith's machine.
Edit-and-continue actually lets you do both (depending on the type of
error) using remote debugging ... but the general purpose of edit-and-
continue is to fix the error locally.
So, when I had said previously, "a running system on somebody's computer,"
I was referring to the process (your application) running on John Smith's
computer.
You do not do this on a running system on somebody's computer.