Non-constant constant strings


Rick C. Hodgin

What I meant was:
1) Load an editor program and debug it for a while.
2) Stop the program and start an edit.
3) Delete *every* line of source code for that program (except
system and compiler include files and system and compiler
libraries).
4) Using your choice of (a) manual typing, (b) cut-and-paste,
or (c) having the editor read in additional files,
add in the source code of a completely different
application.
5) Continue the program.

Assuming for the moment that there are some variables with the same
name and types in both programs (stuff from the C library implementation
if nothing else), under what circumstances would values from the
program state be used in place of initial values from the compilation?

All of the memory previously allocated by the original application would
remain allocated as it was, persisting in memory, essentially orphaned. The
new code would be compiled. The instruction pointer would be in total limbo
depending on where you stopped. You, the developer, would have to move the
instruction pointer to some location and begin processing. Assuming you moved
it to the start of the program, it would launch like normal, just with less
memory and fewer file handles than it might otherwise have had were the first
program never launched.
Your previous description led me to believe that the debugger
supplied all the necessary cognizance, even if it required running
time backwards, mind reading, and artificial intelligence well
beyond what will be accomplished by the year 2200, and perhaps
violation of temporal causation, which I expect will never be
possible.

Nope. The debugger provides some cognizance. For example, if you insert
five lines of source code above the instruction pointer, it knows to
adjust the instruction pointer to its new location (onto the same source
line it was on before the apply-changes button was pressed). It can
also recognize that if you delete the source code line it was on, it
will position it on the next available line after the changes are
applied.

It does not go back and do anything to memory. It will not clean up
anything that was corrupted by the previous code, which may have
written to some wrong memory locations or computed something incorrectly.
It just leaves memory as it was and makes the corrected code able to be
run. If you, as the developer, recognize that there are things that need
to be handled differently, then you can bring those portions up in a watch
or memory window, edit the raw data, and then proceed. Otherwise, click the
restart button and start over as in traditional debugging.
I had assumed that, say, changing foo (an auto variable) from type
integer to double, and removing or adding the ! in front of it,
would cause the debugger to realize (with no hints) that:

I don't know. I've never used an auto variable.
1) Since this code is inside of a loop and is on the 37th iteration,
but the first time it was executed was on the 33rd iteration (making an
error, but not an obvious one, so the program continued), it
has to back up to the 33rd iteration and change the program state to
accommodate that. This may require un-closing files and un-writing
data to them.

It will not do this. You can manually add a "return" statement to
your code temporarily and execute returns to get back to the 33rd level,
and then remove the return and re-enter as you had planned. But if
it's sequential processing and you're simply on the 33rd iteration, there
is no way to undo that processing and return to the way it was.

My virtual machine allows for something called pursue mode. In pursue
mode you basically take a snapshot of what your memory and variables
looked like at that instant. Later, you can accept the pursue
or reject it. If you accept it, the pursue is exited and whatever the
current contents of memory are persist. If you reject the pursue, then
whatever existed when you entered the pursue is restored.

In this way you could set a pursue when you start, and then reject the
pursue later, and continue re-processing after the code is corrected.
This requires writing code for my virtual machine, which will be notably
slower than the native CPU runtime environment; however, it does afford
these debugging abilities, along with several other features which will
make it worthwhile for certain types of programming.
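
Roughly, for a single known region of memory, a pursue amounts to something
like this (a simplified sketch only; the names here are invented for
illustration, and a real implementation has to track every live region the
program owns):

#include <stdlib.h>
#include <string.h>

struct pursue {
    void  *region;    /* memory region being watched      */
    size_t size;      /* its size in bytes                */
    void  *saved;     /* copy taken when the pursue began */
};

static int pursue_begin(struct pursue *p, void *region, size_t size)
{
    p->region = region;
    p->size   = size;
    p->saved  = malloc(size);
    if (p->saved == NULL)
        return -1;
    memcpy(p->saved, region, size);        /* snapshot the current contents */
    return 0;
}

static void pursue_accept(struct pursue *p)
{
    free(p->saved);                        /* keep current memory as-is */
    p->saved = NULL;
}

static void pursue_reject(struct pursue *p)
{
    memcpy(p->region, p->saved, p->size);  /* restore the snapshot */
    free(p->saved);
    p->saved = NULL;
}
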
2) Since this function has called itself recursively 6 times and
you changed the size of foo (this is a 32-bit x86 machine, so sizeof
int < sizeof double), some reshuffling of the stack (Microsoft
code usually runs only on machines that use a stack), is necessary,
and the program has to examine every pointer in the program state
and adjust each pointer, if necessary, to point to the new location
of the same variable it was pointing at originally.

The stack and all memory remains exactly as it was before the resize.
In this case the compiler is probably cognizant enough to tell you that
a change has been made that is no longer compatible with the original
ABI. At that time, just click restart and begin again.
3) The goal of the debugger in such a stack reshuffling is to get
pointer adjustments 90% correct where a pointer into a buffer has
overrun an array beyond the point allowed by the C standard.

It's a useless pursuit. There are cases where it could get everything
right, but if the developer used any custom programming ... it's not
worth the effort. If you change something that alters the stack, then
use a pursue in the virtual machine (in my toolset), or simply do a
restart in Microsoft's Visual Studio.
So if the function you are running when the program stopped is deleted or
is renamed to something else, and you *DON'T* change the program counter,
what's supposed to happen?

The instruction pointer will automatically reposition itself to some new
location which exists in memory after the function which was deleted.
However, it might also be that today Microsoft's compiler recognizes
that the type of change being made is not supported, and requires a full
restart and standard compilation before continuing. In my toolset, such
a change will only produce a prompt indicating you need to choose where
you want to continue from.
If you really mean that, it could be difficult remapping the old
breakpoints to the new program, especially if you deleted the
function they are in or quit calling it.

Deleted lines of code have their associated breakpoints automatically
deleted. If you add lines or delete other lines of source code which
have now moved the breakpoint's location, they are automatically synched
up with the changes so they point to the correct location.
It only permanently saves the changes on a restart? That could be
a problem debugging the BSOD() function, which intentionally or
accidentally causes a Blue Screen Of Death.

It always saves your source files, but the way the compiler handles this
type of ABI is through a special format. That format is very extensible,
but it is also very inefficient. So, the compiler likes to automatically
re-link everything once you exit the debugger instance to remove anything
that was done through edit-and-continue, thereby removing the "fluff".

As such, it will only rebuild the .DLL or .EXE when you exit. Until then
it is constantly appending your changes onto the end of the executable,
which obviously bloats it up.

If you get a BSOD, simply restart your IDE and rebuild the program and
it's all there. Your source files were saved as of the instant you
clicked the "Apply Changes" button.

Best regards,
Rick C. Hodgin
 

Keith Thompson

Rick C. Hodgin said:
I don't know. I've never used an auto variable.

I don't believe that.

Any variable defined anywhere between the { and } of a function
definition without the "static" keyword is an auto variable.
More precisely it's an object with automatic storage duration.
"Local variable" is a common name for the same thing. It doesn't
require using the "auto" keyword, which is entirely redundant
in modern C. (<OT>Which is why C++ was able to re-use it for a
different purpose.</OT>)
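
A quick illustration (a minimal example of my own, not code from your posts;
the names are made up):

#include <stdio.h>

void f(int param)      /* param has automatic storage duration         */
{
    int a = 1;         /* an "auto" variable: local, automatic storage */
    auto int b = 2;    /* identical meaning; the keyword is redundant  */
    static int c = 3;  /* NOT automatic: static storage duration       */

    printf("%d %d %d %d\n", param, a, b, c);
}

int main(void)
{
    f(0);
    return 0;
}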
 

James Kuyper

....
I don't know. I've never used an auto variable.

You don't know what an auto variable is. You use them constantly. Every
function parameter is an auto variable, and every variable declared with
block scope, and without the "static" or "thread_local" keyword, is an
auto variable. Every single variable defined in the following quotes
from your previous messages is an auto variable:
int i, len;
char* ptr;
for (i = 0; list != null; i++)


for() loops are allowed only inside a function body, so "i", "len", and "ptr"
must all have block scope.
void foo(void)
{
int i;
char* s; // Note I use the D language syntax of keeping the pointer
int i;
int* iptr;
char* cptr;

i = 5;

Assignment statements are allowed only in function bodies, so "i",
"iptr", and "cptr" must have block scope.
int main(int argc, char* argv[])
{
int i, j;
int main(int argc, char* argv[])
{
for (int i = 0; list; i++)

char* foonew = malloc(1024);

malloc(1024) is not a constant expression, so this declaration is legal
only if it occurs at block scope.
int main(void)
{
int a, b;

populate_my_variables(&a, &b);

You don't provide a definition for populate_my_variables(), but it
apparently takes at least two arguments. The corresponding function
parameters are also auto variables.
 

Rick C. Hodgin

Given the new-to-me information below, there is a certain ability
to add new variables, or resize variables, declared within the function.
It will not go back and adjust prior nested recursive calls. Those will
now be invalid upon returning. However, the original function with
the old declarations persists as what is now called "stale code".
It appears in purple, indicating that it does not represent the current
function. This allows you to make a change and continue debugging
from that point forward without restarting, which can be more useful
than constantly restarting on every change.
I don't believe that.

Now that you explain that auto variables are something I know by
other names, I don't believe it either.
Any variable defined anywhere between the { and } of a function...

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

Given the new-to-me information below, there is a certain ability
to add new variables, or resize variables, declared within the function.
It will not go back and adjust prior nested recursive calls. Those will
now be invalid upon returning. However, the original function with
the old declarations persists as what is now called "stale code".
It appears in purple, indicating that it does not represent the current
function. This allows you to make a change and continue debugging
from that point forward without restarting, which can be more useful
than constantly restarting on every change.

Some additional explanation:

01: int main(void)
02: {
03: int a = 5, b = 7;
04: populate_two_variables(&a, &b);
05: printf("A:%d B:%d\n", a, b);
06: }

Suppose I compile this and begin debugging, then break on line 4. I
manually edit the file and change line 3 to:

03: int a = 9, b = 13, c = 3;

I click the "apply changes" button and I've just added a new variable.
That is perfectly fine. It will contain whatever happened to be on the
stack when the function was first entered (unknown garbage), but it is
perfectly legal. The values of a and b will still exist, populated
with their initial values of 5 and 7 from when that code was executed
on entering lines 1 through 3.

Suppose I now re-edit line 3 and change it to two lines:

01: int main(void)
02: {
03: float a;
04: int b, c;
05: populate_two_variables(&a, &b);
06: printf("A:%d B:%d\n", a, b);
07: }

When I click "apply changes" it will generate an error because there is
no populate_two_variables() function which takes a float * and an int *.
If I create such a function and apply changes again, it compiles properly.

When it comes to line 6, b may still happen to print correctly, but the
%d conversion no longer matches a: a is now a float (promoted to double
when passed to printf()), so printing it with %d is undefined behavior and
will typically show garbage.

The value of c will never have been initialized, even though the code
indicates that by the time execution is on line 4 or 5 it should have
been initialized. Had the instruction pointer been reset back up to that
source code line and single-stepped over, then it would have been
initialized.
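
For completeness, here is what the program would need to look like for the
print of a to be well defined after the type change (a stand-alone sketch of
my own; the populate_two_variables() body here is just a stub for
illustration):

#include <stdio.h>

/* Stub standing in for the real populate_two_variables(). */
static void populate_two_variables(float *a, int *b)
{
    *a = 1.5f;
    *b = 42;
}

int main(void)
{
    float a;
    int b, c;    /* c is still never initialized, as described above */

    populate_two_variables(&a, &b);
    printf("A:%f B:%d\n", a, b);  /* %f matches the float (promoted to double) */
    return 0;
}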

Everything with edit-and-continue works from where you currently are in
terms of data processing. It allows you to make changes which create a
new runtime environment for your program that accurately reflects the
source code changes you've made, and you can then act upon the newly
created or modified variables and functions; however, everything that
was set up previously remains exactly as it was set up.

Edit-and-continue allows you to correct code on-the-fly, reset the
instruction pointer to re-initialize a variable, or re-perform a logic
test, and then continue on as you would've. It saves you from
needing to do a constant series of restarts to accomplish the same
thing should you find items in your code that need to be modified.

This is the part which requires developer cognizance to recognize
what's going on with the data, and what impact your changes will have
on the data that's already been set up.

On the whole, it is far far far more helpful than harmful. And in all
of my development, having this ability allows me to code with excessive
speed and flexibility, as compared to the code/compile/start/stop model,
which I am still forced to use in languages like Visual FoxPro, for
example.

Best regards,
Rick C. Hodgin
 

Kaz Kylheku

Some additional explanation:

01: int main(void)
02: {
03: int a = 5, b = 7;
04: populate_two_variables(&a, &b);
05: printf("A:%d B:%d\n", a, b);
06: }

Suppose I compile this and begin debugging, then break on line 4. I

What if instead you break inside populate_two_variables?
manually edit the file and change line 3 to:

03: int a = 9, b = 13, c = 3;

Now suppose that a and b change addresses. You need to fix the pointers
that have been passed into populate_two_variables where the program is
currently stopped.

In general, you need to trace all live pointers in memory and fix them up.

This is not really workable in a language like C; it is a wrong-headed
approach that has no practical value.
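
To make the problem concrete (an illustrative sketch only; the scenario
described in the comments is hypothetical):

#include <stdio.h>

static void populate_two_variables(int *pa, int *pb)
{
    /* Suppose execution is stopped here when the edit is applied.
     * pa and pb still hold the OLD addresses of a and b. If the
     * recompiled main() places a and b at different stack offsets,
     * these stores now write to stale locations, and the live a
     * and b never receive the values. */
    *pa = 1;
    *pb = 2;
}

int main(void)
{
    int a = 5, b = 7;

    populate_two_variables(&a, &b);
    printf("A:%d B:%d\n", a, b);
    return 0;
}
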
On the whole, it is far far far more helpful than harmful. And in all
of my development, having this ability allows me to code with excessive
speed and flexibility, as compared to the code/compile/start/stop model,
which I am still forced to use in languages like Visual FoxPro, for
example.

False dichotomy. The alternative to edit-compile-run isn't this
self-modifying nonsense.

Dynamic languages already showed us more than forty years ago how you can have
a running image and interact with it to make alterations, without
resorting to self-modifying code hacks.
 

BartC

Some additional explanation:

01: int main(void)
02: {
03: int a = 5, b = 7;
04: populate_two_variables(&a, &b);
05: printf("A:%d B:%d\n", a, b);
06: }

Suppose I compile this and begin debugging, then break on line 4. I
manually edit the file and change line 3 to:

03: int a = 9, b = 13, c = 3;

I click the "apply changes" button and I've just added a new variable.

Have you already worked on something along these lines? Because it all
sounds incredibly hairy. And if you do make some changes to a running
program (which will have a certain state including the contents of hundreds
of megabytes of memory, full of pointers to every other part, a few dozen
open files on disk, a part-finished video display etc etc) who decides
whether a continue operation is going to be viable, you or the debug
program?

It sounds at about the same level as patching the machine code of an
executable file, and that is hairy enough even when you're not doing it
while it's running!
Edit-and-continue allows you to correct code on-the-fly, reset the
instruction pointer to re-initialize a variable, or re-perform a logic
test, and then continue on as you would've. It saves you from
needing to do a constant series of restarts to accomplish the same
thing should you find items in your code that need to be modified.
This is the part which requires developer cognizance to recognize
what's going on with the data, and what impact your changes will have
on the data that's already been set up.

On the whole, it is far far far more helpful than harmful. And in all
of my development, having this ability allows me to code with excessive
speed and flexibility, as compared to the code/compile/start/stop model,
which I am still forced to use in languages like Visual FoxPro, for
example.

I never had much trouble with such a model; my edit-compile-run cycles were
always engineered by me to never be more than a second or so (even in the
1980s). On larger applications, I arranged things so that most code was
actually interpreted bytecode. If one such module went wrong, it went back
to the application input loop so that I could try something else. So I was
developing inside the application itself, and could spend hours without
restarting!

There were also command scripts, auto-save etc to quickly reconstruct any
large test data that was relevant.

Anyway, the point is that I was using very simple tools, but various
different approaches to development.
 

Kaz Kylheku

Have you already worked on something along these lines? Because it all
sounds incredibly hairy. And if you do make some changes to a running
program (which will have a certain state including the contents of hundreds
of megabytes of memory, full of pointers to every other part, a few dozen
open files on disk, a part-finished video display etc etc) who decides
whether a continue operation is going to be viable, you or the debug
program?

Moreover, ironically, that's precisely the sort of program which benefits
most from not having to stop. Oops!

Who cares if you can do this self-modifying edit-and-continue hack over
some academic code examples?
 

Rick C. Hodgin

Have you already worked on something along these lines? Because it all
sounds incredibly hairy. And if you do make some changes to a running
program (which will have a certain state including the contents of hundreds
of megabytes of memory, full of pointers to every other part, a few dozen
open files on disk, a part-finished video display etc etc) who decides
whether a continue operation is going to be viable, you or the debug
program?

This is done during development. You do not do this on a running system
on somebody's computer. You do this while you, the developer, are creating
the system. As you're writing code, you use edit-and-continue to speed up
the development and testing. But once you're ready to release the app to
users, you build it like normal, without edit-and-continue support, in
release mode, and you distribute that binary.

Remember, this is on Windows. It's very rare in Windows to build a
system from sources.
It sounds at about the same level as patching the machine code of an
executable file, and that is hairy enough even when you're not doing it
while it's running!

That's more or less what it does, but rather than a person doing it
by hand, the compiler is doing it through compilation and a fixed
set of rules about how to swap things out in a running environment.
I never had much trouble with such a model; my edit-compile-run cycles were
always engineered by me to never be more than a second or so (even in the
1980s). On larger applications, I arranged things so that most code was
actually interpreted bytecode. If one such module went wrong, it went back
to the application input loop so that I could try something else. So I was
developing inside the application itself, and could spend hours without
restarting!

I had a different experience. I always hated the edit-compile-run cycles.
I used to get upset that I couldn't modify source code in the debugger,
back before (to my knowledge) edit-and-continue existed in any Microsoft
tools. I used to find it a very slow aspect of design. When I accidentally
(yes, truly accidentally) discovered that such a feature existed in Visual
Studio 98, probably in late 1999 or early 2000, I was floored. It has been
my single greatest developer asset ever since.
There were also command scripts, auto-save etc to quickly reconstruct any
large test data that was relevant.

The thing about edit-and-continue is that it allows you to work on individual
algorithms, to develop them as you go along. It's very easy to click the
restart button at any point and start back over, but the beauty is you don't
need to for most changes.
Anyway, the point is that I was using very simple tools, but various
different approaches to development.

Yup. A lot of people develop differently than I do. I sometimes wonder
how many of them would switch if they were to sit down and use edit-and-
continue for a little while.

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

On Saturday, February 1, 2014 7:32:01 PM UTC-5, Bart wrote:
This is done during development. You do not do this on a running system
on somebody's computer.

I should also add that you can do this on a running system. The Microsoft
compiler and linker create a program database (myapp.pdb) which contains
information about the compiled system. When it is created to include
edit-and-continue abilities, the PDB can be used at any time to allow the
Microsoft debugger to attach to a running process, suspend it manually or
with a breakpoint, and make and apply changes.

In such a case, again, it requires the wherewithal of the developer to
recognize the extent of the change(s). There may be program logic / flow
changes, new local variables, parameters added to some function, a change
of parameters, and so on. All of these will allow the code to run properly,
but there are things which must be recognized (such as nested calls,
recursive calls, data on the stack, previously computed data which may now
be out of sync with the new logic, and so on), and each of these may
disallow modification of the running system even if the mechanics of the
edit-and-continue fix are compatible with the ABI.

Edit-and-continue is not flawless ... but in the case where a particular
program bug exists, such as this:

if (foo) {
    // Do something
}

And it should be:

if (foo && foo2) {
    // Do something
}

It can be changed without restarting the system. And in certain cases that
may be desirable. In other cases, you terminate the process, apply the
changes, test, and restart.

However, as I have indicated, edit-and-continue's greatest application is
during development (of new code and maintenance of existing code) when the
developer is able to make a change to something observed in error during
debug stepping, saving the time of recompile and restart. And, FWIW, the
older compilers (Visual Studio 98 and Visual Studio 2003) perform notably
faster in this regard, with Visual Studio 98 being literally almost
instantaneous, and Visual Studio 2003 taking only a second or two. The
newer compilers (I have only tested VS2008 and VS2010) can be quick, but
many times take several seconds to complete the compile (5 or more),
thereby somewhat diminishing its usefulness.

Best regards,
Rick C. Hodgin
 

BartC

This is done during development. You do not do this on a running system
on somebody's computer. You do this while you, the developer, are creating
the system. As you're writing code, you use edit-and-continue to speed up
the development and testing. But once you're ready to release the app to
users, you build it like normal, without edit-and-continue support, in
release mode, and you distribute that binary.

That's another aspect: many errors only come up after the program is shipped
and running on a client's computer.

When it crashes, the client tends to be pretty irate about it. The error
message (address where it crashed) is not terribly useful (and of course a
debugger would be no help here at all, not with the client in another
country).

This is where having the bulk of the commands (mine were interactive
applications) implemented as interpreted byte-code modules was hugely
advantageous: it showed what went wrong and on what line (which was reported
back to me), but the application itself was still running and the user could
carry on and do something else!

(There were still plenty of regular failures; the priority there was to try
and recover the user's data wherever possible.)
Remember, this is on Windows. It's very rare in Windows to build a
system from sources.

You mean, that you distribute an application as a binary, instead of
requiring ordinary users to start unravelling .tar.gz2 files and running
make and gcc?! I don't see why it would be expected of anyone to do that
anyway! (And I can just imagine what my customers would have said! Anyway my
stuff wasn't open-source.)

(I'm not exactly oblivious of these things, but I would balk at having to
build applications from sources, because *it never works*. Not on Windows;
there is always something that goes wrong even after slavishly following any
instructions.

Even at this moment, I haven't been able to install a 64-bit version of gcc
(onto Windows) because an official binary distribution doesn't exist! And
the unofficial ones don't work. And there's no way I will attempt to build
from sources. Of course they don't exactly tell you in so many words: you
spend hours downloading something that looks promising, with a '64' in the
names, and it turns out to be just a library. Too many things are just too
Unix-centric. </rant>)
 

James Kuyper

This is done during development. You do not do this on a running system
on somebody's computer.

If it's not on a running system on somebody's computer, what does the
"continue" part of "edit-and-continue" refer to? I'd always assumed that
it was "continue running" with "on somebody's computer" (namely, your
own) inherently applied. If it isn't running on my computer, in what
sense does it qualify as debugging?
 

Rick C. Hodgin

That's another aspect: many errors only come up after the program is shipped
and running on a client's computer.

You can do this with Microsoft's Debugger. There is an installable
executable which allows remote debugging. You can attach to a remote
process and effect changes there, across the LAN, or across the Internet.
When it crashes, the client tends to be pretty irate about it. The error
message (address where it crashed) is not terribly useful (and of course a
debugger would be no help here at all, not with the client in another
country).

The idea with edit-and-continue, in general, is you are doing this while
you're writing the code. If runtime errors occur, you typically have
special beta testers who are more tolerant of errors, who make more
frequent backups, or who do whatever is necessary to prevent data loss
on code that has not been well tested and shown to work on a wide
range of variables and input.

In those cases, you can remote into the system in another country, load
up the program in error, and make changes to your source code on your
local machine, transmitting the compiled changes to the remote system,
where edit-and-continue applies them to the running process through the
remote debugger software. In this way, for those beta testers, you could
change their program while it's running (provided the changes were of the
type which could be made while it's running).
This is where having the bulk of the commands (mine were interactive
applications) implemented as interpreted byte-code modules was hugely
advantageous: it showed what went wrong and on what line (which was reported
back to me), but the application itself was still running and the user could
carry on and do something else!

The general idea is that you have debugged systems released to people. And
if bugs are found, then you fix them like normal. Edit-and-continue
gives you better tools to do your own testing and debugging while you're
first writing the algorithm, or when you're stepping through code. You can
even write in things to test against the running environment (a few lines of
code to print something repeatedly during testing, which are removed later
before you exit the debugger session). And so on.
(There were still plenty of regular failures; the priority there was to
try and recover the user's data wherever possible.)

If a system needs to recover something in memory that's not committed to
permanent storage, then the algorithms on a fragile system should be
modified so that some kind of semaphore-based mechanism can be employed
which allows the data to be "walked through" at runtime, saving it out to
disk periodically. Even with hundreds of megabytes of data in memory, most
hard drives today write data at well over 50 MB/s. It would only take a
few seconds in a background thread to write the data out. And if you don't
want to write it out, simply copy it to another process and let that
process act as a watchdog. If the watchdog is not petted periodically,
have it write out the data. And so on.
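
A rough sketch of that watchdog arrangement (illustrative only; it uses
POSIX threads, and the names and thresholds are invented):

#include <pthread.h>
#include <time.h>
#include <unistd.h>

static time_t last_pet;                     /* when the main code last "petted" us */
static pthread_mutex_t pet_lock = PTHREAD_MUTEX_INITIALIZER;

void pet_watchdog(void)                     /* main code calls this periodically */
{
    pthread_mutex_lock(&pet_lock);
    last_pet = time(NULL);
    pthread_mutex_unlock(&pet_lock);
}

static void save_data_to_disk(void)
{
    /* write the in-memory data out; entirely application-specific */
}

void *watchdog_thread(void *arg)            /* started with pthread_create() */
{
    (void)arg;
    for (;;) {
        sleep(5);
        pthread_mutex_lock(&pet_lock);
        double idle = difftime(time(NULL), last_pet);
        pthread_mutex_unlock(&pet_lock);
        if (idle > 30.0)                    /* not petted recently: assume trouble */
            save_data_to_disk();
    }
}
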
You mean, that you distribute an application as a binary, instead of
requiring ordinary users to start unravelling .tar.gz2 files and running
make and gcc?! I don't see why it would be expected of anyone to do that
anyway! (And I can just imagine what my customers would have said! Anyway
my stuff wasn't open-source.)

Yes. I don't either. Many distros hide that, but many projects release
their products only in that way. I've had to build from sources probably
20% of the apps I run which do not come with my Linux distro, rather
than just installing a binary.

This sequence becomes your friend (where "n" is the number of CPUs you have):

./configure
make -jn
sudo make install
(I'm not exactly oblivious of these things, but I would balk at having to
build applications from sources, because *it never works*. Not on Windows;
there is always something that goes wrong even after slavishly following any
instructions.

Agreed.

Even at this moment, I haven't been able to install a 64-bit version of gcc
(onto Windows) because an official binary distribution doesn't exist! And
the unofficial ones don't work. And there's no way I will attempt to build
from sources. Of course they don't exactly tell you in so many words: you
spend hours downloading something that looks promising, with a '64' in the
names, and it turns out to be just a library. Too many things are just too
Unix-centric. </rant>)

I don't know. I was able to install MINGW the other day from a binary with surprising ease. I have always tried CYGWIN in the past and had nothing
but difficulties. With MINGW it was just an install, and then adding that
new folder ("c:\mingw\bin") to my path. Blew me away how much easier it
was. That was what I used in this thread to get GCC to compile and link
together with Visual C++ using the COFF format.
 

Rick C. Hodgin

If it's not on a running system on somebody's computer, what does the
"continue" part of "edit-and-continue" refer to? I'd always assumed that
it was "continue running" with "on somebody's computer" (namely, your
own) inherently applied. If it isn't running on my computer, in what
sense does it qualify as debugging?

Picture this sequence of events:

(1) I have a large system I'm developing.
(2) I'm currently working on module X, developing it.

In Visual Studio this would be done mostly from within the Visual Studio
IDE; however, there are still times when it's advantageous to use a third
party tool, such as Notepad++, because of special features it possesses
which the default VS IDE does not, such as the ability to have multiple
simultaneous cursors/carets, where input goes to multiple lines at once.

(3) From within VS IDE, I code something and compile. No errors.
(4) I set a breakpoint on some new line of code, launch the debugger
and it stops at that location in my program.
(5) I single-step through the code, line-by-line, looking at the
various variables, seeing the logic flow, etc.
(6) I step over a line of code and realize I used the wrong logic
test.
(7) Right there, in the debugger (which is really nothing more than
the standard IDE with all of its editor abilities, the only real
difference being that the currently executing line of source
code is highlighted in yellow), you make a change using the full
IDE editor abilities of auto-complete, showing the code preview
window, auto-highlighting nearby variable usages of the one you
may be typing in, and so forth...
(8) You click "Apply changes" or simply press the single-step keystroke
(and it will automatically apply changes for you).
(9) Perhaps you need to move the instruction pointer back to the top
of the logic test block, so you right-click and choose "Set next
statement".
(10) And you continue single stepping.

You have made this/these changes without ever stopping the one debugger
session. You have not had to stop, compile, restart, and get back to the
line of code you were on in your program. It was all done for you as a
continuation of your thought process:

(a) Oh, there's an error.
(b) Fix the error.
(c) Continue on debugging.

Rather than:
(a) Oh, there's an error.
(b) Stop debugging.
(c) Fix the error.
(d) Recompile.
(e) Restart the debugger.
(f) Get back to the line I was on.
(g) Continue on debugging.

And in my experience with certain types of errors, I may also have a
thought process which says something like: can I go ahead and continue
on, knowing that this code is in error, and just manually change the value
each iteration so I don't have to stop right now, and can continue
looking for other errors?

There are lots of ways to use edit-and-continue. Apple thought enough of
the concept to add fix-and-continue, which is something similar. I don't
know if they have the remote debugger transport layer which allows me on
my machine with the source code to access a remote machine someplace else
and do fix-and-continue there, like edit-and-continue can ... but at least
they thought enough of it to introduce it into their compiler chain for
local debugging during development.

Best regards,
Rick C. Hodgin
 

Kenny McCormack

"YMMV" means "Your Mileage May Vary," indicating that your experience may be
different from mine. As I say, I can only convey my personal experience. :)

As I've noted previously, a lot of the regs in this ng (such as Mr. Seebs
above) are unable to distinguish between their own personal experiences and
opinions and global/universal truth.

Now, to be fair, I think they really are aware of the difference but they
post as if there was no difference. It's this callousness that is even
more annoying than it would be if they actually didn't know any better.
 

Rick C. Hodgin

As I've noted previously, a lot of the regs in this ng (such as Mr. Seebs
above) are unable to distinguish between their own personal experiences and
opinions and global/universal truth.

Now, to be fair, I think they really are aware of the difference but they
post as if there was no difference. It's this callousness that is even
more annoying than it would be if they actually didn't know any better.

I desire peace, not war. Peace, and good coffee ice cream on occasion.

Best regards,
Rick C. Hodgin
 

James Kuyper

Picture this sequence of events: ....
(4) I set a breakpoint on some new line of code, launch the debugger
and it stops at that location in my program.

In all of the long explanation you gave, you didn't say anything to
change my understanding of what you meant by "edit-and-continue", nor
did you say anything to answer my question: What are you launching that
debugger on, if it isn't "a running system on somebody's computer"?
 

Rick C. Hodgin

In all of the long explanation you gave, you didn't say anything to
change my understanding of what you meant by "edit-and-continue",

Excellent! I'm glad you understand it then.
nor did you say anything to answer my question: What are you launching
that debugger on, if it isn't "a running system on somebody's computer"?

What I mean is this:

(1) You write some software and publish on a website.
(2) Many people download it.
(3) One of them, John Smith in another country, runs it.
(4) He finds an error and contacts you.

At this point you can:

(a) Fix the program locally on your developer machine, or
(b) Fix the program on John Smith's machine.

Edit-and-continue actually lets you do both (depending on the type of
error) using remote debugging ... but the general purpose of edit-and-
continue is to fix the error locally.

So, when I had said previously, "a running system on somebody's computer,"
I was referring to the process (your application) running on John Smith's
computer.

Hope this makes it clear.

And if you'll permit me, may I now ask you a direct question? When you
wear pants, are you also wearing a Kuyper belt?

Best regards,
Rick C. Hodgin
 

James Kuyper

....
At this point you can:

(a) Fix the program locally on your developer machine, or
(b) Fix the program on John Smith's machine.

Edit-and-continue actually lets you do both (depending on the type of
error) using remote debugging ... but the general purpose of edit-and-
continue is to fix the error locally.

So, when I had said previously, "a running system on somebody's computer,"
I was referring to the process (your application) running on John Smith's
computer.

So it is a running system on your developer machine, just not a running
system on John Smith's computer. The simplest way to modify your
statement to mean what you intended would be to change "somebody's" to
"somebody else's", or perhaps "an end user's".

Now that it's clear what you meant, let's get back to the context in
which you said it. BartC raised a bunch of issues. You dismissed those
issues with the explanation:
You do not do this on a running system on somebody's computer.

But the issues he raised are just as relevant to a program running on
the developer's system as on "John Smith's" system. Therefore, with
corrected wording, your dismissal changes from nonsense into a comment
that is meaningful but irrelevant. That's a bit of improvement, but not
much.
 
