Pallav singh
When should we use strcpy() and when memcpy()? Is it w.r.t. the
data type?
Thanks
Pallav
Pallav singh said: When should we use strcpy() and when memcpy()? Is it w.r.t. the
data type?
Tim said: strcpy() is more convenient for strings, because it stops when it
finds a null byte. With memcpy() you have to tell it when to stop. Use
whichever meets your needs.
memcpy() might be more efficient because its implementation might be
able to copy entire words at a time, rather than single bytes at a time
(as strcpy() is forced to do).
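A minimal sketch of the difference (the buffer and variable names are made
up for illustration):

#include <cstring>

void demo()
{
    const char* greeting = "hello";                     // NUL-terminated string
    const unsigned char pixels[] = {0x10, 0x20, 0x30};  // arbitrary bytes, no terminator

    char text[16];
    unsigned char raw[3];

    std::strcpy(text, greeting);              // stops on its own at the '\0'
    std::memcpy(raw, pixels, sizeof pixels);  // you supply the byte count yourself
}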
Pallav said: When should we use strcpy() and when memcpy()? Is it w.r.t. the
data type?
Remember that strcpy will copy until it finds a '\0'.
This could lead to buffer overruns and reading from
undefined places in memory.
strncpy(), notice the 'n', is a lot safer.
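For example, a sketch of the risk and the safer form (the names and buffer
size are made up; note that strncpy() does not add a terminator when it
truncates, which the code below handles explicitly):

#include <cstring>

void take_input(const char* user_input)   // may be arbitrarily long
{
    char small[8];
    // std::strcpy(small, user_input);    // overruns 'small' for inputs of 8+ characters
    std::strncpy(small, user_input, sizeof small - 1);
    small[sizeof small - 1] = '\0';       // guarantee termination
}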
Tim said: [ ... ] memcpy() might be more efficient because its implementation
might be able to copy entire words at a time, rather than
single bytes at a time (as strcpy() is forced to do).
It's not really forced to do so. At least at one time,
Microsoft used an implementation that scanned for the end,
then did the copying in a separate pass -- and the copying
itself was done in 32-bit words.
With a modern CPU, that would probably be a pessimization
though -- except under rather special circumstances, anything
with a cache will combine the reads and writes so all
transactions with the main memory happen in cache-line sized
chunks (typically substantially larger than a full word). Such
a copy will normally be memory-bound anyway, so the difference
between a really simplistic byte-at-a-time implementation and
a much more complex one that copies entire words when possible
will generally be minuscule.
With a modern CPU, the CPU should take care of merging the byte
accesses into word accesses, and copying bytes or words should
not make a significant difference.
With an older CPU, of course, std::copy should be significantly
faster than either, because the compiler can regenerate the code
each time it sees the call, taking into account the actual size
and the alignments of the pointers---doing this in memcpy means
you have a lot of extra tests, which slow it down significantly
for small blocks. This is why many compilers actually defined
memcpy and strcpy as macros, expanding to something like
__builtin_memcpy and __builtin_strcpy---the compiler can do a
better job expanding the function each time it is invoked.
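As a sketch of the std::copy alternative (the names are made up; in effect it
does the same job as memcpy() for a buffer of known length, but the element
type and count are visible to the compiler at the call site):

#include <algorithm>
#include <cstddef>

void copy_buffer(const char* src, char* dst, std::size_t n)
{
    std::copy(src, src + n, dst);   // comparable in effect to memcpy(dst, src, n)
}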
Yannick Tremblay said: And any sane programmer will prefer strncpy() over strcpy() for safety
reasons, unless you do not know the size of the destination.
Yannick Tremblay said: Agree, strncpy() has a somewhat counterintuitive behaviour. But it is
easy to fix the non-terminated string issue using a single line of
code:
char dest[SOME_SIZE];
dest[SOME_SIZE-1] = 0;   /* strncpy never touches the last byte, so dest stays terminated */
strncpy( dest, source, sizeof(dest)-1 );
Jerry said: Such a copy will normally be memory-bound anyway,
so the difference between a really simplistic byte-at-a-time
implementation and a much more complex one that copies entire words
when possible will generally be minuscule.
[email protected] said: I'm not so sure. Copying linear data from one place to another
byte-by-byte requires more raw clock cycles than word-by-word. It's
relatively easy to count how many clock cycles more it would take, in an
optimal situation. Even if the memory/cache chips were somehow able to
optimize consecutive individual byte reads/writes into larger chunks,
the CPU will still perform more operations than when copying entire words.
Jerry said: Well, I'll admit I haven't tested it to be sure, but the idea is
pretty simple: yes, the CPU itself is performing more operations --
but the CPU is so much faster than the memory that it should rarely
matter. A typical word is 4 or 8 bytes, but a typical CPU currently
runs with a multiplier of at least 10:1, and often around 15:1.
[email protected] said: Which memory are you talking about? Naturally I'm talking about the L1
cache, ie. the fastest memory, closest to the CPU, which is what the CPU
directly accesses.
I have to admit, though, that I'm not completely sure how the speed of
the L1 cache compares to the speed of the CPU, but given that the
machine opcodes being run by the CPU (ie. the actual program being
executed) come from the L1 cache it would seem odd that the CPU would
run 10 times faster than what the cache can feed it. It would sound like
the CPU would be idle for the majority of the time simply because the L1
cache is too slow to feed it more opcodes to run.
It is quite easy: when you know the length of the source
string, use memcpy(); otherwise, strlcpy().
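A minimal sketch of the known-length case (the function and parameter names
are made up for illustration; the caller is assumed to guarantee that dest
has room for src_len + 1 bytes):

#include <cstring>

void copy_known_length(char* dest, const char* src, std::size_t src_len)
{
    // The length is already known, so copy the characters plus the
    // terminating '\0' in a single call instead of re-scanning the string.
    std::memcpy(dest, src, src_len + 1);
}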
James said: There is no strlcpy in C or C++. There is a proposal for a TR
to C with a safe strcpy_s, but for the moment, it's just a
proposal, and at any rate, it will be a TR, and not part of the
language (so I don't know what C++ will do with it). FWIW: it's
implemented in VC++ (at least with the compiler options I use),
but not in g++ under Linux (again, with the compiler options I
usually use).
James Kanze said: There is a proposal for a TR to C with a safe strcpy_s
FWIW: it's implemented in VC++ (at least with the compiler options I
use), but not in g++ under Linux (again, with the compiler options I
usually use).
That's because "strcpy_s" is a (somewhat ugly) microsoftism,
so naturally VC supports it.
James Kanze said: It's part of a TR being processed by the C standards committee,
which means that it is, or will be, an optional part of standard
C. Certainly not a Microsoftism.