Looking for experienced C/C++ / ASM programmer


BGB

Smartphone shipments started exceeding PC shipments last year. And
those are all programmable (and that figure does not include feature
phones or tablets).

but, this is because the market has not yet saturated.

PC sales are lower because the market is fairly saturated:
say, one buys a PC in 2007, and it still works fairly well now, so why
buy a new one?... likewise, if one has a laptop from 2003, and it still
works...

whereas not everyone has a smartphone or tablet yet, so vendors are
still making sales.
once nearly everyone who wants a smartphone or tablet has one, then
sales will likely drop off significantly, with most new sales either
being to get new features/upgrades, or to replace lost/damaged units.


the issue is that current sales are a poor indicator of total units in use.

so, the question is what the sales will look like once market saturation
is reached.

For the vast majority of users, PCs effectively aren't programmable
either - almost all software running on 95% of PCs is written by a "few"
people who then run off "millions" of units.

but, there are lots more programmers on the PC; consider just how many
people are involved in the creation of typical desktop software (such as
an OS, web-browser, or office suite).


whereas, say, with an alarm-clock (or watch, or microwave oven, ...),
pretty much the entire system (software-wise) is implemented by a single
person. many other units would be things like set-top boxes, broadband
routers, DSL or cable modems, ... again, most of which are largely
non-programmable.


with a system like Android, much of the "actual work" was likely done by
people who were originally targeting the PC (say, everyone involved in
writing the Linux kernel).

it is not clear whether, at this point, the entire combined code-base of
dedicated Android software exceeds that which has gone into the
Linux kernel (somewhere around 40 Mloc last I checked).


also, the fact that one can just wander around and encounter people IRL
who write software (most often in C# or VB.NET or similar, generally
doing something involving an SQL Server backend) indicates that
programmers (at least in some form) are not *that* rare.

I have thus far IRL only encountered a single person who was writing
apps for a mobile target, and this was for iOS (the person in question
being primarily an Objective-C developer, who also wrote software for
OS X).


or such...
 

Ian Collins

but, there are lots more programmers on the PC; consider just how many
people are involved in the creation of typical desktop software (such as
an OS, web-browser, or office suite).

So what minute fraction of 1% of those programmers are writing (or even
know how to write) in assembler?
 

BGB

So what minute fraction of 1% of those programmers are writing (or even
know how to write) in assembler?

who knows exactly?...

but, at least parts of both Linux and Firefox are written in
assembler... (although the vast majority of this is C, or a mix of C and
C++ in the case of FF).


parts of my 3D engine and VM stuff are also written in assembler
(although the vast majority of the code is C).

in terms of raw volume, it is possible that my ASM code is smaller than
the amount of code I have written in Java and BGBScript...

there is pretty much no pure/standalone ASM in my stuff, as most of what
is present is either inline assembler (minor), or procedurally-generated
assembler (more common), much of this being either because:
because it is doing something difficult to do in plain C;
because ASM is really fast to "eval" (my assembler can assemble
10-20MB/s of ASM code, which can be transformed readily into callable
function pointers);
because it didn't really fit well with using BGBScript for some reason;
....


the Quake3 engine also used manually-crafted machine code in a few places
(generally building machine code from hex-strings).

I guess the question would be to determine just how many people
in general use ASM/native-code this way...


or such...
 

Ian Collins

who knows exactly?...

So "lots of people" is a small number.
but, at least parts of both Linux and Firefox are written in
assembler... (although the vast majority of this is C, or a mix of C and
C++ in the case of FF).

Very small parts, parts that need to access machine registers. The only
bits of x86 assembler I've ever had to write are low level scheduling
primitives.
the Quake3 engine also used manually-crafted machine code in a few places
(generally building machine code from hex-strings).

Yet another very small niche.
I guess the question would be to determine just how many people
in general use ASM/native-code this way...

Hardly any.
 

BGB

So "lots of people" is a small number.

but who is to say it is any smaller than the number of people writing
ARM assembler?...

in most cases, if one is developing on ARM, it is also in C or C++ or similar.

Very small parts, parts that need to access machine registers. The only
bits of x86 assembler I've ever had to write are low level scheduling
primitives.

most "pure ASM" stuff I did was related to things like bootloaders and
entry-points for kernels and processes (or "crt0 stubs" or whatever they
are best called).

also, a few misc things, like code for 128 and 256 bit multiplication
and division and similar.
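
for illustration (not my actual code, just the general idea): a portable
C sketch of the basic building block, a 64x64 -> 128-bit multiply built
from 32-bit halves. this is the sort of thing that tends to get
hand-written in ASM, since x86 exposes a widening MUL directly:

#include <stdint.h>

/* hypothetical sketch: 64x64 -> 128-bit unsigned multiply from 32-bit halves */
static void mul64x64_128(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
    uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;

    uint64_t p0 = a_lo * b_lo;    /* low  * low  */
    uint64_t p1 = a_lo * b_hi;    /* low  * high */
    uint64_t p2 = a_hi * b_lo;    /* high * low  */
    uint64_t p3 = a_hi * b_hi;    /* high * high */

    /* add the cross terms, tracking the carry into the high half */
    uint64_t mid = p1 + (p0 >> 32) + (uint32_t)p2;

    *lo = (mid << 32) | (uint32_t)p0;
    *hi = p3 + (p2 >> 32) + (mid >> 32);
}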


most other ASM I have done has been "impure", usually because it is
either inline assembler, or is related to procedural code generation.

I distinguish between procedural code-generation tasks, and a full
codegen (such as a JIT or compiler), primarily on the level of scale and
complexity involved.

Yet another very small niche.

yeah...

actually, my strategy was originally partly influenced by the Quake3
strategy (using command-laden hex-strings), but differed in that in my
case it was later transformed into an x86 assembler, with most later
effort based on producing globs of ASM and feeding them through said
assembler, rather than via more direct means (most of which were
generally bulkier and took much more effort...).

typically, in this case, one uses "printf()" style calls to produce code
to be run (in my case generally managed by begin/end pairs...).

in its present form:
void (*fcn)();
....
dyllAsmBegin();            /* begin collecting ASM text */
....
dyllAsmPrint("...", ...);  /* printf()-style emission of ASM text */
....
fcn=dyllAsmEnd();          /* assemble; returns a callable pointer */
....
fcn();                     /* call the newly generated code */

Hardly any.

maybe so.


then again, by a similar notion "hardly anyone" makes use of eval
either, but it is fairly useful, when needed, in languages which have it...


or such...
 

Kaz Kylheku

So "lots of people" is a small number.


Very small parts, parts that need to access machine registers. The only
x86 bits assembler I've ever had to write are low level scheduling
primitives.


Yet another very small niche.


Hardly any.

Writing in assembly language is something that any competent developer
can be called upon to do, but rarely will be.

There is hardly any need for anyone to be an "assembly language programmer".

However, *debugging* at the machine level is far from uncommon.

Tough bugs in optimized C code cannot all be found without resorting to
examining the program state at the instruction, register and memory level.

I haven't had a job in the past 20 years in which I did not have to get up to
the elbows in the machine language to root cause a few bugs. The fix didn't
involve writing machine language, but the bug hunt required reading machine
language: knowing the instruction set, register usage, calling convention,
structure of the stack and other info, etc.
 

Ian Collins

but who is to say it is any smaller than the number of people writing
ARM assembler?...

Who knows? But I'd wager there are more people writing ARM assembler
simply because it's popular in embedded platforms and many of these run
OS free. But even there, the number will be getting smaller. I'm
designing an ARM based module at the moment and I can all but guarantee
I won't have to write any assembler.
in most cases, if one is developing on ARM, it is also in C or C++ or similar.

Quite. In a hosted environment, the choice of language is broader still.
most "pure ASM" stuff I did was related to things like bootloaders and
entry-points for kernels and processes (or "crt0 stubs" or whatever they
are best called).

As I said, very small parts. Not many of us have to dabble at that level.
maybe so.

then again, by a similar notion "hardly anyone" makes use of eval
either, but it is fairly useful, when needed, in languages which have it...

True, but at least eval uses the same language!
 

BGB

Who knows? But I'd wager there are more people writing ARM assembler
simply because it's popular in embedded platforms and many of these run
OS free. But even there, the number will be getting smaller. I'm
designing an ARM based module at the moment and I can all but guarantee
I won't have to write any assembler.

fair enough...

Quite. In a hosted environment, the choice of language is broader still.

yep.

C has merit for use in embedded systems:
no real need for an OS or a VM framework.

As I said, very small parts. Not many of us have to dabble at that level.

well, yes, but writing large volumes of code in plain ASM would "kinda
suck...".

True, but at least eval uses the same language!

yeah.

sadly, C generally lacks a proper eval.

in my case, I can use the BGBScript eval and generally call it "good
enough" (the syntax is fairly similar, and the FFI sufficiently
transparent, that it will often work "about right").

sadly, proper C is not terribly well suited for use in eval (and by the
time one fudged it enough to be better suited for use with eval, it
wouldn't be particularly C anymore).

my own attempt was slow (WRT compile times), buggy, and not very usable
interactively. likewise, having to include headers, type out
complete functions, and not having the ability to execute
statements/expressions at the toplevel, is not ideal for interactive
entry. (just using ASM was actually a much preferable experience...).


FWIW, many parts of the BGBScript VM themselves depend on the ability to
print out and execute ASM/native-code.

so, I generally classify it as an "impure interpreter" or "native-code
enhanced interpreter".

so:
a "pure interpreter" would be one written purely in C (or another HLL);
a compiler or JIT would essentially directly produce native code, and
then run this (this is what my C compiler did);
so, an "impure interpreter" is be primarily an interpreter (main
control/dispatch logic is written in C), whereas many parts of the
execution-path may involve dynamically-generated machine code.

never mind the use of function-pointer based "plumbing-pipe" logic (where
one short-circuits some logic code by essentially building the logic
using function-pointers and swapping around functions, in place of
winding through more general-purpose logic code). in this case, it is
like "plumbing pipes" in that one piece of logic will set up the pipes,
and subsequent control flow will "flow" through said pipes (by calling
through the function pointers). in a few cases I had actually gained
significant speedups by doing this.
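
a minimal sketch of the idea (hypothetical code, not from my engine):
setup picks a handler once based on the mode flags, and the hot path
just calls through the pointer:

#include <stdio.h>

typedef double (*pipe_fn)(double);

static double pass_plain(double x)   { return x; }
static double pass_scaled(double x)  { return x * 2.0; }
static double pass_clamped(double x) { return x < 0.0 ? 0.0 : x; }

static pipe_fn pipe;   /* the "pipe" that later code calls through */

/* setup: decide the behavior once, based on mode flags */
static void pipe_setup(int scale, int clamp)
{
    if (scale)      pipe = pass_scaled;
    else if (clamp) pipe = pass_clamped;
    else            pipe = pass_plain;
}

int main(void)
{
    pipe_setup(1, 0);
    printf("%f\n", pipe(3.0));   /* hot path: no per-call flag checks */
    return 0;
}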


I also have considered the possibility of migrating over to
threaded-code, but then the definitions start getting fuzzy (is it still
an interpreter at this point?...).

one possible definition could be:
it is still an interpreter if it retains a traditional
instruction-dispatch loop (but, would this include the use of a
trampoline loop?...).

(the design I had considered would no longer have an instruction
dispatch loop, but would retain the use of a trampoline loop, as my
interpreters tend to essentially operate in a CPS-like manner).
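
a rough sketch of what a trampoline loop looks like (hypothetical, much
simplified): each step returns the next step to call, and the loop keeps
bouncing until a step returns NULL:

#include <stdio.h>

struct vm_state { int counter; };

struct step;
typedef struct step (*step_fn)(struct vm_state *);
struct step { step_fn next; };   /* wrapper so a step can return "the next step" */

static struct step step_count(struct vm_state *st);
static struct step step_done(struct vm_state *st);

static struct step step_done(struct vm_state *st)
{
    printf("done, counter=%d\n", st->counter);
    return (struct step){ NULL };                 /* NULL ends the trampoline */
}

static struct step step_count(struct vm_state *st)
{
    st->counter++;
    return (struct step){ st->counter < 10 ? step_count : step_done };
}

static void trampoline(step_fn start, struct vm_state *st)
{
    step_fn next = start;
    while (next)
        next = next(st).next;                     /* bounce to the next step */
}

int main(void)
{
    struct vm_state st = { 0 };
    trampoline(step_count, &st);
    return 0;
}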


or such...
 

Nick Keighley

On 01/11/11 14:29, Nick Keighley wrote:




One of the big problems of old people is that they tend to live
in the past.

ANY discussion in this group leads to

"And in machine XXX (dead since at least 20 years) bytes were
8.75 bits, remember that eh?"

This has nothing to do with remembering history but everything
to do with seeking to revive "old memories of yore".

I have nothing against old people but as I age, I try to
stay away from old age's pitfalls, in the same way as when I was young
I tried to avoid young age's pitfalls.

In my defence I didn't start the "thermionic valves! you were lucky we
had to grease t'rachets every morning at 4am with frozen lard!"

It's still worth remembering the whole world isn't a VAX/Unix/Wintel
machine, and that there might be some surprises in future.
 

Malcolm McLean

As I said, very small parts.  Not many of us have to dabble at that level.

If you're writing a video game, probably what you'll find is that you
need a function to normalise vectors for lighting, and that it's
called millions of times and is a bottleneck. So to speed things up
you need to express the normal as three signed chars, then write a
rough and ready way of getting the length, which doesn't involve a
call to square root. So it's a job for assembly. It's only one
function, but it's absolutely vital that you have someone who can
write it.
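
For illustration, a plain C sketch along those lines (the well-known
reciprocal-square-root bit trick later popularized by Quake III, rather
than any particular game's code or the signed-char variant); whether
hand-written assembly beats this, or beats what the compiler emits,
depends on the target:

#include <stdint.h>
#include <string.h>

/* the well-known "fast inverse square root" trick: one way to get
   1/length without calling sqrt().  illustrative only. */
static float fast_rsqrt(float x)
{
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);           /* reinterpret the float's bits safely */
    i = 0x5f3759df - (i >> 1);          /* magic initial guess */
    memcpy(&x, &i, sizeof x);
    return x * (1.5f - half * x * x);   /* one Newton-Raphson refinement */
}

static void normalise3(float v[3])
{
    float s = fast_rsqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    v[0] *= s; v[1] *= s; v[2] *= s;
}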
 

Seebs

If you're writing a video game, probably what you'll find is that you
need a function to normalise vectors for lighting, and that it's
called millions of times and is a bottleneck. So to speed things up
you need to express the normal as three signed chars, then write a
rough and ready way of getting the length, which doesn't involve a
call to square root. So it's a job for assembly. It's only one
function, but it's absolutely vital that you have someone who can
write it.

So far as I can tell, this post was probably accurate sometime in the early
90s, but isn't now. In particular, stuff like that is very, very often done
in hardware, so you don't have to do that *at all* -- you merely tell the
hardware that you want some lighting, and whaddya know, there is lighting.

It's also not at all obvious that, even if you did need to write such a
thing, you would consistently find that hand-writing it in assembly was a good
choice. Even assuming that your video game runs only on x86 processors, like
those used in the Xbox 360 and PS3 -- whoops, those are both PowerPC, I was
thinking of the iPhone and iPad -- whoops, those are both ARM... You may well
find that you can't beat a compiler on well-written code to begin with.

Back in the 90s, we had people claiming that assembly was necessary for
performance-critical stuff in this group. At least one of them, admittedly
something of a twit, produced an example of a function which "needed" to be
in assembly for speed. His assembly code for it was, on real hardware,
noticeably slower than at least one reasonably straightforward C
implementation.

-s
 

Kaz Kylheku

Back in the 90s, we had people claiming that assembly was necessary for
performance-critical stuff in this group. At least one of them, admittedly
something of a twit, produced an example of a function which "needed" to be
in assembly for speed. His assembly code for it was, on real hardware,
noticeably slower than at least one reasonably straightforward C
implementation.

I remember many years ago finding a very idiotic book in the library about
assembly language programming (Motorola 68000) on the Apple Mac.

The textbook presented a circle-drawing routine, and bragged that it only
takes a quarter second (don't remember the exact time unit) to draw a big
circle, thanks to being written in assembly language!

I thought, what? On the same machine, the classic MacPaint program draws fat
ellipses at a decent enough frame rate that you can interactively resize the
suckers. Haven't these authors ever used any applications on a Mac?

So I looked at what their routine was doing. It was using the naive algorithm
based on square root instead of Bresenham!
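
For contrast, a sketch of the midpoint (Bresenham-style) circle routine:
nothing but integer adds and compares, no square root in sight (plot()
here is an assumed set-one-pixel helper):

/* midpoint circle algorithm: integer arithmetic only */
void draw_circle(int cx, int cy, int r, void (*plot)(int x, int y))
{
    int x = r, y = 0;
    int d = 1 - r;                    /* decision variable */

    while (x >= y) {
        /* eight-way symmetry: one octant gives the whole circle */
        plot(cx + x, cy + y); plot(cx - x, cy + y);
        plot(cx + x, cy - y); plot(cx - x, cy - y);
        plot(cx + y, cy + x); plot(cx - y, cy + x);
        plot(cx + y, cy - x); plot(cx - y, cy - x);

        y++;
        if (d < 0) {
            d += 2 * y + 1;           /* midpoint inside: keep x */
        } else {
            x--;
            d += 2 * (y - x) + 1;     /* midpoint outside: step x inward */
        }
    }
}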
 

BGB

So far as I can tell, this post was probably accurate sometime in the early
90s, but isn't now. In particular, stuff like that is very, very often done
in hardware, so you don't have to do that *at all* -- you merely tell the
hardware that you want some lighting, and whaddya know, there is lighting.

well, there is a little more than that:
if one wants lighting that doesn't look like crap, then one needs to
write pixel shaders / fragment shaders.

granted, these are typically in GLSL or similar.

It's also not at all obvious that, even if you did need to write such a
thing, you would consistently find that hand-writing it in assembly was a good
choice. Even assuming that your video game runs only on x86 processors, like
those used in the Xbox 360 and PS3 -- whoops, those are both PowerPC, I was
thinking of the iPhone and iPad -- whoops, those are both ARM... You may well
find that you can't beat a compiler on well-written code to begin with.

the option is, of course, to write different ASM versions for pretty
much any combination of OS and/or CPU one wants to run on.

so, ASM versions for:
Windows and Linux on x86;
Windows on x86-64;
Linux on x86-64;
XBox 360 (PPC);
PS3 (PPC);
iPhone and iPad (ARM);
Android (ARM).

the reason some targets for the same CPU are duplicated, is because
there are generally ABI differences between the targets (actually, there
are ABI differences between Windows and Linux on x86, but one can often
hedge around them).

if one is using an external assembler, then there is an issue that the
exact syntax will often vary from one assembler to another. likewise for
inline assembler (which is often compiler-specific).

the result is that often it is better to limit ASM to cases where the
task can't be effectively accomplished in C, which is usually because it
involves breaking through the normal language abstractions, and is
rarely particularly related to performance.


this is also partly why I use my own in-program assembler: I can better
control the ASM syntax (and most things I use ASM for wouldn't work so
well with static ASM anyways).

my stuff currently runs on a combination of Windows and Linux on x86,
x86-64, and ARM (although on ARM many things are disabled as I have not
yet ported the ASM-generation logic...).

in a few cases, I did make use of fallback pure-C logic (such as
handling "apply" via a giant procedurally-generated "switch()" block).
its limitation is that it only deals with certain argument types and
only currently supports up to 6 arguments or so (and produces a 200kB or
so object file).

Back in the 90s, we had people claiming that assembly was necessary for
performance-critical stuff in this group. At least one of them, admittedly
something of a twit, produced an example of a function which "needed" to be
in assembly for speed. His assembly code for it was, on real hardware,
noticeably slower than at least one reasonably straightforward C
implementation.

yep.


I guess a question would be if there is any good way to apply an
arbitrary argument list (not known until runtime) against an arbitrary
function pointer, without needing a giant switch statement.
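
for reference, a toy sketch of the "giant switch" style of apply
(hypothetical; the real one is procedurally generated, handles more
argument types, and is far larger). it assumes all arguments are
pointer-sized, and calling through a mismatched function-pointer type is
technically undefined behavior, which is part of why ASM is attractive
here:

/* toy sketch: dispatch on arity only */
typedef void *(*gen_fn)();   /* pre-C23: unspecified parameter list */

void *apply_ptrs(gen_fn fn, void **args, int nargs)
{
    switch (nargs) {
    case 0: return fn();
    case 1: return fn(args[0]);
    case 2: return fn(args[0], args[1]);
    case 3: return fn(args[0], args[1], args[2]);
    /* ... one case per arity (and per argument-type combination in the
       full version), hence the code-size blowup ... */
    default: return 0;
    }
}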

likewise goes for some way to produce a function pointer which can:
accept an arbitrary argument list (only known at runtime);
keep track of internal state.

the way I had generally handled this was to generate a thunk which would:
fold the arguments into an on-stack buffer;
keep track of a data-pointer given to it at construction;
call its attached function with the data-pointer and the arguments buffer.

generally, this was used to plug script-language functions into C
function pointers.

I have not yet come up with any "good" way to do this in pure C, apart
from generating arrays of dummy functions (and allocating from these),
and assigning pointers, and figuring out a good/portable way to do
0-argument varargs (normally, stdarg needs at least 1 fixed argument,
which is a problem in this case).
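
a sketch of the "array of dummy functions" idea (hypothetical code): a
fixed pool of stub functions, each hard-wired to a slot index,
forwarding to a common handler along with that slot's data pointer;
slots get handed out at runtime:

#include <stddef.h>

#define MAX_SLOTS 4   /* the real pool would be far larger */

typedef int (*handler_fn)(void *data, int arg);

static handler_fn slot_handler[MAX_SLOTS];
static void      *slot_data[MAX_SLOTS];

/* dummy functions, each hard-wired to one slot */
static int stub0(int arg) { return slot_handler[0](slot_data[0], arg); }
static int stub1(int arg) { return slot_handler[1](slot_data[1], arg); }
static int stub2(int arg) { return slot_handler[2](slot_data[2], arg); }
static int stub3(int arg) { return slot_handler[3](slot_data[3], arg); }

static int (*const stubs[MAX_SLOTS])(int) = { stub0, stub1, stub2, stub3 };

/* allocate a slot: bind (fn, data), hand back a plain function pointer */
int (*make_closure(handler_fn fn, void *data))(int)
{
    static int next;
    if (next >= MAX_SLOTS)
        return NULL;                  /* pool exhausted */
    slot_handler[next] = fn;
    slot_data[next]    = data;
    return stubs[next++];
}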


also, a number of other tasks roughly along these lines.


ASM is also sort of needed to implement a JIT, as otherwise one has an
interpreter which will generally run slower than native code, but very
often this is not a huge issue (where I need performance and where I am
using my scripting language don't really overlap all that much, so no
huge issue that it is an interpreter...).


most other things are plain C though, as (generally) the C
compiler does a fairly decent job.
 

BartC

gwowen said:
x86 is likely still most commonly used, even for assembler.

By volume, ARM chips massively outsell x86 chips, and most of those
are used on devices so diverse that each one will require bespoke
code - frequently in assembler (especially for the chips with little
memory). The x86 chips will run Windows or some Unix-like OS, which
means even the bespoke code will be compiled [C/C++/Fortran], byte-
compiled [Java, C#] or interpreted (VB, Perl, Python).

I spend a lot of time messing with interpreters. I found that you need to
have a lot of assembly to achieve a decent execution speed; optimising
compilers are just not clever enough.

Once that's done, then actual applications will be written in the language
that's being interpreted.

But, someone has to be doing the assembly coding to start with though!
(Python in particular could have done with some instead of being in pure C.)
 
