Self-taught C


Malcolm McLean

But this basically brings you back to a command line interface. You can't
compare a window system with detailed mouse and keyboard events to form
submission (which requires some kind of GUI underneath it). That is completely
moronic.
For some applications you do need to be able to send keystrokes to the
main logic cruncher. But usually you don't. Someplace somewhere you
need the logic to detect the keystroke, decode it, convert it to a
character stream, and display the text. Users like instantaneous
feedback for this. Also, it's not very processor heavy, and virtually
always the task is very standardised, just a case of entering text
into an edit box. So the obvious place to do this is in the terminal.
That's not a moronic design decision at all.
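
To make that terminal-side handling concrete, here is a minimal sketch, assuming a POSIX system with termios (an illustrative toy, not anything from X): keystrokes are read raw, echoed locally for instant feedback, and only the finished line would need to be sent anywhere else.

/* Toy line editor: local echo in the terminal, send nothing until return. */
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios old, raw;
    char line[256];
    size_t len = 0;
    int c;

    tcgetattr(STDIN_FILENO, &old);           /* save current settings        */
    raw = old;
    raw.c_lflag &= ~(ICANON | ECHO);         /* raw keys, we echo ourselves  */
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);

    while ((c = getchar()) != EOF && c != '\n' && c != '\r'
           && len < sizeof line - 1) {
        line[len++] = (char)c;
        putchar(c);                          /* instantaneous local feedback */
        fflush(stdout);
    }
    line[len] = '\0';

    tcsetattr(STDIN_FILENO, TCSANOW, &old);  /* restore the terminal         */
    printf("\nonly now would \"%s\" go to the main logic cruncher\n", line);
    return 0;
}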
 

Ben Bacarisse

Malcolm McLean said:
The problem is that the messages were wrong. In X the server sends a
message "A key was pressed" and the client has to send back a mesage
"please draw the letter X in such and such a font in that window". So
of course the poor user would hammer the key again, then get a row of
xes, then press delete several times, delete the lot, then tap the x
key once, deliberately and smartly, and go for a coffee until he could
type in the next character.

I've been using X-based systems for more than a quarter of a century and
I've never had that problem. I'm using X right now and there's no such
problem. I used thin clients in the '80s and never had that problem.

I am sure it's possible, but it's not obviously X's fault if it's used
somewhere it's inappropriate to use it.

<snip>
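
For anyone who hasn't programmed against raw Xlib, the round trip being argued about looks roughly like this; a minimal sketch with no toolkit and no error handling, on the assumption that the client handles each keystroke itself:

/* Per-keystroke round trip: the server reports a KeyPress, and the client
 * asks the server to draw the character back into the window. */
#include <X11/Xlib.h>
#include <X11/Xutil.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 400, 100,
                                     1, BlackPixel(dpy, scr),
                                     WhitePixel(dpy, scr));
    GC gc = XCreateGC(dpy, win, 0, NULL);
    XSetForeground(dpy, gc, BlackPixel(dpy, scr));
    XSelectInput(dpy, win, KeyPressMask);
    XMapWindow(dpy, win);

    int x = 10;
    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);                 /* "a key was pressed"          */
        if (ev.type == KeyPress) {
            char buf[8];
            KeySym sym;
            int n = XLookupString(&ev.xkey, buf, sizeof buf, &sym, NULL);
            if (n > 0) {
                XDrawString(dpy, win, gc, x, 50, buf, n);  /* "please draw it" */
                x += 10;
            }
        }
    }
}

On the same machine these requests are cheap; over a slow or congested link each keystroke waits on the network, which is the scenario being disputed.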
 

Ben Bacarisse

Stephen Sprunk said:
The "problem" with X was/is really that people tried to use it over
networks that simply didn't have the bandwidth to support what they were
trying to do.

Small niggle: it's often the latency rather than the bandwidth that
kills X applications.

<snip>
 

Shao Miller

Plus, when we stumble on a problem we might benefit from having a community
willing to help us out on it. Conversely, we may help others with their
problems, and in the process learn a bit more from it.

Some folks say you only really know something if you can teach it to
someone else. Helping others in a community forum seems like a useful
way to get to know something, then. :)
 

Shao Miller

If I'm not mistaken, the X window system was developed with remote clients
in mind and in a time where remote clients were the norm. So, with that in
mind, I don't agree that X was fundamentally misdesigned.

I remember using X on terminals that were diskless and had mouse,
keyboard, monitor, network. It startled me.
 

Joe Pfeiffer

Malcolm McLean said:
X is fundamentally misdesigned, because the idea is that the "client",
which means server, sends a stream of bytes over a network to the "X
server", which means client.

Sigh. Are there still people left who want to pretend that they don't
understand why the X server is referred to as the server, and then want
to use that as a criticism of X? Apparently.
However virtually no systems work like
that, and in fact it's hard to get interactivity if the client is
remote. But that consideration dominates the interface.

Well, yeah, the idea is to be able to work easily on a LAN. If you're
still trying to run X over a 1200 baud modem or something, you're a
*really* slow learner.
Then it's stupidly hard to get a window up because you've got to mess
about with defining colormaps.

So learn about toolkits.
I did try to solve these problems with BabyX, which was going to be a
simple interface to X. But I never finished it, and there were some
difficulties. It worked on one system, but not when I tried it on
another.

Seems like everybody tried to solve these problems at one time or
another, and created yet another toolkit that they never
finished.... (in my case, at least it was a student project... and the
student did learn a lot...).
 

Rui Maciel

Malcolm said:
Also, it's not very processor heavy, and virtually
always the task is very standardised, just a case of entering text
into an edit box. So the obvious place to do this is in the terminal.
That's not a moronic design decision at all.

Maybe you are forgetting that, when X was developed, terminals were truly
"thin", and therefore they only had the processing power to barely handle
stuff such as networking.

Nowadays, the thinnest of clients has more computational power than the
typical server of that time. So, with such a resource abundance, it is
terribly easy to make bold claims about where certain data should be
processed, how a protocol should be designed, and what messages should
be sent to whom.

It is also easy to lose perspective when we have the same system internally
performing the job of the client and the server.

Yet, forgetting about these details isn't a good reason to start assuming
that back then everyone was a fool who only made bad decisions and couldn't
put together an acceptable system that worked reasonably well. If
we aren't aware of the constraints that led to a specific design decision, then
we might not actually learn anything from it.


Rui Maciel
 

gwowen

Sigh.  Are there still people left who want to pretend that they don't
understand why the X server is referred to as the server, and then want
to use that as a criticism of X?  Apparently.

Staggering, isn't it? The UNIX Haters Handbook was ill-conceived even
in 1994, and nearly 20 years later people are still parroting its
idiocies as insight.
 

Stephen Sprunk

Small niggle: it's often the latency rather than the bandwidth that
kills X applications.

Raw latency is rarely a problem; even the RTT of a trip around the
entire planet borders on acceptable performance.

However, if a network link is congested, packets get buffered and that
can increase the _effective_ latency by an order of magnitude or two.
And buffers have a finite size, so when they fill up packets start
getting dropped--and detecting and retransmitting those packets will
increase the effective latency by _another_ order of magnitude.

S
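
A back-of-envelope sketch of those orders of magnitude (the RTT figures are illustrative assumptions, not measurements): if every keystroke costs one round trip, the time for a 40-character line to appear scales directly with the effective RTT.

/* Hypothetical round-trip budgets: LAN, around the world, congested link. */
#include <stdio.h>

int main(void)
{
    const char  *label[]  = { "LAN", "around the world", "congested link" };
    const double rtt_ms[] = { 1.0, 300.0, 3000.0 };
    const int chars = 40;

    for (int i = 0; i < 3; i++)
        printf("%-17s RTT %6.1f ms -> %6.1f s to echo a %d-char line\n",
               label[i], rtt_ms[i], chars * rtt_ms[i] / 1000.0, chars);
    return 0;
}

At 1 ms the user never notices; at 300 ms the line takes about 12 seconds to appear; on a congested link it becomes the hammer-the-key-and-go-for-coffee experience described earlier in the thread.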
 

Joe keane

If I'm not mistaken, the X window system was developed with remote clients
in mind and in a time where remote clients were the norm. So, with that in
mind, I don't agree that X was fundamentally misdesigned.

Using the Andrew wm, it was helpful, because the machines were chronically
short on memory. Running your big apps on different machines [if
they're available] improved the experience quite a bit.

After we switched to X, I found that what people were using it for was to
run a remote xterm! Does that make any sense? Sending X protocol over
the wires when you could send VT52 protocol?

OT OT OT OT
 

Ben Bacarisse

Stephen Sprunk said:
Raw latency is rarely a problem; even the RTT of a trip around the
entire planet borders on acceptable performance.

X was designed when that was very far from true. You said "was/is" so I
thought you were covering the historical situation.
However, if a network link is congested, packets get buffered and that
can increase the _effective_ latency by an order of magnitude or two.
And buffers have a finite size, so when they fill up packets start
getting dropped--and detecting and retransmitting those packets will
increase the effective latency by _another_ order of magnitude.

It sounds like we agree.
 

Jorgen Grahn

On 02/08/2012 11:42 AM, Jorgen Grahn wrote:
...

If you haven't tried doing it on Windows yet, what is it you're
comparing Unix against when you say it's "easier"? I know there are things
other than Unix and Windows, but the significance of your statement
would be easier to evaluate if we knew what you were comparing Unix with.

With Windows, like I wrote. I base that on what I see as a Windows
user: none of the tools I use there are ones I could easily write
myself. On Unix, several of them are.

/Jorgen
 

Jorgen Grahn

What are the best methods you all have found for teaching yourselves how to code proficiently in C?
[...]
[...] get graphics working as fast as you
can. It's harder now than it was to get a simple character-based
raster that can be used for moving space invaders round the screen.

It's still easy on Unix, where you have the curses library.

{{Unix bigot mode|
Unix in general makes it easier to write programs which are small and
have a simple interface, yet are useful. I haven't tried it, but I
imagine the threshold is much higher on Windows.}}
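
To make the curses claim above concrete, a minimal sketch of that character-based raster (a toy of my own, built with something like cc invader.c -lcurses): it moves an '@' around the screen with the arrow keys, and quits on 'q'.

#include <curses.h>

int main(void)
{
    int y = 10, x = 10, ch;

    initscr();
    cbreak();
    noecho();
    keypad(stdscr, TRUE);   /* deliver KEY_UP etc. as single codes */
    curs_set(0);

    mvaddch(y, x, '@');
    refresh();

    while ((ch = getch()) != 'q') {
        switch (ch) {
        case KEY_UP:    if (y > 0)         y--; break;
        case KEY_DOWN:  if (y < LINES - 1) y++; break;
        case KEY_LEFT:  if (x > 0)         x--; break;
        case KEY_RIGHT: if (x < COLS - 1)  x++; break;
        }
        erase();
        mvaddch(y, x, '@');
        refresh();
    }

    endwin();
    return 0;
}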

simple filters that work in Unix work fine in Windows. You may even be
able to run the same code.

Yes, but you have no data to feed into them (you can't sort(1) a
Powerpoint) and you aren't, as a user, exposed to the idea of building
systems by chaining small tools and file formats together.

/Jorgen
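
The sort of small tool being contrasted here really can be tiny. A minimal sketch in portable C (a hypothetical 'upper' filter): it reads stdin and writes stdout, so on Unix it drops straight into a pipeline such as ./upper < notes.txt | sort | uniq -c, and the same source compiles unchanged on Windows even if the surrounding tool culture is different.

/* upper.c: the classic stdin-to-stdout filter shape. */
#include <stdio.h>
#include <ctype.h>

int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
        putchar(toupper(c));
    return 0;
}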
 

James Kuyper

With Windows, like I wrote. I base that on what I see as a Windows
user: none of the tools I use there are ones I could easily write
myself. On Unix, several of them are.

I'll agree with your observation, and your conclusion might be correct,
but your argument connecting the two doesn't hold up. The lack of
Windows programs with small, simple interfaces might not be because it's
hard to create such programs. To mention one alternative possibility
(which is inconsistent with my own prior experience with Windows - but
that's 20 years out of date), it might be so easy to create such
programs that as soon as developers get them working, they almost
immediately decide to add more features, so they're no longer small and
simple. Without actual recent experience with Windows development,
neither of us is qualified to speculate on how the developer's design
decisions might be shaped by the features of the Windows API.
 

Willem

Jorgen Grahn wrote:
) It's still easy on Unix, where you have the curses library.
)
) {{Unix bigot mode|
) Unix in general makes it easier to write programs which are small and
) have a simple interface, yet are useful. I haven't tried it, but I
) imagine the threshold is much higher on Windows.}}

Disagree. I come from Unix, and I'm now working in a Windows shop, as a
sort-of-admin, and I often write small programs with a simple interface to
get stuff done.

I mainly use Perl and C#, although most 'real' .NET programmers would
probably qualify my code as 'dirty hacks', as it contains one file, with
one class, and only 'static' functions.

So, basically, I write C# like it's ordinary C but with all those extra
neato library functions (LINQ ftw!)


SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
 

88888 Dihedral

On Tuesday, 14 February 2012 at 10:50:03 PM UTC+8, Willem wrote:
Jorgen Grahn wrote:
) It's still easy on Unix, where you have the curses library.
)
) {{Unix bigot mode|
) Unix in general makes it easier to write programs which are small and
) have a simple interface, yet are useful. I haven't tried it, but I
) imagine the threshold is much higher on Windows.}}

Disagree. I come from Unix, and I'm now working in a Windows shop, as a
sort-of-admin, and I often write small programs with a simple interface to
get stuff done.

I mainly use Perl and C#, although most 'real' .NET programmers would

That is good for a lot of .NET libraries installed under
the Windows OS.

The cross-platform requirement might not be necessary
for many Windows application programmers.
 

BartC

James Kuyper said:
Sure, but hunt-and-peck typists have just hunted for the character, they
shouldn't be uncertain about whether they hit the right one. Such

I usually type with two fingers but without having to look at the keyboard
too often. But this way I sometimes don't hit a key squarely (because my
fingers have to travel across many keys) and have to look at the screen to
see what's come out. (That also explains why I find typing C source code a
challenge!)

Thirty years ago with the 8-bit microprocessor systems I was building, I was
getting instant echoing to the screen, in text *or* graphic modes.

With machines now ten thousand times faster, a million times more memory,
and up to a hundred thousand times faster networking than the RS-232 I was
using, you're saying you can sometimes type a whole paragraph before the
first character is echoed?

In that case, what's gone wrong?

[Repost of something that seems to have vanished.]
 

Phil Carmody

Malcolm McLean said:
For some applications you do need to be able to send keystrokes to the
main logic cruncher. But usually you don't. Someplace somewhere you
need the logic to detect the keystroke, decode it, convert it to a
character stream, and display the text. Users like instantaneous
feedback for this. Also, it's not very processor heavy, and virtually
always the task is very standardised, just a case of entering text
into an edit box. So the obvious place to do this is in the terminal.
That's not a moronic design decision at all.

If I'm not mistaken, when I briefly enabled javascript for translate.google.com,
every single keypress would cause a network transaction. It wouldn't even wait
for me to reach whitespace or punctuation. Add a corporate VPN to the mix, and
you have a nightmare. NoScript was happy to save me from google's dumbarsery.
I believe their search boxes are the same (but I've never had JS enabled for
those pages). And google's not alone. The decision to process text entry purely
in the client is falling out of fashion in a big way.

Phil (big fan of the "don't send it until return's pressed" interfaces)
--
I'd argue that there is much evidence for the existence of a God.
Pics or it didn't happen.
-- Tom (/. uid 822)
 

Nomen Nescio

Phil (big fan of the "don't send it until return's pressed" interfaces)

First thing I've heard from you I agree with ;-)

Up with MVS, down with UNIX!
 
