Elliott Cable
So, I’ve a problem. I’m using ncurses (or possibly not, might just
`STDIN.read(1)` or something, we’ll see) to grab byte–level input from
the terminal. Purpose being to catch and handle control characters in a
text mode application, such as “meta–3” or “control–c.”
Currently, I have a really ugly method that manually parses UTF-8 and
ASCII directly in my Ruby source; however, this is extremely slow, and
seems quite a bit like overkill. After all, with 1.9’s wonderfully
robust `Encoding` support, it seems silly to duplicate all that
byte–parsing work that *must* be going on somewhere in Ruby already.
Here’s my current method (forgive the horrendous code, please! I fully
intended to get rid of it right from the start, so…):
http://github.com/elliottcable/nfoi...141912053fe5ae6/lib/nfoiled/window.rb#L80-175
The goal is to devise some method by which I can:
1) Determine whether or not an `Array` of so–far–received bytes is, yet,
a valid `String` of a given `Encoding` (I can get the intended input
`Encoding` by way of a simple `Encoding.find('locale')`, so we’re always
in–the–know as to which `Encoding` the incoming bytes are intended to
be)
2) Once we know the `Array` instance containing the relevant bytes
pertains to a valid `String`, convert that into a `String` and further
store/cache/process it in some way.
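For what it’s worth, here’s a minimal sketch of those two steps, assuming Ruby 1.9+: pack the byte `Array` into a binary `String`, retag it with the target `Encoding` via `force_encoding`, and let `valid_encoding?` decide. (`try_decode` is my own name for the helper, not anything built in.)

```ruby
# Sketch: turn an Array of bytes into a String of the given Encoding,
# or nil if the bytes don't (yet) form a valid sequence.
def try_decode(bytes, enc)
  # pack('C*') yields an ASCII-8BIT string; force_encoding only retags it
  str = bytes.pack('C*').force_encoding(enc)
  str.valid_encoding? ? str : nil
end

try_decode([0xC3], Encoding::UTF_8)       # incomplete sequence => nil
try_decode([0xC3, 0xA9], Encoding::UTF_8) # => "é"
```

The nice part is that all the actual byte–parsing happens inside Ruby’s encoding machinery, not in your own source.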
Yes, this means that the `String` will almost always be one character
long; I am uninterested in parsing lengths of characters out of the
input stream, I can deal with that later. At the moment, I very simply
want to ensure that I can retrieve, in real time, the latest character
entered at the terminal, as a `String`, in any `Encoding`.
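Wrapping that idea in the read loop itself might look something like the following — a sketch, assuming a byte-at-a-time `IO` source; `each_char_from` is a hypothetical name of mine, and note that a byte sequence which can *never* become valid would stall the buffer forever, which a real implementation would want to guard against:

```ruby
require 'stringio' # only needed for the usage example below

# Hypothetical helper: pull one byte at a time from `io`, buffering until
# the bytes form a valid character in `enc`, then yield it as a String.
def each_char_from(io, enc = Encoding.find('locale'))
  pending = []
  while (byte = io.getbyte)
    pending << byte
    str = pending.pack('C*').force_encoding(enc)
    if str.valid_encoding?
      yield str
      pending.clear
    end
  end
end

# Usage (StringIO standing in for STDIN):
chars = []
each_char_from(StringIO.new("aé"), Encoding::UTF_8) { |c| chars << c }
chars # => ["a", "é"]
```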
Any help would be much appreciated; I’ve been banging my head against
this on–and–off for weeks! (-: