Isn't OpenGL way overkill for 2D? Or were you doing 3D stuff and textures,
etc.? It would seem that all you'd gain with your approach is cross-platform
capability, but in doing so, creating a dependence on another monstrosity.
To me, it seems like just abstracting some platform primitives (or
something a bit higher-level) would be much less work than even just
figuring out OpenGL's complexities?
I mostly do 3D stuff, but OpenGL works pretty well for both 2D and 3D.
for many non-trivial tasks, it is still much easier overall than drawing
pixel values into a buffer (although a pixel buffer gives a lot more
control, and for some tasks is easier to use and more capable than
shaders).
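for contrast, the pixel-buffer approach amounts to just writing values into
an array. a minimal sketch (the `Framebuffer` struct and `fb_fill_rect` are
hypothetical names, not from any particular codebase):

```c
#include <stdint.h>

/* hypothetical software framebuffer: 32-bit pixels, row-major */
typedef struct {
    int width, height;
    uint32_t *pixels;
} Framebuffer;

/* fill an axis-aligned rectangle by writing pixel values directly;
 * clipping first keeps the inner loops inside the buffer */
static void fb_fill_rect(Framebuffer *fb, int x0, int y0, int w, int h,
                         uint32_t color)
{
    int x1 = x0 + w, y1 = y0 + h;
    if (x0 < 0) x0 = 0;
    if (y0 < 0) y0 = 0;
    if (x1 > fb->width)  x1 = fb->width;
    if (y1 > fb->height) y1 = fb->height;
    for (int y = y0; y < y1; y++)
        for (int x = x0; x < x1; x++)
            fb->pixels[y * fb->width + x] = color;
}
```

this gives exact per-pixel control, at the cost of doing all the clipping,
blending, and scaling yourself that OpenGL would otherwise handle.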
well, except this is tied to Linux or similar.
technically, this is inside of a game, where the GUI would be integrated
with the delta-protocol otherwise used for scene-graph delta messages
(entity movements, ...).
theoretically, a GUI is basically just a small scene-graph with event
messages thrown in, but the specifics get a little less obvious.
to some extent I had been using console-style UIs (via the in-program
console), and have in a few cases implemented a subset of the VT100
escape-sequences, but thus far they are mostly just used for changing
colors and other things. also, unlike a real console, this system
doesn't generally (by default) have a freely movable cursor, although my
text-editor interface did go in this direction (by disabling/ignoring
the usual command-entry prompt, and setting up another faux cursor).
note: the console interface is itself drawn using OpenGL (and uses an
8x8 bitmap font).
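the color-changing escapes mentioned above are just short byte sequences
embedded in the text stream. a minimal sketch of emitting VT100/ANSI SGR
color codes (the function names here are made up for illustration):

```c
#include <stdio.h>

/* emit a VT100 SGR escape to select a foreground color (0..7 maps
 * to SGR codes 30..37); returns the number of chars written */
static int vt_set_fg(char *dst, size_t n, int color)
{
    return snprintf(dst, n, "\x1b[%dm", 30 + (color & 7));
}

/* emit the SGR reset sequence (restore default attributes) */
static int vt_reset(char *dst, size_t n)
{
    return snprintf(dst, n, "\x1b[0m");
}
```

a console renderer then scans the text for ESC '[' ... 'm' sequences,
updates the current color state, and draws the remaining glyphs with the
bitmap font.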
xlib? Motif?
my initial API was designed similarly to OpenGL, namely:
begin/end pairs and get/set functions (using handles), just with more
strings (I took API-design ideas mostly from OpenGL and the Win32 GDI,
with a few misc ideas from GTK). this turned out to kind-of suck.
basically, begin/end pairs were used to submit widgets or forms, and
typically forms or menus were modified by entirely replacing them (by
beginning a new form using the old one's ID-number), though some
properties could be changed via get/set calls, which would be passed a
widget-handle.
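a rough sketch of what that style of API looks like (all names and the
tiny backing implementation here are hypothetical, just to show the
begin/end-replaces-the-form and get/set-by-handle pattern described above):

```c
#include <stdio.h>

#define MAX_WIDGETS 64

typedef struct {
    int  form;               /* owning form ID, -1 if discarded */
    char type[16], text[32];
} Widget;

static Widget widgets[MAX_WIDGETS];
static int n_widgets = 0;
static int cur_form  = -1;

/* begin (re)defining a form; any widgets from a prior definition with
 * the same ID are discarded (forms are replaced wholesale) */
void guiBeginForm(int form_id)
{
    for (int i = 0; i < n_widgets; i++)
        if (widgets[i].form == form_id)
            widgets[i].form = -1;
    cur_form = form_id;
}

/* submit a widget to the current form; returns its handle */
int guiWidget(const char *type, const char *text)
{
    Widget *w = &widgets[n_widgets];
    w->form = cur_form;
    snprintf(w->type, sizeof(w->type), "%s", type);
    snprintf(w->text, sizeof(w->text), "%s", text);
    return n_widgets++;
}

void guiEndForm(void) { cur_form = -1; }

/* get/set calls, passed a widget-handle */
const char *guiGetText(int h) { return widgets[h].text; }
void guiSetText(int h, const char *text)
{
    snprintf(widgets[h].text, sizeof(widgets[h].text), "%s", text);
}

/* whether a handle still refers to a live widget */
int guiWidgetLive(int h) { return widgets[h].form >= 0; }
```

the awkward part (and likely part of why it "kind-of sucked") is that
replacing a form invalidates all the old widget handles.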
my more recent design is based on the idea that widgets are
basically structs (similar to "entities") which can be linked together,
and linked onto "containers" or "surfaces".
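structurally, this could look something like the following (a sketch under
my own assumptions; the field layout and helper names are invented):

```c
#include <stddef.h>

typedef struct Widget Widget;

/* widgets as plain structs, linked together and onto containers */
struct Widget {
    const char *type;
    int x, y, w, h;
    Widget *next;    /* next sibling on the same container */
    Widget *child;   /* first child, if this widget is a container/surface */
};

/* link a widget onto the front of a container's child chain */
static void widget_attach(Widget *cont, Widget *w)
{
    w->next = cont->child;
    cont->child = w;
}

/* count the widgets linked onto a container (direct children only) */
static int widget_count(const Widget *cont)
{
    int n = 0;
    for (const Widget *w = cont->child; w; w = w->next)
        n++;
    return n;
}
```

this makes a GUI literally a small scene-graph: drawing or hit-testing is
just a recursive walk over the child/sibling links.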
in the protocol, it will probably all just boil down to delta
messages and handles (so, any changed variables in the
server-side structure are relayed as delta messages across to the other
end, where they update the client-side version of the widget).
likewise, if the user interacts with the widget, a message will be sent
back indicating this.
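the delta mechanism described above could be sketched roughly like this
(again hypothetical: a shadow copy tracks what was last sent, and only
changed fields cross the wire as (handle, field, value) messages):

```c
/* indices of the per-widget variables shared across the protocol */
enum { WF_X, WF_Y, WF_W, WF_H, WF_COUNT };

/* server- and client-side copies of a widget's variables */
typedef struct { int f[WF_COUNT]; } WidgetState;

/* one delta message: which widget, which field, new value */
typedef struct { int handle, field, value; } DeltaMsg;

/* server side: diff current state against the last-sent shadow copy,
 * emitting one delta message per changed field; returns message count */
static int widget_emit_deltas(int handle, const WidgetState *cur,
                              WidgetState *shadow, DeltaMsg *out, int max)
{
    int n = 0;
    for (int i = 0; i < WF_COUNT && n < max; i++) {
        if (cur->f[i] != shadow->f[i]) {
            out[n].handle = handle;
            out[n].field  = i;
            out[n].value  = cur->f[i];
            shadow->f[i]  = cur->f[i];
            n++;
        }
    }
    return n;
}

/* client side: apply a received delta to the local copy */
static void widget_apply_delta(WidgetState *st, const DeltaMsg *m)
{
    st->f[m->field] = m->value;
}
```

the same scheme works unmodified for entity movement deltas, which is the
appeal of unifying the GUI with the scene-graph protocol.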
I haven't yet gotten to this though...