Ivan Shmakov
[Cross-posting to for the reasons below.
Feel free to drop if inappropriate.]
[...] I want to use a tar file like an IBM partitioned dataset,
i. e., a file with multiple members, from a C program.

There're plenty of data formats allowing for such a use. Did you
consider SQLite [1] or HDF5 [2]? Or even GDBM [3]?
If it's octet sequences instead, SQLite BLOBs [4] could be the
way to go (see the sketch after the references).
[1] http://sqlite.org/
[2] http://www.hdfgroup.org/HDF5/
[3] http://www.gnu.org.ua/software/gdbm/
[4] http://sqlite.org/c3ref/blob.html
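
For illustration, here is a minimal C sketch of reading one
member through the incremental BLOB interface [4]. "main" is
SQLite's default database alias; the "file" table and "data"
column are assumed names for this sketch, not anything fixed by
the thread:

/* Sketch only: read a whole member via incremental BLOB I/O [4].
 * The "file" table and "data" column names are illustrative.  */
#include <sqlite3.h>
#include <stdlib.h>

static char *
member_read (sqlite3 *db, sqlite3_int64 rowid, int *sizep)
{
  sqlite3_blob *blob;
  char *buf;
  int size;

  if (sqlite3_blob_open (db, "main", "file", "data",
                         rowid, 0 /* read-only */, &blob)
      != SQLITE_OK)
    return NULL;
  size = sqlite3_blob_bytes (blob);
  buf = malloc (size);
  if (buf != NULL
      && sqlite3_blob_read (blob, buf, size, 0) != SQLITE_OK)
    {
      free (buf);
      buf = NULL;
    }
  sqlite3_blob_close (blob);
  if (buf != NULL && sizep != NULL)
    *sizep = size;
  return buf;
}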
Thanks, Ivan, that'll all work, too. The data's more like TLOBs
(text large objects), with each "record" a small program, or more
often plain English text, in its own file.
SQLite seems to fit such a description nicely. Consider, e. g.:

CREATE TABLE "file" (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    text    TEXT NOT NULL);

-- ensure that names are unique
CREATE UNIQUE INDEX "file-unique"
    ON "file" ("name");

-- @file-get name
SELECT "text" FROM "file"
    WHERE "name" = ?1;

-- @file-put name text
INSERT INTO "file" ("name", "text")
    VALUES (?1, ?2);

-- @file-replace name text
UPDATE "file"
    SET "text" = ?2
    WHERE "name" = ?1;
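
For instance, a minimal C sketch of driving the @file-get query
above with a prepared statement; error handling is abbreviated,
and nothing beyond the schema just shown is assumed:

/* Sketch only: run the @file-get query with a prepared
 * statement and print the member's text to stdout.  */
#include <sqlite3.h>
#include <stdio.h>

static int
file_get (sqlite3 *db, const char *name)
{
  static const char sql[]
    = "SELECT \"text\" FROM \"file\" WHERE \"name\" = ?1;";
  sqlite3_stmt *st;
  int rc;

  rc = sqlite3_prepare_v2 (db, sql, -1, &st, NULL);
  if (rc != SQLITE_OK)
    return rc;
  sqlite3_bind_text (st, 1, name, -1, SQLITE_STATIC);
  while ((rc = sqlite3_step (st)) == SQLITE_ROW)
    fputs ((const char *) sqlite3_column_text (st, 0), stdout);
  sqlite3_finalize (st);
  return (rc == SQLITE_DONE) ? SQLITE_OK : rc;
}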
AIUI, SQLite has strong support for static linking, even at the
"source level": the whole library ships as a single-file
"amalgamation" (sqlite3.c) that can be compiled directly into the
application, which could be important for one wishing to keep the
number of dependencies low.
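
E. g., on a typical GNU/Linux system, with the amalgamation
sources sitting next to the application (file names here are
illustrative), the build could be as simple as:

    $ cc -O2 -o app app.c sqlite3.c -lpthread -ldl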
Then some set of those is collected and assembled. A first small
test application (just to exercise the code) is listings: e. g.,
visit www.forkosh.com and click "Alps" under Sample Code. (I'd
give you a direct deep link, but you'll see it's a really long
constructed link, passing lots of query_string attributes on to
the CGI program, under construction, that I've been talking
about.)
The more important application will be algorithmically collecting
snippets of boilerplate text and constructing complete documents
"according to spec".
The above somehow reminds me of XML (the model, if not the
representation), and the associated "tools": XInclude, XPath,
XSLT and XQuery. And there's Fast Infoset for a space- and
time-efficient XML representation, BTW.
The use of XML to encode the structure of the data (and code)
being stored could bring a level of consistency, but depending
on the task, it may be too much pain for too low gain.
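
For a taste of that model, an assembled document could pull the
stored snippets in by reference with XInclude; a purely
illustrative sketch, with made-up file names:

<?xml version="1.0"?>
<document xmlns:xi="http://www.w3.org/2001/XInclude">
  <!-- boilerplate paragraphs included by reference -->
  <xi:include href="snippets/intro.xml"/>
  <xi:include href="snippets/terms.xml"/>
  <xi:include href="snippets/closing.xml"/>
</document>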