* Juha Nieminen <[email protected]> [comp.lang.c]:
It's too overloaded a token: DOS/Windows paths,
Did you know that you can use '/' to separate paths in DOS/Windows
as well? (Well, at least with all the compilers I know of.)
The compiler doesn't matter here. Most Windows API functions accept
'/' nowadays (but not all). You're still better off using '\' on the
Windows platform, because there are still cases where '/' won't work.
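For what it's worth, here's a minimal C sketch of the point. The path is made
up, and whether the opens succeed obviously depends on the file existing; the
behavior described in the comments is what the C runtime and Win32 file APIs
normally do.

#include <stdio.h>

int main(void)
{
    /* Both calls normally refer to the same file on Windows: the C runtime
       and most Win32 functions (CreateFile and friends) accept '/' as a
       path separator.  The path itself is just an example, not a real file. */
    FILE *fwd  = fopen("C:/temp/example.txt", "r");
    FILE *back = fopen("C:\\temp\\example.txt", "r"); /* '\\' escapes the backslash */

    printf("forward slash: %s\n", fwd  ? "opened" : "failed");
    printf("backslash:     %s\n", back ? "opened" : "failed");

    if (fwd)  fclose(fwd);
    if (back) fclose(back);

    /* Where '/' still breaks down is mostly outside the file APIs proper:
       cmd.exe reads a leading '/' as a command switch, and paths using the
       \\?\ long-path prefix must use backslashes. */
    return 0;
}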
But then, it was Microsoft's awkward decision to use '\' as a path
separator while the rest of the world was using '/'. It probably came
back on them the instant they built web functionality (browser, server)
into their OS.
My understanding is that early versions of MS-DOS didn't support
directories (or maybe it was CP/M). By the time they decided to add
directories, they were already using '/' as an option delimiter (like
'-' in Unix).
Correct. MS-DOS 2.x introduced directories. Before that, MS-DOS, like
the other 8-bit OSen (FLEX, FLEX-09, CP/M, etc.), just had a flat list
of file names, which was OK because few of its users could afford disks
big enough for the lack of directories to be a problem.
It's about to happen again. In the mid-80s we reached the scaling limits
of flat file lists and went to hierarchical directories. Now we're
approaching the scaling limits of hierarchical directories. I haven't
seen a modern system that doesn't have some ludicrously long full
pathnames, deeply nested directory structures, and humans struggling to
navigate the wilderness of files and keep track of which files are for
what purpose. Applications come with numerous files; on Windows systems
these tend to be bundled together in Program Files (with some libraries
elsewhere, or else duplicated wastefully); on Unix systems all kinds of
config and other files get scattered to the four winds: /usr, /etc,
/bin, and ~/.appname. Documents might end up anywhere, at least on
Windows machines.
Another tension is between organizing by program and organizing by
higher-level task. Say someone's throwing together a report and
presentation. The report is made in Word, the presentation in
PowerPoint, both using data from an Excel spreadsheet, and some of the
graphs were copied, pasted into Photoshop, prettied up, and then
embedded as .png files. Oh, and the whole thing is also HTMLized using
FrontPage and posted to the company LAN, the PowerPoint presentation is
given at the Friday afternoon general staff meeting, and the printed
report is handed to the boss afterward in the hall outside the
conference room. The people who couldn't attend the meeting in person
phone-conferenced with those who did while consulting the copy on the
LAN's internal website.
So you've got Photoshopped .pngs along with .xls, .doc, .ppt, and other
files that logically are part of one project. The user would ideally
like to file them as "2010 2nd quarter budget report" or whatever, under
"2010 budget reports", under "budget reports"; the programs would all
like to keep track of their own files, docs with docs, xlses with xlses;
and Windows itself would dearly love it if you'd just shove the whole
mess into "My Documents" along with every single other file the user
ever creates, thus turning the clock back to 1982 and flat file lists
again.
Unix users get nearly analogous treatment, with ~/ substituting for
"My Documents", the GIMP for Photoshop, and other more or less
straightforward substitutions.
Add to that how every kind of browser, file sharing tool, or similar
client for downloadable content ends up with its own preferred
directories for storing received files, plus iTunes, plus various sync
folders for your phone and laptop, and so on, and so forth, and we're
rapidly heading straight back into file management hell.
What's our savior going to be? I'm beginning to suspect we'll soon see a
wave of new file-management tools, appearing at first as third-party
"knowledge manager" programs and eventually superseding Explorer-style
shells just as those superseded the old C:\ prompt. These will provide
their own nonhierarchical, link-based file management, probably with the
ability to easily convert any subnetwork of stuff into web pages, or
even to act as a web site themselves: a locally hosted web app that can
easily adapt to make some stuff publishable remotely. Hyperlinks will
creep into everything and become easy to create via drag and drop; no
more copying and pasting (or worse, memorizing and typing) long
filenames.
We already see hints of this with the big commercial websites. When was
the last time you saw a human-readable URL at a major news or corporate
site instead of something like
http://site.example.com/html/content/cms/1.45.907/08102010/58120156-18856-ac78f9d107bb8088.htm
or similar? Occasionally you might see something like that but with a
-report-on-iraq-casualties just before the .htm, but dollars to
doughnuts you can delete that from the URL or replace it with
-report-on-stolen-doughnuts and it will still fetch the same page.
Filenames and paths are becoming a layer increasingly managed by
automation instead of by hand: today by big websites' CMSes, and soon,
I expect, by Windows 8 and Gnome 2015. The user won't type paths or even
drill down through folders; he'll follow hyperlinks.
And when he needs to make a nonlocal jump he'll use search: faster, more
incremental, and better search than we have so far.
Vista has already raised the bar. I use some Vista machines and almost
never click Programs after Start, unlike when I'm forced to use an older
XP box. Programs is slow, balky, unwieldy, and has shoddy ergonomics. It
worked at first but didn't scale, despite being hierarchical. Vista's
Start menu has a nice fat incremental search box at the bottom just
begging for your input, and you can find something like Calculator much
faster by typing "calc" into it and then clicking one of the few
remaining items above it than by clicking Programs, then Accessories,
etc., and waiting for each level of menu to unfold. Vista also makes it
easy to tag photographs and search by tag, just by typing e.g.
tag:(budget OR finance) into an Explorer window's upper-right-corner
search box.
Two things still scale poorly: the searches aren't especially fast,
particularly on large tagged photo libraries, and photo tagging itself
is done one tag at a time, one image at a time, with poor support for
copy, paste, or mass tagging (it would be nice to be able to select a
large group of images and assign a tag to the whole lot with one
command).
The knowledge managers can beat Explorer by providing their own facility
for associating any imported object with metadata, including tags, and
using suitable lightweight local database software to make it fast to
search. This allows tagging images that aren't JPEG or TIFF (and so
don't have EXIF metadata) and tagging non-images, and doing so in a
uniform manner. (EXIF tags and other document-format-supported metadata,
like mp3 ID3 tags, could be imported when a file is first seen by the
system.) Metatagging is most important, of course, for nonverbal data
such as audio, still pictures, and video. (Advanced tools could
potentially attempt voice recognition on audio and OCR on stills and
video to extract whatever verbal content is in there, but fuzzy matching
would have to be used for this to be searchable, I expect, and even then
an unlabeled photo of an apple would not be found by searching for
"apple" without solving some hard problems in AI and machine vision.)
So your documents become much more searchable and you can link them into
a web of interrelatedness. Something like this almost HAS to replace
Explorer-style shells in the very near future, I'm thinking, if only
because of the stupendous explosion of user-generated audiovisual content
that has to be searchable and the general scaling problems happening with
users' documents nowadays.
As for what any of this has to do with the original topic: very little. I
guess that's Usenet for you. On the other hand all of this will have to
be implemented in some language, and it's a sure bet that it's gonna wind
up involving C code, C++ code, and/or code that runs on the JVM.