The Natural Philosopher
AlmostBob said: All I do is this:
SELECT id FROM table;
print "<img src=url/to/$id.jpg>";
Compared to your way:
- Simpler
- No need to start new php scripts to output raw binary stream for
every image
- No sockets
- No need to read heavy binary BLOB from DB
- No chance for possible cache attacks in MySQL, PHP, filesystem or
Apache
I don't want to sound religious, but I think my way is much better.
There is no better: it depends on the requirements.
With your way there is no way to protect the image directory from
random downloads, for example.
In my case the user may have far greater access than the general
public, and have access to internal data - like plans, drawings and
specifications.
I don't want script kiddies stealing vital info: putting the images in
a database is one giant leap in that sense.
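To give a flavour of what I mean by a gated image script, here is a
rough sketch only - the table name, column names, session flag and
credentials are placeholders, not my actual code:

<?php
// image.php?id=123 - only serve the image if the session says the user may.
// The 'images' table, 'can_view_drawings' flag and credentials below are
// invented for the example.
session_start();
if (empty($_SESSION['can_view_drawings'])) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
$id   = (int) $_GET['id'];
$db   = new mysqli('localhost', 'webuser', 'secret', 'corp');
$stmt = $db->prepare('SELECT mime, data FROM images WHERE id = ?');
$stmt->bind_param('i', $id);
$stmt->execute();
$stmt->bind_result($mime, $data);
if ($stmt->fetch()) {
    header('Content-Type: ' . $mime);
    echo $data;          // stream the BLOB straight to the browser
} else {
    header('HTTP/1.1 404 Not Found');
}

None of that is possible if the images sit in a world-readable
directory and the browser fetches them directly.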
Execution speed and efficiency are only one consideration among many,
many others.
In my case the above, plus a general requirement to try and get all
important corporate data into the database, under one backup regime,
were more significant. I especially did NOT want user-accessible image
files that might get deleted by accident. I could protect the database
area by making it accessible only to root or the mysql daemon: directly
accessible download areas had to be at least readable, and if uploaded
to, writeable, under the permissions the web server and PHP ran at.
In practice, at moderate loads the download speeds are far more
dominant than CPU or RAM limitations. And indeed the ability to make a
special download script that re-sizes the images on the fly turned out
to be a better way to go than storing thumbnails of varying sizes. One
trades disk space for processing overhead.
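As a sketch of that trade, something along these lines using PHP's GD
functions - assume $data already holds the full-size JPEG, fetched from
the database as above, and the requested width arrives in the query
string:

<?php
// thumb.php?id=123&w=200 - resize on the fly rather than storing
// pre-made thumbnails of every size.
$w     = max(1, (int) $_GET['w']);
$src   = imagecreatefromstring($data);
$ratio = $w / imagesx($src);
$h     = max(1, (int) round(imagesy($src) * $ratio));
$dst   = imagecreatetruecolor($w, $h);
imagecopyresampled($dst, $src, 0, 0, 0, 0, $w, $h,
                   imagesx($src), imagesy($src));
header('Content-Type: image/jpeg');
imagejpeg($dst, null, 85);   // CPU cost paid instead of extra disk space
imagedestroy($src);
imagedestroy($dst);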
Having been a practicing engineer all my working life, it still amazes
me that people will always come up with what amounts to a religious
statement about any particular implementation - that it is universally
'better'.
If that were the case, it would be universally adopted instantly.
Jerry has (for once) made an extremely valid point about directory sizes
as well. Databases are far better at finding things quickly in large
amounts of data: far better than a crude directory search. Once the
overhead in scanning the directory exceeds the extra download
efficiency, you are overall on a loser with flat files.
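Roughly speaking, the comparison is between an indexed lookup and
reading the directory entry by entry - the table, column and path names
below are invented purely for illustration:

<?php
// Finding one image among, say, 100,000: the database walks an index,
// while a flat directory listing has to be read entry by entry.
$stmt = $db->prepare('SELECT id FROM images WHERE filename = ?'); // indexed
$stmt->bind_param('s', $name);
$stmt->execute();

// versus scanning the directory:
foreach (scandir('/var/www/images') as $file) {  // reads every entry
    if ($file === $name) {
        break;                                   // found it, eventually
    }
}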
AND if you run into CPU or RAM limitations, it's a lot easier to - say -
move your database to a honking new machine, or upgrade the one you
have, than to completely re-write all your applications to use a
database when they used to use flat files.
I am NOT claiming that a database is the 'right' answer in all cases,
just pointing out that it may be a decision you want to make carefully,
as it is somewhat hard to change later on, and in most cases the extra
overhead of using it is more than compensated for by the benefits,
particularly in access control.
Which was the primary concern of the OP.