Matt Kruse wrote (2005:10:16):
No one can be an expert in everything.
It is not necessary to be an expert in order to understand browser
scripting.
Rather than considering yourself and your job position, consider
a one-man 'webmaster' for a small company who must run the web
server, the external web site, the internal intranet, code in
HTML and PHP, do some database work, write some javascript,
create some images, etc.
You are describing someone in an extremely responsible position.
Responsible for web site, database and network security (in relation to
external connections/access), responsible for the public face, and
consequent credibility, of the organisation, and potentially responsible
for a significant revenue source, etc.
You are also describing someone who does not need to use javascript at
all; it is not compulsory.
Very few people in the real world would be an expert in all
of those areas,
And most of them will realise that they are not qualified to do that job
and so not apply for it.
or have anywhere close to enough time to study each
area in-depth.
In a responsible position, where the potential to do harm exceeds the
individual's wages, not having the skills to do the job effectively is
seriously unprofessional. And in the event of finding oneself
manoeuvred into that position without the skills, it is seriously
unprofessional not to make the time to learn the required skills. But
even so, those skills do not necessarily need to include javascript at
all, as a web site can get by without it easily enough.
For many, the best hope is to keep things working
and _slowly_ advance knowledge in each of those areas.
To admit that advancing knowledge in each area is advantageous is to
question the rationale for necessarily doing so slowly.
I'd like to know more about your work environment.
It sounds very different from any that I have worked in or
worked with.
So nobody manages the use of the available resources to take best
advantage of the skills they have?
In the places I have worked or worked with, and the people I've
known, I've never yet seen someone who was simply a "javascript
programmer".
Officially I am a Java and javascript programmer, but I have not written
a single line of Java in the last year, and am unlikely to be able to do
so in the next year.
In almost every case I've witnessed, javascript has been
an 'add-on' technology that web developers are expected to
use, but that employers almost never devote any time or
training towards.
It is fairly obvious from job adverts that many organisations have
little idea of which skills, technical and otherwise, they should expect
to hire in one individual. When someone advertises for a 'web designer
with DHTML' they are asking for a graphic designer and a programmer: two
skills that would rarely be available in equal measure in the same
individual.
Again, I think your current work environment - whatever it
is - does not represent the norm for many, many developers
out there.
That may well be true; I have always programmed for software houses
(mostly working on e-commerce web sites and web applications, as I
presently am). Software houses are, however, very interested in the
efficient and cost-effective creation of reliable and easily maintained
software.
An organisation such as a web design agency may be very differently
managed, but when they find themselves in the business of writing
software maybe they should be looking at how the business of writing
software is practised professionally.
Certainly not for those developers who aren't even _in_
a work environment.
The degree to which the attitudes and behaviour of individual amateur
developers may be regarded as professional is not that important.
Good for you. You can be picky.
It is not a question of being "picky"; hiring someone to write browser
scripts who cannot write browser scripts would be an expensive mistake.
Companies cannot verify that the people they hire possess the skills
they purport to have? I think you will find that they can.
Hell, many web sites are made by _volunteers_! It could be
for their church, their local charity, their boy scout troop,
or whatever. These people may not even be programmers. If
they want some cool javascript functionality on their site,
you would probably tell them "too bad, you can't."
Would I? As I recall, I have put considerable effort into demonstrating
that much of what you are calling "cool javascript functionality" can
be achieved without introducing a javascript dependency, and so without
having a negative impact on the viability of any site that uses them.
I may in practice make it evident that any given individual doesn't know
enough to handle the issues involved in using javascript on a web site,
but I tend to suggest that the answer to that problem is acquiring the
knowledge.
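The dependency-free approach being described can be sketched in
miniature. The following is a hypothetical illustration only (the
function names are invented for the example, not taken from the thread
or any library): the page is written to work as plain HTML, and script
attaches extra behaviour only after feature-testing every capability it
relies on.

```javascript
// Low-level feature test: does the environment provide all of the
// named members on the given host object? Returns false rather than
// erroring when the host object itself is absent.
function supportsFeatures(host, names) {
    if (!host) { return false; }
    for (var i = 0; i < names.length; i++) {
        if (typeof host[names[i]] === 'undefined') { return false; }
    }
    return true;
}

// An enhancement that only engages when its dependencies are present.
// `doc` stands in for the browser's document object; in an environment
// lacking the required methods the function does nothing, and the
// underlying HTML continues to work unscripted.
function enhanceNavigation(doc) {
    if (!supportsFeatures(doc, ['getElementById', 'createElement'])) {
        return false; // degrade silently: no javascript dependency
    }
    // ... attach the scripted behaviour here ...
    return true;
}
```

Because every dependency is tested for rather than assumed, a browser
or user configuration that lacks a capability simply never receives the
enhancement, instead of receiving a broken page.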
Whereas I would tell them, "sure, you can do that.
Download X and Y and put Z in your HTML, and it will
work."
And you use a definition of 'work' that falls short. You introduce
dependencies and penalties without properly explaining them, and then
deny their significance when others raise the issues. In short, you give
someone who might be capable of doing better the ability to do harm
without necessarily being aware that they are doing so.
Whose approach do you think _they_ prefer?
When people are doing harm, to their employer, or their employer's
clients, or their own clients, or just some organisation that they have
volunteered to 'help', then they are usually happier to be unaware that
they are doing harm (assuming some personal moral integrity).
That's great, if it solves your specific problem. But is it
general enough to be used by thousands of other people around
the world?
No, of course not. It is general enough to be re-used in any context
within the application that may require a date picker.
If not, then you've solved 1 problem when you could have
solved 1,000 problems with a little more work.
I solved the problem in the context of the problem. Any additional work
solving other people's problems would represent a dishonest use of my
employer's resources.
A few extra k doesn't matter in most cases.
I'm never convinced by theoretical savings, whether it be
reducing code from 10k to 9k, or by speeding up a code block
by 5ms. There comes a point where the "savings" do not justify
the time invested to achieve them.
A large part of the point of the code design/implementation strategy I
have been promoting is that there is no extra time involved in
implementing it. Indeed, because code re-use is maximised the time spent
actually writing code is minimised.
Code bloat only matters if people care.
People care.
If you have 200k of javascript, I agree, people might actually
care. If you have a 30k lib that is cached and used repeatedly
on a site,
And as soon as you have 10 libraries providing separate functionality
you have 10 opportunities for essentially the same code to be appearing
in more than one of them.
I think you would have a hard time finding anyone who
realistically cared.
My CEO maintains that fast web applications sell better, and because he
cares all of the management cares. So, no, I don't have to look far to
find someone who cares.
a) Not everyone has the skill to develop low-level reusable
functions such as you have done.
But do the people who don't have the skill also not have the potential
to acquire that skill?
b) People such as yourself who write these functions often
refuse to document and share them.
A layered design based on low-level re-usable components is a design
pattern not a collection of specific code.
c) Therefore, people without the time or skill to write those
functions cannot use your approach.
They can once they have acquired the knowledge to do so.
If I were you - someone repeatedly advocating reusable
low-level functions being combined to achieve a larger
task - then I would certainly be documenting those
functions and making them available to other lesser-skilled
javascript developers, so that everyone could benefit from
my approach. Why don't you?
Because I understand that a code design strategy is independent of
actual code, and that the actual components are amenable to many styles
of implementation, with no good reason to believe that any individual
implementation would be automatically superior to one in another style.
What developers need in order to exploit the pattern is an understanding
of the idea and some examples of individual components.
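By way of illustration only (these particular components are invented
for the example, not taken from any library or from the thread), here
are two low-level reusable components and one higher-level function
composed from them, in the problem area of a date picker:

```javascript
// Low-level component: left-pad a number with zeros to a given width.
// Reusable anywhere a fixed-width numeric string is needed.
function padLeft(num, width) {
    var s = String(num);
    while (s.length < width) {
        s = '0' + s;
    }
    return s;
}

// Low-level component: force a number into an inclusive range.
function clamp(num, min, max) {
    return num < min ? min : (num > max ? max : num);
}

// Higher-level task built from the low-level components: format a
// date as an ISO-style YYYY-MM-DD string, as a date picker might need.
function formatIsoDate(year, month, day) {
    return padLeft(year, 4) + '-' +
           padLeft(clamp(month, 1, 12), 2) + '-' +
           padLeft(clamp(day, 1, 31), 2);
}
```

Here formatIsoDate(2005, 10, 16) returns '2005-10-16', while padLeft
and clamp remain available, unchanged, to any other part of the
application that needs them; it is the layering, not these specific
functions, that constitutes the pattern.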
I do have my own low-level reusable components for various
tasks. Many work very well, some could use some more
tweaking. My libraries use these reusable components and
package them into usable form for developers. If they want
to break it down and write things on their own using the
low-level components, that's fine. My approach is to package
them up in usable form so they don't _need_ to do all that
work.
Which is exactly what I have been saying. As soon as you put two such
libraries together, any code they both employ is needlessly repeated,
with the problem increasing as additional libraries are added.
I am constantly looking for the best low-level reusable
functions to perform very specific tasks. In fact, I've
asked for assistance several times in this group in writing
some very specific, reusable low-level functions, and people
aren't really interested in the topic.
You are mistaking not being interested in the topic for not being
interested in helping you. You have to remember that everyone knows what
you are going to be doing with any information/examples you are given.
It is not surprising that people should be reluctant to aid you in
making the Internet worse than it could be.
I would think that the regulars here would _love_ to work
together to find the best way to solve specific, low-level
tasks and document them in a repository of reusable code.
You haven't noticed the Usenet archives then?
People such as yourself - who _ADVOCATE_ such a method of
development - do not share your solutions so that others
may benefit.
I put them in a public place; if you don't care to look, that is your
choice.
It's a phrase whose meaning should be obvious.
It reads like marketing-speak and so would be assumed to have no real
meaning.
If you've added value to something, you've increased a
positive trait or reduced a negative one. The end result
is worth more, or is superior to the original state. Maybe
you increased productivity, made something more robust, made
something easier to maintain, saved
money, etc, etc.
You used "added value" in the context of someone adopting one of your
"Best Practices" without understanding what the purpose of the "Best
Practice" is, or understanding why they were adopting it. And, as I
pointed out in the paragraph that you edited from your quoted response
(without marking the edit), the change in state that represents this
added "value" would consequently be negligible. As someone working with
a technology of which they are largely ignorant has the capacity to do
enormous damage without being aware that they are doing so, describing
this negligible change of state as adding "value" is like characterising
the swatting of a mosquito as adding value to the fight against malaria.
It might sound good but it really doesn't mean much.
Because not everyone is like you, Richard.
Is that a hard concept to understand?
Editing out the paragraph following the question "why?" without any
indication that you have done so is seriously disingenuous. That
paragraph went on to ask how long it took you to acquire the skills you
have, and how long you would expect to take learning skills you do not
have. You may be right in your implication, and there may actually be
people who are capable of acquiring skills instantaneously, but the
majority are like me in that they will have to devote time to learning
anything new.
No I don't. If the outcome of a problem solving process is acceptable
then it is a solution to the problem. If that acceptable outcome is not
actually a solution to the stated problem then the initial problem was
incorrectly stated/analysed.
Let's take an easy example - medicine.
Interesting choice. You have been arguing here for a tolerance of
individuals working in a professional capacity without a working
understanding of the technology they are using. How well does that
notion translate into medicine? May the general practitioner be excused
for not finding the time to gain an understanding of skin diseases, or
the surgeon for attempting brain surgery prior to gaining some expertise
in the subject? And is the pharmacist rational in his expectation to be
allowed to work as a surgeon?
The situation is more extreme because the consequences of the
harm that can be done in the field of medicine are potentially lethal,
while in web development they are mostly fiscal.
If I have an incurable disease, I sure would appreciate
having medicine to fight the side-effects, and to delay
the inevitable results of the disease. The problem of the
disease is not solved. But it has been partially solved -
I will be more comfortable, and I will live longer than
if I didn't take the medicine. I sure would appreciate
such a partial solution to the problem.
The "problem of the disease" is a poor analysis of the situation. The
problems may be 1. finding a cure for the disease, 2. finding a
prevention for the disease, and 3. finding a palliative for the symptoms
of the disease. Your proposal is a full solution to the problem of
finding a palliative, not a partial solution to either of the other
problems. Indeed, it does not even address either of the other problems,
though a solution to either of them would eventually negate the need
for a palliative.
If you believe that your approach is perfect and ideal (or at least
closer than mine) for most people, then I think you are wrong. Sure,
it may be realistic and practical for some, but certainly not most.
IMO.
Go find a high school student who is learning web development
to make a school band web site and wants to use some javascript.
So the criteria for web development "Best Practices" are to be governed
by absolute beginners making amateur web sites?
Ask them if it's realistic and practical to invest weeks of
learning and testing to figure out how browser scripting works,
then spend weeks writing their own low-level reusable functions,
then spend time combining them to perform the specific task
they wanted to achieve on their site. Ask if that approach is
realistic and practical for them,
You have proposed someone who is "learning web development", and so time
spent learning the technologies involved will be valuable, and should be
expected to take time.
when they could have downloaded a solution with 10k of
'code bloat' and had it working in their page in less than an
hour.
Your question doesn't make it clear whether it is "learning web
development" or "to make a school band web site" that is the point of
the exercise. In the latter case your proposal may contribute to the
outcome in a way that does not require any learning of web development.
If learning something is the point of the exercise, then learning to
deploy a third party library without understanding it (or the possible
consequences) is barely a contribution at all.
Ask them which approach is more realistic and practical.
<snip>
The judgement of someone who has no understanding of a subject as to
what would be practical and realistic in relation to that subject is of
little value. You would just be asking someone whether they would prefer
not to spend time learning how to go about doing something that they
want to do. It is an appeal to the innate idleness in humans, and yes,
we would all prefer not to have to spend time learning how to do what we
want to be able to do. However, if pressed, even your high school
student would have to admit that it is not a very realistic desire.
Richard.