I don't think any of these measures can serve as an accurate yardstick,
only as an impressionistic canvas.
Exactly. You can't measure "popularity" without defining the term.
Xah Lee appears to define popularity by the number of posts made in
each language's Usenet group (for whichever group he decides belongs
to a given language). Given that a substantial portion of the recent
posts in each group likely consists of off-topic Xah Lee crossposts,
this count is probably unreliable even for what he presumably intends
it to measure: the amount of discussion taking place about each
language on Usenet.
How do you define popularity? Do you define it by how much people
talk about a language on the internet? How many programs are written
in it? How many lines of code are written in it? How many CPU cycles
are used to run code written in it?
None of these is a fair measure on its own. More people use Ada than
talk about it online, because it is a common language in classified
government work. More people talk about Lisp online than use it,
because their jobs or other circumstances restrict them to other
languages.
Moreover, most people use more than one language, and after a long day
at the office pumping out Java or Perl, they go home and talk about
Lisp or C#. Online discussion isn't a measure of actual use, even if
you could measure the total amount of discussion accurately.
The number of programs written is likely to be grossly inaccurate.
People write millions of small C or Perl utilities all the time, yet
their combined effect may amount to less problem-solving than a single
big Java application.
The number of lines of code written in a language is also unfair,
because it takes more lines of C than of almost any other language to
solve most problems.
The number of CPU cycles spent running code written in a given
language is also unfair because, for instance, Ruby code burns more
CPU cycles than C code does to accomplish the same task, in the
average case.
So, how do you define popularity?