Here is what he says.
Counting papers encourages (1) hastily written papers, (2) overly large groups, (3) self-plagiarism, (4) small and insignificant studies, and (5) half-baked ideas. Beyond that, it leads to (1) publishing pacts where a whole group appears on all individual papers, (2) clique building, (3) 'anything goes', (4) 'bespoke research', (5) the minimum publishable increment, and (6) workshops and conferences organized for the sole purpose of publishing a clique's papers. Worse still, authors are not always authors. Many of these problems result from referees not doing a good job.
These are good points.
Parnas then proposes a better strategy for evaluation: “When serving on recruiting, promotion, or grant-award committees, read the candidate's papers and evaluate the contents carefully.” In other words, the evaluation should be a thorough review. Here is what the author says about reviews in the same article: “Anyone with experience as an editor knows there is tremendous variation in the seriousness, objectivity, and care with which referees perform their task.” (Let's ignore the obnoxious tone of this sentence, which implies that the reader is inexperienced if she disagrees.) Well, here's the crux: the same observation applies to Parnas' proposed solution! If real reviews today don't work, why should we expect that some idealized review process will work better in the future? The only difference is that the review targets a body of papers by the same author instead of a single paper. That's hardly a reason to expect a different quality of review. If anything, it suggests that reviewing an author will be done even worse than paper reviewing is done today: it requires more time, and researchers care a lot about their time.
Should we despair? No: the numbers game is here to save us!
Forget about papers for a moment and think about the web, Google, and PageRank. Why does it work? Because numbers are used in a smart way. First, ditch supercomputers: many cheap computers can do a better job than a couple of very high-quality super-servers. Second, ditch librarians: if you remember, most of Google's competitors thought portals were the future, web information organized by professionals. Well, they were wrong. Many cheap computers crunching numbers, aggregating the opinions of many people in a smart way, do a much, much better job. What's the essential piece of information PageRank uses? Links. Links are like citations. Parnas complains that some citations are negative. So what? Some links are negative too, and Google still works best. I would even say that negative or positive matters less than interesting. Even bad papers can be interesting and can teach us something!
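To make the analogy concrete, here is a minimal power-iteration sketch of the PageRank idea: a page's score is the aggregated score of the pages linking to it, just as a paper's standing could be aggregated from the papers citing it. The toy graph, node names, and damping factor below are illustrative assumptions of mine, not Google's actual data or parameters.

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to.

    Returns a dict of scores summing to 1. A page's score is built from
    the scores of the pages that link to it (links as 'citations').
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform score
    for _ in range(iters):
        # every page gets a small baseline, the (1 - damping) share
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:
                # dangling page with no outlinks: spread its score evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                # a page passes its score equally to everything it links to
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank


# Toy graph: B, C, and D all link to A; A links only to B.
graph = {"A": ["B"], "B": ["A"], "C": ["A"], "D": ["A"]}
ranks = pagerank(graph)
# A, with three incoming links, ends up on top; B, endorsed only by the
# highly ranked A, comes second.
```

The point of the sketch is that no librarian (and no referee) assigns the scores by hand: the numbers emerge from aggregating many individual linking decisions.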
Frankly, I find it alarming that a mathematically inclined mind (as a computer scientist should be!) advocates not using numbers.