Definition of “Human-Competitive”
Koza proposed the following eight criteria for measuring whether a result of an algorithm can be regarded as exhibiting some kind of “human-competitive” machine intelligence.
There are now 36 instances where genetic programming has produced a human-competitive result. These human-competitive results include 15 instances where genetic programming has created an entity that either infringes or duplicates the functionality of a previously patented 20th-century invention, 6 instances where genetic programming has done the same with respect to a 21st-century invention, and 2 instances where genetic programming has created a patentable new invention. These human-competitive results come from the fields of computational molecular biology, cellular automata, sorting networks, and the synthesis of both the topology and component sizing of complex structures, such as analog electrical circuits, controllers, and antennas.
What do we mean when we say that an automatically created solution to a problem is competitive with human-produced results?
In attempting to evaluate an automated problem-solving method, the question arises as to whether there is any real substance to the demonstrative problems that are published in connection with the method. Demonstrative problems in the fields of artificial intelligence and machine learning are often contrived toy problems that circulate exclusively inside academic groups that study a particular methodology. These problems typically have little relevance to any issues pursued by any scientist or engineer outside the fields of artificial intelligence and machine learning.
We say that an automatically created result is “human-competitive” if it satisfies one or more of the eight criteria below.
(A) The result was patented as an invention in the past, is an improvement over a patented invention, or would qualify today as a patentable new invention.
(B) The result is equal to or better than a result that was accepted as a new scientific result at the time when it was published in a peer-reviewed scientific journal.
(C) The result is equal to or better than a result that was placed into a database or archive of results maintained by an internationally recognized panel of scientific experts.
(D) The result is publishable in its own right as a new scientific result, independent of the fact that the result was mechanically created.
(E) The result is equal to or better than the most recent human-created solution to a long-standing problem for which there has been a succession of increasingly better human-created solutions.
(F) The result is equal to or better than a result that was considered an achievement in its field at the time it was first discovered.
(G) The result solves a problem of indisputable difficulty in its field.
(H) The result holds its own or wins a regulated competition involving human contestants (in the form of either live human players or human-written computer programs).
These eight criteria have the desirable attribute of being at arm’s length from the fields of artificial intelligence, machine learning, and genetic programming. That is, each criterion requires a result that stands on its own merit, not on the fact that the result was mechanically produced. In particular, a result cannot acquire the rating of “human-competitive” merely because it is considered “interesting” by researchers inside the specialized fields that are attempting to create machine intelligence. Instead, a result produced by an automated method must earn the rating of “human-competitive” independent of the fact that it was generated by an automated method. These eight criteria are discussed in detail in Genetic Programming III: Darwinian Invention and Problem Solving (Koza, Bennett, Andre, and Keane 1999) and in Genetic Programming IV: Routine Human-Competitive Machine Intelligence (Koza, Keane, Streeter, Mydlowec, Yu, and Lanza 2003).
Certainly, proof-of-principle (“toy”) problems are occasionally useful for tutorial or introductory purposes. However, we believe that (after 50 years) it is time for the fields of artificial intelligence and machine learning to start delivering non-trivial results that satisfy the test of being competitive with human performance.