Tuesday, September 18, 2007

Collective Intelligence and Evolution

by Akira Namatame


The mission of collective evolution is to harness systems of selfish agents into sustainable relationships, so that desirable properties can emerge as 'collective intelligence'.

Why do colonies of ants work collectively, and how do they do it so effectively? One key to answering this question is to look at interactions among ants. For the last decade, attempts have been made to develop some general understanding, which has produced the theory of collective systems, that is, systems consisting of a large collection of agents. It is common to refer to the desirable emergent properties of collective systems as 'collective intelligence'. Interactions are able to produce collective intelligence at the macroscopic level that is simply not present when the components are considered individually.

The concept of collective intelligence observed in social insects can be extended to humans. In his book The Wisdom of Crowds, Surowiecki explores a simple idea with profound implications: large groups of people are smarter than an elite few at solving problems, fostering innovation, coming to wise decisions, and predicting the future. This counterintuitive notion, rather than crowd psychology as traditionally understood, provides us with new insights into how our social and economic activities should be organized.

On the other hand, it is also well known in the literature that selfish behaviour may not achieve full efficiency. It is important to investigate the loss of collective welfare due to selfish and uncoordinated behaviour. Recent research efforts have focused on quantifying this loss for specific environments, and the resulting degree of efficiency loss is known as 'the price of anarchy'. Investigations into the price of anarchy have provided some measures for designing collective systems that are robust against selfish behaviour. Collective systems are based on the analogous assumption that individuals are selfish optimizers, and we need methodologies so that the selfish behaviour of individuals need not degrade system performance. Of particular interest is the issue of how social interactions should be restructured so that agents are free to choose their own actions, while avoiding outcomes that none would choose.
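To make the 'price of anarchy' concrete, here is a small illustration that is not from the article: Pigou's classic two-link routing example, in which selfish routing yields total latency 4/3 times the social optimum.

```python
# A minimal illustration of the "price of anarchy" using Pigou's classic
# two-link routing example (an assumption of this sketch, not the article's
# setting): one unit of traffic can use a link with constant latency 1, or a
# link whose latency equals the fraction x of traffic on it.

def total_cost(x):
    """Total latency when a fraction x uses the variable link and 1 - x the fixed link."""
    return x * x + (1.0 - x) * 1.0

# Selfish (Nash) routing: the variable link is never worse (x <= 1), so all
# traffic piles onto it and x = 1.
nash_cost = total_cost(1.0)

# Socially optimal routing: minimize total cost over x by a coarse grid search.
opt_x = min((i / 1000.0 for i in range(1001)), key=total_cost)
opt_cost = total_cost(opt_x)

print(f"Nash cost = {nash_cost:.3f}, optimal cost = {opt_cost:.3f}")
print(f"price of anarchy = {nash_cost / opt_cost:.3f}")   # ~4/3
```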

Darwinian dynamics based on mutation and selection form the core of models for evolution in nature. Evolution through natural selection is often understood to imply improvement and progress. If multiple populations of species are adapting to each other, the result is a co-evolutionary process. However, the problem to contend with in Darwinian co-evolution is the possibility of an escalating arms race with no end. Competing species may continually adapt to each other in more and more specialized ways, never stabilizing at a desirable outcome.
The Rock-Scissors-Paper (RSP) game is a typical way of representing such a triangular relationship. This simple game has been used to explain the importance of biodiversity. We generalize the basic rock-scissors-paper relationship to a non-zero-sum game with the payoff matrix shown in Table 1. In this triangular situation, the diversity that results when agents disperse their choices at the Nash equilibrium is not efficient, and the agents may benefit from achieving a better relationship.

Table 1: The generalized rock-scissors-paper game (λ ≥ 2). Figure 1: The state diagram of the strategy choices between two agents.
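Because the payoff values of Table 1 are not reproduced in this post, the sketch below assumes an illustrative parameterization of the generalized game (a win pays λ, a loss 0, and a tie 1 to each player); under that assumption it compares the per-round value of the uniform mixed equilibrium with the 'give and take' alternation of wins and losses described in the next paragraph.

```python
# Illustrative only: Table 1 is not reproduced here, so these payoff values are
# assumptions, not the article's actual matrix. Strategies: 0 = Rock, 1 = Scissors,
# 2 = Paper; PAYOFF[i][j] is the row player's payoff (win = LAMBDA, tie = 1, loss = 0).
import itertools

LAMBDA = 3.0  # assumed generalization parameter (lambda >= 2)
PAYOFF = [
    [1.0, LAMBDA, 0.0],   # Rock     vs Rock, Scissors, Paper
    [0.0, 1.0, LAMBDA],   # Scissors
    [LAMBDA, 0.0, 1.0],   # Paper
]

def expected_payoff(p, q):
    """Row player's expected payoff when the row player mixes with p and the column player with q."""
    return sum(p[i] * q[j] * PAYOFF[i][j] for i, j in itertools.product(range(3), repeat=2))

uniform = [1 / 3] * 3                      # symmetric mixed equilibrium of this assumed game
print(expected_payoff(uniform, uniform))   # (LAMBDA + 1) / 3 per round
print(LAMBDA / 2)                          # alternating wins and losses: LAMBDA / 2 per round
```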

In particular, we have examined a system of interactively evolving agents in the context of repeated RSP games, by considering a population of agents located on a 20x20 lattice network. They repeatedly play the generalized RSP game with their nearest eight neighbours based on coupling rules, which are updated by the crossover operator. The 400 rules, initially different for each agent, are gradually aggregated into a few rules with many commonalities. The game between two agents with the learned coupling rule becomes a kind of stochastic process. The transitions of the outcome are represented in the state diagram in Figure 1, and they converge to a limit cycle visiting the Pareto-optimal outcomes: (0,1) (1,2) (2,0) (1,0) (2,1) (0,2). Each agent therefore learns to behave as follows: win three times and then lose three times. In this way, the agents succeed in collectively evolving a robust learning procedure that leads to near-optimal behaviour based on the principle of give and take.
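The limit cycle itself is easy to reproduce. Below is a minimal sketch (mine, not the authors' code) of a deterministic coupling rule that steps two agents around the six Pareto-optimal outcomes listed above, so that over any six rounds each agent wins three times and then loses three times.

```python
# A minimal sketch (not the authors' code) of the limit cycle described above:
# two agents follow a deterministic coupling rule that maps the previous joint
# outcome to the next one. Strategies: 0 = Rock, 1 = Scissors, 2 = Paper.

CYCLE = [(0, 1), (1, 2), (2, 0), (1, 0), (2, 1), (0, 2)]  # the six Pareto-optimal outcomes

def next_outcome(prev):
    """Assumed coupling rule: step to the next joint action on the limit cycle."""
    return CYCLE[(CYCLE.index(prev) + 1) % len(CYCLE)]

outcome = (0, 1)
wins_a = wins_b = 0
for _ in range(12):                       # two full turns of the cycle
    a, b = outcome
    if (a - b) % 3 == 2:                  # a beats b in the rock-scissors-paper ordering
        wins_a += 1
    else:
        wins_b += 1
    outcome = next_outcome(outcome)

print(wins_a, wins_b)                     # 6 and 6: "win three times, then lose three times"
```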

The framework of collective evolution is distinguished from co-evolution in three aspects. First, there is the coupling rule: a deterministic process that links past outcomes with future behaviour. The second aspect, which distinguishes it from individual learning, is that agents may wish to optimize the outcome of their joint actions. The third aspect is the description of how a coupling rule should be improved, using a performance criterion to evaluate the rule.

In biology, the gene is the unit of selection. In collective evolution, by contrast, the process is expected to compel agents towards ever more refined adaptation, resulting in sophisticated behavioural rules. Cultural interpretations of collective evolution assume that successful behavioural rules are spread by imitation or learning by the agents. This approach to collective evolution is very much at the forefront of the design of desired collectives in terms of efficiency, equity, and sustainability. Further work will need to examine how collective evolution across complex socio-economic networks leads to emergent effects at higher levels.

Please contact:
Akira Namatame, National Defense Academy, Japan
Tel: +81 468 3810
E-mail: nama@nda.ac.jp
http://www.nda.ac.jp/~nama

Sunday, September 16, 2007

New Journal: Evolutionary Intelligence


Evolutionary Intelligence

Editor-in-Chief: Larry Bull
ISSN: 1864-5909 (print version)
ISSN: 1864-5917 (electronic version)
Journal no. 12065
Springer
Description

Evolutionary Intelligence is an international journal devoted to the publication and dissemination of theoretical and practical aspects of the use of population-based search for artificial intelligence. Techniques of interest include evolving rule-based systems, evolving artificial neural networks, evolving fuzzy systems, evolving Bayesian and statistical approaches, artificial immune systems, and hybrid systems which combine evolutionary computation with other A.I. techniques in general.

Saturday, September 15, 2007

Swarm Intelligence journal


Swarm Intelligence

Editor-in-Chief: Marco Dorigo
ISSN: 1935-3812 (print version)
ISSN: 1935-3820 (electronic version)
Journal no. 11721
Springer US

Description

Swarm Intelligence: the principal resource dedicated to reporting on developments in the new discipline of swarm intelligence.

Swarm intelligence research deals with the study of self-organizing processes in natural and artificial swarm systems. It is a fast-growing field that involves the efforts of researchers in multiple disciplines, ranging from ethologists and social scientists to operations researchers and computer engineers.

Swarm Intelligence is dedicated to reporting on advances in the understanding and utilization of swarm intelligent systems. Submissions that shed light on either theoretical or practical aspects of swarm intelligence are welcome. The following subjects are of particular interest to the journal:

  • modeling and analysis of collective biological systems such as social insect colonies, schooling and flocking vertebrates, and human crowds;
  • discussion of models of swarm behavior in insect, animal, or human societies that can stimulate new algorithmic approaches;
  • modeling and analysis of ant colony optimization, particle swarm optimization, swarm robotics, and other swarm intelligent systems;
  • empirical and theoretical research in swarm intelligence;
  • application of swarm intelligence methods to real-world problems such as distributed computing, data clustering, graph partitioning, optimization and decision making;
  • theoretical and experimental research in swarm robotic systems.

For better or worse, sex chromosomes are linked to human intelligence

by Ellen Ruppel Shell

Last January Harvard University president Lawrence Summers hypothesized that women may be innately less scientifically inclined than men. Not long after the ensuing uproar, researchers announced the sequencing of the human X chromosome. The project was hailed as a great leap forward in decoding the differences between men and women, at least from a biological perspective. While it did nothing to calm the maelstrom swirling around Summers, the new understanding of the chromosome revealed tantalizing clues to the role genes might play in shaping cognitive differences between the sexes. And while these differences seem to be largely to the female's advantage, permutations during the genetic recombination of the X chromosome may confer on a few men a substantial intellectual edge.

Considerations of this sort are mired in politics and sensationalism, but one fact is beyond dispute: Three hundred million years after parting ways in our earliest mammalian ancestors, the X and the Y chromosomes are very different genetic entities. The Y has been whittled down to genes governing a handful of functions, most entailing sperm production and other male-defining features. Meanwhile, the gene-rich X is the most intensely studied of the 23 chromosomes, largely because of its role in rendering men vulnerable to an estimated 300 genetic diseases and disorders associated with its mutations—from color blindness to muscular dystrophy to more than 200 brain disorders.

The sex chromosomes lay the foundation for human sexual difference, with women having two Xs, one from each parent, while men get an X from their mom and a Y from their dad. Only 54 of the 1,098 protein-coding genes on the X seem to have functional counterparts on the Y, a dichotomy that has led scientists to describe the Y chromosome as "eroded." This diminutive chromosome offers little protection against the slings and arrows of genetic happenstance. When an X-linked gene mutates in a woman, a backup gene on the second X chromosome can fill the gap. But when an X-linked gene mutation occurs in a man, his Y stands idly by, like an onlooker at a train wreck.

The brain seems particularly vulnerable to X-linked malfunction. Physician and human geneticist Horst Hameister and his group at the University of Ulm in Germany recently found that more than 21 percent of all brain disabilities map to X-linked mutations. "These genes must determine some component of intelligence if changes in them damage intelligence," Hameister says.

Gillian Turner, professor of medical genetics at the University of Newcastle in Australia, agrees that the X chromosome is a natural home for genes that mold the mind. "If you are thinking of getting a gene quickly distributed through a population, it makes sense to have it on the X," she says. "And no human trait has evolved faster through history than intelligence."

The X chromosome provides an unusual system for transmitting genes between sexes across generations. Fathers pass down nearly their entire complement of X-linked genes to their daughters, and sons get their X-linked genes from their mothers.

Although this pattern of inheritance leaves men vulnerable to a host of X-linked disorders, Hameister contends that it also positions them to reap the rewards of rare, beneficial X-linked mutations, which may explain why men cluster at the ends of the intelligence spectrum. "Females tend to do better overall on IQ tests; they average out at about 100, while men average about 99," Hameister says. "Also, more men are mentally retarded. But when you look at IQs at 135 and above, you see more men."

To understand his hypothesis, consider that during the formation of a woman's eggs, paternal and maternal X chromosomes recombine during meiosis. Now suppose a mother passes to her son an X chromosome carrying a gene or genes for superintelligence. While this genetic parcel would boost the son's brilliance, he could pass that X chromosome only to a daughter, where it could be diluted by the maternally derived X. The daughter, in turn, could pass on only a broken-up and remixed version to the fourth generation, due, again, to the recombination that occurs during meiosis. Odds are that the suite of genes for superintelligence wouldn't survive intact in the remix. "It's like winning the lottery," Hameister adds. "You wouldn't expect to win twice in one day, would you?"

The theory is controversial. Among its detractors is David Page, interim director of the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts. "Many claims have been made about gene enrichment on the X, and most look quite soft to me," he says. Nonetheless, he says that the attempt to link the enrichment of cognitive genes on the X to IQ differences "is a reasonable speculation."

Intelligence is a multifaceted quality that is unlikely to be traced to a single gene. Yet the link between gender and cognition is far too persistent for the public—or science—to ignore. Until recently sex differences in intelligence were thought to result chiefly from hormones and environment. New findings suggest genes can play a far more direct role. Working constructively with that insight will be a delicate challenge for the new millennium, one perhaps best avoided by college presidents.

DIALOGUE

SOCIAL SMARTS

DAVID SKUSE, professor of behavioral and brain sciences at the Institute of Child Health in London, has shown how the X chromosome can influence social skills. In studies of women with only one X chromosome, he found that test subjects who inherited their X chromosome from their fathers had better social skills than those who inherited their X chromosome from their mothers. This disparity offers clues to why boys, who inherit their single X chromosome from their mothers, are more vulnerable to disorders that affect social functioning.

What does your research reveal?

S: Imprinted genes are expressed differently depending on whether they are inherited from the father or the mother. By comparing the cognitive social skills of women with a single X chromosome [Turner's syndrome]—which could be either maternal or paternal in origin—with the skills of normal women, who have an X chromosome from both parents, we were able to show that X-linked imprinted genes could influence sexually dimorphic traits. It is important to note a couple of things; first, the gene that is imprinted was not expressed in the parent from whom it was inherited, so girls do not get their social skills from their fathers in any simple sense. Second, we are talking about a mechanism that potentially affects every one of us, but its effects will be subtly different depending on our genetic makeup and our environment of rearing.

Have you looked at whether normal men and women differ in social cognition?

S: We did a study of normal males and females on skills such as the ability to tell whether someone is looking directly at you and interpreting facial expressions. We looked at 700 children and over 1,000 adults and discovered little difference between adult men and women. On the other hand, girls entering elementary school tend to do a much better job than boys in interpreting facial expressions. This difference almost completely disappears after puberty.

What are the implications of your work?

S: What I can say is that disorders of social cognitive skills seem to affect a surprisingly large number of people. The disability can lead, especially among boys, to disruptive behavior in childhood if it is not recognized and treated sufficiently early. Others have found that boys are more vulnerable than girls to the long-term impact of maltreatment in childhood, and the risk of such boys becoming antisocial in later life seems to be related to a gene on the X chromosome, although not one that is imprinted.

Oldest University Unearthed in Egypt

by Susan Karlin

In May a team of Polish and Egyptian archaeologists announced they had unearthed the long-lost site of Archimedes’ alma mater: the University of Alexandria in Egypt. Even Cambridge University in England, which boasts Sir Isaac Newton as an alum, cannot claim such a venerable pedigree.

The legendary university flourished 2,300 years ago when Alexandria was the intellectual and cultural hub of the world. While in the city, Archimedes crafted a water pump of a type still used today; Euclid organized and developed the rules of geometry; Hypsicles divided the zodiac into 360 equal arcs; and Eratosthenes calculated the diameter of Earth. Other scholars in the city are believed to have edited the works of Homer and produced the Septuagint, the ancient Greek translation of the Old Testament. “This is the oldest university ever found in the world,” Grzegorz Majcherek, who directed the dig under the auspices of Egypt’s Supreme Council of Antiquities, told the Associated Press. “This is the first material evidence of the existence of academic life in Alexandria.”

Emily Teeter, an Egyptologist with the Oriental Institute at the University of Chicago, adds: “This discovery is of tremendous importance because of its role as a nexus of learning among the great cultures of that time. It’s one of the most famous institutions of the ancient world, and it’s astounding that the exact location has been unknown until now. Archaeologists knew it was in Alexandria, but not where in Alexandria.”

The research team found 13 identical lecture halls lining a large public square in the ancient city’s eastern section. A nearby Roman theater, discovered a half century ago, now assumes new meaning as a possible part of the ancient university. The halls are lined on three sides with rows of elevated benches overlooking a raised seat thought to have been used by a lecturer to address students.

“The magnificence of Alexandria as a center of learning was not just a myth,” says Willeke Wendrich, an archaeologist at UCLA. “It gives us hope that some day we might even find the location of the famous Library of Alexandria.” The library thrived from 295 B.C. into the fourth century A.D., when it burned to the ground; its ruins have never been found.

In a nod to its glory, Alexandria two years ago opened a new $230 million library complex containing a quarter-million books, a planetarium, a conference hall, five research institutes, six galleries, and three museums.

Hod Lipson: How to Draw a Straight Line Using a GP

How to Draw a Straight Line Using a GP: Benchmarking Evolutionary Design Against 19th Century Kinematic Synthesis
Hod Lipson
Computational Synthesis Laboratory,
Mechanical & Aerospace Engineering, and Computing & Information Science,
Cornell University, Ithaca NY 14850, USA
hod.lipson@cornell.edu

Abstract. This paper discusses the application of genetic programming to the synthesis of compound 2D kinematic mechanisms, and benchmarks the results against one of the classical kinematic challenges of 19th century mechanical design. Considerations for selecting a representation for mechanism design are presented, and a number of human-competitive inventions are shown.

http://ccsl.mae.cornell.edu/papers/GECCO04_Lipson.pdf

Hod Lipson: Reinventing the Wheel: An Experiment in Evolutionary Geometry

http://ccsl.mae.cornell.edu/papers/GECCO05_Bongard2.pdf

In the domain of design, there are two ways of viewing the competitiveness of evolved structures: they either improve in some manner on previous solutions, or they produce alternative designs that were not previously considered, or they achieve both. In this paper we show that the way in which designs are genetically encoded influences which alternative structures are discovered, for problems in which a set of more than one optimal solution exists. The problem considered is one of the most ancient known to humanity: design a two-dimensional shape that, when rolled across flat ground, maintains a constant height. It was not until the late 19th century, roughly 7000 years after the discovery of the wheel, that Franz Reuleaux showed that a circle is not the only optimal solution. Here we demonstrate that artificial evolution repeats this discovery in under one hour.
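As an aside (my own numerical check, not part of the paper), the non-circular solution Reuleaux identified, the Reuleaux triangle, can be verified to have constant width, which is exactly the 'constant height when rolled across flat ground' property:

```python
# Numerical check that a Reuleaux triangle has constant width. The width in a
# given direction is the spread of the shape's boundary projected onto that direction.
import math

SQ3 = math.sqrt(3.0)
# Equilateral triangle with unit side; each boundary arc is centred on one vertex
# and joins the other two vertices (arc radius = side length = 1).
ARCS = [((0.0, 0.0), 0.0),            # centre V0, arc from 0 to 60 degrees
        ((1.0, 0.0), 120.0),          # centre V1, arc from 120 to 180 degrees
        ((0.5, SQ3 / 2), 240.0)]      # centre V2, arc from 240 to 300 degrees

boundary = []
for (cx, cy), start in ARCS:
    for k in range(201):
        t = math.radians(start + 60.0 * k / 200)
        boundary.append((cx + math.cos(t), cy + math.sin(t)))

def width(theta):
    """Extent of the boundary projected onto the unit vector at angle theta."""
    proj = [px * math.cos(theta) + py * math.sin(theta) for px, py in boundary]
    return max(proj) - min(proj)

widths = [width(math.radians(d)) for d in range(0, 180, 5)]
print(f"min width = {min(widths):.4f}, max width = {max(widths):.4f}")  # both ~1.0
```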

Several interesting papers from Hod Lipson about modeling

From Publications of Cornell Computational Synthesis Lab by Hod Lipson


Bongard J., Lipson H. (2007), "Automated reverse engineering of nonlinear dynamical systems", Proceedings of the National Academy of Sciences, vol. 104, no. 24, pp. 9943-9948.

Schmidt M., Lipson H. (2007), "Learning Noise", Genetic and Evolutionary Computation Conference (GECCO'07), pp. 1680-1685.

Bongard J., Zykov V., Lipson H. (2006), "Resilient Machines Through Continuous Self-Modeling", Science, vol. 314, no. 5802, pp. 1118-1121 (see commentary by Adami, "What Do Robots Dream Of?").

Aquino W., Kouchmeshky B., Bongard J., Lipson H. (2006), "Co-evolutionary algorithm for structural damage identification using minimal physical testing", International Journal for Numerical Methods in Engineering.

Bongard J., Lipson H. (2005), "Nonlinear system identification using coevolution of models and tests", IEEE Transactions on Evolutionary Computation, 9(4): 361-384.

Schmidt M., Lipson H. (2005), "Co-evolution of Fitness Maximizers and Fitness Predictors",


Pat Langley at Stanford has a strong interest in automated scientific discovery


Pat Langley

Institute for the Study of Learning and Expertise
2164 Staunton Court, Palo Alto, CA 94306
(650) 494-3884 (phone); (650) 494-1588 (fax)
langley@isle.org


I currently serve as Director for the Institute for the Study of Learning and Expertise and as Head of CSLI's Computational Learning Laboratory. I am also a Consulting Professor of Symbolic Systems at Stanford University.

My research interests revolve around computational learning and discovery, especially their role in constructing scientific models, architectures for intelligent agents, and adaptive user interfaces.

  • Research activities:
  • Recent talks
  • Recent papers
  • Books
  • Biography
  • Curriculum vitae (ps, pdf)
  • Travel plans

    For more information, send electronic mail to langley@isle.org



  • 2007: John Koza's research focus

    Koza Research:

    automating the invention process and generating useful, patentable, and human-competitive inventions by means of genetic programming


    Genetic Programming Inc. is a small privately funded corporation operating a Beowulf cluster computer system consisting of 1,000 Pentium processors to do research in applying genetic programming to produce human-competitive results. Our current focus is on automating the invention process and generating useful, patentable, and human-competitive inventions by means of genetic programming. Our group publishes and presents numerous research papers each year at various scientific conferences and journals (both in the field of genetic and evolutionary computation and in the particular fields of the work). A good overview of the type of work we do can be seen on the home page of Genetic Programming Inc. at www.genetic-programming.com and on the home page of John R. Koza at Stanford University at http://www.johnkoza.com. Our genetic programming system is written in Java. The simulators with which we frequently work are typically written in C and other languages.

    Genetic Programming Inc. is seeking a SCIENTIFIC RESEARCH PROGRAMMER. The position requires the ability to

    ● read scientific literature and suggest interesting new problems on which to work;

    ● contribute ideas on how to solve new problems using genetic programming;

    ● write, debug, and run the necessary programs in the context of our existing software and hardware system;

    ● analyze the results;

    ● help in the writing up of the results for publication (including making figures and graphics);

    ● make changes in our existing software and hardware system; and

    ● do systems administration to maintain the operation of our existing software and hardware system.

    Excellent programming skills and productivity are a must. Although actual programming will consume less than half of the time, the person in this position must be able to rapidly prototype code. The code includes code to analyze results as well as to produce results. This position calls for at least a B.S. degree and at least two years' experience in Java and/or C doing academic research programming or corporate research programming. A Master’s degree is preferable and a PhD degree would be even more desirable. It would be desirable for the successful candidate to have expertise in some specific science or engineering domain in order to advance the aim of applying genetic programming to that domain. The possibilities for domain areas are very open-ended and include, but are not limited to, analog circuit design, controller design, finite element analysis, shape optimization, operations research, mechanical design, signal processing, bioinformatics, optical lens systems, antennas, etc. Actual previous experience with, or knowledge of, genetic programming is a big plus. Experience with, or knowledge of, genetic and evolutionary algorithms in general, machine learning, neural networks, artificial intelligence, artificial life, etc. is a plus, but not required.

    The position offers competitive compensation and benefits. Compensation will be appropriate for the level of education and experience.

    Please include all relevant information, date available, and several references.

    Our offices are located 3 blocks from the Mountain View Cal Train station and Mountain View San Jose Light Rail Station in downtown Mountain View, California. Downtown Mountain View is an attractive and lively area with about 40 restaurants and various other shops and businesses.

    Genetic Programming Inc. is an equal opportunity employer.

    John R. Koza

    Genetic Programming Inc. (Third Millennium On-Line Products Inc.)

    Post Office Box K

    Los Altos, California 94023 USA

    FAX: 650-941-9430

    E-mail: koza@genetic-programming.com

    www.genetic-programming.com

    Toward automated discovery in the biological sciences.

    Publication: AI Magazine
    Publication Date: 22-MAR-04
    Author: Buchanan, Bruce G. ; Livingston, Gary R.

    Article Excerpt
    The biological sciences are rich with observational and experimental data characterized by symbolic descriptions of organisms and processes and their parts as well as numeric data from high-throughput experiments. The complexity of the data and the underlying mechanisms argue for providing computer assistance to biologists. Initially, computational methods for investigations of relationships in biological data were statistical (Sokal and Rohlf 1969). However, when the DENDRAL project demonstrated that AI methods could be used successfully for hypothesis formation in chemistry (Buchanan and Feigenbaum 1978; Buchanan, Sutherland, and Feigenbaum 1969), it was natural to ask whether AI methods would also be successful in the biological sciences. (1)

    Data in the biological sciences have been growing dramatically, and much of the computational effort has been on organizing flexible, open-ended databases that can make the data available to scientists. After the initial demonstrations of the power of applying machine learning to biological databases (Harris, Hunter, and States 1992; Qian and Sejnowski 1988), the application of machine learning to biological databases has increased. It is now possible to carry out large-scale machine learning and data mining from biological databases. Catalysts for this research were the Intelligent Systems in Molecular Biology conferences, the first of which was held in 1993. This conference brought together people from diverse groups, all with the realization that biological problems were large and important and that there was a need for heuristic methods able to reason with symbolic information.

    Toward Automated Discovery

    The end point of scientific discovery is a concept or hypothesis that is interesting and new (Buchanan 1966). Insofar as there is a distinction at all between discovery and hypothesis formation, discovery is often described as more opportunistic search in a less well-defined space, leading to a psychological element of surprise. The earliest demonstration of self-directed, opportunistic discovery was Doug Lenat's program, AM (Lenat 1982). It was a successful demonstration of AI methods for discovery in a formal domain characterized by axioms (set theory) or rules (games). AM used an agenda-based framework and heuristics to evaluate existing concepts and then create new concepts from the existing concepts. It continued creating and examining concepts until the "interestingness" of operating on new or existing concepts (determined using some of AM's heuristics) dropped below a threshold. Although some generalization and follow-up research with AM was performed (Lenat 1983), this research was limited to discovery in axiomatic domains (Haase 1990; Shen 1990; Sims 1987).

    Our long-range goal is to develop an autonomous discovery system for discovery in empirical domains, namely, a program that peruses large collections of data to find hypotheses that are interesting enough to warrant the expenditure of laboratory resources and subsequent publication. Even longer range, we envision a scientific discovery system to be the generator of plausible hypotheses for a completely automated science laboratory in which the hypotheses can be verified experimentally by a robot that plans and executes new experiments, interprets their results, and maintains careful laboratory records with the new data.

    Currently, machine learning and knowledge discovery systems require manual intervention to adjust one or more parameters, inspect hypotheses to identify interesting ones, and plan and execute new experiments. The more autonomous a discovery system becomes, the more it can save time, eliminate human error, follow multiple discovery strategies, and examine orders-of-magnitude more hypotheses in the search for interesting discoveries (Zytkow 1993).

    AI research on experimental planning systems has produced numerous successful techniques that can be used in an automated laboratory. For example, Dan Hennessy has developed an experiment planner for the protein crystallization problem discussed later that uses a combination of Bayesian and case-based reasoning (Hennessy et al. 2000). Because the number of possibly interesting discoveries to be made in any large collection of data is open ended, a program needs strong heuristics to guide the selection of lines of investigation.

    No published system completely combines all phases of the empirical discovery process, although planning systems for knowledge discovery in databases (KDD), such as the framework presented in Engels (1996), perform sequences of tasks for a discovery goal provided by a user. Similarly, multistrategy systems such as that developed by Klosgen (1996) perform multiple discovery operations, but again, the discovery goals are provided by a user, as is evaluation of the discovered patterns. The research presented here describes and evaluates an agenda- and justification-based framework for autonomous discovery, coupled with heuristics for deciding which of many tasks are most likely to lead to interesting discoveries.

    A Framework for Discovery

    It is essential that a discovery program be able to reason about its priorities because there are many lines of investigation that it could pursue at any time and many considerations in its selection of one. Keeping an explicit agenda allows examination of the open tasks, and keeping explicit reasons why each task is interesting allows comparing relative levels of interest. We use an agenda- and justification-based framework, which is similar to the framework of the AM and EURISKO programs (Lenat 1983, 1982): It consists of an agenda of tasks prioritized by their plausibility. As in AM, a task on the agenda can be a call to a hypothesis generator to produce more hypotheses or explore some of the properties of hypotheses (or objects mentioned in them) already formed. Items are the objects or hypotheses (and sets of these) examined by the discovery program, and a task is an operation on zero or more items. For example, one task might be to find patterns (using an induction engine) in a subset of the data that have an interesting common property, such as being counterexamples to a well-supported rule. Although Lenat's programs discovered interesting conjectures in axiomatic domains such as set theory and games, those programs also contained general, domain-independent heuristics of the same sort used in empirical domains.

    To evaluate our framework, we developed the prototype discovery program HAMB (Livingston 2001) that finds interesting, new relationships in collections of empirical data. (2) A key feature of HAMB is its domain-independent heuristics that guide the program's choice of relationships in data that are potentially interesting. HAMB's primary generator of plausible hypotheses is an inductive generalization program that finds patterns in the data; in our case, it is the rule-induction program RL (Provost and Buchanan 1995). RL is an inductive generalization program that looks for general rules in a collection of data, where each rule is a conditional sentence of the form

    IF f1 and f2 and ... and fn THEN class = K (with CF = c)

    Each feature (f) relates an attribute (a variable) of a case to a named value, and a degree of certainty (CF) is attached to each rule as a measure of evidential support in the data; for example:

    IF SEX = male and AGE

    The conditional rule, which is easily understood by anyone who knows the meanings of the variable names, thus says that if a case matches all the antecedent conditions, then it is likely to be a member of the named class (K). Thus, the items in HAMB's ontology are attributes, cases, rule conjuncts, and rules, plus sets of these. The cases and the attributes used to describe them are taken directly from the database.
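    A toy illustration of this rule format (the attribute names, values, and certainty factor below are hypothetical, not taken from RL or HAMB):

```python
# Hypothetical rule in the "IF f1 and ... and fn THEN class = K (with CF = c)" format.
RULE = {
    "conditions": {"COLOR": "red", "SIZE": "large"},  # the feature tests f1 ... fn
    "class": "K1",                                    # the named class K
    "cf": 0.8,                                        # certainty factor: evidential support
}

def apply_rule(rule, case):
    """Return (class, CF) if every antecedent condition matches the case, else None."""
    if all(case.get(attr) == value for attr, value in rule["conditions"].items()):
        return rule["class"], rule["cf"]
    return None

print(apply_rule(RULE, {"COLOR": "red", "SIZE": "large", "WEIGHT": 3}))  # ('K1', 0.8)
print(apply_rule(RULE, {"COLOR": "blue", "SIZE": "large"}))              # None
```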

    On each cycle, heuristics can create tasks that result in new items or hypotheses, or tasks that examine some of the properties of those items or hypotheses. Each task must have accompanying text justifications for performing it, which are called reasons, qualitative descriptions of why a task might be worth performing (for example, sets of exceptions to general rules are likely to be interesting), and each reason must have an assigned strength, which is a relative measure of the reason's merit.

    A task's plausibility is an estimate of the likelihood that performing the task will lead to interesting discoveries, and it is calculated as the product of the sum of the interestingness of the items involved in the task and the sum of the strengths corresponding to the reasons assigned to the tasks, as illustrated in the following equation:

    Plausibility(T) = (Σ R_T) × (Σ Interestingness(I_T))

    where T is a task, Σ R_T is the sum of the strengths of T's reasons, and Σ Interestingness(I_T) is the sum of the estimated interestingness of T's items.
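    A minimal sketch of this calculation (the numbers are illustrative, not HAMB's):

```python
def plausibility(reason_strengths, item_interestingness):
    """Plausibility(T) = (sum of reason strengths) * (sum of item interestingness)."""
    return sum(reason_strengths) * sum(item_interestingness)

# A task supported by two reasons, operating on two moderately interesting items:
print(plausibility([0.6, 0.3], [0.5, 0.4]))        # (0.9) * (0.9) ≈ 0.81

# Adding a third supporting reason increases the plausibility:
print(plausibility([0.6, 0.3, 0.2], [0.5, 0.4]))   # (1.1) * (0.9) ≈ 0.99
```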

    Tasks are performed using heuristics and, when executed, create new items for further exploration and place new tasks on the agenda. When proposing a new task, a heuristic must also provide reasons and corresponding strengths for performing the task. If new reasons are given for performing a task already on the agenda, then they are attached to the existing task, increasing its plausibility. Therefore, the framework provides three additional properties that Lenat (1982) identified as desirable when selecting the next task to perform:

    First, the plausibility of a task monotonically increases with the strength of its reasons. Therefore, with all else being equal, a task with two reasons will have a greater plausibility than a task with only one of those reasons. If a new supporting reason is found, the task's value is increased. (3) The better that new reason, the bigger the increase.

    Second, if a task is reproposed for the same reason(s), its plausibility is not increased.

    Third, the plausibility of a task involving an item C should increase monotonically with the estimated interestingness of C. Two similar tasks dealing with two different concepts, each supported by the same list of reasons and strengths of reasons, should be ordered by the interestingness of those two concepts.

    Thus, the top-level control of the framework is a simple loop: (1) calculate the plausibilities of the tasks; (2) select the task with the greatest plausibility; and (3) perform the task, possibly resulting in the creation or examination of items, the evaluation of relationships between items, and the proposal of new tasks. At the end of each iteration of this loop (called a discovery cycle), a stopping condition is checked to determine if further exploration is warranted. In our prototype program, HAMB, the stopping condition is that either the plausibility of all tasks on the agenda falls below a user-specified threshold (that is, no task is interesting enough), or the number of completed discovery cycles exceeds a user-defined threshold. In cases of repeated consideration of the same task, the system detects the possible deadlock...
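    A toy version of this control loop (a sketch under assumed task structures, not HAMB itself):

```python
def plausibility(task):
    """Plausibility = (sum of reason strengths) * (sum of item interestingness)."""
    return sum(task["strengths"]) * sum(task["interest"])

def run_discovery(agenda, perform, threshold=0.1, max_cycles=50):
    """agenda: list of task dicts; perform(task) returns a list of newly proposed tasks."""
    for _ in range(max_cycles):
        if not agenda:
            break
        best = max(agenda, key=plausibility)     # steps (1)-(2): score tasks, pick the best
        if plausibility(best) < threshold:       # stopping condition: nothing interesting enough
            break
        agenda.remove(best)
        agenda.extend(perform(best))             # step (3): performing it may propose new tasks
    return agenda

# Dummy task generator standing in for HAMB's heuristics: each performed task
# proposes one follow-up task with weaker supporting reasons.
def perform(task):
    if max(task["strengths"]) <= 0.2:
        return []
    return [{"strengths": [s * 0.5 for s in task["strengths"]], "interest": task["interest"]}]

leftover = run_discovery([{"strengths": [0.6, 0.3], "interest": [0.5, 0.4]}], perform)
print(len(leftover))   # 0: the chain of follow-up tasks dies out below the threshold
```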

    NOTE: All illustrations and photos have been removed from this article.

    Automated Scientific Discovery, the holy grail of AI? (from AAAI)


    "[Bruce] Buchanan was trained as a philosopher of science at a time when the profession was dominated by Popper's (1965) view that there is no logic of discovery. Buchanan stated the new research program:

    'The traditional problem of finding an effective method for formulating true hypotheses that best explain phenomena has been transformed into finding heuristic methods that generate plausible explanations. The problem of giving rules for producing true scientific statements has been replaced by the problem of finding efficient heuristic rules for culling the reasonable candidates for an explanation from an appropriate set of possible candidates' [and finding methods for constructing the candidates].'"

    -from Recent Work in Computational Scientific Discovery

    Good Places to Start

    2020 Computing: Exceeding human limits. Scientists are turning to automated processes and technologies in a bid to cope with ever higher volumes of data. But automation offers so much more to the future of science than just data handling. By Stephen H. Muggleton. Nature 440, 409-410 (23 March 2006). "During the twenty-first century, it is clear that computers will continue to play an increasingly central role in supporting the testing, and even formulation, of scientific hypotheses. This traditionally human activity has already become unsustainable in many sciences without the aid of computers. This is not only because of the scale of the data involved but also because scientists are unable to conceptualize the breadth and depth of the relationships between relevant databases without computational support. The potential benefits to science of such computerization are high -- knowledge derived from large-scale scientific data could well pave the way to new technologies, ranging from personalized medicines to methods for dealing with and avoiding climate change. [fn: Towards 2020 Science (Microsoft, 2006)]. ... Meanwhile, machine-learning techniques from computer science (including neural nets and genetic algorithms) are being used to automate the generation of scientific hypotheses from data. Some of the more advanced forms of machine learning enable new hypotheses, in the form of logical rules and principles, to be extracted relative to predefined background knowledge. ... One exciting development that we might expect in the next ten years is the construction of the first microfluidic robot scientist, which would combine active learning and autonomous experimentation with microfluidic technology."

    'Knowledge discovery'. California Computer News (October 20, 2004). "In the recent science-fiction thriller 'Minority Report,' Tom Cruise plays a detective who solves future crimes by being immersed in a 'data cave,' where he rapidly accesses all the relevant information about the identity, location and associates of the potential victim. A team at Purdue University currently is developing a similar 'data-rich' environment for scientific discovery that uses high-performance computing and artificial intelligence software to display information and interact with researchers in the language of their specific disciplines. 'If you were a chemist, you could walk right up to this display and move molecules and atoms around to see how the changes would affect a formulation or a material's properties,' said James Caruthers, a professor of chemical engineering at Purdue. The method represents a fundamental shift from more conventional techniques in computer-aided scientific discovery. 'Most current approaches to computer-aided discovery center on mining data in a process that assumes there is a nugget of gold that needs to be found in a sea of irrelevant information,' Caruthers said. 'This data-mining approach is appropriate for some scientific discovery problems, but scientific understanding often proceeds through a different method, a 'knowledge discovery' approach. 'Instead of mining for a nugget of gold, knowledge discovery is more like sifting through a warehouse filled with small gears, levers, etc., none of which is particularly valuable by itself. After appropriate assembly, however, a Rolex watch emerges from the disparate parts.' ... Discovery informatics depends on a two-part repeating cycle made up of a 'forward model' and an 'inverse process' and two types of artificial intelligence software: hybrid neural networks and genetic algorithms."

    Iridescent Software Illuminates Research Data. By Mike Martin. Sci-Tech Today (January 27, 2004). "Bioinformatics researchers at the University of Texas (UT) Southwestern Medical Center have developed Iridescent, a software program that helps scientists easily identify obscure commonalities in research data and directly relate them to their own work, saving money and speeding the process of discovery. 'This work is about teaching computers to 'read' the literature and make relevant associations so they can be summarized and scored for their potential relevance,' said Dr. Jonathan Wren, a researcher in the department of botany and microbiology at the University of Oklahoma. 'For humans to answer the same questions objectively and comprehensively could entail reading tens of thousands of papers.' ... Iridescent is unveiled in the current issue of the journal Bioinformatics"

    • Shared relationship analysis: ranking set cohesion and commonalities within a literature-derived relationship network. Wren JD, Garner HR. Bioinformatics. 2004 Jan 22;20(2):191-8. [Abstract]

    Toward Automated Discovery in the Biological Sciences. By Bruce G. Buchanan and Gary R. Livingston. AI Magazine 25(1): Spring 2004, 69-84. "The end point of scientific discovery is a concept or hypothesis that is interesting and new (Buchanan 1966). Insofar as there is a distinction at all between discovery and hypothesis formation, discovery is often described as more opportunistic search in a less well-defined space, leading to a psychological element of surprise. The earliest demonstration of self-directed, opportunistic discovery was Doug Lenat's program, AM (Lenat 1982). It was a successful demonstration of AI methods for discovery in a formal domain characterized by axioms (set theory) or rules (games). AM used an agenda-based framework and heuristics to evaluate existing concepts and then create new concepts from the existing concepts. It continued creating and examining concepts until the 'interestingness' of operating on new or existing concepts (determined using some of AM's heuristics) dropped below a threshold. Although some generalization and follow-up research with AM was performed (Lenat 1983), this research was limited to discovery in axiomatic domains (Haase 1990; Shen 1990; Sims 1987). Our long-range goal is to develop an autonomous discovery system for discovery in empirical domains, namely, a program that peruses large collections of data to find hypotheses that are interesting enough to warrant the expenditure of laboratory resources and subsequent publication. Even longer range, we envision a scientific discovery system to be the generator of plausible hypotheses for a completely automated science laboratory in which the hypotheses can be verified experimentally by a robot that plans and executes new experiments, interprets their results, and maintains careful laboratory records with the new data."

    A Machine With a Mind of Its Own - Ross King wanted a research assistant who would work 24/7 without sleep or food. So he built one. By Oliver Morton. Wired Magazine (August 2004, Issue 12.08). "The 'robot scientist' (King has resisted the temptation of a jazzy acronym) may look like a mere labor-saving gizmo, shuttling back and forth ad nauseam, but it's much more than that. Biology is full of tools with which to make discoveries. Here's a tool that can make discoveries on its own. ... It wasn't until he moved to Aberystwyth in the mid-'90s that King found comrades who fully appreciated the potential of AI and machine learning. One of the first people he encountered there was Douglas Kell, a voluble, handlebar-mustached biologist with a clear view of where his field was headed. ... Stephen Muggleton argues that the life sciences are peculiarly well suited to machine learning. 'There's an inherent structure in biological problems that lends itself to computational approaches,' he says. In other words, biology reveals the machinelike substructure of the living world; it's not surprising that machines are showing an aptitude for it."

    • Also see this related article: Mark of time. The Engineer Online (September 18, 2006). "A pioneering study at Manchester University is using a 'robot scientist' to examine blood samples for biological markers that may diagnose Alzheimer's disease. ... The robot scientist combines the automatic operation of a blood analysis technique called GCGC-MS with artificial intelligence to determine which experiment to carry out next. ... Douglas Kell, a professor of bioanalytical science at Manchester, was one of the developers of the robot scientist. 'The original idea was to automate the process of scientific discovery,' said Kell. 'There is a model by which we alternate the world of ideas with the world of experience. We carry out an experiment then revise our hypothesis in a cyclic loop. The robot scientist can combine working out what experiment is best to do next with actually carrying it out.' ... The robot uses Inductive Logic Programming, a machine learning process. The scientists give it the background knowledge about the experiment, called the domain. It then decides which hypothesis to follow using the available data."

    Herbert A. Simon: Scientific Discovery. One of Professor Simon's departmental web pages (2001) at Carnegie Mellon University's Department of Psychology. "Understanding the processes scientists use to discover new laws and to test hypotheses has been an active domain of cognitive research and AI modeling for several decades, and was one of Herb Simon's chief areas of research activity. Scientific discovery is an interesting and important task domain because it involves highly ill-structured problems that call on the whole range of human cognitive resources, and thereby provides deep insights into complex and creative human thinking. ... Thus, research on scientific discovery requires one to address fundamental problems in cognitive psychology (the processes of discovery), in the philosophy of science (the relation between the discovery and validation, or disconfirmation, of hypotheses), and in computer science (languages for discovery, heuristic search in discovery environments)."

    Readings Online

    A robot that likes to play with test tubes. By David Akin. The Globe & Mail (January 17, 2004). "[The Robot Scientist] probably will become a vital tool for researchers, particularly in biological fields, to advance human knowledge. That is because in many scientific areas, such as nanotechnology, molecular genetics and the exploration of space, information is being generated too fast for humans to analyze it effectively. 'Biology is in a great data-gathering phase at the moment, a bit like it was in the 19th century,' said Stephen Oliver, a professor and genomics researcher at the University of Manchester in England and another of the eight researchers. The Human Genome Project, the monster science project that identified and explained the function of the genes in a human being, made great use of computers and sophisticated software programs to automate the scientific discovery process. Indeed, there is now a branch of artificial intelligence research devoted to scientific discovery."

    Robo-scientist goes it alone. BBC News (January 14, 2004). "The world's first 'robot scientist' that can interpret experiments without any human help has been developed by scientists at the University of Wales, Aberystwyth. It generates a set of hypotheses from what it knows about biochemistry, and then designs experiments to test them. ... Although artificial intelligence has made a number of significant contributions to scientific discovery during the last 30 years, its general impact on experimental science has been limited. But this may be about to change with the increased use of automation in scientific research."

    Undergraduate Projects in the Application of Artificial Intelligence to Chemistry. II Self-organizing Maps. By Hugh Cartwright. (2000). The Chemical Educator, Volume 5, Issue 4; 196-206. "The determination of relationships among samples is a task to which Artificial Intelligence is increasingly being applied. In this paper we investigate the Self-Organizing Map (SOM), whose role is to perform just this kind of task; in other words, to cluster data samples so as to reveal the relationships that exist among them."

    • More resources are available from Dr H.M. Cartwright's home page and research group page at the Physical & Theoretical Chemistry Laboratory, University of Oxford.

    Artificial Intelligence and Scientific Creativity. By Simon Colton and Graham Steel, Division of Informatics, University of Edinburgh. "Papers presented at [the 1999 AISB Symposium on AI and Scientific Creativity, which took place in Edinburgh, Scotland] addressed the theoretical aspects of and computational possibilities for machine creativity. They also reported on systems implemented to achieve automated discovery in science. The intention of the symposium was that the papers proposing models of scientific creativity would help researchers concerned with implementing discovery programs, and the papers discussing the successes and techniques employed in working systems would help researchers extract general frameworks for scientific machine discovery. This note is a survey of current research on creativity in science, and in particular the automation of discovery tasks in science."

    Recent Work in Computational Scientific Discovery. By Lindley Darden. In Proceedings of the Nineteenth Annual Conference of the Cognitive Science Society. Michael Shafto and Pat Langley (Eds.). Mahwah, New Jersey: Lawrence Erlbaum, 1997, pp. 161-166. "The study of computational scientific discovery emerged from the view that science is a problem solving activity, that heuristics for problem solving can be applied to the study of scientific discovery in either historical or contemporary cases, and that methods in artificial intelligence provide techniques for building computational systems. Pioneers in this work are Bruce Buchanan (e.g., 1982) and Herbert Simon (e.g., 1977)."

    • Also by Lindley Darden (1998): Anomaly-Driven Theory Redesign: Computational Philosophy of Science Experiments. In T.W. Bynum and J.H. Moor, The Digital Phoenix: How Computers are Changing Philosophy. New York: Blackwell Publishers, pp. 62-78. " I have been asked to discuss how computers have affected my work in philosophy. This paper discusses the use of artificial intelligence (AI) models to investigate both the representation of scientific knowledge and reasoning strategies for scientific change. The focus is on the reasoning strategies used to revise a theory, given an anomaly, which is a failed prediction of the theory."

    The computer revolution in science: Steps towards the realization of computer-supported discovery environments. By H. de Jong and A. Rip. (1997). Artificial Intelligence, 91(2). "The tools that scientists use in their search processes together form so-called discovery environments. The promise of artificial intelligence and other branches of computer science is to radically transform conventional discovery environments by equipping scientists with a range of powerful computer tools including large-scale, shared knowledge bases and discovery programs." -from the Abstract.

    The Computer-Aided Discovery of Scientific Knowledge. By Pat Langley. 1998. Proceedings of the First International Conference on Discovery Science. "In this paper, we review AI research on computational discovery and its recent application to the discovery of new scientific knowledge. ... As evidence for the advantages of such human-computer cooperation, we report seven examples of novel, computer-aided discoveries that have appeared in the scientific literature...."

    • More of Pat Langley's publications can be found in his Computational Scientific Discovery collection which begins with this historical note: "I became fascinated with the nature of scientific discovery as an undergraduate at TCU, and the interest has remained to this day. My dissertation work at CMU focused on Bacon, an AI system that rediscovered numeric laws from the history of physics. Herbert Simon served as my advisor and contributed many ideas to the effort. Gary Bradshaw and I extended the system to handle additional laws, including ones from the history of chemistry. After Jan Zytkow joined our group, we developed new systems (Stahl and Dalton) that dealt with the discovery of qualitative laws and structural models. This CMU work forms the basis of my early publications on scientific discovery...."

    Towards 2020 Science. Produced under the aegis of Microsoft Research Cambridge (2006). "In the summer of 2005, an international expert group was brought together for a workshop to define and produce a new vision and roadmap of the evolution, challenges and potential of computer science and computing in scientific research in the next fifteen years. The resulting document, Towards 2020 Science, sets out the challenges and opportunities arising from the increasing synthesis of computing and the sciences." In addition to the report and the roadmap, be sure to see the related, special issue of Nature.

    Introducing robo-scientist - Could robots take over from graduate students in the lab? By Mark Peplow. Nature (January 15, 2004). "A robot scientist has been unveiled that can formulate theories, carry out experiments and interpret results - all more cheaply than its human counterparts. As far as artificial intelligence goes, the Robot Scientist - designed by Ross King of the University of Wales in Aberystwyth, UK, and his colleagues - isn't as smart as other computers, such as those that compete in international chess competitions. But combining the smarts of a computer with the agility of a robot wasn't trivial. ... Geneticist Stephen Oliver of the University of Manchester, UK, who helped to select the robot's research project, says there is potential for the robot to do more than just drudgery. 'The next big step is to make our robot discover something completely new,' says Oliver, 'perhaps by applying it to drug discovery.'"

    • The journal article: Oliver, S. G. et al. Functional genomic hypothesis generation and experimentation by a robot scientist. Nature, 427, 247 - 252, doi:10.1038/nature02236 (2004).
    • And consider this: A Robot Scientist - As ye sow... A machine can now do science. The Economist (January 15, 2004). "One question is, if their robot does make an important discovery, will it be eligible to win a Nobel prize?"

    Editorial: Scientific Discovery and Simplicity of Method. By Herbert A. Simon, Raul E. Valdes-Perez and Derek H. Sleeman. (1997). Artificial Intelligence, 91(2):177-181. "[C]omplexity of programs or of their outputs is not a measure of their 'intelligence'. Given very complex tasks, complex algorithms may be a necessity, but they are clearly not a virtue. A critical lesson of artificial intelligence, and of computing in general, is that if a task domain has strong structure and if sufficient domain information can be obtained, either a priori or in the course of computation, then rather simple programs may suffice."

    Systematic Methods of Scientific Discovery: Papers from the 1995 Spring Symposium, ed. Raul Valdes-Perez. Technical Report SS-95-03. American Association for Artificial Intelligence, Menlo Park, California. Here are just some of the papers you'll find in this collection:

    • Herbert A. Simon's What is a Systematic Method of Scientific Discovery?
    • Pat Langley's Stages in the Process of Scientific Discovery.
    • Joshua S. Lederberg's Notes on Systematic Hypothesis Generation, and Application to Disciplined Brainstorming.

    Some Recent Human-Computer Discoveries in Science and What Accounts for Them. By Raul E. Valdes-Perez. AI Magazine 16(3): Fall 1995, 37-44. "My collaborators and I have recently reported in domain science journals several human-computer discoveries in biology, chemistry, and physics. One might ask what accounts for these findings, for example, whether they share a common pattern. My conclusion is that each finding involves a new representation of the scientific task: The problem spaces searched were unlike previous task problem spaces. Such new representations need not be wholly new to the history of science; rather, they can draw on useful representational pieces from elsewhere in natural or computer science. This account contrasts with earlier explanations of machine discovery based on the expert system view. My analysis also suggests a broader potential role for (AI) computer scientists in the practice of natural science."

    Neural Networks Meet CombiChem. By Emil Venere. Bio.com (January 22, 2002). "The different types of software work together in a repeating two-phase cycle of discovery. First, hybrid neural networks analyze the formulas of the numerous catalysts, or other materials, created by the parallel technique. The neural networks determine the properties of the materials, based on their chemical structures. In the second phase, genetic algorithms cull the best materials and eliminate the poor performers, just like survival of the fittest. The algorithms also generate 'mutations' of the best materials to create even better versions, and the software determines the chemical structures of those mutations. The resulting formulas are returned to the neural network software, and the cycle starts over again, progressively creating better and better materials, said Venkat Venkatasubramanian, a professor of chemical engineering who has been working with Caruthers to develop the software for more than a decade. [James M.] Caruthers said he observes how formulation chemists come up with new ideas. Then he models their trains of thought in software programs."
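
    The cycle described above is, in essence, a surrogate model coupled to an evolutionary search: a learned predictor scores candidate formulations, and a genetic algorithm keeps the best and mutates them. The Python fragment below is only a minimal sketch of that loop, not the Purdue group's software; predict_property, mutate, discovery_cycle and the toy scoring weights are all invented for illustration.

        import random

        def predict_property(formulation):
            """Stand-in for the neural-network property predictor (a toy weighted score)."""
            weights = (0.2, 0.5, 0.3)
            return sum(f * w for f, w in zip(formulation, weights))

        def mutate(formulation, rate=0.1):
            """Perturb one component fraction, mimicking the GA's 'mutations'."""
            i = random.randrange(len(formulation))
            child = list(formulation)
            child[i] = min(1.0, max(0.0, child[i] + random.uniform(-rate, rate)))
            return tuple(child)

        def discovery_cycle(population, generations=20, keep=4):
            """Repeat the two-phase loop: score all candidates, keep the best, mutate them."""
            for _ in range(generations):
                scored = sorted(population, key=predict_property, reverse=True)   # phase 1: evaluate
                survivors = scored[:keep]                                         # phase 2: cull ...
                offspring = [mutate(random.choice(survivors))
                             for _ in range(len(scored) - keep)]                  # ... and mutate
                population = survivors + offspring
            return max(population, key=predict_property)

        if __name__ == "__main__":
            random.seed(0)
            start = [tuple(random.random() for _ in range(3)) for _ in range(12)]
            print(discovery_cycle(start))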

    Text-Based Discovery in Biomedicine: The Architecture of the DAD-system. By M. Weeber, H. Klein, A. R. Aronson, J. G. Mork, L. Jong-van den Berg, and R. Vos. Presented at The American Medical Informatics Association 2000 Symposium. "Current scientific research takes place in highly specialized contexts with poor communication between disciplines as a likely consequence. Knowledge from one discipline may be useful for the other without researchers knowing it. As scientific publications are a condensation of this knowledge, literature-based discovery tools may help the individual scientist to explore new useful domains. We report on the development of the DAD-system, a concept-based Natural Language Processing system for PubMed citations that provides the biomedical researcher such a tool."

    Related Web Sites

    "ARROWSMITH is interactive software that extends the power of a MEDLINE search. It operates on the output of a conventional search in a way that helps the user see new relationships and form and assess novel scientific hypotheses. It is based on the premise that information developed in one area of research can be of value in another without anyone being aware of the fact." At this site, which is maintained by Don R. Swanson at The University of Chicago, you'll find articles and manuals that show you how it works.

    Imperial College Computational Bioinformatics Laboratory (CBL).

    Related Pages

    More Readings

    Scientific Discovery - Computational Explorations of the Creative Processes. By Pat Langley, Herbert A. Simon, Gary L. Bradshaw and Jan M. Zytkow. The MIT Press (February 1987). "Using the methods and concepts of contemporary information-processing psychology (or cognitive science) the authors develop a series of artificial-intelligence programs that can simulate the human thought processes used to discover scientific laws. The programs - BACON, DALTON, GLAUBER, and STAHL - are all largely data-driven, that is, when presented with series of chemical or physical measurements they search for uniformities and linking elements, generating and checking hypotheses and creating new concepts as they go along."
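
    To give a flavour of what "data-driven" means here, the sketch below imitates one of BACON's simplest heuristics in a heavily simplified form of my own (it is not the authors' program): keep introducing ratio and product terms over the measured quantities until some term comes out constant, and propose that constancy as a law. Fed exact Kepler-style data, it reports that D^3/P^2 is constant.

        def bacon_style_law(data, tol=1e-3, max_depth=2):
            """data: dict mapping a variable name to its list of measurements (equal lengths).
            Introduce ratio/product terms until some term is (nearly) constant, then report it."""
            terms = dict(data)
            for _ in range(max_depth + 1):
                # Is any current term constant (to relative tolerance) across the observations?
                for name, values in terms.items():
                    mean = sum(values) / len(values)
                    if mean != 0 and all(abs(v - mean) <= tol * abs(mean) for v in values):
                        return f"{name} = {mean:.4g}"
                # No law yet: add ratios and products of every pair of existing terms.
                names = list(terms)
                for i in range(len(names)):
                    for j in range(i + 1, len(names)):
                        a, b = names[i], names[j]
                        terms.setdefault(f"({a}/{b})", [u / v for u, v in zip(terms[a], terms[b])])
                        terms.setdefault(f"({a}*{b})", [u * v for u, v in zip(terms[a], terms[b])])
            return None

        # Toy Kepler data: with P chosen so that P^2 = D^3, the search reports (D3/P2) = 1.
        D = [1.0, 5.2, 9.5]                   # orbital distances, arbitrary units
        P = [d ** 1.5 for d in D]             # periods consistent with Kepler's third law
        print(bacon_style_law({"D3": [d ** 3 for d in D], "P2": [p ** 2 for p in P]}))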

    Molecular Treasure Hunt - A software tool elicits previously undiscovered gene or protein pathways by combing through hundreds of thousands of journal articles. By Gary Stix. Scientific American (May 2005; subscription req'd.). "When Andrey Rzhetsky arrived at Columbia University as a research scientist in 1996, the first project he collaborated on involved a literature search to try to understand why white blood cells called lymphocytes do not die in chronic lymphocytic leukemia. The mathematician-biologist found a few hundred articles on apoptosis (programmed cell death) and the cancer.... The experience led him to an idea that would have made his job on that first project much easier: an automated search tool that could supplant the mind-numbing task of finding and reading all the literature. But it also might do much more; it could even let a machine conduct research on its own, discovering the patterns among the data much as a human would do...."

    FYI: As explained in this announcement, on March 1, 2007 AAAI changed its name from the American Association for Artificial Intelligence to the Association for the Advancement of Artificial Intelligence.

    Automated Discovery research by Dr. Zytkow at UNCC

    Publications


    Asterisks mark papers published in languages other than English.
    Titles of those papers are translated into English.

    Ras, Z. & Zytkow, J.M. 2000. Mining for Attribute Definitions
    in a Distributed Two-Layered DB System, Journal of
    Intelligent Information Systems, 14, p.115-130

    Ossowski, A., & Zytkow, J.M. 2000. Geometrical
    Approach to a Coherent Set of Operational Definitions, in
    M.Klopotek, M.Michalewicz & S.Wierzchon eds. Intelligent
    Information Systems, Proceedings of the IIS'2000 Symposium,
    Bystra, Poland, June 12-16, 2000, Physica-Verlag, p.109-117.

    Zytkow, J.M. 2000. Granularity refined by knowledge: contingency
    tables and rough sets as tools of discovery, in B.Dasarathy ed.
    Data Mining and Knowledge Discovery: Theory, Tools, and
    Technology II, SPIE, p.82-91.

    Zytkow, J.M. 2000. Automated Discovery: A Fusion of
    Multidisciplinary Principles, in H.Hamilton, ed.
    Advances in Artificial Intelligence, Springer,
    p. 443-448.

    Suzuki, E. & Zytkow, J.M. 2000. Unified Algorithm for
    Undirected Discovery of Exception Rules, in D.Zighed,
    J.Komorowski & J.Zytkow, eds. Principles and Practice
    of Data Mining and Knowledge Discovery, Springer, p.169-180

    Zytkow, J.M., Tsumoto, S. & Takabayashi, K. 2000.
    Medical (Thrombosis) Data Description, in A.Siebes & P.Berka
    eds. Discovery Challenge, PKDD-2000.

    Zytkow, J.M. & Ras, Z. 2000. Foundations and Discovery of
    Operational Definitions, in Z.Ras & S.Ohsuga, eds.
    Foundations of Intelligent Systems, Springer, p.582-590.

    Zytkow, J.M. 1999. The Melting Pot of Automated Discovery:
    Principles for a New Science, in S.Arikawa & K.Furukawa eds.
    Discovery Science; Second International Conference; Tokyo,
    Dec 1999, Springer, p.1-12.

    Zytkow, J.M. 1999. Robot-Discoverer: a Role Model for Any
    Intelligent Agent, in J.Liu & N.Zhong eds. Intelligent
    Agent Technology; Proceedings of the 1st Asia-Pacific
    Conference, World Scientific, p.9-14.

    Zytkow, J.M. 1999. Scientific Modeling: A Multilevel Feedback
    Process, in L.Magnani, N.Nersessian, & P.Thagard eds.
    Model-Based Reasoning in Scientific Discovery, Kluwer,
    p.311-325.

    Zytkow, J.M. 1999. Knowledge-Driven Discovery of Operational
    Definitions, in N.Zhong, A.Skowron & S.Ohsuga eds. New
    Directions in Rough Sets, Data Mining, and Granular-Soft
    Computing, Springer, p.395-404

    El-Mouadib, F.A., Koronacki, J. & Zytkow, J.M. 1999.
    Taxonomy formation by Approximate equivalence relations,
    Revisited, in: J.Zytkow & J.Rauch eds. Principles of Data
    Mining and Knowledge Discovery, Springer, p.71-79.

    Przybyszewski, A.W. & Zytkow, J.M. 1999. Extracting knowledge
    from data recorded in primates visual system, in M.Klopotek &
    M.Michalewicz eds. Intelligent Information Systems,
    Proceedings of the Workshop held in Ustron, Poland, 14-18 June
    1999. Instytut Podstaw Informatyki PAN. p.92-96.

    Zytkow, J.M. 1999. Discovery of Concept Expansions, in
    M.Klopotek & M.Michalewicz eds. Intelligent Information
    Systems, Proceedings of the Workshop held in Ustron, Poland,
    14-18 June 1999. Instytut Podstaw Informatyki PAN. p.117-126

    Cwik J., Koronacki J. & Zytkow, J.M. 1999. On discovering
    functional relationships when observations are noisy: 2D case,
    in Ras Z. & Skowron A. Foundations of Intelligent
    Systems, Springer-Verlag, p.547-555.

    Zytkow, J.M. 1999. Dualism: consciousness evades scientific and
    computer reductionism, in: E.Lucius & S.Ural eds. Artificial
    Intelligence, Language and Thought, The ISIS Press, Istanbul,
    p.95-110.

    Zytkow, J. 1999. Model construction: elements of a computational
    mechanism, in Proceedings of the AISB'99 Symposium on
    Scientific Creativity, Edinburgh, April 1999, p.65-71.

    Ras, Z.W. & Zytkow, J.M. 1999. Discovery of Equations and the
    Shared Operational Semantics in Distributed Autonomous
    Databases, in N.Zhong and L.Zhou eds. Proceedings of Third
    Pacific-Asia Conference on Knowledge Discovery in Databases,
    Beijing, Springer-Verlag, p. 453-463.

    Zytkow, J.M. & Ras Z. 1999. Mining Distributed Databases for
    Attribute Definitions, in B.Dasarathy ed. Data Mining
    and Knowledge Discovery: Theory, Tools, and Technology,
    SPIE, p.171-178.

    Consciousness Defies Scientific and Computational Reduction,
    Foundations of Science, 4, 1998.

    Proving that consciousness defies reduction; a response to
    Steve Horst, Foundations of Science, 4, 1998.

    Modeling the Business Process by Mining Multiple Databases,
    Principles of Data Mining and Knowledge Discovery,
    Springer-Verlag, 1998. p.432-440.

    Robot-discoverer: a new platform to expand methodology
    (abstract), International Congress on Discovery and
    Creativity, Univ. of Ghent, 1998. p. 174-175.

    Measuring the unknown: knowledge-driven discovery of concept
    expansions, Machine Discovery; ECAI-98 Workshop,
    1998. p.30-36.

    Business Process Understanding: Mining Many Datasets (with
    A. Sanjeev), Rough Sets and Current Trends in Computing,
    Springer-Verlag, 1998. p. 239-246.

    Robot-discoverer: artificial intelligent agent who searches for
    knowledge, Proceedings of High Technology Symposium
    Yamaguchi '97, 1997, p.11-18.

    Discovering Empirical Equations from Robot-Collected Data, (with
    K-M. Huang), in Ras Z. & Skowron A., Foundations of
    Intelligent Systems, Springer-Verlag, 1997. p. 287-297.

    Techniques and Applications of KDD, (abstract, with
    W. Kloesgen), in: Komorowski J. & Zytkow J. eds.
    Principles of Data Mining and Knowledge Discovery,
    Springer-Verlag, 1997, p.394.

    Limited view of knowledge in database discovery, in: Proceedings
    of the Fifth International Workshop on Rough Sets and Soft
    Computing, 1997. 6 pages.

    Automated Search for Knowledge on Machinery Diagnostics,
    (with W. Moczulski) in: Proceedings of Sixth Symposium on
    Intelligent Information Systems, 1997.

    Creating a Discoverer: Autonomous Knowledge Seeking Agent, in:
    J. Zytkow ed. Machine Discovery, Kluwer, 1997, p.253-283.

    Knowledge = Concepts: a harmful equation, in Proceedings of
    KDD-97, AAAI Press, 1997. p. 104-109.

    Theories That Combine Many Equivalence and Subset Relations
    (with Zembowicz R.) in: T.Y. Lin and N. Cercone eds. Rough
    Sets and Data Mining : Analysis of Imprecise Data, Kluwer Publ.
    1997. p.323-336.

    Contingency Tables as the Foundation for Concepts, Concept
    Hierarchies and Rules, (with R. Zembowicz),
    Fundamenta Informatica, 1997.

    Robotic Discovery: the dilemmas of empirical equations (with
    K-M. Huang), Proceedings of the Fourth International
    Workshop on Rough Sets, Fuzzy Sets, and Machine Discovery, The
    Univ. of Tokyo, Nov. 1996, p. 217-224.

    Knowledge Discovery in Databases: 49er's Approach,
    Proceedings of the Fourth International Workshop on Rough Sets,
    Fuzzy Sets, and Machine Discovery, The Univ. of Tokyo,
    Nov. 1996, p. 445-446.

    Systematic Generation of Constituent Models of Particle Families
    (with R.E.Valdes-Perez), Physical Review E, Vol.54,
    p.2102-2110, 1996.

    From Statistical Regularities to Concepts, Hierarchies, Equation
    Clusters and Rules, in: Dowe D., Korb K.B., Oliver J.J. eds.
    Information, Statistics and Induction in Science,
    World Scientific, 1996. p.269-280.

    A Study of Enrollment and Retention in a University Database
    (with Arun Sanjeev), Journal of the Mid-America Association
    of Educational Opportunity Program Personnel, Vol.8, p. 24-41,
    1996.

    Empirical discovery: linking data and physical processes,
    in: Proceedings of the 5th International
    Symposium on Intelligent Information
    Systems, Deblin, Poland, June 1996.

    Automated pattern mining with a scale dimension (with
    R.Zembowicz), Proceedings of 2nd International Conference
    on Knowledge Discovery and Data Mining, Aug.1996. p.158-163.

    Incremental Discovery of Hidden Structure: Application to
    Elementary Particles, (with P. Fischer),
    Proceedings of the Fourteenth National Conference
    on Artificial Intelligence, The AAAI Press, p.750-756. 1996.

    Search for Patterns at Each Scale in Massive Data, (with
    R. Zembowicz), in Ras Z. & Michalewicz M. ed. Foundations
    of Intelligent Systems, Springer-Verlag, p. 139-148. 1996.

    Approximate knowledge of many agents and discovery systems,
    in Bulletin of the Section of Logic, Polish
    Academy of Sciences, 1996, p.185-9.

    Automated Discovery of Empirical Laws, Fundamenta
    Informatica, 27, p. 299-318, 1996.

    From contingency tables to various forms of knowledge in
    databases (with R.Zembowicz), in: Fayyad U., Piatetsky-Shapiro G.,
    Smyth P. & Uthurusamy eds. Advances in Knowledge Discovery
    and Data Mining, AAAI Press, 1996. pp.329-349.

    Machine Discovery Terminology (with W. Kloesgen), in:
    Fayyad, U., Piatetsky-Shapiro G., Smyth P. & Uthurusamy R. eds.
    Advances in Knowledge Discovery and Data mining. AAAI Press,
    1996, pp. 573-592.

    * Scientific and Computer Reductionism and the Contents of
    Consciousness, Filozofia Nauki (Philosophy of
    Science) (in Polish), p. 147-160. 1995.

    * Automation of Discovery: a New Methodology and Philosophy of
    Science (in Polish), in Pietruska-Madej E. & Strawinski W. eds.
    On the Problems of Contemporary Theory of Knowledge, University
    of Warsaw, p.43-71, 1995.

    Discovering Patterns at Different Scale in Massive Data,
    (with R. Zembowicz), in Proceedings of
    the 4th International Symposium on Intelligent Information
    Systems, Augustow, Poland, June 1995.

    Creating a Discoverer: Autonomous Knowledge Seeking Agent,
    Foundations of Science, 1, 253-283. 1995.

    Discovering Enrollment Knowledge in University Databases
    (with A. Sanjeev), Proc. of 1st Intern. Conference on Knowledge
    Discovery and Data Mining, Montreal, Aug. 1995, 6 pages.

    Tolerance Based Rough Sets (with L. Polkowski & A. Skowron), in
    Lin T.Y. & A.M.Wilddberger eds. Rough Sets and Soft
    Computing, 1995. p.55-59.

    Mining Student Databases for Enrollment Patterns (with A.
    Sanjeev), in: Y.Kodratoff, G.Nakhaeizadeh, Ch.Taylor eds.
    MLnet Workshop on Statistics, Machine Learning and Knowledge
    Discovery in Databases, 1995. pp. 204-209.

    Scientific modeling: round trips to many destinations,
    AAAI-95 Spring Symposium on Systematic Methods
    of Scientific Discovery, 1995, p.175-180.

    Geobotanical database exploration (with I.Moraczewski and
    R.Zembowicz), AAAI-95 Spring Symposium on Systematic
    Methods of Scientific Discovery, 1995, p. 76-80.

    Theories that combine many equivalence and subset relations
    (with R.Zembowicz), in: T.Y.Lin ed. CSC'95 Workshop on
    Rough Sets and Database Mining, 1995, 8 pages

    Law-Like Knowledge as the Foundation for Concepts, Concept
    Hierarchies and Rules (with R. Zembowicz) in Proceedings of
    the 3rd International Symposium on Intelligent Information
    Systems, Wigry, Poland, June 1994. Published in 1995.

    Abstract Intelligent Agents: Basic Concepts and Problems (with
    Gadomski, A.), in Proceedings of the Round Table on Abstract
    Intelligent Agents, ENEA, Rome, Italy. 9 pages, Published in
    1995.

    Searching for the unknown: machine discoverer assesses its
    state of knowledge, in Gadomski A. ed. Proceedings of the
    Round Table on Abstract Intelligent Agents, Rome, Feb.1994,
    Published in 1995.

    Rough Foundations for Rough Sets (with L. Polkowski & A. Skowron)
    in Proceedings of the 3rd International Workshop on Rough Sets,
    1994, p.142-149.

    Concept Hierarchies: a Restricted Form of Knowledge Derived
    from Regularities (with M.Troxell, K.Swarm, and R.Zembowicz),
    in: Zemankova M. and Ras Z. eds. Methodologies for Intelligent
    Systems, Springer-Verlag, 1994, p.437-447.

    Machine discoverers: transforming the spaces they explore,
    Behavioral and Brain Sciences, Vol.17, 1994, p.557-558.

    Discovery of Hidden Objects and Structures, in Omyla M. ed.
    Nauka i Jezyk (Science and Language), Polish Semiotic Society,
    Warsaw, 1994, p. 475-495.

    From Law-Like Knowledge to Concept Hierarchies in Data
    (with M.Troxell, K.Swarm, and R.Zembowicz), in: Fayyad U.
    and Uthurusamy, R. eds. Proceedings of AAAI-94 Workshop on
    Knowledge Discovery in Databases, 1994, p.193-204.

    Report: Foundations for testing the SWOE models. WSU/FIMTSI-SWOE
    Research Group, July 1994. 69 pages.

    Control of Automated Empirical Discovery by Diagrammatic
    Representation of Theory (with Zhu, J.), in Narayanan H. ed.
    Reasoning with Diagrammatic Representations, AAAI Press,
    1994. (reprinted from Working Notes of the AAAI
    Spring Symposium on Reasoning with Diagrammatic Representations).

    Experimentation Guided by a Knowledge Graph (with Zhu, J.)
    in: Learning Action Models, Shen W. ed. AAAI Press, 1994.
    (reprinted from Proceedings of the AAAI-93 Workshop on Learning
    Action Models).

    Automated discovery of empirical laws in a science laboratory,
    in: Rough Sets, Fuzzy Sets and Knowledge Discovery, Ziarko
    W. ed. Workshops in Computing, Springer-Verlag, 1994,
    p.119-129.

    Putting Together a Machine Discoverer: Basic Building Blocks,
    in MLnet NEWS, Vol.2 No.3, 1994, p.19--22.

    * Machine Discovery: Current State and Perspectives,
    Filozofia Nauki, 1, No.4, 1993, p.37-54.

    Scientific Discovery (with Langley, P., Simon, H., and
    Bradshaw, G.), in Readings in Philosophy and Cognitive Science,
    Goldman A. ed. MIT Press, 1993, p.185--207 (reprinted part of
    the book: Scientific Discovery, 1987).

    Feedback between knowledge and method in automated discovery,
    in Proceedings of 2nd Intelligent Information Systems
    Workshop, IPI PAN, Warsaw, 1993, p.282--298.

    Discovery of Theorems in Plane Geometry (with V. Shanbhogue,
    R. Bagai, S.-C. Chou) Proceedings of the 10th Brazilian
    Symposium on Artificial Intelligence, Porto Alegre, Brazil,
    October 1993, p.155--167.

    Machine Discoverers: Autonomous Intelligent Agents who Explore
    Their Environments, in Proceedings of the Abstract Intelligent
    Agents Round Table, Rome, ENEA, 1993, p.27--32.

    Testing the existence of Functional Relationships in Data (with
    Zembowicz, R.) in Piatetsky-Shapiro G. ed. Proceedings of the
    AAAI-93 Workshop on Knowledge Discovery in Databases, 1993,
    p.102--111.

    Experimentation guided by a knowledge graph (with Zhu, J.);
    in Shen W. ed. Proceedings of the AAAI-93 Workshop on Learning
    Action Models, 1993, p.23--27.

    Scientific Model-Building as Search in Matrix Spaces (with
    Valdes-Perez, R. and Simon, H.) in Proceedings of the Eleventh
    National Conference on Artificial Intelligence, The AAAI Press,
    1993, p.472--478.

    Automatic theorem generation in plane geometry (with Bagai, R.,
    Shanbhogue, V., Chou, S.C.), in Proceedings of 5th Intern. Conf. on
    Computing and Information, Sudbury, Ont. Canada, 1993, p.354--358.

    Recognition of functional dependencies in data (with Zembowicz,
    R.), in Komorowski J. and Ras Z. eds. Methodologies for
    Intelligent Systems, Springer-Verlag, 1993, p.632--641.

    Automatic theorem generation in plane geometry (with Bagai, R.,
    Shanbhogue, V., Chou, S.C.), in Komorowski J. and Ras Z. eds.
    Methodologies for Intelligent Systems, Springer-Verlag,
    1993, p.415--424.

    Cognitive Autonomy in Machine Discovery, Machine
    Learning, Vol.12, 1993, p.7--16.

    Database exploration in search of regularities (with Zembowicz,
    R.) Journal of Intelligent Information Systems,
    Vol.2, 1993, p.39--81.

    Incremental Generation and Exploration of Hidden Structure (with
    Fischer, P.), in Proceedings of the ML-92 Workshop on Machine
    Discovery, July 1992, Aberdeen, U.K., p.103--110.

    Discovery of Regularities in Databases (with Zembowicz, R.), in
    Proceedings of the ML-92 Workshop on Machine Discovery.
    July 1992, Aberdeen, U.K. p.18--27.

    A Graph Representation of an Empirical Theory: Guiding a Machine
    Discovery Process (with Zembowicz, R. and Zhu, J.), in Kishore S.
    ed. Proceedings of AAAI-92 Workshop on Communicating Scientific
    and Technical Knowledge, July 1992, p.81--88.

    Discovery of Equations: Experimental Evaluation of Convergence (with
    Zembowicz R.), Proceedings of the Tenth National Conference on
    Artificial Intelligence, The AAAI Press, 1992, p.70--75.

    Operational Definition Refinement: a Discovery Process (with Zhu,
    J., and Zembowicz R.), Proceedings of the Tenth National Conference
    on Artificial Intelligence, The AAAI Press, 1992, p.76--81.

    A Framework for Discovering Discrete Event Models (with Radiya, A.),
    in Machine Learning: Proceedings of the Ninth International
    Conference, July 1992, Aberdeen, United Kingdom. Morgan Kaufmann Publ.
    p.373--378.

    The First Phase of Real-World Discovery: Determining Repeatability
    and Error of Experiments (with Zhu, J. and Zembowicz, R.), in
    Machine Learning: Proceedings of the Ninth International Conference,
    July 1992, Aberdeen, United Kingdom. Morgan Kaufmann Publ. p.480--485.

    Control of Automated Empirical Discovery by Diagrammatic
    Representation of Theory (with Zhu, J.), in Narayanan H. ed.
    Working Notes of the AAAI Spring Symposium on Reasoning with
    Diagrammatic Representations, Palo Alto, CA, March 1992. p.176--179.

    Automated Discovery of Empirical Equations from Data (with
    Zembowicz, R), in Ras Z. and Zemankova M. eds. Methodologies for
    Intelligent Systems, Springer-Verlag, 1991, p.429--440.

    Constructing Models of Hidden Structure (with Fischer, P), in Ras Z.
    and Zemankova M. eds. Methodologies for Intelligent Systems,
    Springer-Verlag, 1991, p.441--449.

    Human Discovery of Laws and Concepts; An Experiment
    (with Zytkow, A), in Proceedings of the 13th Cognitive Science
    Conference, Lawrence Erlbaum Associates, 1991, p.617--622.

    Automated Empirical Discovery in a Numerical Space (with Zhu, J),
    in the Proceedings of the Third Annual Chinese Machine Learning
    Workshop, July 15--19, 1991, Harbin Institute of Technology, p.1--11.

    Cashing in on the Regularities Discovered in a Database (with
    Jafar, M), in Piatetsky-Shapiro, G. ed. Proceedings of the AAAI-91
    Workshop on Knowledge Discovery in Databases, Anaheim, July 1991,
    p.133--147.

    The KDD Land of Plenty, in Piatetsky-Shapiro, G. ed. Proceedings
    of the AAAI-91 Workshop on Knowledge Discovery in Databases,
    Anaheim, July 1991, p.iii--vi.

    Integration of Knowledge and Method in Real-World Discovery,
    SIGART, 1991.

    Interactive Mining of Regularities in Databases (with J. Baker),
    in Piatetsky-Shapiro, G. and Frawley, W. eds. Knowledge Discovery
    in Databases, The AAAI Press, Menlo Park, CA. 1991.

    An Architecture for Real-World Discovery, in Proceedings of
    the AAAI Spring Symposium on Integrated Intelligent Architectures,
    Palo Alto, CA, March 1991, p.170--174.

    Application of Empirical Discovery in Knowledge Acquisition
    (with Zhu, J), in Kodratoff, Y. ed. Proceedings of the European
    Workshop Session on Learning, Porto, Portugal. Berlin:
    Springer-Verlag, 1991, p.101--117.

    Data-driven Approaches to Empirical Discovery (with Pat Langley),
    in Carbonell, J. ed. Machine Learning: Paradigms and Methods,
    Boston: MIT Press, 1990.

    Database Analyzer; User Instruction, (with Baker, J., Jafar, M.,
    and Tjong M.), Technical Report, Computer Science Department,
    Wichita State University, 1990.

    Minds beyond brains and algorithms, Behavioral and Brain Sciences,
    December, 1990, p.691--692.

    Determining Repeatability and Error in Experimental Results by a
    Discovery System (with Zhu, J., and Hussam A.), in: Ras Z.,
    Zemankova M., and Emrich M.L., eds. Methodologies for
    Intelligent Systems 5, Elsevier Science Publishing Co., Inc.,
    New York, NY, 1990, p.362--370.

    Discovering Quarks and Hidden Structure (with Fischer, P.), in:
    Ras Z., Zemankova M., and Emrich M.L., eds. Methodologies for
    Intelligent Systems 5, Elsevier Science Publishing Co., Inc.,
    New York, NY, 1990, p.362--370.

    Analytical Chemistry: the Science of Many Models (with A. Lewenstam)
    Fresenius Journal for Analytical Chemistry, Vol. 338, 1990,
    p.225--233.

    Automated Discovery in a Chemistry Laboratory (with Zhu, J., and
    Hussam A.), In: Proceedings of the Eighth National Conference
    on Artificial Intelligence, The AAAI Press, 1990, p.889--894.

    Real-World Knowledge Acquisition by Discovery, in A.Rappaport ed.
    Proceedings of the AAAI-90 Workshop on Knowledge Acquisition,
    Boston, July 1990.

    Deriving Laws by Analysis of Processes and Equations, in:
    Langley, P., and Shrager J. eds. Computational Models of
    Scientific Discovery and Theory Formation, Morgan Kaufmann, San
    Mateo, CA, 1990, p.129--156.

    * On scientific modeling (with A.Lewenstam), Studia Filozoficzne,
    Dec. 1989, p.173--179.

    * Artificial Intelligence, Common Sense, and Philosophy (with
    M.Bielecki), Studia Filozoficzne, Dec. 1989, p.11--24.

    Autonomous control that reasons about its own actions, in: Kanade T.,
    Groen F., and Hertzberger L. eds. Intelligent Autonomous Systems 2,
    Amsterdam, The Netherlands, Dec. 1989, p.864--874.

    AI insider's view on real-time systems, Proc. Fifth Annual
    Conference on Artificial Intelligence and Ada, Nov. 1989.

    Fusion of vision and touch for spatio-temporal reasoning in learning
    manipulation tasks (with Pachowicz P.) in: Proceedings of SPIE's
    Advances in Intelligent Robotic Systems, November 1989, p.404--415.

    Multisearch Systems with Hierarchical Control: a Tool for Solving
    Complex AI Problems (with Jankowski A.), in: Bourbakis N. ed.
    Proc. of IEEE Workshop on Tools for Artificial Intelligence,
    1989, p.132--137.

    A Multisearch Approach to Sequence Prediction, (with
    Stefanski, P.), in: Ras Z. ed. Methodologies for Intelligent
    Systems 4, Elsevier, New York, NY 1989, p.359--366.

    Hierarchical Control and Heuristics in Multisearch Systems,
    (with Jankowski A.), in: Ras Z. ed. Methodologies for
    Intelligent Systems 4, Elsevier, New York, NY 1989, p.86--93.

    Testing ECHO on historical data, Behavioral and Brain Sciences,
    September 1989, p.489--490.

    Overcoming FAHRENHEIT's experimentation habit: discovery system
    tackles a database assignment, in Piatetsky-Shapiro, G. and
    Frawley, W. eds. IJCAI Workshop on Knowledge Discovery in
    Databases, August 1989, p.398--406.

    Reasoning about own plans, goals and actions for an aircraft
    (with Swangwanna, S), in Shalin, V. and Boy, G. eds.
    Proceedings of IJCAI Workshop on Integrated Human-Machine
    Intelligence in Aerospace Systems, August 1989, p.31--40.

    Data-driven Approaches to Empirical Discovery (with Pat Langley),
    Artificial Intelligence, Vol.40, 1989, p.283--312.

    Model Based Science of Ion-Selective Electrodes --- Methodological
    Remarks (with Lewenstam, A), in: Pungor, E. ed. Ion-Selective
    electrodes, 5; Pergamon Press, Oxford 1989, p.297--304.

    Real-time Decision Making for Autonomous Flight Control (with
    Sulyuth Swangwanna), Proceedings of SAE General Aviation Meeting,
    April, 1989. 7p.

    Improving the Tactical Decisions by an Automated Pilot; Search
    Mechanism Guided by Experience and Prediction, in: Proceedings
    of Techfest XV, Wichita, KS, Nov. 10--12, 1988, 12p.

    Data-driven Approaches to Empirical Discovery (with Pat Langley),
    Technical Report 88-24, University of California, Irvine,
    1988, 28p.

    A Methodology for Multisearch Systems (with Andrzej Jankowski),
    in: Ras Z. and Saitta L. eds. Methodologies for Intelligent
    Systems 3, North-Holland, New York, NY, 1988, p.343--352.

    Utilizing Experience for Improving the Tactical Manager (with
    Michael Erickson), in: John Laird ed. Proceedings of the Fifth
    International Conference on Machine Learning, Ann Arbor, Michigan,
    June 1988, Morgan Kaufmann Publ., p.444--450.

    Normative Systems of Discovery and Logic of Search (with H.A.Simon),
    Synthese, Vol.74, January 1988, p.65--90.

    * Computer System of Discovery STAHL (with P.Langley and H.A.Simon),
    Zagadnienia Naukoznawstwa, Vol.23, 3--4, 1987, p.518--536.

    Tactical Manager in a Simulated Environment (with Michael Erickson),
    in: Ras Z. and Zemankova M. eds. Methodologies for Intelligent
    Systems, Elsevier Science Publ., 1987, p.139--147.

    Combining many searches in the FAHRENHEIT discovery system,
    Proceedings of Fourth International Workshop on Machine Learning,
    June 22--25, 1987, Morgan Kaufmann Publ., Los Altos, California,
    p.281--287.

    Is Analytical Chemistry an Autonomous Field of Science? (with
    A.Lewenstam), Fresenius Journal for Analytical Chemistry, Vol.326,
    1987, p.308--311.

    * Operationalism, in: Philosophy and Science, Encyclopedia,
    Ossolineum, 1987, p.451--455.

    Experimenting and Theorizing in Theory Formation (with Bruce Koehn),
    in: Ras Z. and Zemankova M. eds. Proceedings of the International
    Symposium on Methodologies for Intelligent Systems, Knoxville, TN,
    Oct. 1986, ACM SIGART Press, p.296--307.

    Numerical and Symbolic Computing in the Applications of Law-like
    Knowledge (with A.Zytkow), in: Coupling Symbolic and Numerical
    Computing in Expert Systems, J.S.Kowalik ed. North-Holland,
    Amsterdam, The Netherlands, 1986, p.147--159.

    A Theory of Historical Discovery: The Construction of Componential
    Models (with H.A.Simon), Machine Learning, Vol.1,
    March 1986, p.107--137.

    What revisions does bootstrap testing need, Philosophy of Science,
    Vol. 53, March 1986, p.101--109.

    The search for regularity: four aspects of scientific discovery
    (with P.Langley, H.A.Simon, G.Bradshaw), in: R.Michalski,
    J.Carbonell, and T.Mitchell eds. Machine Learning, Vol.2,
    Palo Alto, California: Morgan Kaufmann Publ., 1986, p.425--469.

    A Survey of Methods for Object Detection from Images (with X.Li),
    Technical Report WSU-CS-85-2, CS Wichita State University,
    1985.

    Discovering Qualitative Empirical Laws (with P.Langley, H.A.Simon,
    D.H.Fisher), Technical Report 85-18, ICS University of California at
    Irvine, 1985.

    Numbers and Symbols in the Laws of Science, Proceedings of the
    Coupling Symbolic and Numerical Computing in Expert Systems, Bellevue,
    Washington, August 27--29, 1985.

    INGA --- Intelligent Network Generator (with Z.Czajkiewicz),
    Proceedings of the IASTED International Symposium: Robotics
    and Automation, Lugano, June 24--26, 1985, Acta Press, p.244--247.

    A model of early chemical reasoning (with P.Langley, H.A.Simon)
    Proceedings of the Sixth Annual Conference of the Cognitive
    Science Society, Boulder, Colorado, June 1984, p.378--381

    * Relations between genetically connected theories, in:
    Zakonomiernosti razwitija estestvoznanija,
    Kupcow W., Majewski Z. eds. Moscow,
    1984, IN RUSSIAN.

    Partial definitions in science compared to meaning families in natural
    language, in: Sign, System, and Function, Sebeok T. et al. eds.
    The Hague-Paris, Mouton, 1984, p.479--492.

    Three facets of scientific discovery (with P.Langley, H.A.Simon,
    G.Bradshaw), Proceedings of the IJCAI-1983.

    Mechanisms for qualitative and quantitative discovery (with P.Langley,
    H.A.Simon, G.Bradshaw), in: Proceedings of the International
    Machine Learning Workshop, Michalski R.S. ed. Univ.of Illinois at
    Urbana-Champaign 1983, p.121--132.

    * Was the Oxygen Theory of Lavoisier better than the Theory of Phlogiston?
    A contribution to the analysis of the scientific revolution (with
    A.Lewenstam), Studia Filozoficzne No 9--10, 1982, p.39--65.

    Difficulties with the reduction of classical to relativistic mechanics
    (with M.Czarnocka), in: Polish Essays in the Philosophy of
    the Natural Sciences, Krajewski W. ed. Boston Studies in the
    Philosophy of Science, Vol.68, Reidel 1982, p.319--332.

    An interpretation of a concept in science by a set of operational
    procedures, in: Polish Essays in the Philosophy of the Natural
    Sciences, Krajewski W. ed. Boston Studies in the Philosophy of
    Science, Vol.68, Reidel 1982, p.169--185.

    * Polish political system decay under martial law,
    Underground Bulletin, 1982. (pseudonym Jacek Kaleta).

    * Deductive theory in science and its relation to reality,
    Czlowiek i Swiatopoglad No 9, 1980, p.85--105.

    * Obituary Notice: Tadeusz Nadel-Turonski (with W.Krajewski and
    T. Nowaczyk), Ruch Filozoficzny, Vol.38, 1980, p.203-6.

    * A coherent set of operational procedures as the interpretation of an
    empirical term, in two parts: Studia Filozoficzne
    No 6, 1979, p.95--112,
    and Studia Filozoficzne No 7, 1979, p.25--38.

    * Determinism, indeterminism, free will, in: Technologia nauki,
    Matuszewski R. ed. Plock 1979, p.43--50.

    * On the reduction of the classical mechanics to the relativistic one
    (with M.Czarnocka), Studia Filozoficzne No 8--9, 1978, p.185--196.

    * On intersubjective communicability and verification of knowledge,
    in: Technologia nauki --- specyfika wiedzy naukowej, Matuszewski R.
    ed. Plock 1978, p.74--78.

    * On the translatability of languages of theories divided by a scientific
    revolution (with A.Lewenstam), in: Relacje miedzy teoriami a rozwoj
    nauki, Krajewski W. et al. eds. Wroclaw--Warszawa 1978, p.81--100.

    * Models of atom --- remarks on the growth of knowledge (with
    A.Lewenstam), in: Relacje miedzy teoriami a rozwoj nauki, Krajewski W. et al.
    eds. Wroclaw--Warszawa 1978, p.27--45.

    * On the concept of relative truth in empirical sciences, Studia
    Filozoficzne No 6, 1977, p.33--37.

    * On cumulative and revolutionary schemes of scientific growth,
    Czlowiek i Swiatopoglad No 1, 1977, p.91--106.

    * Entanglement of terms in a theory. The question of the vicious circle,
    in: Technologia nauki --- redukcjonizm, Matuszewski R. ed. Plock
    1976, p.72--76.

    Intertheory relations on the formal and semantical level, in:
    Formal Methods in the Methodology of Empirical Sciences, Przelecki M. et al.
    eds. Wroclaw--Warszawa 1976, p.450--457.

    * Remarks on microreduction, Czlowiek i Swiatopoglad No 12,
    1974, p.74--88.

    * The structure of theory in physics; on reduction and correspondence
    relations, in: Zasada korespondencji w fizyce a rozwoj nauki,
    Krajewski W. et al. eds. Warszawa 1974, p.233--279.

    * The concept of model in formal and in empirical sciences, Studia
    Filozoficzne No 7--8, 1972, p.87--96.

    * On the visuality of knowledge in science, Czlowiek i Swiatopoglad
    No 10, 1971, p.87--100.

    BOOK: Scientific Discovery; Computational Explorations of the
    Creative Processes (with P.Langley, H.A.Simon, and
    G.L.Bradshaw), 1987, MIT Press, 357 pages.

    CO-EDITOR (with Krajewski W. and Pietruska-Madej E.) OF A MONOGRAPH:
    Intertheory Relations and the Growth of Science (in Polish),
    Wroclaw--Warszawa 1978.

    CO-EDITOR (with Diaz-Herrera J.) of Conference Proceedings:
    Proceedings of the Fourth Artificial Intelligence and Ada Conference,
    George Mason University, Fairfax, VA. Nov.1989.

    EDITOR of Conference Proceedings: Proceedings of the ML-92 Workshop
    on Machine Discovery (MD-92), Aberdeen, U.K., July 1992.

    EDITOR of the special issue on machine discovery of Machine Learning
    journal, August 1993.

    EDITOR of the special issue on automated discovery of Foundations of
    Science journal, 1995.

    EDITOR of the book (collection of papers) Machine Discovery, Kluwer,
    1997.

    Co-EDITOR (with J. Komorowski) Principles of Data Mining and
    Knowledge Discovery, Springer-Verlag, 1997.

    Co-EDITOR (with Willi Kloesgen) Handbook of Data Mining and Knowledge
    Discovery; Oxford University Press, 1997-9.