Introduction

In current attempts to reform science education, it is often argued that students should be given more opportunities to learn actively, in particular to be involved in research-like or inquiry activities (e.g., De Vos & Genseberger, 2000; Millar, Lubben, Gott, & Duggan, 1994; NRC (National Research Council), 2000). Moreover, curriculum innovations focusing on the public understanding of science (e.g., De Vos & Reiding, 1999; NEAB (Northern Examinations and Assessment Board), 1998) have drawn attention to reflection on science, that is, increasing students’ awareness of how scientific knowledge is constructed and applied, rather than focusing exclusively on the content of scientific ideas. In this context, as models rank among the main products of science (Gilbert, Boulter, & Elmer, 2000; Harrison & Treagust, 2000) and science can be seen as a process of model building, “an understanding of the nature of models and model building is an integral component of science literacy” (Gilbert, 1991, p. 78). Thus it becomes essential that students learn to use models in classroom activities (e.g., design and test their own models, compare and discuss models).

If we want to improve students’ understanding of models and modelling, it is important that we, as teachers, textbook authors, science education researchers, and curriculum developers, know how models are actually used by present-day scientists in their research endeavours: how do scientists perceive the nature of the models they construct, test, and adapt? To answer this question, the literature on models and modelling in science was studied, resulting in a tentative description of common characteristics of models and their use in science. To probe the value of this description for current scientific practice, a pragmatic approach was chosen: scientists who recently published research papers in which models play a central role were asked to what extent the description applied to their work.
De Vos and Van der Valk (2000) have given a preliminary report of this study. Recently, Schwartz and Lederman (2005) wrote a paper on the same issue, based on a survey study among a group of practising scientists (n = 24), finding that these scientists emphasised “that models are used to explain or organise observations, then predict and test through further observations” (Schwartz & Lederman, 2005, p. 13).

It is the aim of this article to arrive at a comprehensive description of common features of scientific models and aspects of modelling that is congruent with the views of present-day scientific researchers. The results of the present study are relevant for curriculum developers, textbook authors, teacher educators and teachers, and can be used, for instance, as a starting point for the design of educational activities aiming at the improvement of students’ understanding of the nature of models and their modelling abilities.

Theoretical Background

Models are used in research in all scientific disciplines (Gilbert, 1991). Obviously, such models differ in terms of content, appearance and function, and can be categorised accordingly (see e.g., Black, 1962; Boulter & Buckley, 2000; Giere, 1991; Gilbert & Boulter, 1997; Harrison & Treagust, 2000). In spite of differences between various models, it has been suggested that there are general features that are common to all models and to the ways they are used in scientific research. In particular, in some of our earlier work (De Vos, 1985; Van Driel & Verloop, 1999), we identified several common characteristics of scientific models on the basis of an analysis of the available literature. This literature consisted of publications from various domains, mainly the history and philosophy of science (e.g., Bertels & Nauta, 1969; Black, 1962; Giere, 1991; Hesse, 1966; Rothbart, 2004), and science education (e.g., Duit & Glynn, 1996; Gilbert, 1991; Gilbert & Boulter, 1997; Van Oers, 1988). In empirical studies, we have used these common features to develop and test educational materials aimed at understanding the role and the nature of models in chemistry (De Vos, 1985; Van Hoeve-Brouwer, 1996), and to probe teachers’ understanding of models and modelling in science (Van Driel & Verloop, 1999).

The first two general features of scientific models describe the nature and functions of a model.

  1.

    A model is always related to a target (Duit & Glynn, 1996; Gilbert, 1991; Gilbert & Boulter, 1997) and is designed for a special purpose (Bullock & Trombley, 1999). The target, or subject (cf. Rothbart, 2004), is the actual object of research. The target may be an object, a phenomenon, an event, a process, a system, or an idea. A model is always a representation of the target, but the way in which the target is represented in the model (e.g., a three-dimensional model, a mathematical equation) may be quite different, mostly depending on the purpose of the model. Nevertheless, it should always be possible to identify both the model and the target, and to distinguish between the two.

  2.

    A model serves as a research tool that is used to obtain information about the target which itself cannot be easily observed or measured directly (Mayer, 1992). A model is used to learn about the less known (i.e., the target) by comparing it to something that is more familiar (Bertels & Nauta, 1969; cf. Kuhn, 1970, on ‘heuristic models’). The purpose of a model in scientific research is mostly to predict or to explain (Bullock & Trombley, 1999).

    The next two features refer to the criteria a model must fulfil:

  3.a

    A model bears some analogies to the target. On the basis of these analogies, certain aspects of the target may be explained (cf. ‘analogical models,’ Hempel, 1965).

  3.b

    These analogies enable the researcher to reach the purpose of the model; in particular to derive hypotheses from the model or to make predictions, which may be tested while studying the target (Hesse, 1966). Regardless of its purpose, every model maps elements of the target (Bullock & Trombley, 1999). Studying the model should enable the researcher to obtain information and to reformulate it into a hypothesis or a prediction that refers to the target under consideration. In this way the researcher knows what to look for and where and when to look for it (Black, 1962).

  4.

    A model differs in certain respects from the target. The differences make the model more accessible for research than the target (Woody, 1995). If a model were exactly like its target, it would not be a model but a copy. The differences between model and target depend on the use to which the model is put (Rothbart, 2004). In general, a model is kept as simple as possible (Bertels & Nauta, 1969). Depending on the specific research interests, some elements of the target are deliberately excluded from the model (Black, 1962). Other elements may be included although they are not similar to or shared with the target (cf. ‘negative analogies,’ Hesse, 1966; Rothbart, 2004). Usually, a model is much simpler than the target in order to make the target accessible for observation or other means of research. The target may be too small (an atom), too big (the universe) or too complex for direct observation; there may be ethical inhibitions (the human brain) or technical obstacles (the centre of the earth); or it may be difficult for other reasons to directly examine the target. The model is supposed to offer an alternative way of obtaining information about such an inaccessible target.

    The last four features describe the selection and development of a model.

  5.

    Since having analogies (3.a) and being different (4.) lead to contradictory demands on the model, a model will always be the result of a compromise between these demands (Black, 1962; Van Hoeve-Brouwer, 1996). The researcher is usually confronted with a choice between a complex model that resembles the target in many ways, and a simpler model that is easier to handle. The choice will depend on the nature of the research problem, on the resources, such as time and money, that are available, and on the personal preference of the researcher.

  6.

    A model does not interact directly with the target it represents. Consequently, there is always an element of creativity involved in its design, related to its purpose (Schwartz & Lederman, 2005; Van Hoeve-Brouwer, 1996). This means that a photograph, a spectrum or another source of information that does not exist independently of the target, does not count as a model, even if it is very helpful in reaching the purpose of obtaining information about that target (De Vos, 1985). On the other hand, a model may contain elements that have been derived from the target, for instance through measurement, but it should also contain elements of interpretation, simplification, and the like.

  7.

    Several consensus models may co-exist with respect to the same target (Van Oers, 1988). This is an implication of the previous point: depending on the specific context and purpose of the research, different choices may be made, resulting in the development or selection of different models. For instance, biochemists and theoretical chemists may use quite different models for the molecular structure of water as they ask different kinds of research questions.

  8.

    As part of the research activities, a model can evolve through an iterative process (Van Hoeve-Brouwer, 1996). There will be aspects of the target that are not represented adequately in the initial model, or working model. Such aspects are the ‘growing point’ for further elaboration or exploration (Rothbart, 2004). Empirical data obtained from the target, in particular, may be used for a revision of the model, while the revised model will generate new hypotheses with respect to the target, thereby inducing new observations or experiments (Giere, 1991). When, for example, the planet Neptune was found to deviate from its calculated orbit, the model of the solar system was revised so as to include another, as yet unknown planet. In 1930, Tombaugh found this ninth planet, Pluto, near its predicted position.

Aim and Research Question

Working as science education researchers, curriculum developers, and teacher educators, our aim was to probe the tentative description of common features of models and modelling in science, which was based on our reading of the educational and philosophical literature in this area. In particular, we were interested in testing our description by confronting it with the way models are actually used in present-day scientific enquiry. To this end a study was conducted, guided by the following general research question: To what extent are the common features of scientific models, based on a study of the literature, recognised and acknowledged by practising scientists who work with models in their research?

Materials and Methods

Instrument

To develop an instrument for our study, a pilot study was first conducted. We selected a small sample of five papers, written by colleagues working in the Chemistry Department of the first author’s university, which carried the word ‘model’ in the title. With these colleagues, informal interviews were held about the way they used and perceived models. During these interviews, the aforementioned features of models were discussed. Based on these interviews, a questionnaire was developed consisting of ten statements. Each of the first nine statements referred to one of the features. A tenth statement was added because a certain aspect of models (‘elegance’), which was not included in our tentative description of features, was mentioned in one of the interviews. Statements were formulated in a way that, we hoped, would challenge the ideas of the respondents about the respective features. For instance, in the first statement, related to Feature 1 (A model is always related to a target), the difference between model and target was emphasised: ‘... a model is a model of something else, something which it represents...’ Moreover, to ensure that respondents would react to the statements on the basis of experiences in their own research, all statements incorporated a phrase such as ‘in your research work,’ ‘in your paper,’ and so on. Each statement had to be marked as ‘correct’ or ‘incorrect,’ followed by an explanation or comment. The questionnaire can be found in Appendix 1. In Table 1, an overview is given of the relations between the statements in the questionnaire and the aforementioned features of models.

Table 1 The relation between general features of models and questionnaire statements

Sample

For the main study, a sample of 77 research articles published in ‘hard science’ journals was drawn from the natural sciences libraries of the first author’s university. In that institution, a broad range of natural science disciplines is taught. Many of these disciplines were included in the sample, incorporating physics, astronomy, biology, chemistry, pharmacology, meteorology, geology, and interdisciplinary sciences such as biochemistry and geophysics. Note that mathematics and computer science were not part of the sample. Only journals that had adopted a blind peer-review system were selected; most journals were ranked in the Science Citation Index (SCI). The first criterion for selecting articles from journal issues that had appeared between mid-1998 and mid-1999 was the appearance of the word ‘model(s)’ or ‘model(l)ing’ in the title (Footnote 1). Next, each article was read to check whether (most of) the common model features that were addressed in the statements of our questionnaire could be recognised in the use of models and modelling as reported in the article. If this was the case, the questionnaire was sent to the first author of the selected article. Note that, in spite of the broad range of disciplines in our sample, all authors received the same questionnaire. In an accompanying letter the purpose of our study was explained. The overall response was 27 (35%). In three cases, however, it was indicated that the addressee did not have time to complete the questionnaire (“the questionnaire is a bit lengthy and, given time constraints, I am unable to complete it”) or had left for another institution. Thus, the useful response was 24 (31%).
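The response rates above are simple proportions. A minimal sketch (assuming, as the reported percentages suggest, that the full sample of 77 articles serves as the denominator) reproduces the figures:

```python
# Response-rate arithmetic for the sample described above.
# Counts taken from the text: 77 articles sampled, 27 questionnaires
# returned overall, of which 3 were unusable (no time, or moved away).
sampled = 77
returned = 27
unusable = 3

useful = returned - unusable       # 24 usable responses
overall_rate = returned / sampled  # ~0.35
useful_rate = useful / sampled     # ~0.31

print(f"overall response: {returned} ({overall_rate:.0%})")
print(f"useful response: {useful} ({useful_rate:.0%})")
```

Rounding to whole percentages gives the 35% and 31% reported in the text.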

An overview of articles of the respondents is compiled in Appendix 2. Based on our own interpretation of each article, this overview includes the target and the purpose of the model under consideration, as well as an indication of the way the model is developed. The latter distinguishes between the construction of a new model, or the application, refinement or evaluation of one or more existing models. We will use the numbers of the articles in Appendix 2 to refer to the respondents’ answers to the questionnaire.

As can be seen from this list, the articles cover a large variety of domains. We believe that, although the response was rather low in quantitative terms, this set of articles is adequate for the purpose of our investigation. That is, given the different disciplines which are covered, we expect a large variety in the models which are used, enabling us to explore the common model features across scientific disciplines.

Data Analysis

First, the response was analysed in terms of the numbers of respondents agreeing or disagreeing with the ten statements. Next, the explanations and comments of the respondents were analysed in a qualitative manner. Although the respondents represented a large variety of science disciplines, it appeared that nearly all their comments or explanations were relevant in the sense that they were related to the model features we intended to probe. We conclude therefore that our (general) questionnaire has been useful with respect to our aim.

The process of data analysis consisted of a multi-step procedure, in which a series of within-case analyses was followed by cross-case analyses (Miles & Huberman, 1994). As a first step in this procedure, the written comments to the statements were analysed for each respondent by the first and second author individually. Next, by comparing and discussing our individual analyses (investigator triangulation; Janesick, 2000), consensus was sought on the interpretation of the content of all answers and explanations of each individual respondent. Following this, all responses were ordered by statement, so as to enable an analysis in terms of the tentative common features of models (see Table 1). In the light of our general research question (To what extent are the common features of scientific models [...] recognised and acknowledged by practising scientists...), the interpretation of these responses was focused on falsification: finding arguments whether or not to reformulate, nuance, or even refute the provisional common features. After an individual attempt at finding such arguments, the first and second author compared and discussed the arguments they had found for each feature before agreeing on the need to adapt a feature, or to leave it as it was formulated originally.

This procedure can be illustrated by our interpretations of the comments given by Respondent [2] (see Appendix 2), a toxicologist studying non-polar narcotics. This respondent agreed with only two of the ten statements in our questionnaire. S/he found Statement 5 (The model has things in common with what it represents; see Appendix 1) correct because: “1-octanol, the chemical used in the experiments, is a model for other compounds fitting the criteria of non-polar narcotic.” This respondent also agreed with Statement 7 (From the model in your paper, one can derive hypotheses or predictions with respect to that which the model represents), explaining: “Hypothesis: other non-polar narcotics with similar physico-chemical properties will cause the same biological effects. We are testing this!” Analysing these explanations, the first author concluded that the ‘thing in common’ the model and the target have in this case is being a non-polar narcotic. The second author had drawn the same conclusion, but when discussing their analyses, the authors realised, cued by the word ‘other’ in both explanations cited above, that 1-octanol is part of the class it represents; in other words, it is a model of itself. Through this insight, we came to understand why Respondent [2] was the only one who did not agree with Statement 1 (A model is a model of something else), although s/he did not provide an explanation for this disagreement. Eventually, on the basis of the above analysis, it was concluded that our provisional model Feature 1 needed to be reformulated.

Results

Overview of Results

Table 2 gives an overview of the numbers of respondents agreeing or disagreeing with the statements in the questionnaire. The number of respondents that used the open answer space to write comments or explanations is given between brackets.

Table 2 The number of respondents agreeing or disagreeing with the statements in the questionnaire (n = 24)

From this table, it follows that, in total, 84% of the answers given marked the statement as ‘correct,’ that is, agreed with it. Eight respondents agreed with all ten statements; Respondent [2], however, disagreed with all but two statements. In total, 62% of the answers were illustrated or commented upon. Four respondents ([14], [18], [22], [24]) did not add any explanation, example, or comment to their answers; four respondents ([9], [13], [20], [23]) illustrated all ten items with a written comment. Moreover, four respondents provided a final comment or a comment in their accompanying letter. These respondents appeared to be very interested in the concept of models and/or in science education, for example: “As a teacher, I feel that this kind of study is terribly missing in earth sciences and I am surprised of how your statements agree with my personal experience I have not yet so precisely felt” [13]; “teaching the proper use of modelling is a primary objective within our education program” [23].
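The aggregate figure of 84% agreement can be reconstructed from the per-statement agreement counts reported in the qualitative results below (23, 19, 18, 20, 22, 21, 24, 18, 22, and 14 respondents for Statements 1 through 10, respectively). A small sketch of the tally, counting non-ticked items as not agreeing:

```python
# Respondents agreeing per statement (Statements 1-10), as reported
# in the qualitative results; n = 24 respondents in total.
agree_counts = [23, 19, 18, 20, 22, 21, 24, 18, 22, 14]
n_respondents = 24

total_answers = n_respondents * len(agree_counts)   # 240 answers
total_agree = sum(agree_counts)                     # 201 agreements
percent = round(total_agree * 100 / total_answers)  # 83.75 -> 84

print(f"agreement: {total_agree}/{total_answers} = {percent}%")
```

201 of 240 answers, or 84%, matches the figure stated above.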

Qualitative Results per Statement

In this subsection, the results for each item of the questionnaire (Appendix 1) are described by citing the statement, giving some relevant answers, and ending with a summary of arguments either supporting the statement or not.

Statement 1: In your research paper, a model is a model of something else, something which it represents, e.g., the model simulates a process. In your research work, a distinction was made between the model and what it represents.

All but one respondent (i.e., Respondent [2]) found this statement ‘correct.’ The illustrations provided with the answers confirmed several aspects of general Feature 1. A diversity of targets was mentioned: objects, such as the 3D-structure of a protein [19]; phenomena, like a chicken disease [15] or the physical behaviour of metals in a natural system [23]; processes, like the radiative processes in the atmosphere [9]; ideas, such as (incorrect) causal relations from statistical data sets [7]; and economic goals, for instance, improving tomato crop growing in a greenhouse [10].

The targets were represented by models in various ways, such as a mathematical model [23], a computer programme [10], a pharmacokinetic model [5], or an animal model: “the mouse mutant [model] carries a specific defect in a gene (XPD) which is exactly the same as a defect found in a human patient” [8].

Most illustrations confirmed the distinction between a model and its target. Two respondents explicitly did so, one by giving an example: “a mathematical model of ‘time perception’ is neither time nor perception” [4].

In summary

All but one respondent supported Statement 1, implying that their models were distinct from the targets that were represented.

Statement 2: That which is represented by the model (or models) in your paper is the actual object of research. The model is, ultimately, a tool in that research, not an aim in itself.

Nineteen respondents agreed with this statement. Fifteen of them identified the object of their research, for instance, “to evaluate new anti-parkinson agents” [1]. Three added something about the character of their model, such as a numerical model [11], or an ‘idealised’ model [9]. One researcher gave a warning: “because we are dealing with statistical models, we are wary of potential counterproductive misinterpretations” [7].

Four of the five respondents who did not agree, or did not fully agree, with the statement illustrated their answer. They argued that constructing a model was, or could have been, the aim of their research. A respondent using a “theoretical model” wrote: “the object was both understanding the physical system represented by the model and the formulation of a model which is consistent in itself” [12]. Respondent [15] described the construction of an animal model to study diseases in broiler chickens. Respondent [23] referred to technology by saying “modelling efforts are often aimed at producing a model as an end product to be used in the design or management of the thing the model represents.” This was confirmed by Respondent [3]: “Project goal was to develop a model (of toxicity of compounds in marine sediments) which could be used to set regulatory limits. In technology, medicine and society, models are used to make decisions based on scientific knowledge.”

In summary

Statement 2 was not fully supported by the respondents. A model can be an aim in itself because it can be constructed for the purpose of experimenting with it. Furthermore, a model is a tool not only in research, but also in making decisions in technology or society.

Statement 3: In your research work, two or even more different models can represent – and often do represent – the same object.

Eighteen respondents ticked this statement ‘correct.’ Thirteen of them added comments, most of them giving alternative models, or illustrating why they had chosen the model they used, for example:

  • “Indeed, I analysed several alternative forms of the basic model and other papers have used different models” [21].

  • “Several models include the representation of the radiative processes in the atmosphere. However, our model is a simple one which is inexpensive” [9].

Respondent [13] ticked ‘correct’ as well as ‘incorrect’ to this statement, explaining that “the type of models has been chosen amongst a wide range of possible models,” and that “afterwards, the analysis was restricted to one model.” This means that researchers may choose one specific model and have good reasons for doing so: some models are better or more appropriate for a specific purpose than others. Respondent [7] agreed: “in the real world, two or more models can represent the same object because we may not know the object well enough to identify the right model.” Moreover, [7] warned: “imperfect or incomplete models can lead to erroneous conclusions.” This answer suggests that, in some cases, there may be only one ‘right’ model. Respondent [23] pointed at the relation between model and research question: “the specific form of a model depends on the specific question asked about the real object.” The question and its context specify the criteria a model has to meet. If these criteria are very precise, there may be no alternative models available. This may be the reason why five respondents did not agree with Statement 3. Unfortunately, none of them added a comment to their answer.

In summary

In accordance with Statement 3, the respondents indicated that two or even more different models can in principle represent the same target, but that in practice there may be reasons, related to preciseness or to the research question at stake, to decide that only one of the available models is good enough.

Statement 4: The model in your paper differs from whatever it represents in that it is more accessible. If it were not more accessible, it would not be a useful model.

Twenty respondents agreed, 15 of whom gave an illustration. One respondent used a computer model to investigate convection processes in the atmosphere and pointed out that “this is not possible by performing measurements inside the real atmosphere” [17]. Respondent [20] mentioned that computer models require a special kind of expertise: “If accessibility means easier to manipulate, I agree. Some people find it easier to go and measure stuff in the lake than to use a complicated computer code.” Respondent [12], a physicist, though agreeing, implicitly disputed the differences between model and target, saying: “a certain very advanced model from electron dynamics in metals might almost be called a model which is identical with what it represents. But it is far beyond the grasp of an experimentalist, both in understanding and in using” [12].

Four respondents did not agree with Statement 4. Two of them did not illustrate their answer. One respondent who did not agree wrote: “It is not only more accessible, it is simpler, it describes the fundamental properties rather than the total complexity of the object” [4].

In summary

In accordance with Statement 4, most respondents agreed that models are usually chosen because of their accessibility. However, there may be other reasons as well, for instance, because the model is a simple one, or focuses on a specific and interesting aspect of the target.

Statement 5: The model in your paper has things in common with what it represents. If it had nothing in common, it would not be a useful model.

Twenty-two respondents agreed with this statement. Some of them identified common properties. For instance: “The physical laws which are believed to control the dynamics of fluids are incorporated in the model” [17]. Respondent [19], who used a well known protein as a model for another, less known one, summed up: “Sequence similarity, observable structure similarity.”

Respondent [7], investigating confounding in epidemiological models, did not tick an alternative and explained why: “Cannot tell: [...] without corroborative evidence, it is difficult to tell if the model has things in common with what it represents. Of course, then the model is useless and may be counterproductive. But we may not know it.” We think that this is typically a problem of using statistical models, as epidemiologists do. Respondent [4], studying animal behaviour and also using statistical models, disagreed with this statement because of a similar problem. This respondent wondered what was meant by “common” in the statement, and did not see “fundamental properties or laws underlying the model as a common ground of both model and target.”

In summary

We found that Statement 5 was supported by most respondents and that, consequently, a model should share common aspects with the target it represents. However, when statistical models are used, the common ground of target and model can be questionable, or unknown.

Statement 6: The model in your paper is a compromise between (a) accessibility and (b) correspondence with the object it represents. Models can therefore be situated in a kind of spectrum, with simple, easily accessible models on one side and more advanced, complicated models on the other side.

Twenty-one respondents agreed, eight of whom illustrated their answer. Two respondents emphasised that a model has to be understandable and should be used with simple means: “instead of a supercomputer for a 3-D model we run the 2-D model on a microcomputer” [9]. Respondent [11] mentioned a tendency towards more complicated models: “As we learn about the process we are modelling, the models tend to become more advanced (and usually more complicated).”

Three respondents did not agree. One of them opposed the tendency mentioned above, saying: “A successful modeller never seeks this as a useful goal” [23], elaborating that “a ‘good’ model has the correct boundaries and level of detail to most effectively answer the question being asked about the real system.” Respondent [7] used more or less the same argument: “Complicated models need not be more advanced, or more correct, or less accessible.”

In summary

A model is considered, in agreement with Statement 6, as a compromise in the sense that scientists have to balance between opposing tendencies. Important criteria for the balance are (1) the aim of the study, made concrete in the research question or in design specifications, and (2) the desired preciseness of predictions, tests or explanations.

Statement 7: From the model in your paper, one can derive hypotheses or predictions with respect to what the model represents. Such a hypothesis or prediction requires testing before it can be accepted as correct.

This was the only statement all respondents agreed with. Half of them added a comment. For example, “This is part of the scientific method,” a geologist wrote [11]; “Just as Popper liked it,” another added [4]. A cancer researcher, who worked with laboratory animals, mentioned very concretely and with some pride: “We predicted increased susceptibility to cancer, which turned out to be true” [8]. But there were also some side remarks: “Testing is often difficult in earth sciences” [13] and “attention should be paid to the validity of the model. Its application besides the limits can cause grave errors” [6].

In summary

Statement 7 was not disputed.

Statement 8: The model in your paper did not follow directly and automatically from measurements or other observations, or from other models. Instead, there was an element of creativity involved.

Eighteen respondents agreed with this statement, and 12 of them provided elements of creativity in their work, for example: “Proper scaling of the model” [20] and “think how a process can work; make a model according to your thoughts” [10]. Five respondents pointed at the important role of existing models. They explained that existing models had to be adapted, for example, “although some expressions are obtained from other models, the coupling of these expressions in our simple model is an element of creativity involved in our work” [9].

In total, six respondents disagreed, two of whom illustrated their answer. Respondent [5] did not agree because his model was derived from previous, simpler models. Respondent [15], a veterinarian using a model consisting of infected broiler chickens, explained: “The model followed from other workers’ findings and observations.” This suggests that it is possible for a model to have aspects that follow directly from the target.

In summary

Although most respondents supported Statement 8, some did not, because their model followed from other models.

Statement 9: As part of a research cycle, the model in your paper suggests new experiments or observations to be carried out. These, in turn, may lead to newer and more advanced models to be developed, et cetera.

Twenty-two respondents ticked ‘correct’; 16 of them illustrated their answer. For instance, an ecologist using a mathematical model noted with some pride: “Yes, the model’s prediction can be tested. As I did in the paper with a field experiment” [21]. A geologist using a computer model explained the cyclic character of model formation by writing: “Our approach is to begin with a simplified model and then enhance the model as physical parameters and processes become better understood” [11].

Two respondents disagreed with this statement. The only one providing an explanation was a theoretical physicist, who wrote: “The model is not used to suggest new experiments, but to improve agreement between outcomes of theoretical calculations and measured values” [12]. This, however, concerns the second part of Statement 9: results of experiments can call for more advanced models, without the implication that new experiments should be done.

In summary

The responses indicated that Statement 9 applied to most studies.

Statement 10: Elegance is in your experience, somehow a consideration in designing or selecting a model.

Only 14 respondents agreed with this statement; nine of them added a comment. A typical argument supporting Statement 10 was “that models must be simple and elegant so as to reflect the lawful beauty of reality” [4]. However, someone else added: “It is not a main determinant: the degree to which a model reflects its template is most important” [8]. Respondent [6] noted: “Elegance is often achieved later on by epigones.” Respondent [13] also agreed with the statement but regretted that elegance is valued so highly that better, but less elegant, models have often been ignored.

Nine respondents did not agree, four of whom illustrated their answer. Respondent [11], using a computer model, wrote quite straightforwardly: “I don’t care about elegance. The model can work by ‘brute force’ as far as I am concerned, just as long as it is correct.”

In summary

Opinions differed more on Statement 10 than on any other statement. Elegance and simplicity may appeal to scholars’ aesthetic feelings, which may lead to a preference for one model or another (Dieks, 1999), but elegance is apparently not a main criterion in the modelling process. We therefore conclude that there is no need to formulate a new feature that includes elegance.

Discussion and Conclusions

We conclude that, on the whole, our initial list of common features of models, based on the literature, was recognised and acknowledged by the respondents in the empirical study. However, some features were perceived differently by some respondents, or formulated in different terms. This result aligns with that of Schwartz and Lederman (2005), who suggested that “conceptions of scientific models and their use in science may differ with context of scientific practice” (p. 14).

The studies of models in the historical and philosophical literature that we used to compile the list of features dated mainly from the 1960s and early 1970s, and focused on the use of models in fundamental research. From the 1980s onwards, however, rapid changes have taken place in science, in strong interaction with new technologies (among others, computers and DNA and microchemistry techniques). Moreover, models are increasingly used in technological design and in societal decision-making. Judging from some of our respondents’ comments, this has implications for the use of models and the way scientists perceive models and modelling. Although it is impossible to include all the nuances our respondents offered, we will now discuss the consequences for the formulation of the general features of models and, where necessary, propose changes in formulation.

Feature 1

Feature 1 states that a model is always related to a target and is designed for a special purpose. In science education, Feature 1 is commonly interpreted as ‘there is a strict distinction between model and target.’ However, this is not necessarily true, for two reasons. To illustrate the first reason, we use the case of Respondent [2], a toxicologist who disagreed with Statement 1 without explaining why. As described earlier, we came to understand this disagreement because in this respondent’s research, 1-octanol was used as a model to represent a class of chemical substances: non-polar toxics. As 1-octanol is itself a non-polar toxic, his model substance represents, among other substances, itself.

The second reason concerns the relationship between model and theory. Respondent [12] suggested that an ideal model (of electrodynamics in physics) is almost identical to what it represents. Is the electron the target? Or (part of) a model? Or part of the theory? The suggestion of Respondent [12] made us realise that a strict distinction between target and model may reflect a positivist view of the world, in which the target exists independently of the observer and his theoretical preoccupations. In the past, molecules and atoms were not observable and could only be conceived of by using models such as the Bohr model. Nowadays, however, we can ‘see’ atoms and even manipulate them individually with complicated equipment. In fact, our observing is ‘theory-laden,’ with models and theories not only in our minds, but also made operational in our instruments.

We conclude that there are no objections to Feature 1. There are, however, objections to inferring from this feature that model and target can always be distinguished from each other. In cases where model and target are the same, or very similar, the model will mainly have a heuristic and practical value (Rothbart, 2004).

Feature 2

This feature states: ‘A model serves as a research tool that is used to obtain information which itself cannot be easily observed or measured directly.’ This function of a model was recognised by our respondents. The information can be obtained in several ways: by computer simulation, by visualising, or by experimenting with the model. Constructing such models can be perceived by scientists as an aim in itself. We found that models in science can also be used for another purpose: as a representation of scientific knowledge that facilitates decision-making in technology and in society (e.g., on global warming). Therefore, we suggest reformulating Feature 2 as follows:

A model serves as:

  • A research tool that is used to obtain information about the target which itself cannot be easily observed or measured directly;

  • A representation of scientific knowledge about the target, to be used to facilitate making decisions about issues (in technology, medicine, society, ...).

Features 3.a and 3.b

From our results (Statement 5), we have learnt that the criterion for bearing ‘some analogies’ (Feature 3.a) should be explicitly related to the realm of the model’s valid use. In statistical models, however, this might be a problem, since the existence of statistical relationships does not necessarily indicate theoretical similarities between model and target.

As Feature 3.b. was not disputed (see results concerning Statement 7), we propose a revision for 3.a. only:

3.a. Within the realm of its valid use, a model bears some analogies to the target.

Feature 4

This feature states that a model differs in certain respects from the target and that these differences make the model more accessible for research than the target. From the respondents’ answers to Statement 4, it appeared that in most cases, but not in all, a model is more easily accessible than its target. Therefore, we propose to replace the word ‘accessible’ in Feature 4 with the word ‘attractive’:

A model differs in certain respects from the target. The differences make the model more attractive for research than the target.

Feature 5

The results (cf. answers to Statement 6) do not provide arguments for a revision of this feature, which states that a model is always the result of a compromise between conflicting demands. It could be added that the compromise derives from (1) the aim of the study, as indicated by the research question or design specifications, and (2) the desired precision of predictions.

Feature 6

This feature states that a model does not interact directly with its target, and that there is always an element of creativity involved in its design so as to serve its purpose. The element of creativity was widely recognised by our respondents (cf. responses to Statement 8), except for those who borrowed their model from other researchers.

We have, however, found some examples of models that did interact directly with their target. A clear example was already mentioned in the discussion of Feature 1: the model used by Respondent [2] concerned a substance that was representative of a group of chemical substances. In this case, it could thus be said that the model was part of the target. Nevertheless, creativity was still needed to formulate requirements for the selection of a suitable model substance. Another example may be the broiler-chicken model of Respondent [15]: chickens are the target and also make up part of the model. Therefore, we propose to drop the first part of the original Feature 6 and reformulate this feature as follows:

The construction of a model requires creativity, among other things in finding a compromise between ‘having analogies with’ and ‘being different from’ the target, so as to optimally serve its purpose.

Feature 7

From the responses to Statement 3, we conclude that, in principle, two or even more different models can represent the same target. However, in practice, scientists may work with only one model. Therefore, Feature 7 has to be extended as follows:

Several consensus models may co-exist with respect to the same target. However, depending on the precision requested (e.g., the precision of the predictions based on the model; the design specifications), one model can be the best, at least for the time being.

Feature 8

We did not find any objections to this feature, which states: ‘As part of the research activities, a model can evolve through an iterative process.’ However, we would like to stress that ‘research’ should not be taken in the narrow sense of empirical research. Advances in theoretical research, as well as policy-making and technological design activities, can also contribute to the iterative process.

Implications for Science Education

Our results are relevant for gearing science education towards recent developments in science research. That is, if we want students to “learn science in a way that reflects how science actually works” (NRC, 1996, p. 214), we can derive some implications from the findings of the present study for secondary science education.

Most models incorporated in secondary science education curricula to date aim at understanding fundamental scientific concepts. In recent decades, however, inquiry tasks including computer simulation models have entered the curricula and have become the subject of educational research (De Jong & Van Joolingen, 1998). In addition, because of the tendency to pay more attention to technology in science education (e.g., NRC, 1996), models aimed at technological design are increasingly present in national curricula, for instance, in the UK (NEAB (Northern Examinations and Assessment Board), 1998) and in the Netherlands (OW&C (Ministry of Education, Culture and Sciences), 1998).

It has been suggested (Erduran, 2001) that in textbooks for secondary science education, models are often presented as static facts, or as final versions of our knowledge. Features such as the relation between model and target (Feature 1), possible limitations of a model (Feature 5), or the way in which models are developed (Feature 8) are seldom addressed. Moreover, textbooks rarely include modelling assignments inviting secondary students to actively construct or test models (Erduran, 2001). Research on teacher knowledge in the domain of models and modelling has repeatedly indicated that teachers have very limited knowledge of the nature of models and the act of modelling (e.g., Justi & Gilbert, 2002, 2003; Van Driel & Verloop, 1999). We assume that these shortcomings could be an important cause of the well-known problem in science education that many students consider a model to be a copy of the target (Grosslight, Unger, Jay, & Smith, 1991). To broaden textbook authors’ and science teachers’ conceptions of models and modelling, we think it is important to make them aware that models do not always explain empirical observations, and that not all models are simplified representations of an abstract concept. Moreover, the results of the present study provide arguments for paying attention to the role of models in technological design as well as in problem solving and decision-making in society.

Finally, the increasing attention to inquiry in science classrooms (NRC, 2000) implies that not only the content of models, but also modelling activities, should play a role in secondary science classes. In our view, in the context of such activities, teachers and educational materials should support students in recognising the similarities between the various models involved. We think the common features of models may assist teachers and textbook authors in achieving this goal. In this way, such activities may contribute to students’ awareness of the nature of scientific research and their understanding of the development of scientific knowledge (cf. Gilbert & Boulter, 1997).